Position Description:
Performs software development in public Cloud environments: Amazon Web Services (AWS) and Azure. Operates and implements distributed and highly concurrent service-based architectures, including microservices, containerized services, and serverless architectures. Uses AWS as the Cloud platform, Python as the main development language, SQL as the data query language, Postgres as the Operational Data Store, AWS Lambda as the Compute Framework, Pydantic for data validation, Snowflake for Data Warehousing, and Microsoft PowerBI for reporting and analytics. Independently gathers and analyzes large amounts of data to propose and build advanced Machine Learning (ML) models.
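For illustration, a minimal sketch of the Pydantic validation step named above (assuming Pydantic v2); the record fields are hypothetical, since the posting does not specify any schemas:

# Hypothetical record schema; Pydantic coerces and validates raw input
# before it would be written to the Postgres Operational Data Store.
from pydantic import BaseModel, ValidationError

class PositionRecord(BaseModel):
    account_id: str
    symbol: str
    quantity: int
    price: float

raw = {"account_id": "A-123", "symbol": "XYZ", "quantity": "10", "price": "42.5"}
record = PositionRecord(**raw)    # numeric strings are coerced to int/float
print(record.model_dump())        # validated dict, ready to persist

try:
    PositionRecord(account_id="A-124", symbol="XYZ", quantity="ten", price=1.0)
except ValidationError as exc:
    print(exc)                    # "ten" is not a valid integer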
Primary Responsibilities:
Works closely with developers and architects across the firm to influence the development of reliable products and the standardization of common solutions and best practices.
Crafts, develops, and supports software solutions in multiple technology platforms, frameworks, and languages.
Uses statistical modeling principles to understand incidents and identify new product opportunities.
Creates statistical experiments to test products (see the sketch following this list).
Implements complex ML use cases using Big Data to drive production loads.
Prototypes and implements ML models to support organizational needs.
Develops unit and integration tests, validation procedures, programs, and documentation to ensure the quality of the products being created and data being processed and stored.
Designs data models and data architecture used to support both reporting and data science use cases.
Drives operational readiness discussions and reviews new software solutions and products.
Facilitates discussions among component owners to improve end-to-end understanding of transaction paths.
Designs and develops frameworks for self-assessment of applications on various stability and dependability pillars.
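For illustration, a minimal sketch of the kind of statistical experiment referenced above; the metric, effect size, and group data are synthetic, not taken from this posting:

import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)
control = rng.normal(loc=10.0, scale=2.0, size=500)    # baseline product metric
treatment = rng.normal(loc=10.4, scale=2.0, size=500)  # metric under the change

# Welch's two-sample t-test: did the change shift the mean of the metric?
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")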
Education and Experience:
Bachelor’s degree (or foreign education equivalent) in Computer Science, Engineering, Information Technology, Information Systems, Information Management, Business Administration, or a closely related field and six (6) years of experience as a Director, Site Reliability Engineering (or closely related occupation) performing full-stack software development using Java, Python, SQL, and JavaScript within a financial services environment.
Or, alternatively, Master’s degree (or foreign education equivalent) in Computer Science, Engineering, Information Technology, Information Systems, Information Management, Business Administration, or a closely related field and four (4) years of experience as a Director, Site Reliability Engineering (or closely related occupation) performing full-stack software development using Java, Python, SQL, and JavaScript within a financial services environment.
Skills and Knowledge:
Candidate must also possess:
Demonstrated Expertise (“DE”) provisioning resources in AWS using Infrastructure as Code (IaC) technologies and Terraform; architecting secure and authenticated systems using AWS IAM; building middleware APIs using AWS Lambda and API Gateway; and building full-stack applications with a backend (using S3 and AWS RDS), mid-tier (using Python), and front end (using HTML, CSS, and JavaScript).
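For illustration, a minimal sketch of a middleware API of the kind described above: an AWS Lambda handler behind an API Gateway proxy integration. The route is hypothetical; a real handler would enforce AWS IAM authentication and read from S3 or RDS via boto3:

import json

def lambda_handler(event, context):
    """Handle an API Gateway proxy event and return a proxy-format response."""
    if event.get("path") == "/health":
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"status": "ok"}),
        }
    return {"statusCode": 404, "body": json.dumps({"error": "not found"})}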
DE building data-harvesting pipelines to collect and collate data for further processing, using AWS Serverless Technologies; preparing data for feature engineering (before the data is uploaded into the data warehouse) in data processing pipelines, using Apache NiFi, AWS EMR, and Spark; querying data for reporting and analysis using Snowflake; conducting Data Visualization using Microsoft PowerBI; building and maintaining software APIs and prototyping ML models using Python; building ML models using data science libraries (NLTK, SciPy, Scikit-learn, NumPy, and Pandas); performing graph analysis using NetworkX; and creating network graph visualizations using visualization libraries (D3 and Graphviz).
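For illustration, a minimal sketch of prototyping an ML model with the Scikit-learn/NumPy stack listed above, trained on a synthetic dataset (the actual data sources are not described in this posting):

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a curated feature set.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print(f"holdout accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")

The graph-analysis work would follow the same prototyping pattern with NetworkX (building a graph, then computing centrality or other structural measures) before handing results to D3 or Graphviz for visualization.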
DE architecting and implementing data pipelines and constructing feature-engineering routines to prepare analysis-ready datasets (for supervised and unsupervised ML), using advanced data transformation and Natural Language Processing (NLP) techniques; collecting and curating Big Data using Snowflake; and performing NLP to create datasets appropriate for statistical models and ML.
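For illustration, a minimal sketch of an NLP feature-engineering step of the kind described above, using NLTK; the sample text and the stopword/stemming choices are illustrative only:

import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer

nltk.download("stopwords", quiet=True)  # one-time corpus fetch

text = "Customers reported intermittent login failures after the release."
stop_words = set(stopwords.words("english"))
stemmer = PorterStemmer()

# Lowercase, strip punctuation, drop stopwords, and stem into model-ready tokens.
tokens = [w.strip(".,!?").lower() for w in text.split()]
features = [stemmer.stem(t) for t in tokens if t and t not in stop_words]
print(features)  # e.g. ['custom', 'report', 'intermitt', 'login', 'failur', 'releas']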
DE performing source code control using Git; deploying applications according to Continuous Integration/Continuous Delivery (CI/CD) methodologies, using Jenkins; writing test code using the Pytest and Unittest libraries; and ensuring operational security using automated Secure Code Scanning Software.
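For illustration, a minimal sketch of the Pytest usage named above: unit tests for a hypothetical validation helper (the helper and its behavior are illustrative, not taken from this posting):

import pytest

def normalize_symbol(symbol: str) -> str:
    """Hypothetical helper: trim and upper-case a ticker symbol."""
    cleaned = symbol.strip()
    if not cleaned:
        raise ValueError("empty symbol")
    return cleaned.upper()

def test_normalize_symbol_trims_and_uppercases():
    assert normalize_symbol("  aapl ") == "AAPL"

def test_normalize_symbol_rejects_blank_input():
    with pytest.raises(ValueError):
        normalize_symbol("   ")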
Category:
Information Technology

Fidelity’s hybrid working model blends the best of both onsite and offsite work experiences. Working onsite is important for our business strategy and our culture. We also value the benefits that working offsite offers associates. Most hybrid roles require associates to work onsite every other week (all business days, M-F) in a Fidelity office.
Please be advised that Fidelity’s business is governed by the provisions of the Securities Exchange Act of 1934, the Investment Advisers Act of 1940, the Investment Company Act of 1940, ERISA, numerous state laws governing securities, investment and retirement-related financial activities and the rules and regulations of numerous self-regulatory organizations, including FINRA, among others. Those laws and regulations may restrict Fidelity from hiring and/or associating with individuals with certain Criminal Histories.