Position Description:
Inventories, analyzes, recommends, and advocates for solutions to optimize workload automation and batch practices. Utilizes observability SaaS providers (such as Datadog and Splunk) and cloud providers (such as Amazon Web Services (AWS) and Azure). Incorporates and streamlines test automation into software application builds using Continuous Integration/Continuous Delivery (CI/CD) pipeline tools. Draws on in-depth knowledge of the business or function to provide business-unit-wide solutions by developing complex, multi-faceted software applications. Researches and recommends new technologies in support of the strategic direction of the business unit and participates in the research and recommendation of appropriate models, methods, tools, and technologies to achieve business-unit-wide solutions.
Primary Responsibilities:
Creates and drives workload orchestration solutions that integrate both on-premises and cloud-based batch processes and jobs.
Works on alternative, cost-effective workload orchestration solutions using cloud-native services (aside from Control-M and AutoSys).
Creates and drives the observability integrations roadmap that blends scalable automation and security within a hybrid, multi-cloud environment.
Accelerates development and drives down costs through modern computing paradigms, cloud computing, and open-source software.
Works in agile, delivery-oriented teams to build, configure, and sustain internal and external cloud platforms with development ecosystems.
Collaborates with other product and engineering leads and senior management on product and delivery requirements, roadmaps, and vision.
Investigates, proves out, and incorporates new technologies and capabilities to drive the technology vision for the enterprise’s secure cloud platforms.
Determines system performance standards.
Monitors equipment functionalities to ensure system operates in conformance with specifications.
Analyzes information to determine, recommend, and plan installation of a new system or modification of an existing system.
Education and Experience:
Bachelor’s degree (or foreign education equivalent) in Computer Science, Engineering, Information Technology, Information Systems, Information Management, Business Administration, or a closely related field and six (6) years of experience as a Director, Cloud Engineering (or closely related occupation) facilitating architecture practices and engineering methodologies in a financial services environment.
Or, alternatively, Master’s degree (or foreign education equivalent) in Computer Science, Engineering, Information Technology, Information Systems, Information Management, Business Administration, or a closely related field and four (4) years of experience as a Director, Cloud Engineering (or closely related occupation) facilitating architecture practices and engineering methodologies in a financial services environment.
Skills and Knowledge:
Candidate must also possess:
Demonstrated Expertise (“DE”) designing and developing workload orchestration layers using cloud-native services, including Amazon Managed Workflows for Apache Airflow (MWAA), Amazon EventBridge, Amazon EKS, and AWS Lambda; and creating reusable Airflow templates to orchestrate batch ETL jobs using Python.
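To illustrate the "reusable Airflow templates" pattern named above, here is a minimal sketch: a single template function expands a per-table job spec into the keyword arguments an Airflow DAG would take. All names (BatchEtlJob, build_dag_args, the table names) are hypothetical, and the dict stands in for an actual airflow.DAG(...) call so the template logic is runnable without an Airflow installation.

```python
from dataclasses import dataclass

@dataclass
class BatchEtlJob:
    """Parameters for one batch ETL job (illustrative names)."""
    table: str
    schedule: str          # cron expression
    retries: int = 2

def build_dag_args(job: BatchEtlJob) -> dict:
    """Expand a job spec into keyword arguments for a DAG.

    In a real MWAA deployment these would be passed to
    airflow.DAG(...); here we only build the dict so the
    template logic is visible without Airflow installed.
    """
    return {
        "dag_id": f"etl_{job.table}",
        "schedule": job.schedule,
        "default_args": {"retries": job.retries},
        "tags": ["batch", "etl"],
    }

# One template, many jobs: the same function covers every table.
jobs = [BatchEtlJob("trades", "0 2 * * *"), BatchEtlJob("positions", "30 2 * * *")]
dags = [build_dag_args(j) for j in jobs]
```

The point of the pattern is that onboarding a new batch feed becomes a one-line addition to the job list rather than a copied-and-edited DAG file.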
DE designing and developing high-volume batch Extract, Transform, and Load/Extract, Load, and Transform (ETL/ELT) data flow pipelines using Spark Scala and PySpark; creating common reusable components (including user-defined functions and libraries) to develop complex ETL/ELT data transformations and batch processes written in Scala and Python; and authoring Spark SQL for complex calculations, using Spark RDDs (Resilient Distributed Datasets) and DataFrames to store data, and invoking Spark native libraries to build batch pipelines that meet financial services requirements.
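The "common reusable components" above typically start as plain functions that are then registered as Spark user-defined functions. A minimal sketch of one such cleansing rule, assuming a hypothetical amount-normalization requirement; in PySpark this function would be wrapped with pyspark.sql.functions.udf(...) and reused across pipelines, but it is shown standalone so the logic runs anywhere:

```python
from decimal import Decimal, ROUND_HALF_EVEN
from typing import Optional

def normalize_amount(raw: Optional[str], scale: int = 2) -> Optional[Decimal]:
    """Shared cleansing rule: strip currency noise, round half-even.

    Hypothetical example of a reusable transformation component;
    in PySpark it would be registered once as a UDF and applied
    to DataFrame columns in many batch pipelines.
    """
    if raw is None:
        return None
    cleaned = raw.replace("$", "").replace(",", "").strip()
    if not cleaned:
        return None
    quantum = Decimal(1).scaleb(-scale)  # e.g. Decimal("0.01") for scale=2
    return Decimal(cleaned).quantize(quantum, rounding=ROUND_HALF_EVEN)
```

Keeping the rule in one library function (rather than inline per job) is what makes transformations consistent across Scala and Python pipelines.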
DE solutioning, designing, architecting, and building scalable and resilient software solutions following DevOps practices using GitHub and Jenkins CI/CD; prototyping and building infrastructure frameworks and highly scalable CI/CD solutions and pipelines using Jenkins and GitHub; and conducting issue tracking (to support enterprise Agile application teams in rolling out application features) using Jira.
DE defining and implementing cloud-based data strategies to modernize legacy operational and analytical data stores (Oracle RDS, AWS Aurora, and Snowflake); providing architectural guidance on relational database technologies (Oracle) and cloud data services (AWS Aurora and Snowflake); and designing data warehouse and data lake ETL/ELT pipelines using data integration frameworks (AWS Batch, SnapLogic, AWS EMR, Informatica, and Control-M).
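A common building block of the warehouse ELT pipelines described above is an incremental upsert rendered as a MERGE statement. A minimal sketch, with hypothetical table and column names; real pipelines would template this per entity (for example from an Airflow task or a SnapLogic step) and submit it through a Snowflake connector:

```python
def merge_sql(target: str, staging: str, key: str, cols: list[str]) -> str:
    """Render a Snowflake-style MERGE for an incremental ELT load.

    Names are illustrative; the function only builds the SQL text,
    so the templating pattern is visible without a warehouse client.
    """
    set_clause = ", ".join(f"t.{c} = s.{c}" for c in cols)
    col_list = ", ".join([key] + cols)
    val_list = ", ".join(f"s.{c}" for c in [key] + cols)
    return (
        f"MERGE INTO {target} t USING {staging} s ON t.{key} = s.{key} "
        f"WHEN MATCHED THEN UPDATE SET {set_clause} "
        f"WHEN NOT MATCHED THEN INSERT ({col_list}) VALUES ({val_list})"
    )

sql = merge_sql("dw.positions", "stg.positions", "position_id", ["qty", "price"])
```

Generating the statement from metadata, rather than hand-writing it per table, is what lets one pipeline definition cover every warehouse entity.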
Certifications:
Category:
Information Technology

Most roles at Fidelity are Hybrid, requiring associates to work onsite every other week (all business days, M-F) in a Fidelity office. This does not apply to Remote or fully Onsite roles.
Please be advised that Fidelity’s business is governed by the provisions of the Securities Exchange Act of 1934, the Investment Advisers Act of 1940, the Investment Company Act of 1940, ERISA, numerous state laws governing securities, investment and retirement-related financial activities and the rules and regulations of numerous self-regulatory organizations, including FINRA, among others. Those laws and regulations may restrict Fidelity from hiring and/or associating with individuals with certain Criminal Histories.