Operations AI Automation Lead - Low Code - Vice President
Location
Secunderabad | India
Job description
As an AI/ML Ops Engineer - Vice President within JP Morgan Chase's Corporate & Investment Bank, you will be an integral part of our dynamic AI/ML team. Your role will involve solving intriguing business problems, specifically in the area of Client Onboarding and Documentation. We are seeking a team member with a robust engineering background, profound ML knowledge, and experience in cloud automation and DevOps. You will collaborate closely with our Data Scientists/ML Engineers to construct model pipelines, ranging from data processing to model development/testing, and ultimately, deployment in a production environment.
Job responsibilities
- Collaborate with operations, technology, transformation, and other key stakeholders to understand key business/operational challenges and propose ideas for process optimization.
- Analyze current business processes to understand business needs and determine how best to fulfill those needs via AI/ML solutions.
- Conduct roadshows to raise awareness of AI/ML capabilities and demonstrate solutions already deployed.
- Build and train production grade ML models on large-scale datasets to solve various business use cases for Client Onboarding and Documentation.
- Design a reusable model life cycle pipeline to make model development and testing a repeatable process.
- Deploy and maintain models in the production environment, providing version control for models and hyperparameters and an interface to troubleshoot production issues; retrain and redeploy models without interrupting business applications.
- Automate model metadata capture and model governance artifact creation processes.
- Bring automation to labeling, experiment tracking, model testing and deployment frameworks.
- Develop appropriate functional, non-functional, and performance testing frameworks for models.
- Provide a centralized dashboard to monitor our AI/ML cloud usage in both development and production environments to avoid unnecessary cost; monitor model performance in production and trigger the retraining process if needed.
- Build, test, and improve/maintain ETL model data pipelines.
Required qualifications, capabilities, and skills:
- Bachelor's or Master's degree in Computer Science, Information Technology, or an equivalent technical field.
- Minimum 10 years of working experience as a software developer or in DevOps or related software/cloud engineering domains; minimum 3 years of relevant experience in ML engineering.
- Fluency in at least one programming language, e.g., Python, Java, C, or C++.
- Fluency in query languages such as SQL, Cypher, HiveQL, etc.
- Knowledge of CI/CD is mandatory.
- Hands-on experience with Terraform for infrastructure development is mandatory.
- Hands-on experience with Jenkins or similar tools is mandatory.
- Experience with AWS services such as SageMaker, Step Functions, EKS, ECR, Lambda functions, KMS, IAM, Route 53, and ALB is mandatory for this role.
- Past experience working within the Big Data engineering ecosystem (e.g., Hadoop/data lake, Spark, ETL pipelines).
- Experience with Enterprise Cloud infrastructure (AWS, Azure, GCP) in a mission critical environment
- Hands-on experience with cloud-based technologies and tools, especially in deployment, monitoring, and operations, such as Datadog, Prometheus, Splunk, Apica, Dynatrace, Elasticsearch, and Grafana.
- Hands-on experience with some of the common cloud databases, such as Redshift, Postgres, Elasticsearch, and Neo4j/Neptune, is expected.
- Experience working with end-to-end ML pipeline orchestration using frameworks like MLflow, Kubeflow, TensorFlow, Apache Airflow, and Step Functions.
Preferred qualifications, capabilities, and skills
- Familiarity with NLP frameworks and libraries such as Hugging Face, gensim, Stanza, and fastText.
- Familiarity with web development frameworks such as Flask, React, and AngularJS.
- Familiarity with common text-processing techniques such as tokenization/lemmatization, part-of-speech tagging, chunking, segmentation, text similarity metrics, and regular expressions.
- Familiarity with AI/machine learning frameworks, statistical packages, and libraries such as TensorFlow, Amazon Machine Learning, Apache Spark, PyTorch, scikit-learn, etc.
- Familiarity with the SageMaker toolchain, i.e., Studio, Ground Truth, Clarify, Comprehend, Pipelines, Bedrock, etc.
- Familiarity with advanced MLOps toolchains like Comet, Weights & Biases, TensorBoard, etc.
- Familiarity with dashboarding tools like Qlik, Tableau, Grafana, Kibana etc.
- Familiarity with machine learning techniques and advanced analytics (e.g., regression, classification, clustering, time series, econometrics, causal inference, mathematical optimization) is a plus.
- Prior experience designing, developing, and maintaining machine learning solutions through their life cycle is highly advantageous.