
Solutions Architect


EXL INDIA


Location

Gurgaon | India


Job description

Total experience: 5+ years
Location: Gurugram, Bangalore
Must-have skills: Solution Architect, Data Architect
Role: Solution Architect

Qualifications:

- Prior experience building Big Data products/platforms using Spark and the Python data science stack
- 3+ years of prior experience with Continuous Integration/Continuous Deployment (CI/CD) tools and with at least one cloud provider (AWS/GCP/Azure), preferably several
- Solid background in database systems such as Snowflake, SQL Server, and Redshift
- Demonstrated leadership ability and willingness to take initiative
- Good to have: prior experience building ML/AI-based products/platforms

Good to Have but not necessary:

- Ability to design pipelines for analytics/ML/AI workflows and deploy models/solutions to production
- Familiarity/experience with SWE best practices for unit testing, code modularization, and QA
- Familiarity with the Databricks Lakehouse Platform: Delta Tables, Structured Streaming, Change Data Feed
- Experience working with open-source pipeline orchestration frameworks such as Airflow, Prefect, and Luigi
- Familiarity with application containerization tools such as Docker and Kubernetes
- Coursework/past projects/GitHub repos illustrating familiarity with data warehousing best practices

Job Responsibilities:

- Design, develop, and implement data integration processes using SQL, PySpark scripting, Azure Data Flow, or other cloud platforms and tools
- Set up end-to-end data pipelines covering importing, cleaning, transforming, aggregating, validating, and analyzing data (a sketch of such a pipeline follows this list)
- Design and support the CI/CD stack, infrastructure as code, monitoring, testing, and logging
- Identify data quality issues and inconsistencies, and design efficient systems to resolve them
- Analyze data volumes, data types, and content types to support the design of data architecture solutions
- Collaborate with business users, gather user requirements, and translate business rules and logic into data transformations
- Maintain the project codebase and work with developers to design and implement delivery pipelines
- Review PySpark scripts (ETL, analytics, ML) and unit tests, and integrate them into the codebase
- Develop and maintain design and API documentation; create functional/technical documentation, e.g. data integration architecture flows, source-to-target mappings, ETL specification documents, run logs, and test plans
- Lead and mentor junior developers to ensure best coding practices are followed
- Explore and evaluate cost- and process-efficient open-source alternatives to existing tools
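By way of illustration, here is a minimal PySpark sketch of the kind of end-to-end pipeline described above (import, clean, transform, aggregate, validate). The input path, column names, and quality check are hypothetical examples, not part of the role description.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("orders-pipeline").getOrCreate()

    # Import: read raw data (a local CSV here; S3/ADLS/GCS in practice).
    raw = spark.read.csv("data/orders.csv", header=True, inferSchema=True)

    # Clean: drop rows missing key fields and remove duplicate orders.
    clean = raw.dropna(subset=["order_id", "amount"]).dropDuplicates(["order_id"])

    # Transform: normalize types and derive a business flag.
    transformed = (clean
                   .withColumn("amount", F.col("amount").cast("double"))
                   .withColumn("is_large_order", F.col("amount") > 1000))

    # Aggregate: roll up to per-region order counts and revenue.
    summary = transformed.groupBy("region").agg(
        F.count("order_id").alias("orders"),
        F.sum("amount").alias("revenue"),
    )

    # Validate: a simple data quality gate before publishing downstream.
    if summary.filter(F.col("revenue") < 0).count() > 0:
        raise ValueError("Data quality check failed: negative revenue")

    # Write the validated output as Parquet for analysis.
    summary.write.mode("overwrite").parquet("data/region_summary")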
