
Solution Engineer


CNH Industrial


Location

Gurgaon | India


Job description

About CNHI

At CNH Industrial we're building the world's most intelligent precision farming platform and applications, providing data-based services to our customers and partners on top of data gathered from machines (IoT), sensors, and satellite/public sources. Our technology is revolutionizing agriculture and changing the lives of farmers and agribusinesses globally. We are a close-knit team of digital innovators committed to delivering cutting-edge solutions that help farmers feed the world. CNH Industrial is a global leader in the delivery of power, technology, and innovation to farmers, builders, and drivers around the world. Each of its brands, including Case IH, New Holland Agriculture, Case and New Holland Construction, is a major international force in its specific sector.

Our Technology

At CNHI we harness the power of the Internet-of-Things, cloud computing and predictive analytics to deliver actionable insights that maximize equipment utilization, increase yield, and reduce the operating costs of farming operations.

Our Culture

Our culture sets us apart from the competition and allows our team of developers, free-thinkers, and problem solvers to connect the dots before others even see them. We're looking for like-minded, motivated, and talented professionals who want to have a meaningful impact on global agriculture and help us shape the future of farming.

Job Description

The primary focus of this role is development work within the platform, on the Azure Data Lake environment and other related ETL technologies, satisfying project requirements while adhering to enterprise architecture standards.

Responsibilities:

- Assess and recommend architecture frameworks; design and implement high-performance solutions to support data and analytical products.
- Act as a subject matter expert across different digital data projects.
- Manage and scale data pipelines from internal and external data sources to support new product launches and drive data quality across data products.
- Build and own the automation and monitoring frameworks that capture metrics and operational KPIs for data pipeline quality and performance.
- Collaborate with internal clients (data science and product teams) to drive solutioning and POC discussions.

Required Skills and Experience:

- Strong technical background in data science, business intelligence, or data engineering, and in ETL best practices.
- Strong knowledge of the Databricks Lakehouse and Azure Data Lake concepts.
- Knowledge of the Databricks Delta concept, including Delta Live Tables (DLT).
- Strong hands-on experience in ELT pipeline development using Azure Data Factory and Databricks Auto Loader (see the sketch after this list).
- Strong knowledge of metadata-driven data pipelines, metadata management, and dynamic logic.
- In-depth knowledge of data storage solutions, including Azure Data Lake Storage (ADLS) and Azure Serverless SQL Pool.
- Experience with data transformation using Spark and SQL technologies.
- Solid understanding of design patterns and best practices for the cloud stack.
- Experience with code management and version control using Git or similar tools.
- Strong problem-solving and debugging skills in ETL workflows and data pipelines.
- Strong understanding of Azure Databricks features and capabilities.
- Knowledge of Azure DevOps and continuous integration and deployment (CI/CD) processes.
- Knowledge of data quality and data profiling techniques, with experience in data validation and data cleansing.
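
For context, the following is a minimal sketch of the kind of Auto Loader-based ingestion into a Delta table referred to above. It assumes a Databricks runtime (where the cloudFiles source is available); the storage paths and table name are hypothetical placeholders, not part of the actual CNH Industrial platform.

# A minimal sketch of Databricks Auto Loader ingestion into a Delta table.
# Assumes a Databricks runtime; the storage paths and table name below are
# hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

raw_stream = (
    spark.readStream.format("cloudFiles")                     # Auto Loader source
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation",
            "abfss://lake@exampleaccount.dfs.core.windows.net/_schemas/telemetry")
    .load("abfss://lake@exampleaccount.dfs.core.windows.net/raw/telemetry/")
)

(
    raw_stream.writeStream
    .option("checkpointLocation",
            "abfss://lake@exampleaccount.dfs.core.windows.net/_checkpoints/telemetry")
    .trigger(availableNow=True)            # incremental, batch-style run
    .toTable("bronze.machine_telemetry")   # managed Delta table
)

The same pattern extends to metadata-driven pipelines: the paths, formats, and target tables can be read from a configuration table rather than hard-coded.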

Hands-on Duties:

- Conducting technical sessions, design reviews, code reviews, and demos of pipelines and their functionality.
- Developing technical specifications for data pipelines and workflows and getting sign-off from the Architect.
- Developing, deploying, and maintaining workflows and data pipelines using Azure Databricks.
- Collaborating with data architects, data analysts, and other stakeholders to design and implement ETL solutions that meet business requirements.
- Writing efficient and high-performing ETL code using PySpark and SQL technologies (see the sketch after this list).
- Building and testing data pipelines using Azure Databricks.
- Ensuring the accuracy, completeness, and timeliness of data being processed and integrated.
- Troubleshooting and resolving issues related to data pipelines and notebooks.
- Performance benchmarking of data ingestion and data flow pipelines/notebooks, and ensuring consistency.
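
As an illustration of this kind of PySpark ETL and data-validation work, here is a minimal sketch of a bronze-to-silver transformation with a simple completeness check. The table and column names are hypothetical examples, not the platform's actual schema.

# A minimal sketch of a PySpark bronze-to-silver transformation with a basic
# completeness check; table and column names are hypothetical examples.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

bronze = spark.read.table("bronze.machine_telemetry")

silver = (
    bronze.dropDuplicates(["machine_id", "event_ts"])    # remove duplicate events
    .filter(F.col("event_ts").isNotNull())                # drop rows without a timestamp
    .withColumn("event_date", F.to_date("event_ts"))      # derived partition column
)

# Simple data-quality gate before publishing downstream.
missing_ids = silver.filter(F.col("machine_id").isNull()).count()
if missing_ids > 0:
    raise ValueError(f"{missing_ids} rows are missing machine_id; failing this run")

(
    silver.write.mode("overwrite")
    .partitionBy("event_date")
    .saveAsTable("silver.machine_telemetry")
)

In practice, checks like the one above would feed the monitoring and benchmarking duties listed here, with row counts and failure reasons captured as pipeline metrics rather than raised ad hoc.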

