
Azure Data Engineer - Databricks/Data Factory/SQL and PySpark


Hexaware Technologies


Location

Hyderabad | India


Job description

Experience: 4 to 12 years

Role Description: As an Azure Data Engineer, you will be a member of our Business Intelligence development team. You will help design, implement, and maintain data pipelines with complex data transformations, and you will support our data architect, data modeler, and BI analysts on data initiatives to ensure an optimal solution architecture. You will foster a positive working culture of continuous improvement and modernized delivery of data and AI solutions. Responsibilities include (an illustrative PySpark sketch follows this list):

- Create and maintain optimal, scalable data pipeline architecture.
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources.
- Design, model, implement, and operate large structured and unstructured datasets.
- Evaluate and implement efficient distributed storage and query techniques.
- Design and implement monitoring of the data services platform.
- Design and implement Data Lake and Data Warehouse solutions.
- Keep track of industry best practices and trends, and use that knowledge to take advantage of process and system improvement opportunities.
- Develop and maintain technical documentation and operational procedures.
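For orientation only, the following is a minimal sketch of the kind of pipeline transformation work described above, written in PySpark for a Databricks/ADLS Gen2 setup. The storage paths, dataset, and column names are hypothetical placeholders, not part of this role's actual systems.

    # Illustrative only: a minimal PySpark transformation. Paths, the "sales"
    # dataset, and all column names are hypothetical placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("sales-daily-aggregate").getOrCreate()

    # Ingest raw files landed in an ADLS Gen2 container (path is an assumption).
    raw = spark.read.json("abfss://raw@examplelake.dfs.core.windows.net/sales/")

    # Example transformation: cleanse, derive a date column, aggregate per customer/day.
    daily = (
        raw.filter(F.col("amount").isNotNull())
           .withColumn("order_date", F.to_date("order_timestamp"))
           .groupBy("customer_id", "order_date")
           .agg(F.sum("amount").alias("daily_amount"),
                F.count("*").alias("order_count"))
    )

    # Persist the curated output as a Delta table for downstream BI consumption
    # (assumes Delta Lake is available, as it is on Databricks).
    daily.write.format("delta").mode("overwrite").save(
        "abfss://curated@examplelake.dfs.core.windows.net/sales_daily/"
    )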

Qualifications:

- Bachelor's degree in Computer Science, Computer Engineering, Business Analytics, or a related field.
- Proven experience building, optimizing, and monitoring big data pipelines, architectures, and data sets.
- Production experience building metadata-driven ELT/ETL frameworks for data ingestion and processing (a brief illustrative sketch appears after the requirements below).
- Production experience with data lakes, ETL/ELT, and data warehousing using Azure ADLS Gen2, Databricks, Azure Data Factory, and Synapse Analytics.
- Working knowledge of stream processing pipelines and highly scalable big data stores using Spark, Azure Stream Analytics, Event Hubs, Azure Data Explorer, etc.
- Experience with relational SQL, NoSQL databases, and data lakes: Azure SQL Database, Azure Data Lake.
- Experience with data management and data governance tools, e.g., Azure Purview.
- Experience with programming and scripting languages: Python, Scala, Java, Bash, Azure CLI, PowerShell, etc.
- Strong understanding of CI/CD practices and technologies, specifically Azure DevOps.
- Experience with SAP HANA and/or SAP BODS is a strong plus.
- Substantial background in data extraction and transformation, developing data pipelines using MS SSIS and Azure Data Factory.
- Strong understanding of business processes, requirements analysis, and collaboration with business system owners to design and build data products.
- Previous experience working with different business systems, including ERP, HRMS, Supply Chain, or financial solutions, is required.

Required Experience (must have):

- 4+ years of hands-on ETL development experience
- Data pipeline design and development
- Data lake and data warehouse project development experience
- MS Azure Data Factory / SSIS
- Azure Data Lake, Azure Storage
- Data lakes, ETL/ELT, and data warehousing using Azure ADLS Gen2 and Databricks (mandatory)
- Azure Data Factory
- CI/CD practices and technologies, specifically Azure DevOps
- Scripting: Python/Scala
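As an illustration of the "metadata-driven ELT/ETL framework" requirement above, the following is a tiny PySpark sketch of a control-table-driven ingestion loop. The control table name, its columns, and the load modes are assumptions made for the example, not a description of this team's actual framework.

    # Illustrative only: a minimal metadata-driven ingestion loop.
    # The control table and its column names are hypothetical.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("metadata-driven-ingest").getOrCreate()

    # A control table lists each source to ingest and where to land it.
    control = spark.read.table("config.ingestion_control")

    for row in control.collect():
        # Each row is assumed to supply: source_path, file_format, target_table, load_mode.
        df = spark.read.format(row["file_format"]).load(row["source_path"])
        (df.write.format("delta")
           .mode(row["load_mode"])       # e.g. "append" or "overwrite"
           .saveAsTable(row["target_table"]))

In practice, adding a new source then means adding a row to the control table rather than writing a new pipeline, which is the main point of a metadata-driven design.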

