
JobNob


AWS Data Engineer


eInfochips (An Arrow Company)


Location

Ahmedabad | India


Job description

Role: AWS Data Engineer

Experience: 5+ years

Location: Pune/Ahmedabad/Indore

Company Overview: eInfochips, an Arrow company, is a global leader in product engineering and semiconductor design services. Renowned for our technological innovations, we've developed over 500 products, achieving 40M deployments across 140 countries. Our expertise spans digital transformation, IoT solutions, cloud platforms like AWS and Azure, and a range of engineering services in silicon, embedded systems, and software. At eInfochips, we're committed to excellence, fueled by our experienced team and a culture of innovation.

Job Description: We are looking for a talented and experienced Data Engineer with a strong background in AWS technologies such as Glue, S3, Lambda, DynamoDB, Athena, and Redshift. Primary responsibilities include developing and deploying data processing and transformation frameworks to support both real-time and batch processing requirements. The ideal candidate will have at least 5 years of working experience in the field.

Key Responsibilities:

- Collaborate with stakeholders to understand business requirements and data needs, and translate them into scalable and efficient data engineering solutions using AWS Data Services.
- Design, develop, and maintain data pipelines using AWS serverless technologies such as Glue, S3, Lambda, DynamoDB, Athena, and Redshift.
- Implement data modelling techniques to optimize data storage and retrieval processes.
- Develop and deploy data processing and transformation frameworks to support both real-time and batch processing requirements.
- Ensure data pipelines are scalable, reliable, and performant enough to handle large-scale data volumes.
- Implement data documentation and observability tools and practices to monitor and troubleshoot data pipeline performance issues.
- Adhere to privacy and security development best practices to ensure data integrity and compliance with regulatory requirements.
- Collaborate with the DevOps team to automate deployment processes using AWS CodePipeline.
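As context for the "real-time and batch" requirement above, here is a minimal, self-contained sketch in plain Python (no AWS dependencies; the record schema and field names are hypothetical) of a transformation step written so the same logic can serve both paths, e.g. per-event inside a Lambda handler or over a batch inside a Glue job:

```python
from datetime import datetime, timezone
from typing import Iterable

def transform_record(record: dict) -> dict:
    """Normalize one raw event (hypothetical schema) for downstream storage."""
    return {
        "id": str(record["id"]),
        "amount_cents": int(round(float(record.get("amount", 0)) * 100)),
        "processed_at": datetime.now(timezone.utc).isoformat(),
    }

def transform_batch(records: Iterable[dict]) -> list[dict]:
    """Apply the same transformation to a batch, skipping malformed rows."""
    out = []
    for rec in records:
        try:
            out.append(transform_record(rec))
        except (KeyError, TypeError, ValueError):
            # In production this would route to a dead-letter location
            # rather than silently dropping the row.
            continue
    return out
```

Keeping the per-record function pure and side-effect-free is what lets one codebase back both a streaming trigger and a scheduled batch run.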

Requirements:

- Bachelor's degree in Computer Science, Engineering, or a related field.
- 5+ years of experience in data modelling and building real-time and batch processing data pipelines for large-scale data volumes.
- Strong proficiency in the Python programming language.
- Extensive experience with AWS serverless or managed services such as S3, Glue, EMR, Lambda, DynamoDB, Athena, and Redshift.
- Solid understanding of privacy and security development best practices.
- Excellent problem-solving skills and the ability to troubleshoot complex data pipeline issues.
- Strong communication and collaboration skills, with the ability to work effectively in a cross-functional team environment.
- Experience with Agile development methodologies is a plus.

What We Offer:

- Competitive salary
- Work-life balance
- Professional development
- Innovative work environment
- Recognition and rewards

