Location
Pune | India
Job Description
Experience: 4-7 years
Role Description:
- Designing, implementing, and managing highly scalable, fault-tolerant, and secure data pipelines on AWS.
- Developing and maintaining data processing solutions using AWS technologies such as Amazon S3, AWS Glue, AWS Lambda, Amazon EMR, Amazon Redshift, and Amazon Athena.
- Writing complex ETL (Extract, Transform, Load) jobs to ingest data from various sources, cleanse and transform the data, and load it into the target systems (a minimal sketch of such a job follows this list).
- Building data models and data warehouses to support reporting, analytics, and business intelligence initiatives.
- Developing data visualization and reporting solutions using tools such as Tableau, Power BI, or Amazon QuickSight.
- Ensuring data security and compliance with industry standards such as GDPR, HIPAA, and PCI DSS.
- Troubleshooting and optimizing data pipelines and infrastructure to ensure high performance and reliability.
- Continuously monitoring and improving the data infrastructure to ensure scalability, availability, and cost-effectiveness.
- Staying up-to-date with the latest AWS technologies, trends, and best practices in data engineering.
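
For illustration only, a minimal PySpark sketch of the kind of batch ETL job described above: it reads raw CSV from S3, cleanses it, and writes partitioned Parquet for querying with tools like Amazon Athena. All bucket names, paths, and column names below are hypothetical placeholders, not part of this role's actual systems.

    # Minimal PySpark ETL sketch: extract raw CSV from S3, cleanse,
    # and load partitioned Parquet back to S3 for downstream analytics.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("orders-etl").getOrCreate()

    # Extract: ingest raw CSV files landed in the (hypothetical) source bucket.
    raw = (
        spark.read
        .option("header", "true")
        .option("inferSchema", "true")
        .csv("s3://example-raw-bucket/orders/")
    )

    # Transform: drop duplicates, discard rows missing the key,
    # normalize timestamps, and derive a partition column.
    clean = (
        raw.dropDuplicates(["order_id"])
        .filter(F.col("order_id").isNotNull())
        .withColumn("order_ts", F.to_timestamp("order_ts"))
        .withColumn("order_date", F.to_date("order_ts"))
    )

    # Load: write Parquet partitioned by date so query engines such as
    # Athena or Redshift Spectrum can prune partitions at query time.
    (
        clean.write
        .mode("overwrite")
        .partitionBy("order_date")
        .parquet("s3://example-curated-bucket/orders/")
    )

    spark.stop()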
Required Qualifications
- Degree in computer science, information technology, or a related field.
- Experience with technologies such as data lakes, SQL, PostgreSQL, Spark, and Kafka (see the streaming sketch after this list).
- Proficiency in programming languages such as Scala, Java, or Python, and experience with distributed computing and database systems.
- Strong problem-solving and analytical skills, as well as excellent communication and collaboration skills.
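
For illustration only, a minimal Spark Structured Streaming sketch of the Spark-plus-Kafka experience listed above: it consumes JSON events from a Kafka topic and appends them to a data-lake path as Parquet. The broker address, topic, schema, and paths are hypothetical, and the job assumes the spark-sql-kafka connector package is on the classpath.

    # Minimal Spark Structured Streaming sketch: consume JSON events
    # from Kafka and append them to a data-lake path as Parquet.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F
    from pyspark.sql.types import StructType, StructField, StringType, DoubleType

    spark = SparkSession.builder.appName("events-stream").getOrCreate()

    # Hypothetical event payload schema.
    event_schema = StructType([
        StructField("event_id", StringType()),
        StructField("user_id", StringType()),
        StructField("amount", DoubleType()),
    ])

    # Read the raw Kafka value bytes and parse the JSON payload.
    events = (
        spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")
        .option("subscribe", "events")
        .load()
        .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
        .select("e.*")
    )

    # Append micro-batches to the lake; the checkpoint location makes the
    # job restartable without reprocessing already-committed offsets.
    query = (
        events.writeStream
        .format("parquet")
        .option("path", "s3://example-lake-bucket/events/")
        .option("checkpointLocation", "s3://example-lake-bucket/_checkpoints/events/")
        .start()
    )
    query.awaitTermination()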
Skills: AWS, ETL, Tableau, Power BI