Location
Bangalore | India
Job description
At TE, you will unleash your potential working with people from diverse backgrounds and industries to create a safer, sustainable and more connected world.
Job Overview
- Design and develop data lakes; manage data flows that integrate information from various sources into a common data lake platform through an ETL tool
- Code and manage Delta Lake implementations on S3 using technologies like Apache Hudi or Databricks
- Triage, debug and fix technical issues related to Data Lakes
- Design and develop data warehouses for scale
- Design and evaluate data models (star, snowflake and flattened)
- Design data access patterns for OLTP and OLAP transactions
- Coordinate with Business and Technical teams through all the phases in the software development life cycle
- Participate in making major technical and architectural decisions
- Maintain and manage code repositories using version-control tools like Git
What your background should look like:
- 5+ Years of Experience on AWS Cloud building Data Lake architectures
- 3+ Years of Experience with AWS Data services like S3, Glue, Lake Formation, EMR, Kinesis, RDS, DMS and Redshift
- 3+ Years of Experience building Data Warehouses on Snowflake, Redshift, HANA, Teradata, Exasol etc.
- 3+ Years of working knowledge in Spark
- 3+ Years of Experience building Delta Lakes using technologies like Apache Hudi or Databricks
- 3+ Years of Experience working with ETL tools and technologies
- 3+ Years of Experience in any programming language (Python, R, Scala, Java)
- Experience working on Agile projects using Agile methodologies
Competencies
Values: Integrity, Accountability, Inclusion, Innovation, Teamwork