Talentgigs
Location: Bangalore, India
Job description
Job Responsibilities:
· Analyze business requirements and translate them into technical requirements.
· Analyze, design, develop, test, optimize, and deploy complex ETL data pipelines using open-source big data technologies.
· Apply a deep understanding of distributed ecosystems, including MapReduce, Hive, and Spark.
· Use expert-level knowledge of Hive, SQL, Spark, and shell scripting to solve complex analytical problems.
· Draw on data modeling experience to write complex queries that solve data problems.
· Work comfortably in UNIX/Linux-based environments.
· Apply hands-on experience with relational databases such as MySQL and PostgreSQL.
· Use version control tools such as Git, with hosted platforms like GitHub.
· Work independently and collaborate effectively with cross-functional teams on a case-by-case basis.
· Work with the product team, other data engineers, and DevOps to deliver features on time.
· Take part in code reviews and help the team to produce quality code.
· Work in a geographically distributed team setup.
· Identify opportunities for further enhancements and refinements to standards and processes.
· Fine-tune the existing application with new ideas and optimization opportunities to reduce latency.
Qualifications:
· Work Experience:
o 3+ years of experience in software engineering and data-related services.
· Academic:
o Bachelor's degree in a technical field such as computer science or computer engineering required; advanced degree preferred.
Preferred Qualifications:
· Hands-on experience with NiFi, Hive, SQL, Spark, Python, shell scripting, and data analytics.
· Awareness of CI/CD tools, namely Jenkins, Git, and XLR.