Senior Data Engineer (Databricks/Spark)
Location
Pune | India
Job description
You will work on:
We are looking for a self-driven Data Professional to be a key member of our Data Practice. We help many of our clients make sense of their large investments in data – be it building analytics solutions or machine learning applications. You will work on cutting-edge cloud-native technologies to crunch terabytes of data into meaningful insights.
Job Location: Pune/Remote
What you will do (Responsibilities):
- Collaborate with product management & engineering to build highly efficient data pipelines for large datasets.
- Perform data quality analysis on disparate data sources and define and implement data quality rules
- Design and develop data models by analyzing end-to-end data requirements
- Troubleshoot data loss, data inconsistency, and other data-related issues
- Deliver stories in a product development environment following a scaled agile delivery methodology
- Create high-quality project-level technical artifacts such as requirement specifications, high-level/detailed design documents, test plans, etc.
- Support technical POCs and presales activities for new opportunities
- Contribute to maintaining frameworks, guidelines, and standards for applicable project technologies, and strengthen engineering practices
- Support research and POCs in emerging technologies
- Mentor associates and team members
What you bring (Skills):
- 5+ years of experience in hands-on data engineering & medium to large-scale distributed applications
- Sound understanding of data warehousing, data lake, and lakehouse concepts
- Experience in big data processing technologies such as Databricks, Spark, Kafka etc.
- Extensive experience in Python or object-oriented programming languages such as Java or Scala
- Extensive experience in RDBMS such as MySQL, Oracle, SQL Server, etc., and/or cloud data warehouses such as Snowflake, Redshift, BigQuery, Azure Synapse, etc.
- Experience in Cloud-based services such as AWS, Microsoft Azure, or Google Cloud Platform
- Experience developing and deploying applications on Linux
- Experience with Scrum and/or other Agile development processes
- Ability to create WBS for project tasks and contribute to planning
- Strong analytical and problem-solving skills
- Team player with self-drive to work independently
- Strong communication and interpersonal skills
Great if you know (Skills):
- Exposure to containerization technologies such as Docker, Kubernetes, or Amazon ECS/EKS
- Exposure to NoSQL data stores such as Couchbase, Solr, etc.
- Exposure to shell scripting or orchestration tools like Airflow
- Ability to lead R&D and POC efforts
Advantage Cognologix:
- A higher degree of autonomy, startup culture & small teams
- Opportunities to become an expert in emerging technologies
- Remote working options for candidates with the right maturity level
- Competitive salary & family benefits
- Performance-based career advancement
About Cognologix:
Cognologix helps companies disrupt by reimagining their business models and innovating like a startup. We are at the forefront of digital disruption and take a business-first approach to help meet our clients' strategic goals.
We are a data-focused organization helping our clients deliver their next generation of products in the most efficient, modern, and cloud-native way.