
JobNob


Data Engineer


Qubrid


Location

Delhi | India


Job description

Please read carefully before applying.

Work from home. Compensation by experience: 1 year - 3.6 LPA; 2 years - 5 LPA; 3+ years - 6 LPA, plus an annual bonus. Please do not apply if this compensation is not acceptable.

You'll work with our founders and US-based team members, as well as our offshore cloud and AI team.

Must have development experience.

Responsibilities:

- Design, develop, and maintain scalable data pipelines and ETL processes to support data warehousing and analytics needs.
- Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and translate them into technical solutions.
- Implement robust and efficient data storage solutions, including data lakes, data warehouses, and databases.
- Extract, transform, and load data from various sources, ensuring data quality, consistency, and integrity.
- Optimize data pipelines for performance, scalability, and reliability, considering the volume, velocity, and variety of data.
- Monitor and troubleshoot data pipelines, identifying and resolving issues to ensure data availability and integrity.
- Stay up to date with the latest trends and technologies in data engineering, proposing innovative solutions to enhance our data infrastructure.
- Collaborate with cross-functional teams to integrate data pipelines with business applications and analytics platforms.

Requirements:

- Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).
- 1-3 years of professional experience in data engineering or related roles.
- Proficiency in programming languages such as Python, Java, or Scala.
- Strong understanding of database technologies, including relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, Cassandra).
- Experience with distributed data processing frameworks such as Apache Spark, Apache Flink, or Hadoop.
- Familiarity with cloud platforms and services (e.g., AWS, Azure, Google Cloud) for data storage, processing, and analytics.
- Solid understanding of data modeling, ETL principles, and data integration techniques.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills, with the ability to work effectively in a team environment.

Skills Required:

- Proficiency in programming languages such as Python, Java, or Scala.
- Experience with database technologies, including relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, Cassandra).
- Familiarity with distributed data processing frameworks such as Apache Spark, Apache Flink, or Hadoop.
- Knowledge of cloud platforms and services (e.g., AWS, Azure, Google Cloud) for data storage, processing, and analytics.
- Understanding of data modeling, ETL principles, and data integration techniques.

