Location
Hyderabad | India
Job description
Dew Software is seeking a skilled Databricks Engineer to join our team. As a Databricks Engineer at Dew Software, you will be responsible for designing, implementing, and managing data processing workflows and analytics using Databricks. You will work closely with data scientists and analysts to optimize data pipelines and ensure efficient, accurate data processing. If you have a passion for big data technologies and a strong background in data engineering, this is an exciting opportunity to contribute to cutting-edge data solutions.
Responsibilities
- Design, build, and maintain scalable data processing workflows using Databricks
- Collaborate with data scientists and analysts to optimize data pipelines for performance and accuracy
- Implement data governance and security measures to ensure data integrity and compliance
- Monitor and troubleshoot data pipelines to identify and resolve issues
- Create and maintain documentation of data workflows and processes
- Stay up-to-date with the latest trends and technologies in big data and analytics
Requirements
- Bachelor's degree in Computer Science, Engineering, or a related field
- 3+ years of experience in data engineering or a related role
- Expertise in data processing and analytics using Databricks
- Proficiency in SQL and programming languages such as Python or Scala
- Strong understanding of data modeling and schema design
- Experience with data governance and security
- Knowledge of big data technologies and frameworks, such as Apache Spark and Hadoop
- Excellent problem-solving and analytical skills
- Strong attention to detail and ability to work in a fast-paced environment
Preferred skills
- Azure Infrastructure Experience: Proficiency in managing Azure infrastructure components, including virtual machines, storage, and networking, to support AI model development and deployment.
- CI/CD Pipeline Experience: Experience with Continuous Integration/Continuous Deployment (CI/CD) pipelines, including the automation of model deployment processes.
- Containerization in the Cloud: Strong knowledge of containerization technologies in the cloud, such as Docker and Kubernetes, for efficient deployment and scaling of machine learning models.
- Machine Learning Expertise: Proficient in building and optimizing machine learning models, with a deep understanding of various ML algorithms and frameworks.
- Programming Skills: Proficiency in programming languages commonly used in machine learning, such as Python and libraries like TensorFlow and PyTorch.
- Data Management: Experience in data preprocessing, feature engineering, and data pipeline development for machine learning.
- Collaborative Team Player: Excellent communication skills and the ability to work collaboratively with cross-functional teams, including AI engineers and SREs.
- Documentation: Effective documentation skills to maintain clear and organized records of models, infrastructure configurations, and incident responses.