Location
Santa Cruz, CA | United States
Job description
In this role, you will design and deploy scalable, highly available, and fault-tolerant data pipelines using AWS data services (Glue, Lambda, Step Functions, Redshift).
Responsibilities
- Monitor and optimize the performance of cloud resources to ensure efficient utilization and cost-effectiveness.
- Implement and maintain security measures to protect data and systems within the AWS environment, including IAM policies, security groups, and encryption mechanisms.
- Migrate application data from legacy databases to cloud-based solutions (Redshift, DynamoDB, etc.) for high availability at low cost.
- Develop application programs using Big Data technologies such as Apache Hadoop and Apache Spark, together with appropriate cloud services such as AWS.
- Build data pipelines using ETL (Extract-Transform-Load) processes.
- Implement backup, disaster recovery, and business continuity strategies for cloud-based applications and data.
- Analyze business and functional requirements, which involves reviewing existing system configurations and operating methodologies as well as understanding evolving business needs.
- Analyze requirements/user stories in business meetings, assess the impact of requirements on different platforms/applications, and convert business requirements into technical requirements.
- Participate in design reviews to provide input on functional requirements, product designs, schedules, and potential problems.
- Understand the current application infrastructure and suggest cloud-based solutions that reduce operational costs, require minimal maintenance, and provide high availability with improved security.
- Perform unit testing on modified software to ensure that new functionality works as expected while existing functionality continues to work as before.
- Coordinate with release management and other supporting teams to deploy changes in the production environment.
Minimum Qualifications
- Bachelor's degree in computer science, engineering, or a related field (or equivalent work experience).
- Strong experience in designing, implementing, and managing AWS data services.
- Working experience implementing data lakes using services such as Glue, Lambda, Step Functions, and Redshift.
- Experience with Databricks is an added advantage.
- Strong experience in Python and SQL.
- Strong understanding of security principles and best practices for cloud-based environments.
- Experience with monitoring tools and implementing proactive measures to ensure system availability and performance.
- Excellent problem-solving skills and ability to troubleshoot complex issues in a distributed, cloud-based environment.
- Strong communication and collaboration skills to work effectively with cross-functional teams.
Preferred Qualifications/skills
- Master's degree in Computer Science, Electronics, or Electrical Engineering.
- AWS Data Engineering and Cloud certifications; Databricks certifications.
- Experience with multiple data integration technologies and cloud platforms.
- Knowledge of Change and Incident Management processes.