
JobNob

Your Career. Our Passion.

AWS Data Engineer


PETADATA


Location

Reston, VA | United States


Job description

Job Title: AWS Data Engineer

Location: Reston, VA

Experience: 10+ years

Work Type: Full-time (Hybrid)

PETADATA is hiring multiple AWS Data Engineers for one of our clients.

Roles & Responsibilities:

  • Drive end-to-end data pipeline development for on-time delivery of high-quality data solutions that conform to requirements and comply with all applicable standards.

  • Design and implement solutions that are automated, scalable, and sustainable while minimizing defects and technical debt.

  • Evaluate and analyze the current system architecture to improve uptime and responsiveness; provide recommendations and carry out activities including database tuning, data structure optimization, and server resource scaling.

  • Proactively identify data acquisition opportunities to enrich our data environment.

  • Actively participate in code reviews and communicate issues and risks to stakeholders effectively.

  • Be involved in all areas of database design, performance, and reliability.

  • Investigate and troubleshoot complex analytic applications and stability issues.

  • Ensure databases are operational and provide valid, relevant data.

  • Perform unit and regression testing to ensure defect-free builds and releases.

  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, acquisition, and enrichment, and re-designing infrastructure for greater scalability.

  • Apply database performance tuning methods to ensure uptime and data refresh rate SLAs are met.

  • Take part in data engineering team code reviews and offer recommendations on data architecture.

  • Create and maintain data structures and data models that support data at all stages of the pipeline development lifecycle.

Required Skills:

  • Expert in Python, Spark, and Scala, as well as database technologies and source code control; able to perform data validation and ensure data delivery, quality, and integrity.

  • Well-versed in data marts and data warehousing.

  • Familiarity with building highly available, scalable production systems using Elastic Load Balancing and Auto Scaling across multiple regions and Availability Zones.

  • Experience with warehousing technologies: SQL, Amazon Redshift, Amazon EMR, etc.

  • Experience with RDBMSs and able to work with databases such as PostgreSQL and Amazon Aurora.

  • Able to work with serverless services (AWS Lambda, Step Functions) and multidimensional OLAP.

  • Experience with AWS cloud services (SNS, SQS, Redshift, Lambda).

  • A constructive communicator, capable of discussing difficult issues effectively with team members and customers.

  • A Bachelor's or Master's degree in Computer Science or a related field is required.

Note:

Candidates are required to attend phone, video, or in-person interviews; after selection, the candidate must pass background checks on education and experience.

Please email your resume to: manik

After carefully reviewing your experience and skills, one of our HR team members will contact you about next steps.


Job tags

Full time

