Synechron Technologies Private Limited
Location: Pune, India
Job description
Job Title: Data Architect
Experience: 15-20 years
Location: Pune / Bangalore / Chennai / Hyderabad / Mumbai
Notice Period: maximum 30 days
Skills: Big Data / Hadoop + Data Solutions + Architect (min. 4 years) + Python + Spark + Hive
Job Description:
- Design and implement a data strategy that aligns with business processes and approved enterprise architecture patterns. This includes high-level design (HLD), low-level design (LLD), data model designs, database development standards, and the implementation and management of data lakes and data analytics solutions.
- Manage end-to-end data architecture, design the technical architecture, devise a DevOps strategy, and develop data orchestration frameworks for the proposed solution.
- Coordinate and collaborate with cross-functional teams, stakeholders, and vendors for the smooth functioning of the enterprise data lake.
- Plan and implement big data solutions using technologies such as Hadoop and Azure Cloud.
- Liaise with the architecture review board to present data architecture and seek approval.
- Define and manage the flow of data and dissemination of information within the organization.
- Integrate technical functionality, ensuring data accessibility, accuracy, and security.
- Create solution-options documents weighing pros and cons, along with design documents, and present them to the technical governance board.
- Conduct capacity planning for the platform.
- Utilize hands-on experience in Python, Spark, Hive, SQL, Azure, and other related technologies.
- Apply experience in building solutions on Hortonworks or Cloudera platforms.
Good-to-Have / Desirable Skills:
- A minimum of 15 years of total experience in data solutions and architecture, with at least 4 years in a similar role.
- Proven experience taking at least three data solutions into production as an architect.
- Hands-on experience in solution design and data-ingestion patterns for complex data analytics solutions.
- Proficiency in Python, Spark, Hive, SQL, Azure, Hortonworks, and Cloudera.
- Strong communication and collaboration skills.
- Excellent problem-solving skills and attention to detail.
Competencies for the Role:
- Technical Proficiency: The candidate should have in-depth knowledge of and hands-on experience with Python, Scala, SQL, and Unix scripting, and should be proficient at coding, debugging, and testing in these languages.
- Big Data Expertise: The candidate should have a strong understanding of Big Data technologies such as Hadoop, Hive, HBase, and Spark, and should be capable of managing and manipulating large datasets using them.