

Data Architect - Google Cloud Platform


Quess IT Staffing


Location

Bangalore | India


Job description

We have openings for a GCP Data Architect with one of our MNC clients in Bangalore and Gurgaon.

Payroll : Quess
Experience : 8 to 12 years
Relevant Experience : Minimum 5 years
Notice Period : Immediate joiner or currently serving
Location : Bangalore and Gurgaon only
Work Mode : Hybrid
Interview Mode : Virtual (Skype)
Education Qualification : B.Tech / B.E.

Must-Have Skills :
- GCP: BigQuery, Cloud Storage, Bigtable, Dataflow, Dataproc, Cloud Composer, Cloud PubSub, Data Fusion, etc.
- Expertise in one programming language: Scala, Java, or Python.
- Hadoop, Hive, HDFS, HBase, Spark.

Responsibilities :
- Lead GCP solutioning and scoping.
- Create detailed target-state technical architecture and design blueprints.
- Conduct full technical discovery, identifying pain points, business and technical requirements, and "as is" and "to be" scenarios.
- Lead data migration and data pipeline migration to GCP, establishing best practices, guidelines, and the right set of architectures.

Job Overview :
We are looking for a highly skilled and experienced GCP Data Architect to join our team. The ideal candidate will have a strong background in Google Cloud Platform (GCP) services, including but not limited to BigQuery, Cloud Storage, Bigtable, Dataflow, Dataproc, Cloud Composer, Cloud PubSub, and Data Fusion. In addition to GCP expertise, proficiency in at least one programming language (Scala, Java, or Python) and extensive experience with Hadoop ecosystem technologies such as Hive, HDFS, HBase, and Spark are required. The GCP Data Architect will lead the design and implementation of scalable, efficient data solutions that meet our business requirements and drive insights and analytics.

Key Responsibilities :

GCP Solution Design and Implementation :
- Architect and implement scalable data processing and analytics solutions within the Google Cloud Platform.
- Leverage GCP services such as BigQuery, Dataflow, and Cloud PubSub to meet data ingestion, processing, storage, and analysis needs.

Data Modeling and Warehousing :
- Design data models and warehouse solutions that support both operational and analytical use cases.
- Ensure solutions are optimized for performance and cost.

Programming and Automation :
- Develop and maintain scalable and efficient data pipelines using preferred programming languages (Scala, Java, Python) and GCP services.
- Automate data workflows using Cloud Composer and other automation tools.

Integration of Hadoop Ecosystem Technologies :
- Leverage Hadoop ecosystem technologies (Hadoop, Hive, HDFS, HBase, Spark) for processing large datasets.
- Integrate these technologies with GCP solutions to enhance data processing and analytics capabilities.

Performance Tuning and Optimization :
- Monitor, tune, and optimize data processing and storage solutions to improve efficiency and reduce costs.
- Apply best practices in data modeling, ETL development, and query optimization.

Security and Compliance :
- Ensure data solutions comply with data security and privacy policies and regulations.
- Implement data governance and security measures within GCP and Hadoop environments.

Stakeholder Collaboration :
- Work closely with data scientists, business analysts, and other stakeholders to understand data needs and deliver solutions that enable data-driven decision-making.

Best Practices and Innovation :
- Stay up to date with the latest GCP and big data technologies; advocate for and implement best practices in data architecture, data management, and data security.
- Explore innovative data solutions to continuously improve data analytics capabilities.

Requirements :
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- 8 to 12 years of experience in data architecture and data engineering, with a strong focus on GCP and big data technologies.
- Deep expertise in GCP data and analytics services (BigQuery, Dataflow, Dataproc, Cloud Storage, etc.).
- Proficiency in programming with Scala, Java, or Python.
- Extensive experience with the Hadoop ecosystem (Hadoop, Hive, HDFS, HBase, Spark).
- Solid understanding of data modeling, ETL processes, and data warehousing principles.
- Experience with data security and governance in cloud and big data environments.
- Strong analytical, problem-solving, and communication skills.

(ref:hirist.tech)


