JobNob

Senior Data Engineer


Tiger Analytics


Location

Chennai | India


Job description

Role - Senior Data Engineer

Location - Chennai | Bangalore | Hyderabad

Notice Period - Immediate to 30 Days

Mandatory Skills Required - Python, SQL, Spark, PySpark, AWS Cloud

Tiger Analytics is a global AI and analytics consulting firm. With data and technology at the core of our solutions, our team of 3,900+ is solving problems that impact the lives of millions globally. Our culture is modeled around expertise and respect, with a team-first mindset. We are headquartered in Silicon Valley, with delivery centers across the globe and offices in multiple cities across India, the U.S., the UK, Canada, and Singapore, as well as a substantial remote global workforce.

We're Great Place to Work-Certified™. Working at Tiger Analytics, you'll be at the heart of an AI revolution. You'll work with teams that push the boundaries of what is possible and build solutions that energize and inspire.

About the role:

We are looking for a Senior Data Engineer to be based out of our Chennai, Bangalore, or Hyderabad office.

● This role involves a combination of hands-on contribution, customer engagement, and technical team management.

● As a Senior Data Engineer, you will design and build solutions for near real-time stream processing as well as batch processing on the Big Data platform.

● Set up and run Hadoop development frameworks.

● Collaborate with a team of business domain experts, data scientists, and application developers to identify relevant data for analysis and develop the Big Data solution.

● Explore and learn new technologies for creative business problem-solving.

Required Experience, Skills & Competencies:

● Ability to develop and manage scalable Hadoop cluster environments.

● Ability to design solutions for Big Data applications.

● Experience in Big Data technologies like HDFS, Hadoop, Hive, YARN, Pig, HBase, Sqoop, Flume, etc.

● Working experience on Big Data services in any cloud-based environment.

● Experience in Spark, PySpark, Python or Scala, Kafka, Akka, core or advanced Java, and Databricks.

● Knowledge of how to create and debug Hadoop and Spark jobs.

● Experience in NoSQL technologies like HBase, Cassandra, or MongoDB, and in Hadoop distributions such as Cloudera or Hortonworks.

● Familiar with data warehousing concepts, distributed systems, data pipelines, and ETL.

● Familiar with data visualization tools like Tableau.

● Good communication and interpersonal skills.

● 6+ years of professional experience, with 3+ years of Big Data project experience.

● B.Tech/B.E. from a reputed institute preferred.

