
JobNob

Your Career. Our Passion.

Hadoop


Keyseries


Location

Jacksonville, FL | United States


Job description

Job Title : Hadoop

Location : Jacksonville, Florida

Job Type : Full Time

Reference Code : Key 501

Description

You will be working with cutting-edge Big Data technology, assisting the team in building a platform to acquire, process, and distribute massive amounts of near-real-time VoIP metrics. You will be a key team member responsible for the construction of the overall framework and tooling.

Responsibilities:
Develop solutions to Big Data problems utilizing common tools found in the Hadoop ecosystem.
Develop solutions for real-time and offline event collection from various systems.
Develop, maintain, and perform analysis within a real-time architecture supporting large amounts of data from various sources.
Analyze massive amounts of data and help drive prototype ideas for new tools and products.
Design, build and support APIs and services that are exposed to other internal teams.
Employ rigorous continuous delivery practices managed under an agile software development approach.
Ensure a quality transition to production and solid production operation of the software.
Technologies:
Hadoop
Flume
Kafka
Storm
MemSQL
Java
Maven
Git
Jenkins
Splunk/Hunk
Apache Pig
Unix/Linux
Additional Skills:
5 years of overall programming experience, including 3+ years of experience with big data technologies
Bachelor's degree in Engineering or a related discipline
Experience in software development of large-scale distributed systems – including proven track record of delivering backend systems that participate in a complex ecosystem.
Knowledge in Big Data related technologies and open source frameworks preferred.
Extensive experience programming in Java as well as experience in code optimization and high performance computing.
Experience with Java servlet containers or application servers such as JBoss, Tomcat, GlassFish, WebLogic, or Jetty.
Good current knowledge of Unix/Linux environments
Test-driven development/test automation, continuous integration, and deployment automation
Enjoy working with data – data analysis, data quality, reporting, and visualization
Good communicator, able to analyze complex issues and technologies and articulate them clearly and engagingly.
Great design and problem solving skills, with a strong bias for architecting at scale.
Adaptable, proactive and willing to take ownership.
Keen attention to detail and high level of commitment.
Comfortable working in a fast-paced agile environment. Requirements change quickly and our team needs to constantly adapt to moving targets.
Candidate should be passionate about technology and the Big Data domain in particular.
Experience working in an Agile environment - specifically with Scrum.

Nice to haves:
Collection, transformation and enrichment frameworks such as Flume.
Messaging middleware or distributed queuing technologies such as Kafka
MapReduce experience in Hadoop utilizing Pig, Hive, or other query/scripting technology.
Distributed (HBase or Cassandra or equivalent) or NoSQL (e.g. Mongo) database experience.
Expertise in data warehousing and business intelligence.
Scripting tools such as Python.
Git, Maven, Jenkins, Sonar, Nexus, Puppet.
Understanding of and/or experience with serialization frameworks such as Thrift, Avro, Google Protocol Buffers, and Kryo preferred.
Visualization libraries and reporting tools such as Splunk (Hunk), Tableau, and d3.js.
Good understanding of any of: advanced mathematics, statistics, or probability.


Job tags

Full time

