
JobNob

Your Career. Our Passion.

Data Engineer


Coders Brain Technology Pvt. Ltd.


Location

Bangalore | India


Job description

Introducing Tookitaki

A leading RegTech company, Tookitaki has developed advanced machine learning-powered solutions in risk and compliance to help the banking and financial services (BFS) industry achieve sustainability in their compliance programs. Our offerings are deployed in production across globally reputed financial institutions.

Incorporated in November 2014 in Singapore, the company is led by a core team with a cumulative 150 years of experience in financial crime, AI and big data analytics. Tookitaki's client portfolio spans the Asia-Pacific, North America and Europe markets, including a Japanese multinational investment bank, a large European bank, a leading Southeast Asian bank and other globally reputed financial institutions.

Backed by institutional investors such as Jungle Ventures, Viola Fintech, Illuminate Financial and Enterprise Singapore (a subsidiary of the Singapore Government), the company's accolades include:

We won first place in the MAS FinTech Awards (Singapore SME) in the regulatory compliance space from the Monetary Authority of Singapore for our approach to making AML and reconciliation workflows scalable and highly auditable (beyond ML-based black-box approaches).

We were accredited by IMDA as an Innovative Tech Company in 2017 and 2019.

We are among the 56 growth-stage companies from around the world recognised by the World Economic Forum as Technology Pioneers 2019.

We won the Asian Private Banker Technology Awards 2019 for the Best AML/CTF Solution.

We bagged the AI Award in the banking category at Singapore Business Review's inaugural Technology Excellence Awards 2019.

We won Most Promising Innovation at the SG:D Techblazer Awards 2019, jointly organised by Singapore Digital (SG:D), IMDA and SGTECH.

We have participated and won in various client innovation engagement programs: Second Runner-up of the UBS Future of Finance Challenge, First Runner-up of the FinTech Challenge Vietnam 2019, alumni of the ING FinTech Village Cohort 2019, and many more.

In regulatory compliance, we focus on anti-money laundering and reconciliation; our products, the Anti-Money Laundering Suite (AMLS) and the Reconciliation Suite (RS), cater to these areas respectively.

Today, regulatory compliance processes have become more complex and fluid, increasing the chances for rule-based models to fail. Banks need to move beyond static rule-based systems and adopt a new approach that improves efficiency and effectiveness at optimal cost, ensuring sustainable compliance programs across the BFS industry. Tookitaki bridges this gap with its innovative software products, AMLS and RS.

Job Title: SSE DE

Tookitaki is looking for a Data Engineer who is familiar with the Hadoop platform and able to design, implement and maintain optimal data/machine learning (ML) pipelines on the platform. The following are the main responsibilities of the role:

Responsibilities

Designing and implementing fine-tuned, production-ready data/ML pipelines on the Hadoop platform.

Driving optimization, testing and tooling to improve quality.

Reviewing and approving high-level & detailed designs to ensure that the solution delivers on the business needs and aligns with the data & analytics architecture principles and roadmap.

Understanding business requirements and solution designs to develop and implement solutions that adhere to big data architectural guidelines and address business requirements.

Following a proper SDLC (code review, sprint process).

Identifying, designing and implementing internal process improvements: automating manual processes, optimizing data delivery, etc.

Building robust and scalable data infrastructure (both batch processing and real-time) to support the needs of internal and external users.

Understanding various data security standards and using secure data security tools to apply and adhere to the required data controls for user access on the Hadoop platform.

Supporting and contributing to development guidelines and standards for data ingestion.

Working with the data science and business analytics teams to assist with data ingestion and data-related technical issues.

Designing and documenting the development & deployment flow.

Requirements

Experience in developing REST API services using one of the Scala frameworks.

Ability to troubleshoot and optimize complex queries on the Spark platform.

Expertise in building and optimizing big data/ML pipelines, architectures and data sets.

Knowledge of modelling unstructured data into structured data designs.

Experience with big data access and storage techniques.

Experience in cost estimation based on design and development.

Excellent debugging skills for the technical stack mentioned above, including analyzing server logs and application logs.

Highly organized, self-motivated and proactive, with the ability to propose the best design solutions.

Good time management and multitasking skills to meet deadlines, working independently and as part of a team.

Ability to analyse and understand complex problems.

Ability to explain technical information in business terms.

Ability to communicate clearly and effectively, both verbally and in writing.

Strong in user requirements gathering, maintenance and support.

Excellent understanding of Agile methodology.

Good experience in data architecture, data modelling and data security.

Experience (Must have):

a) Scala: Minimum 2 years of experience

b) Spark: Minimum 2 years of experience

c) Hadoop: Minimum 2 years of experience (security, Spark on YARN, architectural knowledge)

d) HBase: Minimum 2 years of experience

e) Hive: Minimum 2 years of experience

f) RDBMS (MySQL / Postgres / MariaDB): Minimum 2 years of experience

g) CI/CD: Minimum 1 year of experience

Experience (Good to have):

a) Kafka

b) Spark Streaming

c) Apache Phoenix

d) Caching layer (Memcached / Redis)

e) Spark ML

f) Functional programming (Scala Cats / Scalaz)

Qualifications

Bachelor's degree in IT, Computer Science, Software Engineering, Business Analytics or equivalent, with at least 2 years of experience in big data systems such as Hadoop, as well as cloud-based solutions.

Job Perks

Attractive variable compensation package

Flexible working hours: everything is results-oriented

Opportunity to work with an award-winning organization in the hottest space in tech: artificial intelligence and advanced machine learning

Job tags

scala, big data, hadoop, elasticsearch, hive, spark, kafka
