Location
Noida | India
Job description
- Lead a team of data scientists and ML engineers in program execution
- Teach, lead, and counsel colleagues on new techniques or solutions
- Collaborate with data and software engineers to enable deployment of data science solutions and technologies that will scale across the company's ecosystem
- Provide support to data science team with high-level expertise in an open-source language (e.g., R, Python, Julia, etc.)
- Identify and evaluate emerging, cutting-edge, open-source data science/machine learning libraries and big data platforms (e.g., XGBoost, H2O.ai, Spark, Hadoop)
- Author papers that publish problem-solving approaches using ML models, NLP, and computer vision techniques
- Responsible for building analytic systems and predictive models as well as experimenting with new models and techniques
- Utilize data visualization tools to deliver insights to stakeholders.
- Prepare, maintain, and present clear, coherent communication, both verbal and written, to understand data needs and report results
- Work with the Navikenz Solution Architect to build and operationalize the defined solution so that it is scalable, maintainable, and production-ready
- Conduct research and develop prototypes and proof of concepts
Qualifications/Experience
The ideal candidate will have a Bachelor's, Master's, or PhD degree in Statistics, Mathematics, Computer Science, or another quantitative field, with at least 5 years of professional experience manipulating data sets and building statistical models (overall 10 years of experience), and familiarity with the following software, tools, and techniques:
- Combined knowledge of computer science, modeling, statistics, analytics, and mathematics applied to problem solving
- Knowledge of and experience with statistical and machine learning algorithms and techniques: regression, classification, random forests, boosting, decision trees, text mining, social network analysis, neural networks, etc.
- Coding knowledge and experience with several languages and technologies: Python/R/Julia, Java/JavaScript, REST, etc.
- Experience with distributed data/computing tools: MapReduce, Hadoop, Hive, Spark, Databricks, H2O.ai, etc.
- Experience visualizing/presenting data for stakeholders using: Matplotlib, ggplot, D3, etc.
- Experience with cloud-based platforms/products: AWS, Azure, GCP, Kubeflow, AutoML, etc.