
JobNob


Engineer


ARK Infotech Spectrum


Location

New York, NY | United States


Job description

Responsibilities:

- Design and develop machine learning systems, conduct experiments, and stay current with the latest developments in the field.

- Create data models, perform statistical analysis, and train and retrain systems to optimize performance.

- Build efficient self-learning applications and contribute to advancements in artificial intelligence.

- Run machine learning tests and experiments

- Implement appropriate ML algorithms

- Use GPUs for training, distributed computing with PySpark, and parallel computation libraries in Python (a brief sketch follows this list)

- Provide an understanding of how components and processes work together and communicate with each other via library calls, REST APIs, queueing/messaging systems, and database queries

- Provide system designs that avoid bottlenecks so that algorithms scale well with increasing volumes of data
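
For illustration only, a minimal PySpark sketch of the kind of distributed, parallel aggregation described above; the input path and column names are hypothetical placeholders:

```python
# Minimal PySpark sketch: aggregate event data in parallel across a cluster.
# The input path and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("feature-aggregation").getOrCreate()

events = spark.read.parquet("events.parquet")  # hypothetical input

# Spark distributes this aggregation across partitions/executors.
daily_stats = (
    events
    .groupBy("user_id", F.to_date("ts").alias("day"))
    .agg(
        F.count("*").alias("n_events"),
        F.avg("latency_ms").alias("avg_latency_ms"),
    )
)

daily_stats.write.mode("overwrite").parquet("daily_stats.parquet")
spark.stop()
```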

Basic Qualifications: 5+ years of experience in the following:

- PyTorch, NLTK, SciPy, scikit-learn, NumPy, OpenCV, or equivalent for image preprocessing

- SQL/NoSQL databases and queries

- One or more ML toolkits or Python frameworks

- Deep Learning concepts

- Apply standard implementations of machine learning algorithms effectively by choosing a suitable model such as a decision tree, k-NN, neural network, or an ensemble of multiple models (see the sketch below)
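
As a rough illustration of the last point, a minimal scikit-learn sketch that compares a decision tree, k-NN, a small neural network, and a voting ensemble by cross-validation; the Iris toy dataset stands in for real data:

```python
# Compare standard model families with cross-validation and report accuracy.
# The Iris toy dataset is used purely as a stand-in for illustration.
from sklearn.datasets import load_iris
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

candidates = {
    "decision_tree": DecisionTreeClassifier(max_depth=4, random_state=0),
    "knn": KNeighborsClassifier(n_neighbors=5),
    "neural_net": MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
}
# Soft-voting ensemble built from the individual candidates above.
candidates["ensemble"] = VotingClassifier(
    estimators=list(candidates.items()), voting="soft"
)

for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```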

Nice to have:

- Understanding of probability and statistics, and of machine learning concepts such as precision, recall, optimization, hyperparameter tuning, overfitting, and interpretability

- Coding best practices, OOD/OOP, modular design, SOA, and systems architecture

Technical Skills:

- Python, PySpark, or R for coding

- Kubernetes and Docker for deployment

- AWS SageMaker or EC2 instances for cloud

- MySQL, Oracle, MongoDB, or Redshift for databases

- Cloudera Distributed Platform for computing and deployment

- Deep learning/neural network packages such as PyTorch and TensorFlow in Python, using GPUs for training, distributed computing with PySpark, and parallel computation libraries in Python (see the sketch below)
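
For illustration, a minimal PyTorch sketch of GPU-backed training; the model, shapes, and synthetic data are made up for the example, and it falls back to CPU when no GPU is available:

```python
# Minimal GPU training loop in PyTorch; falls back to CPU if CUDA is absent.
# The model architecture, tensor shapes, and synthetic data are illustrative only.
import torch
from torch import nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2)).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Synthetic batch: 256 samples, 20 features, 2 classes.
X = torch.randn(256, 20, device=device)
y = torch.randint(0, 2, (256,), device=device)

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```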



Job tags

Contract work


