JobNob

Your Career. Our Passion.

AI Tech Cofounder


Vaanee


Location

Delhi | India


Job description

Company Description

Vaanee AI is a complete generative voice AI toolkit offering an all-in-one video platform where you can turn your ideas into action. Our AI voice engine creates realistic, human-like voiceovers in seconds, and our highly expressive cross-language voice cloning product is currently working on generating output for low-resource Indian languages. With our state-of-the-art vocoder and a model that renders human intonation and inflection with unrivaled fidelity, we deliver realistic output that preserves background and accent. We are committed to unifying the diversity of global languages and offer voices of every kind.

Role Description

This is a full-time remote role for an MLOps Engineer at Vaanee AI. The MLOps Engineer will maintain and develop the machine learning pipeline and ensure that the production deployment environment meets industry standards. They will also work with data scientists and engineers to create scalable solutions for model deployment, and collaborate with the development team to enrich production features.

Competencies

• Technical depth: Solid working knowledge of web technologies for real-time, just-in-time processing. Basic familiarity with audio/video formats.

• Explore new technologies: Research and stay up to date with the latest machine learning technologies, frameworks and inference engines.

• Experience with performance optimization of AI workloads on GPU, NPU, and CPU is a plus.

• Invent & Innovate: Develop short- and long-term technologies, algorithms, and software tools that will help make Vaanee AI a world leader in enhancing the sight and sound associated with digital content consumption. Then influence and collaborate with business partners to put the technology into production.

• Work with a sense of urgency: Respond quickly to changing trends and new technologies, and create new approaches to capitalize on them. Take appropriate risks to stay ahead of the competition and the market.

• Collaborate: Collaborate with and influence peers in developing industry-leading technologies. Work with external trendsetters and technology drivers in academia and in partner enterprises.

• Experience with optimizing ML model inference time performance in constrained deployment environments is a plus.

Key Responsibilities:

• Optimize models for deployment across a wide range of target architectures, including desktop, cloud, browsers, and mobile devices

• Develop and implement algorithms and software for efficient real-time and offline inference

• Monitor and evaluate the performance of models in production and optimize them for accuracy, speed, and compute resource efficiency

• Design and implement custom tooling and strategies for the development and deployment of optimized deep learning models

• Research and implement appropriate techniques to optimize model deployment for the product

• Work closely with cross-functional teams, including product managers, engineers, and researchers, to understand their workflows, and design and implement optimized model deployment techniques
