
Research Scientist - Speech Synthesis / Text-to-Speech


CyberCoders


Location

Glendale, CA | United States


Job description

We are an energetic, award-winning film technology company pioneering the use of generative AI models in filmmaking. We take on some of the most challenging problems in the industry, making it possible to visually alter on-screen dialogue without reshoots or voiceovers. If you're an experienced researcher in audio synthesis and human speech, please apply today!

*This role is based in Los Angeles. We offer relocation assistance.

What You Will Be Doing

As a Research Scientist on the science team, you will work with and lead a close-knit, passionate group of world-class individuals to tackle some of the most challenging problems in generative speech synthesis. Our work in automated visual translation is just the beginning; we're developing many more exciting products based on the application of our proprietary, cornerstone research.

What You Need for this Position

Minimum Qualifications
-Ph.D. or postdoctoral research experience in audio synthesis, speech processing, or a related field.
-3+ years of experience with TTS (text-to-speech), SST (speech-to-speech translation), or voice conversion methods.
-Strong publication record in venues such as ICASSP, Interspeech, or NeurIPS.
-Proficiency in Python, PyTorch, and TensorFlow
-Proficiency in GCP or AWS

Preferred Qualifications
-Experience with audio identity embedding, accent modeling, style transfer, and multi-language audio synthesis
-Experience with attention mechanisms, diffusion models, and speech signal processing

What's In It for You

The annual salary range for this role is $150,000 to $230,000
Stock Options
Comprehensive medical, dental, and vision insurance
401(k) plan

So, if you are a Research Scientist in Speech Synthesis with this experience, please apply today!

Applicants must be authorized to work in the U.S.

Preferred Skills

TTS

SST

Python

Audio identity embedding

PyTorch

TensorFlow


Job tags

Full time


Salary

$150k - $230k
