
JobNob


AIML - Staff Software Engineer, On-Device Machine Learning


Apple Inc.


Location

Cupertino, CA | United States


Job description

At Apple, the AIML - On-Device Machine Learning group is responsible for accelerating the creation of amazing on-device ML experiences, and we are looking for a tenured software engineer to help define and implement features that accelerate and compress large state-of-the-art (SoTA) models (e.g., LLMs) in our on-device inference stack. We are a dedicated team working on groundbreaking technology in the fields of natural language processing, computer vision, and artificial intelligence. We design, develop, and optimize large-scale language/vision/multi-modal models that power on-device inference capabilities across various Apple products and services. This is a unique opportunity to work on powerful new technologies and contribute to Apple's ecosystem, with a commitment to privacy and a user experience that impacts millions of users worldwide. Are you someone who can write high-quality, well-tested code and collaborate cross-functionally with partner HW, SW, and ML teams across the company? If so, come join us and be part of the team that is helping machine learning developers innovate and ship enriching experiences on Apple devices!

Key Qualifications

Description

As a member of this team, the successful candidate will:

- Build features for our on-device inference stack to support the most relevant accuracy-preserving, general-purpose techniques that empower model developers to compress and accelerate SoTA models (e.g., LLMs) in apps
- Convert models from a high-level ML framework to a target device (CPU, GPU, Neural Engine) for optimal functional accuracy and performance
- Write unit and system integration tests to ensure functional correctness and avoid performance regressions
- Diagnose performance bottlenecks and work with HW architecture teams to co-design solutions that further improve the latency, power, and memory footprint of neural network workloads
- Analyze the impact of model optimization (compression, quantization, etc.) on model quality by partnering with modeling and adaptation teams across diverse product use cases

Education & Experience

Bachelor's, Master's, or PhD in Computer Science, Machine Learning, or a related field

Additional Requirements

Pay & Benefits



Job tags

Worldwide, Relocation


