
JobNob


AI Safety & Robustness Analysis Manager - System Intelligence and Machine Learning - ISE


Apple Inc.


Location

Cupertino, CA | United States


Job description

Are you passionate about inclusion, fairness, and safety in AI-powered features that ship on 1.5B Apple products across the globe? Are you excited about generative AI and motivated to build out the robustness and safety capabilities of generative models? We are the Intelligent System Experience (ISE) team within Apple's software organization. The team works at the intersection of multimodal machine learning and system experiences. System Experience (Springboard, Settings), Keyboards, Pencil & Paper, and Shortcuts are some of the experiences the team oversees. The experiences our users enjoy are backed by production-scale ML workflows. Visual understanding of people, text, handwriting, and scenes; multilingual NLP for writing workflows and knowledge extraction; behavioral modeling for proactive suggestions; and privacy-preserving learning are the areas our multidisciplinary ML teams focus on. We have multiple ongoing efforts involving generative models, and we are looking for talented candidates to lead the Robustness Analysis effort in ISE, ensuring that features built on top of generative models are safe for deployment and perform equally well for the diverse customers within Apple's global user base. This is an exciting time to join us: grow fast and have a positive impact on multiple key features from your first day at Apple!

Key Qualifications

Description

In this position, you will manage a team of people passionate about leading Robustness Analysis (RA) operations for key future-facing Apple features, with a focus on ensuring the safety and robustness of generative models. Apple's dedication to delivering incredible experiences to a global and diverse set of users, in full respect of their privacy, has led to the development of a dedicated Robustness Analysis function. With the generative experience, creating a safe and robust platform is vital to our mission. The team monitors ML model performance on relevant axes and surfaces, measures, and mitigates ML failure modes in order to improve the overall user experience and reduce risk, with specific attention given to safety, inclusion, and fairness.

The team's responsibilities include:

- Research and develop approaches to mitigate harmful and risky behaviors in generative models
- Define product-centered axes of analysis relevant to the target feature, in collaboration with the model DRI and feature DRI
- Develop processes (models, tools, and data) to identify other potential biases or failure modes
- Implement automated pipelines, based on advanced ML technology with humans and models in the loop, to create test sets covering the various axes of investigation
- Report progress and issues found in technical and sponsor meetings
- Suggest mitigation options (data and/or model) and lead mitigation experiments when issues are found
- Become the key contact within our organization for company-wide efforts related to safety, fairness and inclusion, robustness analysis, and interpretability

Education & Experience

M.S. or PhD in Computer Science, Data Science, Mathematics, Physics, or a related field; or equivalent practical experience

Additional Requirements

Pay & Benefits


