We design Spotify’s consumer experience—end to end, moment to moment, across every screen, platform, and partner integration. Our mission is to make listening feel effortless, personal, and joyful for billions of users around the world. That means turning complexity into clarity across hundreds of touchpoints—from our mobile and desktop apps to the smart speakers, TVs, cars, and integrations where Spotify shows up every day. If it touches a consumer, we shape it. We bring deep insight into human behavior, design, and technology to craft experiences that feel intuitive, expressive, and unmistakably Spotify.
About the Team
The Policy & Safety team sits within the Content Platform domain and builds the systems that keep Spotify safe and trustworthy at scale. We own the infrastructure behind content moderation, including detection models, policy enforcement systems, compliance pipelines, and the safety-by-default platform.
Our work is critical to every new content type and product experience—from messaging and comments to collaborative and emerging AI-driven features. We partner closely with Trust & Safety, Legal, and Public Affairs to ensure that safety is built into Spotify experiences from the start.
What You Will Do
Build and scale machine learning systems for proactive content detection, classification, and pre-publish safety scanning
Design and implement policy evaluation frameworks, including standardized datasets, offline and online metrics, and continuous improvement loops
Develop multimodal models that combine text, audio, image, and video signals for safety and policy enforcement
Architect feedback loops that turn reviewer input into structured training data for continuous model improvement
Translate regulatory requirements into scalable ML system designs, including accuracy and reporting expectations
Partner with cross-functional teams across Trust & Safety, Legal, Public Affairs, and Product to deliver safe user experiences
Drive technical direction in ambiguous problem spaces and contribute to long-term platform architecture
Mentor and support other machine learning engineers, helping grow technical capability across the team
Who You Are
You have experience building and shipping production-grade machine learning systems at scale
You are experienced with ML evaluation, including dataset design, metrics, and model performance monitoring
You have worked with multimodal machine learning across text, audio, image, or video domains
You have experience with human-in-the-loop systems, active learning, or feedback-driven model improvement
You are comfortable translating complex requirements into technical solutions, including policy or regulatory constraints
You are experienced working across teams and influencing technical direction in large systems
You are comfortable navigating ambiguity and making thoughtful trade-offs between speed, quality, and risk
You communicate clearly and collaborate effectively with both technical and non-technical partners
Where You Will Be
This role is based in London or Stockholm
We offer you the flexibility to work where you work best! There will be some in-person meetings, but the role still allows flexibility to work from home.