Research Scientist, World Models, DeepMind
As an organization, Google maintains a portfolio of research projects driven by fundamental research, new product innovation, product contribution, and infrastructure goals, while giving individuals and teams the freedom to emphasize specific types of work. As a Research Scientist, you'll set up large-scale tests and deploy promising ideas quickly and broadly, managing deadlines and deliverables while applying the latest theories to develop new and improved products, processes, or technologies. From creating experiments and prototyping implementations to designing new architectures, our research scientists work on real-world problems that span the breadth of computer science: machine (and deep) learning, data mining, natural language processing, hardware and software performance analysis, compiler optimization for mobile platforms, core search, and much more.
As a Research Scientist, you'll also contribute actively to the wider research community by sharing and publishing your findings, drawing on ideas from internal projects as well as collaborations with research programs at partner universities and technical institutes around the world.
We believe World Models will power numerous domains, such as visual reasoning, simulation, planning for embodied agents, and real-time interactive entertainment.
Artificial intelligence will be one of humanity’s most transformative inventions. At Google DeepMind, we are a pioneering AI lab with exceptional interdisciplinary teams focused on advancing AI development to solve complex global challenges and accelerate high-quality product innovation for billions of users. We use our technologies for widespread public benefit and scientific discovery, ensuring safety and ethics are always our highest priority.
Minimum qualifications:
- PhD in Computer Science, Machine Learning, a related field, or equivalent practical experience.
- Experience with transformer models or data pipelines.
- Experience in releases, publications, or open source projects related to video generation, world models, multimodal language models, or transformer architectures.
- Experience with systems and engineering in deep learning frameworks such as JAX or PyTorch.
- One or more scientific publications submitted to conferences, journals, or public repositories.
Preferred qualifications:
- Experience building training codebases for large-scale video or multimodal transformers.
- Experience with distillation of diffusion models.
- Expertise in optimizing the efficiency of distributed training or inference systems.