Jack Rae is a distinguished scientist known for his work on large language models (LLMs), compression, and reinforcement learning. He is currently part of Meta's Superintelligence Labs team.
Jack Rae completed a Doctor of Philosophy (Ph.D.) in Computer Science at University College London (UCL) between 2016 and 2020. His doctoral work explored mechanisms for lifelong reasoning, focusing on memory models built on sparse and compressive structures.
In 2013, he obtained a Master of Science in Statistics from Carnegie Mellon University, graduating with a GPA of 4.1 on a 4.3 scale. While there, he was a member of student groups including the Explorer’s Society and the CMU Cycling Club.
Earlier, Rae earned a Master of Science in Mathematics and Computer Science from the University of Bristol, studying there from 2008 to 2012 and graduating with First Class Honours. While enrolled, he was also a member of the university’s cycling club. [10] [11]
Jack Rae has held positions at several prominent technology companies focusing on artificial intelligence research and development. He previously worked at Quora before spending approximately seven and a half years at Google DeepMind. During his tenure at Google DeepMind, he served as a pre-training technical lead for the Gemini model and spearheaded the development of reasoning capabilities for Gemini 2.5. His work at Google also included contributions to models such as Gopher and Chinchilla.
In July 2022, Rae announced he was joining OpenAI. He later moved to Meta, announcing the move in June 2025; there he is a Distinguished Scientist focusing on large language models, compression, and reinforcement learning within the Superintelligence Labs.
Throughout his career, Rae has contributed to several significant AI models, including the Gopher and Chinchilla LLMs at Google DeepMind and the Gemini project, where he led pre-training and reasoning development for Gemini 2.5. He also commented publicly on updates to Gemini 2.0 Flash Thinking, noting improved performance and capabilities such as long-context processing and code execution. [1] [2] [3] [4] [5]
Rae has publicly shared his perspectives on trends and developments in artificial intelligence. He has commented on the recurring claim that "deep learning is hitting a wall," often noting, with some irony, how regularly the claim resurfaces. He has also discussed the "bitter lesson," arguing that much of the research from decades of dialogue-systems publications did not directly lead to models like ChatGPT, and pointing to a shift away from traditional methods such as slot filling, intent modeling, sentiment detection, and hybrid symbolic approaches. Rae has also expressed views on the potential emergence of Artificial General Intelligence (AGI), commenting on specific demonstrations of advanced AI capabilities. [6] [7] [8] [9]
On April 5, 2025, Jack Rae appeared on the YouTube channel Cognitive Revolution to discuss developments in large language models, focusing on Gemini 2.5 Pro. Then a Principal Research Scientist at Google DeepMind and technical lead for inference-time reasoning and scaling, Rae outlined key engineering strategies and research directions shaping current AI systems.
He described Gemini 2.5 Pro as the product of ongoing refinements in architecture and training, noting that its ability to handle input contexts of hundreds of thousands of tokens reflects gradual progress rather than sudden breakthroughs. These advancements were attributed to collaborative efforts and scaling practices within DeepMind.
Rae also commented on the convergence among AI labs around reasoning techniques such as chain-of-thought prompting, suggesting that shared challenges and resource environments contribute to similar outcomes. He discussed the role of reinforcement learning based on correctness signals in improving model reasoning, emphasizing its incremental evolution over time.
The conversation addressed challenges in interpretability, particularly the difficulty of analyzing internal model processes. Rae highlighted ongoing work in mechanistic interpretability aimed at improving transparency in models using complex reasoning paths.
Regarding the path toward artificial general intelligence (AGI), Rae identified areas such as long-term memory, multimodal learning, and agent behavior as current research priorities. He mentioned that Gemini 2.5 Pro’s long-context capabilities enable interaction with extended inputs, such as large codebases or documents, without the need for summarization.
He also noted that model deployment involves trade-offs, including compute limitations and user experience design, which shape how systems are used in practice. Throughout the interview, Rae emphasized the importance of iteration, scaling, and empirical testing in the development of language models. [12]