Ji Lin is an AI researcher known for his work on efficient deep learning, including deploying models on resource-constrained devices and optimizing large language models. He currently works at Meta's Superintelligence Labs (MSL), focusing on multimodal systems, reasoning, and synthetic data. [1]
Lin graduated from Tsinghua University with a Bachelor’s in Electrical and Electronics Engineering in 2018. He then earned his PhD at the Massachusetts Institute of Technology in 2023. [3]
Lin began his career as an AI research intern at Google in Beijing in 2018, followed by a research assistant role at the Massachusetts Institute of Technology from 2018 to 2023, where he contributed to long-term academic projects in artificial intelligence. In 2020, he completed a summer research internship at Adobe in Cambridge, Massachusetts, and in 2023 he interned at NVIDIA in Santa Clara, focusing on advanced AI research. From November 2023, he worked as a member of technical staff at OpenAI, where he contributed to the development of o3/o4-mini, GPT-4o, GPT-4.1, GPT-4.5, 4o-imagegen, and the Operator reasoning stack, with a focus on multimodal systems, reasoning, and synthetic data. In July 2025, Lin joined Meta Superintelligence Labs (MSL), a division of Meta focused on advancing artificial general intelligence. [1] [3]
Meta Superintelligence Labs (MSL) is a division within Meta, launched in June 2025 to unify and accelerate the company's artificial intelligence initiatives, particularly in pursuit of artificial general intelligence (AGI). Led by Alexandr Wang and Nat Friedman, MSL brings together teams working on foundation models, applied AI products, and core research from FAIR. The unit was formed alongside a major talent acquisition campaign that hired researchers from OpenAI, Anthropic, and DeepMind, and follows Meta's $14.3 billion investment in Scale AI. MSL oversees the development of Meta's Llama model series and next-generation AI systems, with a focus on long-term advancements and integration across Meta's consumer platforms. [2] [8]
Lin's PhD defense at MIT presented his research, titled "Efficient Deep Learning Computing: From TinyML to Large Language Models." During the presentation, Lin highlighted his contributions over five years of work on efficiency in deep learning, ranging from techniques for deploying vision models on microcontrollers with limited memory to quantizing large language models to reduce their serving costs. For instance, he developed algorithms that enabled efficient inference and training on devices with as little as 256 kilobytes of memory. Lin discussed the challenge of scaling deep learning models, whose sizes were growing faster than available hardware capabilities, and emphasized the importance of co-designing algorithms and hardware to achieve optimal performance. The presentation concluded with Lin expressing gratitude to his advisors, collaborators, family, and friends for their support throughout his research journey. [4] [5]
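The large-language-model side of that work revolves around weight quantization: storing model weights in low-precision integers to cut memory footprint and serving cost. The sketch below is a minimal, generic illustration of symmetric per-channel int8 weight quantization, assuming NumPy; it is not the specific method from Lin's thesis, and the function names are hypothetical.

```python
# Generic sketch of symmetric per-channel int8 weight quantization -- an
# illustration of reducing serving cost by storing weights in low precision,
# not the specific algorithms from Lin's thesis.
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Quantize a (out_features, in_features) matrix with one scale per row."""
    scales = np.abs(weights).max(axis=1, keepdims=True) / 127.0  # map max |w| per row to 127
    scales = np.maximum(scales, 1e-8)                            # guard against all-zero rows
    q = np.clip(np.round(weights / scales), -127, 127).astype(np.int8)
    return q, scales.astype(np.float32)

def dequantize(q: np.ndarray, scales: np.ndarray) -> np.ndarray:
    """Recover an approximate float32 matrix from int8 weights and per-row scales."""
    return q.astype(np.float32) * scales

# Usage: quantize a random layer and report storage savings and rounding error.
w = np.random.randn(8, 16).astype(np.float32)
q, s = quantize_int8(w)
err = np.abs(w - dequantize(q, s)).mean()
print(f"int8: {q.nbytes} bytes vs fp32: {w.nbytes} bytes, mean abs error {err:.4f}")
```

At 8 bits per weight this cuts fp32 storage by roughly 4x; production quantization schemes typically add calibration data and activation-aware scaling to preserve accuracy, which this sketch omits.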