Courses by Nathan Lambert
The RLHF Book. Reinforcement learning from human feedback, alignment, and post-training of LLMs
Delve into reinforcement learning from human feedback through a book on aligning models with human preferences. Learn about RLHF and RLVR.
Nathan Lambert
Nathan Lambert leads the post-training research direction at the Allen Institute for Artificial Intelligence. Previously, he worked at Hugging Face, DeepMind, and Facebook AI. Nathan has been a guest lecturer at Stanford, Harvard, MIT, and other leading universities, and is a frequent and sought-after speaker at NeurIPS and other artificial intelligence conferences. He has received several professional awards, including the Best Theme Paper Award at ACL and the Geekwire Innovation of the Year award. His AI research has over 8,000 citations on Google Scholar, and his writing on contemporary AI research at interconnects.ai attracts millions of views annually. Nathan received his PhD in Electrical Engineering and Computer Science from the University of California, Berkeley.