The End of Finetuning — with Jeremy Howard of Fast.ai

Published 2023/10/20
17,543 views
Fast.ai’s “Practical Deep Learning” courses have been watched by over 6,000,000 people, and the fastai library has over 25,000 stars on GitHub. Jeremy Howard, one of the creators of fast.ai, is now one of the most prominent and respected voices in the machine learning industry; but that wasn’t always the case... Read the full show notes here: https://www.latent.space/p/fastai

0:00:00 Introduction
0:01:14 Jeremy’s background
0:02:53 Founding FastMail and Optimal Decisions
0:04:05 Starting Fast.ai with Rachel Thomas
0:05:28 Developing the ULMFiT natural language processing model
0:10:11 Jeremy’s goal of making AI more accessible
0:14:30 Fine-tuning language models - issues with memorization and catastrophic forgetting
0:18:09 The development of GPT and other language models around the same time as ULMFiT
0:20:00 Issues with validation loss metrics when fine-tuning language models
0:22:16 Jeremy’s motivation to do valuable work with AI that helps society
0:26:39 Starting fast.ai to spread AI capabilities more widely
0:29:27 Overview of fast.ai - courses, library, research
0:34:20 Using progressive resizing and other techniques to win the DAWNBench competition
0:38:42 Discovering the single-shot memorization phenomenon in language model fine-tuning
0:43:13 Why fine-tuning is simply continued pre-training
0:46:47 Chris Lattner and Modular AI
0:48:38 Issues with incentives and citations limiting innovation in research
0:52:49 Joining AI research communities through Discord servers
0:55:23 Mojo
1:03:08 Most exciting areas - continued focus on transfer learning and small models
1:06:56 Pushing capabilities of small models through transfer learning
1:10:58 Opening up coding through AI to more people
1:13:51 Current state of AI capabilities compared to computer vision in 2013 - lots of basic research needed
1:17:08 Lightning Round