

Andrej Karpathy
@karpathy
I like to train large deep neural nets. Previously Director of AI @ Tesla, founding team @ OpenAI, PhD @ Stanford.
I spent more test time compute and realized that my micrograd can be dramatically simplified even further. You just return local gradients for each op and let backward() do the multiply (chaining) with the global gradient from the loss. So each op expresses only the bare fundamentals of what it needs to: the forward computation and the backward gradients for it. Huge savings, from 243 lines of code to just 200 (~18%). Also, the code now fits even more beautifully into 3 columns and happens to break just right:

Column 1: Dataset, Tokenizer, Autograd
Column 2: GPT model
Column 3: Training, Inference

Ok now surely we are done.
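The pattern the tweet describes is easy to sketch. Below is a minimal, hypothetical illustration (not micrograd's actual code): each op records only its forward result and the local gradients with respect to its inputs, and a single backward() walks the graph applying the chain rule, multiplying each local gradient by the global gradient flowing back from the loss.

```python
# Minimal sketch of the refactor described above (hypothetical, not the real micrograd):
# each op stores only its local gradients; one backward() does all the chaining.

class Value:
    def __init__(self, data, children=(), local_grads=()):
        self.data = data
        self.grad = 0.0
        self._children = children        # input Values of the op that produced this one
        self._local_grads = local_grads  # d(output)/d(input), one per child

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        # local gradients of a + b are 1 w.r.t. both inputs
        return Value(self.data + other.data, (self, other), (1.0, 1.0))

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        # d(a*b)/da = b,  d(a*b)/db = a
        return Value(self.data * other.data, (self, other), (other.data, self.data))

    def backward(self):
        # topological order so each node's grad is final before it is pushed to its children
        topo, visited = [], set()
        def build(v):
            if v not in visited:
                visited.add(v)
                for c in v._children:
                    build(c)
                topo.append(v)
        build(self)
        self.grad = 1.0  # d(loss)/d(loss)
        for v in reversed(topo):
            # chain rule: multiply each local gradient by the global gradient of this node
            for child, local in zip(v._children, v._local_grads):
                child.grad += local * v.grad
```

A quick check of the sketch:

```python
a, b = Value(2.0), Value(3.0)
loss = a * b + a
loss.backward()
print(a.grad, b.grad)  # 4.0 2.0  (d(loss)/da = b + 1, d(loss)/db = a)
```

The savings come from the fact that __add__ and __mul__ no longer carry their own backward closures; all of the chaining logic lives in one place, inside backward().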