
Search results for "deeplearning"

Posts containing "deeplearning"
The best YouTube channels for learning AI in 2026:
1. AI Explained
2. Andrej Karpathy
3. Cole Medin
4. DeepLearningAI
5. Futurepedia
6. Matthew Berman
7. Skill Leap AI
8. Tech With Tim
9. Tina Huang
10. Two Minute Papers
The signature is alluding to NVIDIA GTC 2015, where Jensen excitedly told an audience of, at the time, mostly gamers and scientific computing professionals that Deep Learning is The Next Big Thing, citing among other examples my PhD thesis (one of the first image captioning systems that coupled image recognition ConvNet to an autoregressive RNN language model, trained end to end). This was back when most people were still unaware and somewhat skeptical but of course - Jensen was 1000% correct, highly prescient and locked in very early.
I still give the book Understanding Deep Learning by Simon J.D. Prince a good recommendation, but chapter 21, "Deep learning and Ethics", was sloppy. It could have been a chapter that really dug into case studies, but it stayed at the level of basic public news coverage of bias and the like:

"In AI, it can be pernicious when this deviation depends on illegitimate factors that impact an output. For example, gender is irrelevant to job performance, so it is illegitimate to use gender as a basis for hiring a candidate. Similarly, race is irrelevant to criminality, so it is illegitimate to use race as a feature for recidivism prediction."

If the chapter had stuck with "illegitimate", then it would have been a question of societal choices, but "irrelevant" is a question about data, and your priors shouldn't be so strong that data can't move them.

I would like to see a book or course walk through a machine learning problem with the input features presented as something neutral like car choices: color, style, doors, horsepower, etc. Do lots of analysis over representation, training, and generalization, then swap the feature labels to socially charged ones. What makes generalization credible in one situation but not the other?
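The proposed exercise is easy to sketch: a learner sees only numbers, so renaming a column from a car feature to a socially charged one cannot change representation, training, or prediction; the difference is entirely in how we interpret the result. Everything below (the toy perceptron, the data, the feature names) is illustrative, not taken from the book.

```python
def train_perceptron(rows, labels, epochs=20, lr=0.1):
    """Plain perceptron on a list of feature tuples; returns weights and bias."""
    w = [0.0] * len(rows[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# The same numeric data under two sets of column names.
data = [(2.0, 1.0), (1.0, 3.0), (3.0, 0.5), (0.5, 2.5)]
labels = [1, 0, 1, 0]

car_names = ("horsepower", "doors")            # innocuous labels
charged_names = ("attribute_a", "attribute_b") # swap in any labels you like

# Training never touches the names, so predictions are identical either way.
w, b = train_perceptron(data, labels)
preds = [predict(w, b, x) for x in data]
print(preds)  # → [1, 0, 1, 0]
```

The point of the exercise is that whether generalization from such a model is credible is not visible anywhere in this code path; it has to be argued from how the data was generated and what the labels actually mean.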
There is an alternate reality where Cray took their vector supercomputers, ditched FP64 calculations, and went with one FP32 pipe and a BF16 tensor core pipe. The same instruction set, memory architecture, and vector registers would have made a sweet deep learning machine, in many ways nicer than SIMT CUDA programming on GPUs. A Y-MP class machine like that could have delivered the AlexNet and DQN moments two decades earlier. Even doing everything in FP64 with no architectural changes, a Cray-1 would have been the best machine in the world for neural networks. If @geoffreyhinton had access to one for early research, the case could have been made for the architectural modifications to 10x the performance.
We're developing and using AI to revolutionize scientific discovery - from predicting protein structures with #AlphaFold, to materials discovery with GNoME. Join our VP Science, @pushmeet, and Professor @fryrsquared as they explore how AI could lead to a new era of breakthroughs that could benefit humanity in countless ways, on Google DeepMind: The Podcast. Watch now ↓

Timestamps:
00:00 Intro
01:13 AlphaFold
06:13 AlphaFold Database
08:14 Weather forecasting
11:24 Creating new materials with deep learning
25:10 Imposter syndrome, being a generalist, and Nobel prize winners
31:21 Choosing the right projects
32:07 Root node problems
34:32 Large language models for science
36:06 Function search and algorithmic discovery
42:10 Math Olympiad
46:26 What's coming next
48:35 Reflections from Hannah