
John Carmack
@ID_AA_Carmack
AGI at Keen Technologies, former CTO Oculus VR, Founder Id Software and Armadillo Aerospace
Joined August 2010
285 Following · 1.6M Followers
I still give the book Understanding Deep Learning by Simon J.D. Prince a good recommendation, but chapter 21: Deep learning and Ethics was sloppy. It could have been a chapter to really dig in on case studies, but it was just the basic public news story level coverage of bias and such, like:

“In AI, it can be pernicious when this deviation depends on illegitimate factors that impact an output. For example, gender is irrelevant to job performance, so it is illegitimate to use gender as a basis for hiring a candidate. Similarly, race is irrelevant to criminality, so it is illegitimate to use race as a feature for recidivism prediction.”

If they had stuck with “illegitimate”, then it would have been a question of societal choices, but “irrelevant” is a question about data, and your priors shouldn’t be so strong that data can’t move them.

I would like to see a book or course walk through a machine learning problem with the input features being presented as something like car choices: color, style, doors, horsepower, etc. Do lots of analysis over representation, training, and generalization, then swap the feature labels to socially charged ones. What makes generalization credible in one situation but not the other?
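A minimal sketch of the exercise proposed above, not anything from the book: train a classifier on synthetic data with neutrally labeled features, inspect the learned weights, then swap in different labels. The feature names, the data-generating process, and the proxy-feature setup are all hypothetical illustrations.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Neutral feature labels -- the model itself only ever sees column indices.
neutral = ["color", "style", "doors", "horsepower"]
X = rng.normal(size=(n, 4))
# Make column 0 a proxy: correlated with column 3 but with no direct effect
# on the outcome. The outcome depends only on columns 2 and 3.
X[:, 0] = 0.8 * X[:, 3] + 0.6 * rng.normal(size=n)
logits = 1.5 * X[:, 2] + 1.0 * X[:, 3]
y = (logits + rng.normal(size=n) > 0).astype(int)

# Plain logistic regression fit by gradient descent (no external deps).
w = np.zeros(4)
for _ in range(3000):
    p = 1 / (1 + np.exp(-(X @ w)))
    w -= 1.0 * (X.T @ (p - y)) / n

for name, coef in zip(neutral, w):
    print(f"{name:>10}: {coef:+.2f}")
```

Relabeling the columns afterward changes nothing about the fitted model, only our reading of it, which is the point of the exercise: the proxy column gets roughly zero weight here because the true driver is in the training data, and the interesting case is when it isn't.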