🚀 Arthur Hayes' Epic Rant: The AI Bubble Is History's Biggest Money Printer! Buy Crypto With Your Eyes Closed, Before Retail Wakes Up!
BitMEX co-founder Arthur Hayes has just published a highly provocative macro essay, "The Butterfly Touch." His core call is blunt: no matter how chaotic the world gets, the money printers in both the US and China are already running at full throttle. The AI bubble and geopolitical conflict are generating the most terrifying flood of fiat in human history, and Bitcoin and crypto will be its biggest beneficiaries.
His advice to everyone: this is a bull market, close your eyes and hit the buy button. Don't screw it up.
If you don't want to miss this epic round of money printing, you need to understand the four layers of hard logic behind his call:
1. The AI arms race: a bottomless pit of compute and electricity
Training and running inference on AI models demand unprecedented capital expenditure (CAPEX). Both the US and China treat AI supremacy as an existential red line. To win, tech giants and governments simply do not care how much money they burn.
Even scarier are the "Jevons paradox" and the "Red Queen effect":
The cheaper and smarter AI becomes, the more people use it, and the compute and electricity it consumes grow exponentially.
The moment a competitor ships a stronger model, the hundreds of billions you already sank into compute depreciate overnight, forcing you to pour in trillions more just to keep up.
Where does that money come from? Profits alone are nowhere near enough. Banks and central banks in both the US and China are backing tech and power-grid buildouts with aggressive lending and money printing. Political will + financial easing = the perfect breeding ground for crypto.
2. Geopolitical conflict has shattered "faith in the dollar"
Trump bombed Iran without a second thought for global supply chains, because the US has its own cheap fossil fuels and food.
But that jolted other countries (in Europe, Asia, and Africa) awake. They suddenly realized that parking decades of national surpluses in US Treasuries was profoundly foolish. When war means you can't buy fertilizer or oil, what good are Treasuries and S&P 500 ETFs?
The coming trend: sovereign wealth funds will gradually sell off dollar-denominated financial assets and instead stockpile physical commodities and build out infrastructure and defense.
When foreigners stop buying, what happens to US markets? The Fed and the Treasury can only keep loosening financial conditions (for example, relaxing the eSLR capital-leverage requirement on banks) and print the money to buy their own debt. Conclusion: easy money for decades to come.
3. Higher inflation, a longer party
War is always a catalyst for inflation. AI infrastructure and geopolitical conflict hand politicians the perfect cover to print. That is why, since February 28, Bitcoin has thoroughly crushed gold and US tech stocks.
Plenty of people complain that BTC underperformed tech stocks over the past 24 months; they simply don't grasp Bitcoin's extreme sensitivity to fiat liquidity expansion.
Hayes asserts that with trillions of soon-to-be-printed dollars and yuan behind it, BTC reclaiming $126,000 is a done deal. Once the price breaks above $90,000 and forces a mass of call-option sellers to close out their positions, the move up will be explosive.
4. When does the party end, and what should you buy now?
The party stops only under two conditions:
The market chokes: an absurdly large AI-related IPO or merger triggers a crash, and capital starts questioning whether AI is really worth that much.
A political reckoning: ahead of the 2028 US election, if AI gobbles up so much electricity that power bills spike and inflation runs out of control, politicians will turn on AI to chase votes.
But right now, the party is only getting started!
Hayes has already dialed his fund Maelstrom's risk to the max. Beyond heavy positions in $HYPE and $ZEC (playing the privacy and censorship-resistance narrative), his next high-conviction altcoin is $NEAR (a privacy narrative plus intent-based trading that should generate positive cash flow).
"This is a bull market. Close your eyes and buy. There will be a time to sell, but it is not now. Before the herd wakes up, before the AI bubble pops, let's go wild together!"
Three days ago I left autoresearch tuning nanochat for ~2 days on a depth=12 model. It found ~20 changes that improved the validation loss. I tested these changes yesterday, and all of them were additive and transferred to larger (depth=24) models. Stacking up all of these changes, today I measured that the leaderboard's "Time to GPT-2" drops from 2.02 hours to 1.80 hours (an ~11% improvement); this will be the new leaderboard entry. So yes, these are real improvements and they make an actual difference. I am mildly surprised that my very first naive attempt already worked this well on top of what I thought was an already fairly well hand-tuned project.
This is a first for me, because I am very used to doing the iterative optimization of neural network training manually. You come up with ideas, you implement them, you check whether they work (better validation loss), you come up with new ideas based on that, you read some papers for inspiration, etc etc. This has been the bread and butter of my daily work for two decades. Seeing the agent do this entire workflow end-to-end, all by itself, as it worked through approx. 700 changes autonomously is wild. It really looked at the sequence of experimental results and used them to plan the next experiments. It's not novel, ground-breaking "research" (yet), but all the adjustments are "real": I hadn't found them manually before, and they stack up and actually improved nanochat. Among the bigger things, e.g.:
- It noticed an oversight: my parameterless QKnorm didn't have a scale multiplier attached, so my attention was too diffuse. The agent found multipliers that sharpen it, pointing to future work.
- It found that the Value Embeddings really like regularization and I wasn't applying any (oops).
- It found that my banded attention was too conservative (I forgot to tune it).
- It found that AdamW betas were all messed up.
- It tuned the weight decay schedule.
- It tuned the network initialization.
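To make the first item concrete, here is a minimal sketch (my own illustration, not nanochat's actual code; the scale of 12.0 is a made-up placeholder, not the value the agent found) of what attaching a scalar multiplier to a parameterless QK-norm looks like:

```python
import torch
import torch.nn.functional as F

def qk_norm_attention(q, k, v, qk_scale=12.0):
    # Parameterless QK-norm projects queries and keys onto the unit sphere,
    # so attention logits are cosine similarities confined to [-1, 1] and
    # the softmax comes out nearly uniform ("too diffuse").
    q = F.normalize(q, dim=-1)
    k = F.normalize(k, dim=-1)
    # A scalar multiplier restores a temperature knob that sharpens the
    # softmax; it replaces the usual 1/sqrt(d) scaling.
    logits = (q @ k.transpose(-2, -1)) * qk_scale
    return logits.softmax(dim=-1) @ v
```

The multiplier could just as well be a learnable per-head parameter; the point is only that without it the logit scale is pinned.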
This is on top of all the tuning I've already done over a good amount of time. The exact commit is here, from this "round 1" of autoresearch. I am going to kick off "round 2", and in parallel I am looking at how multiple agents can collaborate to unlock parallelism.
All LLM frontier labs will do this. It's the final boss battle. It's a lot more complex at scale of course - you don't just have a single train.py file to tune. But doing it is "just engineering" and it's going to work. You spin up a swarm of agents, you have them collaborate to tune smaller models, you promote the most promising ideas to increasingly larger scales, and humans (optionally) contribute on the edges.
And more generally, *any* metric you care about that is reasonably efficient to evaluate (or that has more efficient proxy metrics such as training a smaller network) can be autoresearched by an agent swarm. It's worth thinking about whether your problem falls into this bucket too.
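The recipe reduces to a very small loop. A toy sketch of it, where `propose` and `evaluate` are hypothetical stand-ins of mine for the agent's idea generation and the (cheap) metric or proxy-metric harness:

```python
def autoresearch(config, propose, evaluate, budget=700):
    # Greedy optimization loop: propose a change, evaluate it on the cheap
    # metric, keep it only if it strictly improves. The history of
    # (candidate, score) pairs is fed back so proposals can condition on
    # past results, the way the agent plans its next experiments.
    best_config, best_score = config, evaluate(config)
    history = [(config, best_score)]
    for _ in range(budget):
        candidate = propose(best_config, history)
        score = evaluate(candidate)
        history.append((candidate, score))
        if score < best_score:  # lower is better, e.g. validation loss
            best_config, best_score = candidate, score
    return best_config, best_score
```

Anything with this shape — a config space, a proposer, and a metric cheap enough to evaluate hundreds of times — is a candidate for the same treatment.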