Search results related to 「SuperAI」

Content containing SuperAI
OpenAI. Anthropic. Google DeepMind. Mistral AI. Cerebras. On stage with Balaji Srinivasan, Benedict Evans, Max Tegmark, Robbie Schingler, and more. 10,000+ attendees. Marina Bay Sands. SuperAI Singapore, 10–11 June
Join 10,000+ attendees at Marina Bay Sands, Singapore (10-11 June) for SuperAI. Secure your spot now at US$499 before prices increase again.
Episode 172 Part 1 is live! Fresh off the plane from Dubai, Zuby (@ZubyMusic) has his first supervised FSD ride in Austin. He discusses his global experiences and the biggest myth about masculinity. Drop a listen ❤️ (Pt 2 soon!)
During #BinanceOnline#, CZ shared his insights. He said: “You may think a stablecoin is just a simple stablecoin, but it is actually financial access for millions of people in different parts of the world.” My POV: @cz_binance always makes every detail count🫡 If you know what #BNB# stands for in full, you already understand the message behind this background🔶💛 Build N Build is the mission🔶 #SuperApp# #CZ#
Culper Research has published a detailed short thesis on Nvidia ($NVDA). Here is a breakdown.

Culper Research is a US activist short seller. Its model is simple: build a short position first, then publish an investigative report attacking the target's financials, business, regulatory exposure, related-party transactions, or disclosures. So a Culper report is not neutral research; it is part of a trade.

That said, the real value of firms like this is digging up anomalous leads the market has overlooked. In my personal reading, though, the impact is mostly short-term pressure on the target's price; over the long run, what matters is the company's own performance, especially for a company like Nvidia sitting at the center of the AI wave.

Culper's short thesis on Nvidia comes down to a single claim: Nvidia's China business may never have truly gone to zero. The subtext is that Nvidia may not be fully complying with US restrictions on China.

Nvidia's public position is that after the US tightened export controls in April 2025, its compute business in China essentially went to zero. Jensen Huang has said repeatedly that Nvidia's compute business in China fell from a roughly 95% share to 0%. The market has therefore concluded: since the China business is already gone, any future thaw in US-China relations or loosening of export controls makes China pure incremental upside for Nvidia. Especially since Huang joined the delegation visiting China at the last minute, which could help reopen Nvidia's sales there.

Culper's judgment is the exact opposite. Culper believes Chinese demand never disappeared; it merely shifted from direct sales into Southeast Asian transshipment, cloud compute rental, OEM supply, and purchases through intermediaries. In other words, what appears in Nvidia's financials may not be labeled China revenue, but the ultimate real demand may still come from Chinese customers.

The most important leads in Culper's report:

First, Megaspeed. Megaspeed is a Singapore AI compute cloud provider that, on paper, buys Nvidia servers in Southeast Asia and rents the compute out to customers. Nvidia once vouched for Megaspeed, saying it has no Chinese shareholders and that no chip diversion had been found. But no Chinese shareholders does not mean no Chinese money. Megaspeed was still tiny at the end of 2023; by the end of 2024 its balance sheet had suddenly swollen to nearly $3 billion, driven mainly by $2.9 billion in refundable deposits. At the same time, it carried nearly $2.9 billion in receivables from subsidiaries, with the money flowing onward to its Malaysian subsidiary Speedmatrix.

Second, Speedmatrix and the Alibaba-linked money trail. Culper notes that Speedmatrix pledged its business, equipment, and future assets to a Singapore company, Apex Enterprise Solutions. Singapore filings show Apex's parent is Alibaba Group, with a stated business purpose that includes procurement activities. Apex carries over $4.1 billion in prepayments on its books, alongside roughly $4.2 billion in loans from Alibaba-related companies. Culper's inference is that Alibaba-linked funds may have entered the procurement structure through Apex and then bought Nvidia servers via the Megaspeed and Speedmatrix system.

Third, Aivres. From late 2024 to early 2026, Speedmatrix imported roughly $4.6 billion of products, of which about $4 billion came from Aivres Systems. Aivres is an Nvidia Elite OEM compute partner that assembles high-end Nvidia servers. But Aivres was formerly Inspur Systems, part of the Inspur group; after Inspur Group was added to the US Entity List, Inspur Systems renamed itself Aivres. Culper argues that Aivres looks on the surface like a US company and a compliant OEM partner, but its ties to Chinese demand are highly sensitive. If Nvidia sells to Aivres, the revenue may show up as US customer revenue; but if those servers ultimately serve Chinese customers via Malaysia, Singapore, or Indonesia, then the regional revenue split the market sees may understate Nvidia's dependence on real Chinese demand.

Fourth, the Supermicro / OBON case. In March 2026, the US Department of Justice indicted several people connected to Supermicro, alleging they smuggled at least $2.5 billion of Nvidia chip servers into China through Southeast Asian intermediary entities. Culper leans heavily on this case because it shows that Southeast Asian transshipment plus fake data centers plus real servers moving into China is not a fantasy but a real scheme already in the courts.

Fifth, Malaysian data centers. Culper argues Southeast Asian data centers are the key node for circumventing export controls. The US restricts direct exports of high-end GPUs to China; but if the GPUs sit in data centers in Malaysia, Singapore, or Thailand and Chinese companies rent the compute remotely, that may not be a chip export in form, yet in substance it still serves Chinese AI demand.

This is what makes Nvidia's China problem so tricky. The question is not whether chips were physically shipped into China, but whether the compute is actually being used by Chinese customers. And this is Culper's gravest accusation: that Nvidia cannot possibly be entirely unaware. In theory, Nvidia could use customer KYC, order sizes, how recently customers were incorporated, warranty records, server IPs, software updates, latency data, device heartbeat signals, and so on to judge whether GPUs are really running where they were declared. If tens of thousands of GPUs are claimed to be deployed in Malaysia or Singapore but their actual usage patterns are anomalous, Nvidia should not be completely blind to it. If Nvidia knew, acquiesced, or looked the other way, this becomes an export control, revenue quality, and management credibility problem.

Of course, things get much murkier here, because proving Nvidia's actual knowledge is a very high bar. "Nvidia was in a position to know" and "Nvidia knew and deliberately looked the other way" are two entirely different things. So the real point of the report is not whether Culper can convict Nvidia, but whether regulators pick up the investigation next. If the US Commerce Department, the DOJ, Singapore, and Malaysia keep pulling on the Megaspeed, Speedmatrix, Aivres, YTL, and Novagate threads, this stops being a mere short report and becomes a regulatory event.

Overall, the core of Culper's short thesis is this: the market sees China as Nvidia's future potential upside, while Culper sees China as the hidden existing revenue Nvidia has carried over the past year. If the China business had truly gone to zero long ago, loosening restrictions would be pure upside. But if the China business was merely hidden in Southeast Asian, OEM, and cloud compute channels, then as the US keeps tightening export controls and China pushes domestic substitution, what Nvidia faces is not lost upside but hidden existing revenue being cut off. That is the real logic of Culper's Nvidia short.
Excited to share that we’ve started Recursive Superintelligence to automate knowledge discovery, starting with AI that experiments on how to improve itself. We raised $650M+ led by GV and Greycroft.
I don't think I'll get over Florentino's press conference all week. I know several of the answers by heart. I can't get it out of my head.
For a driver born without arms, FSD Supervised is life-changing accessibility:

“I was born without arms and have driven with my feet my entire life. I’m a fully licensed driver, and traditionally I drove with my left foot on the steering wheel and my right foot handling the gas and brake. My only legal restrictions are automatic transmission and power steering.

Over the years, though, the strain from my congenital birth defects has led to significant arthritis in my hips. I drove a Model 3 for the past seven years, and it honestly helped extend my independence in a huge way. Recently upgrading to the Model Y – along with Full Self-Driving – has been a complete game changer for me. It dramatically reduces the physical pressure and fatigue of driving and has helped preserve a level of freedom and mobility that means a great deal to me.

Most people understandably think of Tesla in terms of innovation or sustainability, but for some of us, this technology truly becomes life-changing accessibility.” – John F.
This works really well btw: at the end of your query, ask your LLM to "structure your response as HTML", then view the generated file in your browser. I've also had some success asking the LLM to present its output as slideshows, etc.

More generally, imo audio is the human-preferred input to AIs, but vision (images/animations/video) is the preferred output from them. Around a third of our brains are a massively parallel processor dedicated to vision; it is the 10-lane superhighway of information into the brain. As AI improves, I think we'll see a progression that takes advantage:

1) raw text (hard/effortful to read)
2) markdown (bold, italic, headings, tables, a bit easier on the eyes) <-- current default
3) HTML (still procedural with underlying code, but a lot more flexibility on the graphics, layout, even interactivity) <-- early but forming new good default
...4, 5, 6, ... n) interactive neural videos/simulations

Imo the extrapolation (though the technology doesn't exist just yet) ends in some kind of interactive videos generated directly by a diffusion neural net. Many open questions as to how exact/procedural "Software 1.0" artifacts (e.g. interactive simulations) may be woven together with neural artifacts (diffusion grids), but generally something in the direction of the recently viral …

There are also improvements necessary and pending at the input. Neither audio nor text nor video alone is enough; e.g. I feel a need to point/gesture to things on the screen, similar to all the things you would do with a person physically next to you and your computer screen.

TLDR: The input/output mind meld between humans and AIs is ongoing, and there is a lot of work to do and significant progress to be made, way before jumping all the way into Neuralink-esque BCIs and all that. As for what's worth exploring at the current stage, hot tip: try asking for HTML.
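A minimal sketch of the trick described above, assuming the OpenAI Python SDK; the model name and query are illustrative, and any chat-capable LLM API would work the same way:

```python
# Sketch: append "structure your response as HTML" to a query,
# save the reply to a file, and open it in the browser.
# Assumes the OpenAI Python SDK; the model name is a placeholder.
import webbrowser
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

query = "Compare three sorting algorithms. Structure your response as HTML."
resp = client.chat.completions.create(
    model="gpt-4o",  # assumption: substitute any chat model you use
    messages=[{"role": "user", "content": query}],
)

# Write the generated HTML to disk and view it in the default browser.
out = Path("response.html")
out.write_text(resp.choices[0].message.content, encoding="utf-8")
webbrowser.open(out.resolve().as_uri())
```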
If you love fine-tuning open-source models (like me), then listen.

> Start with 1B, 2B, 4B, and 8B models. (Don't start with a 27B model or bigger at first.)
> Use cloud GPU providers. I use Google Colab Pro for any model smaller than 9B. A single A100 80GB costs around $0.60/hr, which is cheap. Enough for small models.
> Don't buy GPUs unless you fine-tune 7 to 10 models. You'll understand the nitty-gritty in the process.
> Use Codex 5.5 × DeepSeek v4 Pro to create datasets: Codex to plan, DeepSeek v4 Pro to generate rows.
> Use Unsloth's instruct models from Hugging Face as a base. Yes, there are others too, but Unsloth also provides fast fine-tuning notebooks.
> Use Unsloth's fine-tuning notebooks as a reference. Paste them into Codex, and Codex will write a custom notebook with the configs you need. (See the sketch after this list.)
> Spend 1 day learning about:
- SFT (supervised fine-tuning)
- RL training (GRPO, DPO, PPO, etc.)
- LoRA / QLoRA training
- Quantization and its types
- Local inference engines (llama.cpp)
- KV cache and prompt cache
> Just get started. Claude, Codex, and ChatGPT can design a step-by-step plan for how you can fine-tune your first AI model.

Future tech is moving toward small 5B to 15B ELMs (Expert Language Models) rather than general 1T LLMs, so fine-tuning is an important skill that anyone can acquire today. Tune models, test them, use them. Then fine-tune for companies and make a career out of it. (Companies pay $50k+ to fine-tune models on their data so they can get personalized AI models.)

Shoot your questions below. I'll be sharing in-depth raw findings about this topic in the coming days.
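As a concrete starting point, here is a minimal sketch of a small LoRA fine-tune in the style of Unsloth's notebooks. The model name, dataset file, and hyperparameters are illustrative assumptions, and exact SFTTrainer arguments vary between trl versions; treat this as a skeleton, not a recipe:

```python
# Sketch of a small QLoRA-style fine-tune following the Unsloth notebook pattern.
# Assumptions: model name, dataset file, and hyperparameters are placeholders.
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

# Load a small instruct model in 4-bit so it fits a single Colab GPU.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.2-1B-Instruct",  # assumption: any small Unsloth base works
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters; only these low-rank matrices get trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Placeholder dataset: a JSONL file with a plain "text" column of formatted examples.
dataset = load_dataset("json", data_files="my_sft_rows.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=60,          # a smoke-test run; raise for real training
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()

# Save just the LoRA adapter weights, not the full base model.
model.save_pretrained("lora_adapter")
```

The 4-bit base plus LoRA combination is what keeps this within a single consumer or Colab GPU: the frozen weights are quantized, and only the small adapter matrices hold gradients.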