
Search results for "ONT"

Content containing "ONT"
An elite unit of the Air Force, in coordination with the U.S. Coast Guard, came to the rescue of 11 Bahamian travelers stranded 80 miles off the coast of Melbourne, Florida, after their plane crashed on Tuesday. The plane's pilot got all his passengers onto life rafts before the 920th Air Rescue Wing arrived by helicopter to airlift them and transport them to nearby hospitals. CBS News' @cbenavidesTV has more on the miraculous rescue.
Lithium explorer @Critical_CRR has launched fieldwork at Ontario’s Corona target, chasing concealed lithium pegmatites beside the company’s 8Mt Mavis Lake resource as it builds a broader district-scale growth pipeline. $CRR @theage
Why did xAI hand over a 220,000-GPU cluster to Anthropic? The technical backdrop to xAI's decision to hand Colossus 1 over to Anthropic in its entirety is more interesting than it appears. xAI deployed more than 220,000 NVIDIA GPUs at its Colossus 1 data center in Memphis. Of these, roughly 150,000 are estimated to be H100s, 50,000 H200s, and 20,000 GB200s. In other words, three different generations of silicon are mixed together inside a single cluster — a "heterogeneous architecture." For distributed training, however, this configuration is close to a disaster, according to engineers familiar with the setup. In distributed training, 100,000 GPUs must finish a single step simultaneously before the cluster can advance to the next one. Even if the GB200s finish their computation first, the remaining 99,999 chips have to wait for the slower H100s — or for any GPU that has hit a stack-related snag — to catch up. This is known as the straggler effect. The 11% GPU utilization rate (MFU: the share of theoretical FLOPs actually realized) at xAI recently reported by The Information can be read as the numerical fallout of this problem. It stands in stark contrast to the 40%-plus MFU figures achieved by Meta and Google. The problem runs deeper still. As discussed earlier, NVIDIA's NCCL has traditionally been optimized for a ring topology. It works beautifully at the 1,000–10,000 GPU scale, but once you push into the 100,000-unit range, the latency of data traversing the ring once around becomes punishingly long. GPUs need to churn through computations rapidly to keep MFU high, but while they sit waiting endlessly for data to arrive over the network fabric, more than half of the silicon falls into idle. Google sidestepped this bottleneck with its own custom topology (Google's OCS: Apollo/Palomar), but xAI, by my read, has not yet reached that stage. Layer Blackwell's (GB200) "power smoothing" issue on top, and the picture comes into focus. 
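The straggler arithmetic described above can be sketched numerically. The relative per-generation speeds below are illustrative assumptions, not benchmarks; the point is only that a synchronous step advances at the pace of the slowest participant, so mixing silicon generations drags fleet utilization toward the slowest chip even before network and failure effects are counted.

```python
# Illustrative sketch of the straggler effect in synchronous training.
# Relative per-step speeds are hypothetical placeholders (H100 = 1.0).
speed = {"H100": 1.0, "H200": 1.4, "GB200": 2.5}
fleet = {"H100": 150_000, "H200": 50_000, "GB200": 20_000}

# One synchronous step takes as long as the slowest generation present.
step_time = max(1.0 / speed[g] for g in fleet)          # -> 1.0 (the H100s)

# Utilization of each generation = its ideal step time / the actual step time.
util = {g: (1.0 / speed[g]) / step_time for g in fleet}

# Fleet-wide average utilization, weighted by GPU count.
total = sum(fleet.values())
avg_util = sum(util[g] * fleet[g] / total for g in fleet)

for g in fleet:
    print(f"{g}: runs at {util[g]:.0%} of its own peak while waiting")
print(f"fleet-average utilization ceiling: {avg_util:.0%}")
```

Even under these generous toy numbers the ceiling is well below 100% from compute heterogeneity alone; the 11% MFU reported in the text reflects the additional losses from network latency, stragglers that stall, and stack mismatches layered on top.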
According to Zeeshan Patel, formerly in charge of multimodal pre-training at xAI, Blackwell GPUs draw power so aggressively that the chip itself includes a hardware feature for smoothing power delivery. xAI's existing software stack, however, was optimized for Hopper and does not understand the characteristics of the new hardware; when it imposes irregular loads on the chip, the silicon physically destructs — literally melts. That means the modeling stack must be rewritten from scratch, which in turn means scaling is far harder than most of us imagine. Pulling all of this together points to a single conclusion. xAI judged that training frontier models on Colossus 1 simply was not efficient enough to be worthwhile. It therefore moved its own training workloads wholesale onto Colossus 2, built as a 100% Blackwell homogeneous cluster. Colossus 1, on the other hand — whose mixed architecture is far less crippling for inference, which parallelizes more forgivingly — was leased in its entirety to an Anthropic that desperately needed inference capacity. Many observers point to what looks like a contradiction: Elon Musk poured enormous capital into building Colossus, only to hand the core asset over to a direct competitor in Anthropic. Others read it as xAI capitulating because it is a "middling frontier lab." But these are surface-level reads. Look at the numbers and a different picture emerges. xAI today holds roughly 550,000+ GPUs in total (on an H100-equivalent performance basis), and Colossus 1 (220,000 units) accounts for only about 40% of the total available capacity. Colossus 2 — built entirely on Blackwell — is already operational and continuing to expand. Elon kept the all-Blackwell homogeneous cluster (Colossus 2) for himself and leased out the older, mixed-generation Colossus 1. In other words, he handed the pain of rewriting the stack — the MFU-11% debacle — to Anthropic, while keeping his own focus on training the next generation of models. 
The real point, then, is this. Elon's objective appears to be positioning ahead of the SpaceXAI IPO at a $1.75 trillion valuation, currently floated for as early as June. The narrative SpaceXAI now needs is that xAI — long the "sore finger" — is not merely a research lab burning cash, but a business with a "neo-cloud" model in the mold of AWS, capable of leasing surplus assets at high yields. From a cost-of-capital perspective, an "AGI cash incinerator" is far less attractive to investors than a "data-center landlord generating cash." As noted above, the most important detail of the Colossus 1 lease is that it is for inference, not training. Unlike training, inference requires far less tightly synchronized inter-GPU communication. Even when the chips are heterogeneous, the workload parcels out cleanly across them in parallel. The straggler effect — the chief weakness of a mixed cluster — is essentially neutralized for inference workloads. Furthermore, with Anthropic occupying all 220,000 GPUs as a single tenant, the network-switch jitter (unanticipated latency) that arises under multi-tenancy disappears. The two sides' technical weaknesses end up complementing each other almost exactly. One insight follows. As a training cluster mixing H100/H200/GB200, Colossus 1 was an asset that could only deliver an MFU of 11%. The moment it was handed over to a single inference customer, however, that asset transformed into a cash-flow asset rented out at roughly $2.60 per GPU-hour (a weighted average of the lease rates across GPU types). For xAI, what was a "cluster from hell" for training has become a "golden goose" minting $5–6 billion in annual revenue when redeployed for inference. Elon's genius, I would argue, lies not in the model but in this asset-rotation structure. The weight of that $6 billion becomes clearer when set against xAI's income statement. Annualizing xAI's 1Q26 net loss yields roughly $6 billion in losses per year. 
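The revenue figure quoted above can be checked back-of-envelope. Taking the text's blended rate of roughly $2.60 per GPU-hour across all 220,000 GPUs, and assuming (my assumption, not the source's) continuous, fully leased operation:

```python
# Back-of-envelope check of the Colossus 1 lease revenue quoted above.
# Assumes the full fleet is leased continuously at the blended rate.
gpus = 220_000
rate_per_gpu_hour = 2.60        # $/GPU-hour, blended figure from the text
hours_per_year = 24 * 365

annual_revenue = gpus * rate_per_gpu_hour * hours_per_year
print(f"${annual_revenue / 1e9:.2f}B per year")   # ≈ $5.01B
```

That lands at the low end of the $5–6 billion range cited, consistent with less-than-perfect utilization being priced into the higher estimate.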
The $5–6 billion in annual revenue generated by leasing Colossus 1 to Anthropic, in other words, almost perfectly hedges xAI's loss figure. This single deal effectively pulls xAI to break-even. Heading into the SpaceXAI IPO, this functions as a core line of financial defense. From a cost-of-capital standpoint, if the image shifts from "research lab burning cash" to "infrastructure tollgate stably printing $6 billion a year," the entire tone of the offering can change. (May 8, 2026, Mirae Asset Securities)
One of the better moments in training was looking over in the NBL and thinking, “wow, I think that’s my wife.” Getting to work alongside Anna there has been a real highlight, and I’m glad this moment made it onto video.
james earl jones did not speak in public for almost eight years as a child. by the time he found his voice he could read poetry without stuttering. half a century later he became the most recognizable voice on earth… january 17, 1931. arkabutla, mississippi. he is born to a teen mother. his father leaves before he is born. he will not meet him until he is 21 years old. age 5. the family moves north to michigan. somewhere in that move he loses his voice. he develops a stutter so severe that he chooses silence. for the next eight years he writes notes instead of speaking out loud. age 14. browning high school in dublin michigan. donald crouch, his english teacher, makes him read a poem he wrote in front of the class. he reads it without stuttering. the bridge between writing and saying becomes the spine of the rest of his career. 1953. he is drafted in the wind-down of the korean war. first lieutenant, cold storage detachment, camp hale colorado. he spends 18 months in the rocky mountains and then walks straight onto the lower east side off-broadway scene with his GI bill. 1969. wins the tony for best actor in 'the great white hope.' he is 38 years old. 1977. samuel goldwyn studio, los angeles. he records the entire darth vader role for star wars in two and a half hours. flat fee: 7,000 dollars. he asks for no screen credit. he tells the producers that david prowse, the actor in the suit, should get sole billing because prowse is the one 'doing the work.' 1989. cnn records his voice for the network identification. 'this is cnn.' it stays the network id for the next 35 years. 2011. the academy gives him an honorary oscar at the governors awards alongside oprah and dick smith. he already had the tony, the emmy, and the grammy. it is not technically EGOT but it is the rest of the matched set. 2022. he signs over the rights to his vader voice to lucasfilm. they will use ai to generate new vader lines from his archive. 
he is 91 and tells reporters he has 'said all of vader's words already.' september 9, 2024. pawling, new york. age 93. peaceful death at home. a child who chose silence for eight years grew up to define the most recognizable voice in cinema history. drop the james earl jones moment that still gives you chills.
The "invisible nerve center" of semiconductor packaging: inline inspection and the repricing of OSATs

The semiconductor industry is going through a shift in its center of gravity: performance gains no longer come only from shrinking transistors, but increasingly from packaging. 2.5D, 3D, HBM, and chiplets all, in essence, move "system capability" into the packaging step. This directly raises the strategic standing of OSATs (Outsourced Semiconductor Assembly and Test). And the growing importance of packaging has driven rapid growth in inline inspection.

An OSAT is responsible for two things:
- packaging bare die into usable chips (assembly)
- verifying that the chips work (test)

This used to be a low-tech, low-margin step. In the AI era, that has changed:
- multi-die integration (chiplets)
- HBM stacking
- nm-level alignment requirements (hybrid bonding)

Packaging is becoming a performance bottleneck + yield bottleneck + cost bottleneck.

Inline is a mode of production: all process steps run continuously, with real-time inspection and feedback during production (closed loop). Its counterpart is offline: build first, test afterward (open loop).

Inline inspection in advanced packaging falls into three main categories:
1) Optical inspection (the workhorse): bump height, overlay (alignment), surface defects. Fast, and can run inline on 100% of units.
2) X-ray inspection: solder-joint voids, TSV defects, internal structural problems. Can see inside the part, but slow, so it is mostly used for sampling.
3) Electrical test: functional verification and performance binning. Closer to final test; not part of the core inline control system.

The goal of inline inspection is not "maximum accuracy" but sufficiently accurate real-time feedback without degrading line throughput. The core tension: precision up, speed down; speed up, precision down. The value of advanced equipment lies in finding the optimum inside that tradeoff.

The barriers to inline inspection come from several stacked dimensions:
1) Physical limits: nm-level alignment and μm-scale structures, i.e. near-laboratory precision in an industrial environment.
2) The speed-vs-precision engineering balance: high throughput and high precision at the same time.
3) Algorithms and data: defect recognition and pattern analysis, heavily dependent on historical data and continuous training.
4) Process coupling: measure → adjust the process → measure again, forming a closed-loop system.
5) Customer qualification: TSMC / Samsung Electronics / Intel; qualification cycles run 1–3 years, and once a tool is designed in, it is very hard to displace.

So the bar is extremely high. An inline tool is not just an instrument; it becomes part of the customer's manufacturing system. The market is therefore highly concentrated:
- System-level control: KLA Corporation, Applied Materials → control the data and the closed loop
- Key measurement nodes (the source of alpha): Camtek Ltd., Onto Innovation, Nova Ltd. → control the key measurement dimensions

Comparing the three core players (Onto / Nova / Camtek): all three are on the inline track, but they occupy fundamentally different positions.

One-line summary: Onto = breadth (platform); Nova = depth (front-end process); Camtek = leverage (advanced packaging / HBM).

1️⃣ Onto Innovation. Positioning: covers both front-end and packaging; optical metrology + inspection + litho. Strengths: the broadest product line, the most diversified customers, strong resilience across cycles. Weaknesses: less single-point technical depth than Nova; less packaging focus than Camtek.

2️⃣ Nova Ltd. Positioning: a core front-end metrology player. Strengths: the deepest technology, the tightest process binding, the strongest data moat. Weaknesses: limited packaging participation; less upside leverage than Camtek.

3️⃣ Camtek Ltd. Positioning: advanced packaging (HBM / 3D). Strengths: focused on 3D inspection, directly driven by HBM demand, extremely high usage frequency. Weaknesses: a narrower product line; more sensitive to the cycle.

The nature of the competition: KLA = the control system; Onto = broad coverage; Nova = deep measurement; Camtek = core packaging inspection. This is not a single-winner market; it is one leader per key measurement dimension.

Packaging is a manufacturing capability; inspection is a control capability. The difference: packaging → capacity can be added and competed for; inspection → embedded in the flow and hard to replace.

Inline inspection has three core characteristics: high-frequency use (every step is measured), tight binding (process coupling), and yield determination (direct impact on profit). In this system, whoever connects every node from equipment to data and holds the "feedback rights" holds the power to allocate the profits.

Disclaimer: I hold the names mentioned in this piece; my views are necessarily biased. Not investment advice. DYOR.
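The closed-loop idea in the post (measure each unit as it is produced, compare against target, feed a correction back into the process, measure again) can be sketched as a toy control loop. Every name and number here is invented purely for illustration; real inline metrology systems are vastly more complex.

```python
# Toy sketch of inline (closed-loop) inspection vs offline (open-loop) test.
# All parameters are hypothetical; this only illustrates the control idea.

def run_line(n_units, target=100.0, gain=0.5, drift=0.8):
    """Simulate a drifting process where inline feedback corrects each unit."""
    setting = target
    bias = 0.0
    errors = []
    for _ in range(n_units):
        bias += drift                      # uncorrected process drift
        measured = setting + bias          # inline measurement of this unit
        error = measured - target
        errors.append(abs(error))
        setting -= gain * error            # closed loop: adjust immediately
    return errors

closed = run_line(50)
# Open loop ("offline"): same drift, but no per-unit correction.
open_loop = [abs(0.8 * (i + 1)) for i in range(50)]

print(f"closed-loop mean |error|: {sum(closed) / len(closed):.2f}")
print(f"open-loop  mean |error|: {sum(open_loop) / len(open_loop):.2f}")
```

With feedback the error settles at a small bounded value (drift/gain); without it the error grows without limit until the batch is tested offline, which is the yield argument the post is making.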
GPT 2 Image Prompt: Detailed Ghibli-style portrait of a princess kneeling beside a koi pond brimming with glowing lotus blossoms. She wears a soft blue and cream kimono embroidered with swirling waves and fox spirits. A delicate paper parasol rests against her shoulder, and her hair is crowned with cherry blossom branches threaded with tiny rainbow prisms that scatter gentle light onto the water’s surface.
Onto Innovation recently took a 27% stake in Rigaku Holdings. This is a very clear strategic pivot: from "surface inspection" toward "3D structural inspection", essentially staking out the process-control entry point of the advanced-packaging era.

Onto's core business is process control in semiconductor manufacturing: inspection, metrology, packaging lithography, and software systems. It determines yield and, like KLA, is a classic "complexity toll-taker". As processes move from 2D to 3D, the importance of such companies is rising systematically.

In the past, inspection relied mainly on optics and electron beams, solving problems that were "visible". But in structures such as HBM, CoWoS, chiplets, and hybrid bonding, defects increasingly hide inside the part, and the traditional methods start to fail. X-ray becomes an essential tool, and that is precisely the core logic of Onto's stake in Rigaku: filling the gap in internal inspection so as to cover the complete "surface + internal" inspection chain.

In terms of market structure, Onto's real opportunity lies not in the overall market but in structural niches. Advanced-packaging inspection is growing faster than the industry average, and the niches involving 3D structures (such as X-ray and hybrid-bonding inspection) may grow faster still. The company's business mix has already tilted clearly toward advanced packaging, which gives it growth leverage well above the industry average.

Competitively, the industry is dominated by KLA Corporation, which holds more than half the market and sets the standards; large equipment vendors such as Applied Materials and ASML have cross-domain capabilities and can squeeze single-point suppliers with full-line solutions; and companies such as Camtek and Nova compete with Onto directly in the niches. Onto itself sits in the middle: its product line is not comprehensive, but it has real depth in the advanced-packaging step.

Its strengths are an early position in advanced packaging, a product mix concentrated in high-growth areas, and a degree of technical barrier and profitability. Its weaknesses are equally clear: weaker customer lock-in, incomplete system capability, and a remaining gap to the leader in some high-end inspection capabilities. Overall, the moat is middling; it has not yet become irreplaceable.

The key variable for the company's future position is hybrid bonding. As interconnects move from bumps to direct bonding, the requirements for overlay precision and interface-defect control rise sharply, and inspection and metrology become markedly more important. Onto already has a base in overlay and advanced-packaging inspection, and X-ray fills out the capability, so it can cover most links of the hybrid-bonding inspection chain. This technology could keep its related businesses growing above the industry rate for the next 3–5 years.

Onto's investment logic does not simply follow the semiconductor cycle; it hinges on whether the company can upgrade from a "participant" in advanced packaging into a "key node" in 3D structural inspection. Hybrid bonding determines whether it can earn stable excess growth. If it can close the capability loop in X-ray and 3D inspection, its moat could widen markedly; if not, it will remain in the middle position, suppressed by KLA and marginalized by the big vendors.

In essence, this company is transforming from an equipment supplier into a "complexity-control entry point". Whether it completes that transformation determines its ceiling for the next five years.

Disclaimer: I hold the stocks mentioned in this article; my views are highly subjective. Not investment advice. DYOR.
hey @grok swap this outfit onto me in this photo!
In the Backpack tokenomics, we have one guiding principle. - Insiders "dumping on retail" should be impossible: no founder, executive, employee, or venture investor should receive wealth from the token until the product hits escape velocity. Of course, this raises the question: what does it mean to "hit escape velocity"? Every project is different, and it's impossible to generalize. For Backpack, the answer is clear: we want to IPO in the USA. Going public might happen quickly, it might happen not so quickly, and in fact, it might not happen at all. In any case, we're going for it. But before going public, we have to grow--a lot. The odd thing about Backpack's growth over the past year--and in fact one of the things that makes Backpack so different from basically every token project in crypto--is that, today, Backpack Exchange only serves about 48% of the world. We've been very slow, very intentional about opening up our product to the world, ensuring that we have every "i" dotted and every "t" crossed as a regulated financial institution. It's growth that sometimes feels like running with a parachute, but we are happy to take the long path, because it's precisely that parachute that will allow us to fly. For those who don't know us, the reason for this is simple. Backpack is trying to not only build great crypto products, but we're also trying to build great TradFi products. We're trying to not only give our users access to every crypto asset, every blockchain, and every decentralized application, but we're also getting banking rails around the world, USD client money accounts in the USA, EUR in the EU, JPY in Japan--every currency on every major payment network you can imagine. We're trying to build a great securities product, whether that's getting access to your favorite stocks in a traditional brokerage or bidding on primary shares of a company about to go public on NASDAQ.
We want to serve not only retail users worldwide, but also regulated counterparties and regulated institutions around the world with regulated products. All of this takes an enormous amount of time, effort, blood, sweat, and tears. We've been working on this for over three years at this point, laying an international foundation for the company and for the product slowly but surely, brick by brick. If we're lucky, we'll spend a lifetime. What this all means is that, in the most literal sense--and I know this sounds silly--we're just getting started. We still have half the world to open up into. We still have some of our most exciting products to launch. And this leads to our next guiding principle in our tokenomics. - Liquid tokens should exclusively go to users, fueling growth triggered by key product milestones. Every time we open up a new region, every time we launch a new product, that's an opportunity to grow. Open up EU => grow. Open up Japan => grow. Open up the USA => grow. Open up predictions => grow. Open up stocks => grow. Open up card => grow. Like gasoline on a fire, the token serves to continuously kickstart new markets in the same way points kickstarted Seasons 1-4. With every growth lever we pull, tokens unlock in a predictable way to users, bringing in a new wave of token holders, growing the community, and allowing the product to soar to new heights. The objective constraint for this to work is precise: the value of added growth created by new token unlocks must always be greater than the dilution of those unlocks. As long as that condition holds, we can continue to unlock tokens directly to our most active users, growing along the way. Last but not least is the question: Ok, so if all the liquid tokens are going to users, then what about the team? How exactly do you remain incentive-aligned while ensuring the team cannot unlock, dump on retail, and become enormously wealthy without building something great?
And the answer is simple: not a single founder, executive, team member, or venture investor has been given a direct token allocation. The entire "team allocation" sits in a "corporate treasury", i.e. on the balance sheet of the Backpack company--locked until at least one year post-IPO. The team owns equity in the company, and the company owns a large percent of the token supply. It's not until the company goes public (or has some other type of equity exit event) that the team can earn any wealth from the project. It's not until the company has access to the largest, most liquid capital markets in the world by going public--and it's not until the company has done all the hard work to earn access to those markets--that the team can reap the rewards of the value created by the Backpack community from now until then. We either go big, or we go home.
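The unlock constraint stated earlier ("the value of added growth created by new token unlocks must always be greater than the dilution of those unlocks") can be made concrete as a per-token check. The function name and all the numbers below are hypothetical, chosen only to illustrate the inequality, not Backpack's actual figures:

```python
# Hypothetical check of the unlock constraint described in the post: an
# unlock is accretive only if the value it adds exceeds the dilution.

def unlock_is_accretive(network_value, supply, unlock, value_added):
    """Return True if per-token value rises despite the larger supply."""
    value_before = network_value / supply
    value_after = (network_value + value_added) / (supply + unlock)
    return value_after > value_before

# A 5% supply unlock that drives 8% growth in network value: accretive.
print(unlock_is_accretive(1_000_000_000, 100_000_000, 5_000_000, 80_000_000))
# The same unlock driving only 3% growth: dilutive.
print(unlock_is_accretive(1_000_000_000, 100_000_000, 5_000_000, 30_000_000))
```

The check reduces to exactly the rule in the text: the percentage growth in network value must exceed the percentage growth in supply for existing holders to come out ahead.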