
Search results for "Mira"

Posts containing "Mira"
God will always give you the vision… but He won’t live the life for you. He’ll open doors, whisper wisdom, and place opportunities right in front of you — sometimes quietly, sometimes in ways you almost overlook. But opportunity alone isn’t the miracle. The real miracle is what you choose to do with it. Faith isn’t just believing God will move… it’s moving when He shows you where to step. It’s thinking deeper, working harder, and committing fully, even when the outcome isn’t guaranteed. As scripture reminds us, we’re called to “make the best use of the time” and act with wisdom in what we’re given [1]. God gives direction. But we have to bring discipline. God gives the chance. But we have to apply the effort. Because opportunity without action becomes regret. And purpose without work stays a dream. So when God places something in your hands — an idea, a path, a moment — don’t just pray over it… build it. Shape it. Give it your best. That’s where faith becomes real. If you like my story, see my campaign at
An elite unit of the Air Force, in coordination with the U.S. Coast Guard, came to the rescue of 11 Bahamian travelers stranded 80 miles off the coast of Melbourne, Florida, after their plane crashed on Tuesday. The plane's pilot got all his passengers onto life rafts before the 920th Air Rescue Wing arrived by helicopter to airlift them to nearby hospitals. CBS News' @cbenavidesTV has more on the miraculous rescue.
i think confidence shows more than we realize… is it the gaze, the way of speaking… or something else? 🖤
Why did xAI hand over a 220,000-GPU cluster to Anthropic? The technical backdrop to xAI's decision to hand Colossus 1 over to Anthropic in its entirety is more interesting than it appears.

xAI deployed more than 220,000 NVIDIA GPUs at its Colossus 1 data center in Memphis. Of these, roughly 150,000 are estimated to be H100s, 50,000 H200s, and 20,000 GB200s. In other words, three different generations of silicon are mixed together inside a single cluster: a "heterogeneous architecture." For distributed training, however, this configuration is close to a disaster, according to engineers familiar with the setup.

In distributed training, 100,000 GPUs must finish a single step simultaneously before the cluster can advance to the next one. Even if the GB200s finish their computation first, the remaining 99,999 chips have to wait for the slower H100s, or for any GPU that has hit a stack-related snag, to catch up. This is known as the straggler effect. The 11% GPU utilization rate (MFU: the share of theoretical FLOPs actually realized) at xAI recently reported by The Information can be read as the numerical fallout of this problem. It stands in stark contrast to the 40%-plus MFU figures achieved by Meta and Google.

The problem runs deeper still. As discussed earlier, NVIDIA's NCCL has traditionally been optimized for a ring topology. It works beautifully at the 1,000–10,000-GPU scale, but once you push into the 100,000-unit range, the latency of a full traversal around the ring becomes punishingly long. GPUs need to churn through computations rapidly to keep MFU high, but while they sit waiting for data to arrive over the network fabric, more than half the silicon goes idle. Google sidestepped this bottleneck with its own custom topology (Google's OCS: Apollo/Palomar), but xAI, by my read, has not yet reached that stage. Layer Blackwell's (GB200) "power smoothing" issue on top, and the picture comes into focus.
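The straggler arithmetic above is easy to make concrete. The sketch below uses the post's fleet counts but invented per-step times (the roughly 3x H100-to-GB200 speed spread is my assumption, not a measured figure); it shows that lockstep synchronization alone caps utilization well below 100%, with the ring-latency and stack pathologies described above responsible for the rest of the slide toward 11%.

```python
# Hypothetical per-step compute times (seconds) per GPU generation; the
# counts match the post, but the 3x speed spread is an illustrative
# assumption, not a measured figure.
fleets = {"H100": (150_000, 3.0), "H200": (50_000, 2.2), "GB200": (20_000, 1.0)}

# A synchronous training step advances only when every GPU has finished,
# so the step time is set by the slowest cohort: the straggler effect.
step_time = max(t for _, t in fleets.values())

for name, (count, t) in fleets.items():
    print(f"{name:>6}: {count:>7} GPUs idle {1 - t / step_time:.0%} of every step")

# Ceiling on cluster utilization from heterogeneity alone; the gap from
# this ceiling down to the reported 11% MFU is where network latency and
# software-stack problems enter.
busy = sum(n * t for n, t in fleets.values())
ceiling = busy / (sum(n for n, _ in fleets.values()) * step_time)
print(f"utilization ceiling from heterogeneity alone: {ceiling:.0%}")
```

Note that heterogeneity alone produces a fairly mild ceiling; the post's point is that it compounds with the communication and stack problems that appear only at the 100,000-GPU scale.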
According to Zeeshan Patel, formerly in charge of multimodal pre-training at xAI, Blackwell GPUs draw power so aggressively that the chip itself includes a hardware feature for smoothing power delivery. xAI's existing software stack, however, was optimized for Hopper and does not understand the new hardware's characteristics; when it imposes irregular loads on the chip, the silicon physically destructs: it literally melts. That means the modeling stack must be rewritten from scratch, which in turn means scaling is far harder than most of us imagine.

Pulling all of this together points to a single conclusion. xAI judged that training frontier models on Colossus 1 simply was not efficient enough to be worthwhile. It therefore moved its own training workloads wholesale onto Colossus 2, built as a 100% Blackwell homogeneous cluster. Colossus 1, whose mixed architecture is far less crippling for inference (which parallelizes more forgivingly), was leased in its entirety to Anthropic, which desperately needed inference capacity.

Many observers point to what looks like a contradiction: Elon Musk poured enormous capital into building Colossus, only to hand the core asset over to a direct competitor in Anthropic. Others read it as xAI capitulating because it is a "middling frontier lab." But these are surface-level reads. Look at the numbers and a different picture emerges. xAI today holds roughly 550,000+ GPUs in total (on an H100-equivalent performance basis), and Colossus 1 (220,000 units) accounts for only about 40% of total available capacity. Colossus 2, built entirely on Blackwell, is already operational and continuing to expand. Elon kept the all-Blackwell homogeneous cluster (Colossus 2) for himself and leased out the older, mixed-generation Colossus 1. In other words, he handed the pain of rewriting the stack, the MFU-11% debacle, to Anthropic, while keeping his own focus on training the next generation of models.
The real point, then, is this. Elon's objective appears to be positioning ahead of the SpaceXAI IPO at a $1.75 trillion valuation, currently floated for as early as June. The narrative SpaceXAI now needs is that xAI, long the "sore finger," is not merely a research lab burning cash but a business with a "neo-cloud" model in the mold of AWS, capable of leasing surplus assets at high yields. From a cost-of-capital perspective, an "AGI cash incinerator" is far less attractive to investors than a "data-center landlord generating cash."

As noted above, the most important detail of the Colossus 1 lease is that it is for inference, not training. Unlike training, inference requires far less tightly synchronized inter-GPU communication. Even when the chips are heterogeneous, the workload parcels out cleanly across them in parallel. The straggler effect, the chief weakness of a mixed cluster, is essentially neutralized for inference workloads. Furthermore, with Anthropic occupying all 220,000 GPUs as a single tenant, the network-switch jitter (unanticipated latency) that arises under multi-tenancy disappears. The two sides' technical weaknesses end up complementing each other almost exactly.

One insight follows. As a training cluster mixing H100/H200/GB200, Colossus 1 was an asset that could only deliver an MFU of 11%. The moment it was handed over to a single inference customer, however, it transformed into a cash-flow asset rented out at roughly $2.60 per GPU-hour (a weighted average of the lease rates across GPU types). For xAI, what was a "cluster from hell" for training has become a "golden goose" minting $5–6 billion in annual revenue when redeployed for inference. Elon's genius, I would argue, lies not in the model but in this asset-rotation structure.

The weight of that $6 billion becomes clearer when set against xAI's income statement. Annualizing xAI's 1Q26 net loss yields roughly $6 billion in losses per year.
The $5–6 billion in annual revenue generated by leasing Colossus 1 to Anthropic, in other words, almost perfectly hedges xAI's loss figure. This single deal effectively pulls xAI to break-even. Heading into the SpaceXAI IPO, this functions as a core line of financial defense. From a cost-of-capital standpoint, if the image shifts from "research lab burning cash" to "infrastructure tollgate stably printing $6 billion a year," the entire tone of the offering can change. (May 8, 2026, Mirae Asset Securities)
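As a sanity check on the revenue side, the post's own fleet size and weighted-average lease rate bracket the claim (around-the-clock billing assumed, so this is the optimistic end of the range):

```python
# Back-of-the-envelope check of the lease economics quoted above,
# assuming all 220,000 GPUs are billed around the clock.
gpus = 220_000
rate = 2.60    # $/GPU-hour, the post's weighted-average lease rate
hours = 8760   # hours per year

annual_revenue = gpus * rate * hours
print(f"annual lease revenue: ${annual_revenue / 1e9:.2f}B")  # $5.01B
```

That lands at the low end of the quoted $5–6 billion range, against the roughly $6 billion annualized loss it is said to hedge.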
Space launch was a clear case where there was a large difference in efficiency between what was possible and what was done in practice before SpaceX. A large part of that was due to everything being locked in to what (just barely) already worked, with huge risk aversion. With national prestige or a half-billion-dollar geosync satellite on the line, speculative engineering ideas that might result in a public debacle were not welcome. When failure is not an option, success can stay very expensive. You need to experiment to improve, and that fundamentally means being comfortable with failure. If you know it is going to work, it isn’t an experiment.

I have long believed that nuclear power today is in precisely the same state as space launch two decades ago, but the even more pressing question now is whether semiconductor fabrication might be as well. On the one hand, Moore’s Law has been a sequence of heroic miracles of technology at the wafer-fabrication level, grinding out hundreds of compounding small improvements. On the other hand, fabs are “too big to fail”, and there are elements of extreme conservatism at play. Intel’s “Copy Exactly!” fab development exemplifies that mindset: instead of every new building being an opportunity to explore and optimize processes, it was deemed more valuable to just replicate. While each individual machine may be straining against the physical limits of technology, it is possible that the systems orchestrating them all together are far from optimal. The explore/exploit axis is fundamental to all decision making, but human risk avoidance probably biases away from optimal exploration.
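The explore/exploit trade-off the post invokes can be made concrete with a toy two-armed bandit (all numbers invented for illustration): an agent with a risk-averse prior that never experiments stays locked on the incumbent process forever, while even a little exploration discovers that the failure-prone experimental recipe is better on average.

```python
import random

random.seed(42)

# Two fab "process recipes" (numbers invented for illustration): the
# incumbent always works; the experimental one fails 30% of the time
# but is better on average when it works (mean yield 1.4 vs. 1.0).
ARMS = {
    "incumbent": lambda: 1.0,
    "experimental": lambda: 2.0 if random.random() < 0.7 else 0.0,
}

def run(epsilon, steps=20_000):
    # Risk-averse prior: the unproven recipe is assumed worthless.
    estimates = {"incumbent": 1.0, "experimental": 0.0}
    counts = {"incumbent": 0, "experimental": 0}
    total = 0.0
    for _ in range(steps):
        if random.random() < epsilon:
            arm = random.choice(list(ARMS))          # explore: risk a failure
        else:
            arm = max(estimates, key=estimates.get)  # exploit current belief
        reward = ARMS[arm]()
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total += reward
    return total / steps

print(f"no exploration : {run(0.0):.2f} average yield")  # stays on the incumbent forever
print(f"10% exploration: {run(0.1):.2f} average yield")  # discovers the better recipe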
dropped my hydroflask on my toes / foot tonight while wearing open-toe heels. didn’t want to mess up my mascara so i couldn’t cry, and had to walk around in public in socks i (miraculously) had in my purse. wasn’t even walking, more like hobbling & limping. anyway how’s ur night
Sam Altman texts Mira Murati November 19, 2023
Introducing Mirage, a unified virtual filesystem for AI agents! 6 weeks. 1.1M+ lines of code. We rewrote bash from the ground up so cat, grep, head, and pipes work across heterogeneous services. S3, Google Drive, Slack, Gmail, GitHub, Linear, Notion, Postgres, MongoDB, SSH, and more, all mounted side-by-side as one filesystem. Bash that AI agents already know works on every format! cat, grep, head, and wc parse .parquet, .csv, .json, .h5, even .wav! One pipe can stitch S3, Drive, GitHub, Slack, and Linear together, same Unix semantics throughout. Workspaces are versioned too. Snapshot, clone, and roll back the whole thing with one API call. A two-layer cache turns repeated reads into local lookups, so agent loops stay fast and cheap. Drop a Workspace into FastAPI, Express, or a browser app. Wire it into OpenAI Agents SDK, Vercel AI SDK, LangChain, Mastra, or Pi. Run it alongside Claude Code and Codex. Site: GitHub: #AIAgents# #OpenSource# #AgenticAI# #Strukto# #Filesystem# #VFS#
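The pitch, one filesystem surface over heterogeneous backends, is easiest to see in miniature. The sketch below is not Mirage's actual API (the post doesn't show it); it is a toy illustration of the idea: mount backends under one namespace, then run a cat/grep-style pipeline across them with the same semantics everywhere.

```python
from typing import Callable, Dict

# Toy unified VFS: each "mount" maps a path prefix to a backend read
# function. Real backends would call S3, Slack, etc.; here they are
# stubbed with in-memory data.
class VFS:
    def __init__(self) -> None:
        self.mounts: Dict[str, Callable[[str], str]] = {}

    def mount(self, prefix: str, reader: Callable[[str], str]) -> None:
        self.mounts[prefix] = reader

    def cat(self, path: str) -> str:
        # Longest-prefix match picks the backend, like a mount table.
        prefix = max((p for p in self.mounts if path.startswith(p)), key=len)
        return self.mounts[prefix](path[len(prefix):])

    def grep(self, pattern: str, *paths: str) -> list:
        return [line for p in paths for line in self.cat(p).splitlines()
                if pattern in line]

# Stub backends standing in for S3 and Slack.
s3 = {"/logs/app.log": "boot ok\nerror: disk full\n"}
slack = {"/ops/alerts": "error: disk full reported by bot\nall clear\n"}

fs = VFS()
fs.mount("s3:/", lambda p: s3[p])
fs.mount("slack:/", lambda p: slack[p])

# One "pipeline" across two services, same Unix-ish semantics for both:
hits = fs.grep("error", "s3://logs/app.log", "slack://ops/alerts")
print(hits)
```

The design choice the post is selling is exactly this dispatch layer: agents keep emitting the `cat`/`grep` vocabulary they already know, and the mount table decides which service actually answers.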
Here's the #1 thing most people don't know about Warren Buffett: there is nothing special about Buffett’s stock picking.

That doesn’t mean Buffett wasn’t a great investor. He was! Buffett was, by far, the greatest investor in history. Over 486 months between October 1976 and March 2017, 41 years, Berkshire Hathaway’s Class A stock earned an average excess return of 18.6% per year above U.S. T-bills. Annualized volatility was 23.5%. Sharpe ratio: 0.79. Berkshire’s Sharpe ratio of 0.79 is roughly 1.6 times the broad U.S. stock market’s Sharpe ratio of 0.49 over the same period. Among all large-cap U.S. stocks and mutual funds with 30-plus-year continuous track records, those are unmatched numbers. A dollar invested in Berkshire on October 31, 1976, was worth more than $3,685 by March 31, 2017. A dollar invested in the S&P 500 with dividends reinvested over the same period was worth approximately $76. Buffett beat a passive index by a multiple of 48. But he didn’t do it with stock picking!

Three researchers at AQR Capital Management, Andrea Frazzini, David Kabiller, and Lasse Heje Pedersen, dissected Berkshire’s 50 years of investments through 2013. They expanded and republished their findings in 2018 in the Financial Analysts Journal, the most highly respected industry financial journal, and their work won the Graham and Dodd Award for the best published paper of the year. The paper is called "Buffett’s Alpha." They found that, after accounting for cheap leverage (from the insurance float) and exposure to a handful of publicly documented factor premiums, Buffett’s investment skill, the portion of his returns that cannot be explained by any mechanical strategy, is 0.3% per year. That's statistically indistinguishable from zero. In other words, the alpha that Berkshire enjoyed for 50 years (as it compounded capital at roughly 23% a year!) wasn’t due to Buffett’s stock picking. So, how did he do it?
He did it by gaining access to a huge amount of investment capital that he did not own, for free. Buffett’s track record was built on leverage. That’s a dirty word for most investors, but it's the secret behind Berkshire.

The AQR researchers had access to something most Buffett commentators do not: 40 years of Berkshire’s audited financial statements and the full quarterly history of the public 13F stock portfolio. The researchers asked a specific question: if I take Berkshire’s monthly stock returns from October 1976 through March 2017 and run a linear regression against a set of well-documented risk factors (market beta, size, value, momentum, and two newer factors called Betting-Against-Beta and Quality-Minus-Junk, detailed below), how much of Buffett’s performance can the factors explain? And after the factors have been stripped out, how much excess return remains?

The data show clearly that a few qualities drove Berkshire’s results. First, Buffett has always preferred large-cap stocks, contrary to the popular image of him as a small-cap value investor. He buys elephants. Second, no surprise, Buffett buys cheap. Berkshire is almost six standard deviations away from neutral on the value axis. So far the picture is ordinary: every large-cap value manager in America loads positively on size and on value. Buffett’s genius lies in the last two factors.

These last two factors are a little complicated, but please stick with me. There’s a newer factor that, like value and size, characterizes Buffett’s strategy. It’s called Betting-Against-Beta (“BAB”): intentionally investing in stocks with very low volatility. The BAB factor captures the excess return that accrues to investors who own low-beta stocks. Low-beta stocks have historically earned higher risk-adjusted returns than high-beta stocks. Financial theory teaches that higher beta (higher risk) should mean higher return. But it doesn’t. In fact, the opposite occurs.
And Buffett was one of the very first people to figure this out. Why does this factor persist? In an efficient market, once the factor is known to investors, they should bid up the price of low-beta stocks until it no longer provides an edge. The explanation, per the theory of AQR’s Frazzini and Pedersen, is that because ordinary investors do not use leverage and seek high returns, they create persistent excess demand for more volatile stocks. (Having worked with retail investors for 30 years, I can assure you that is true.) But an investor with access to cheap leverage, Warren Buffett for instance, can exploit the mispricing by owning the low-beta names and levering them up to produce market-beating returns.

And the last factor that matters to Buffett is quality. Buffett buys companies with high returns on invested capital. Quality-Minus-Junk (“QMJ”) is a factor described by Cliff Asness, Frazzini, and Pedersen, all at AQR, in a 2019 paper in the Review of Accounting Studies. The QMJ factor captures the return to owning stocks of high-quality companies (profitable, growing, safe, with high payout ratios) against stocks lacking those characteristics. QMJ has been positive and statistically significant in every major developed equity market for which it has been measured. Berkshire’s loading is 0.37, with a t-statistic of 4.6, meaning it is highly significant to Berkshire’s results.

In plain English: Buffett buys only large, high-quality, low-volatility stocks. But Berkshire’s stock returns were not, in any way, unusual. Any investor buying these same kinds of stocks would have earned the same returns, about 16% a year over time. So how did Berkshire compound at 23% a year? To figure that out, AQR’s researchers built a Berkshire replica.
They constructed a simple, rules-based, publicly investable portfolio that mechanically tilts toward large-cap, cheap, low-beta, high-quality stocks, and levered it 1.6-to-1 to match Berkshire’s insurance-float leverage. The replica’s returns tracked Berkshire’s almost exactly. The authors’ conclusion is unambiguous: “In summary, we find that Buffett has developed a unique access to leverage that he has invested in safe, high-quality, cheap stocks and that these key characteristics can largely explain his impressive performance.”

Berkshire’s cost of insurance float has averaged almost three percentage points below the Treasury-bill rate across fifty years of data. In roughly two-thirds of all years, Berkshire has been paid to hold other people’s money. That is not an investment strategy. That is a financing miracle. It is also the living, breathing heart of Berkshire Hathaway. It’s what Buffett built, starting in 1967 when he paid $8.6 million for National Indemnity’s $19.4 million of float. And it is the factor every retail investor admiring Berkshire’s returns has never paid any attention to.

The 1.6-to-1 leverage that AQR measured over the full period, financed at this negative cost, explains the dollar magnitude of Berkshire’s returns. How do we know? An unleveraged version of the same stock portfolio, which you can approximate by looking at the 13F holdings alone, has earned an average excess return of 12% per year. It’s Berkshire’s leverage that magnifies this excess return to 18.6%.

How does this square with Berkshire’s reported gains? Berkshire’s 18.6% excess return, plus the T-bill rate that averaged roughly 4.7% over 1976–2017, gives you a total nominal return of roughly 23% per year, which is the figure you usually see quoted for Berkshire’s historical performance. The 23% tells you what Berkshire returned.
The 18.6% tells you how much of that return was compensation for taking investment risk, as opposed to the baseline yield every lender to the U.S. government was earning anyway. Put Berkshire’s two “edges” together, systematic factor exposure to cheap, high-quality, low-volatility stocks and roughly 1.6-to-1 leverage delivered through insurance float, and you get Berkshire Hathaway’s 23% annual gains over those four decades. It’s the structure that’s genius, not the stock picking. And that's very important, because it means the original Berkshire formula can work for any investor. I show you exactly how in my new book.
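For what it's worth, the numbers quoted above hang together arithmetically. A quick check, using only figures stated in the post:

```python
# Checking the post's Berkshire arithmetic end to end.
excess_return = 0.186   # annual excess return over T-bills, 1976-2017
volatility = 0.235      # annualized volatility
print(f"Sharpe ratio: {excess_return / volatility:.2f}")         # 0.79, as quoted
print(f"vs. market:   {excess_return / volatility / 0.49:.1f}x")  # 1.6x the market's 0.49

# Leverage story: unlevered 13F-style portfolio at ~12% excess,
# levered roughly 1.6x with insurance float.
unlevered_excess = 0.12
leverage = 1.6
print(f"levered excess: {leverage * unlevered_excess:.1%}")       # 19.2%, near the measured 18.6%

# Total nominal return = excess return + average T-bill yield.
tbill = 0.047
print(f"total return:   {excess_return + tbill:.1%}")             # 23.3%, the quoted ~23%
```

The simple 1.6 × 12% product is slightly above the measured 18.6%; the gap reflects details the regression handles (financing terms, private holdings) that this one-liner ignores.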
Running the relay station really hasn't made money this whole time; it has barely covered my own AI spending. So I've decided to open-source everything about running one, including how to build and market it, to lower the barrier to entry and make this space even more competitive.

The system has three parts:
• A CN2 China-return dedicated-line server: a VPS hosted overseas but with very fast routing back to mainland China; this is the runtime core.
• sub2api: the core program, which turns web-session accounts into API endpoints.
• Cloudflare: routes traffic through one more hop to speed up access from China while hiding the origin server's real IP.

What you need:
• An overseas VPS on a CN2 GIA or CN2 GT line (recommended: 2 CPU cores, 2 GB RAM, 20 GB+ disk). An ordinary overseas VPS is nearly unusable during China's evening peak; CN2 GIA bypasses the congested public-internet nodes over a dedicated line, so latency from China is usually under 150 ms. If you buy a non-CN2 server, the experience for mainland users will be terrible.
• A domain (buy it on Cloudflare or Namecheap; a cheap .top or .xyz is fine, a few dollars a year).
• A Cloudflare account (free).
• An account pool: early on, use Claude Code Pro accounts plus a batch of registered GPT accounts, and shop around among account and card resellers. Later you can move up to Claude Code Max, Kiro, or a reverse-proxied AWS Bedrock (talk to sales; you can usually get around 28% off), but at the start you only need to keep the Claude Code Pro accounts stable, because you have to age the accounts before upgrading to Max.

The full request path: client in mainland China → resolves to a Cloudflare IP → Cloudflare edge node → CN2 line back to your origin server → Nginx reverse proxy (via the Baota/BT panel) → sub2api → your account pool → the ChatGPT or Claude web backend → the response returns along the same path.

Buying and initializing the CN2 server: common CN2 GIA providers include BandwagonHost (搬瓦工), RackNerd, CloudCone, and Lisahost. For beginners I recommend BandwagonHost's CN2 GIA-E plan: stable, but a bit pricey. On a tight budget, look at Lisahost's Hong Kong CN2 plans. If you know how to set up Nginx and deploy SSL certificates from the command line, do it yourself; if not, use the Baota (宝塔/BT) panel popular with Chinese developers: one-click Nginx, one-click SSL, visual reverse-proxy configuration, everything point-and-click. After installing Linux + Nginx + MySQL + PHP, set up the firewall, buy your domain, and add DNS records. Finally, run ping api.yourdomain from the command line; if it returns your server's IP, you're done.

Setting up sub2api: sub2api is an open-source project that converts ChatGPT-web and Claude-web cookies or sessions into an OpenAI-compatible API. Follow sub2api's official guide: install Docker, then pull and start the sub2api container. Put your account-pool data under /www/sub2api/data; the sub2api container reads that directory. See the sub2api project docs for the exact format.

Setting up the Nginx reverse proxy: the target URL is 127.0.0.1:8080, because that is the address the sub2api container listens on. Nginx receives the external request, forwards it to port 8080 on the same machine, sub2api processes it and replies to Nginx, and Nginx sends the response back to the user. Then ask Claude Code how to tune the Nginx config: AI API calls are streaming responses (SSE) that need long-lived connections and no caching, and the default Nginx config breaks in this scenario. Follow Claude's suggestions; in particular, proxy_buffering must be off. If you leave it on, the AI's answer will "stall, then dump all at once" instead of streaming token by token, and the client will feel extremely slow or even time out.

Getting an HTTPS certificate: OpenAI-compatible clients essentially trust only HTTPS, and plaintext HTTP exposes API keys to any network in between. After obtaining a Let's Encrypt certificate, go back to the SSL panel and switch on "Force HTTPS".

Tuning Cloudflare: test HTTPS, enable the Cloudflare proxy, and set Cloudflare's SSL mode to Full (strict). An AI API is a dynamic endpoint, and some of Cloudflare's "optimizations" break streaming responses. Under Cloudflare → your domain → Speed → Optimization, turn all of the following off:
• Auto Minify (HTML/CSS/JS): off.
• Rocket Loader: off.
• Mirage: off.
• Polish: off.

Caching rules: Cloudflare → Caching → Configuration. Set Caching Level to Bypass, or keep Standard and override it later with a page rule. The more thorough approach: Cloudflare → Rules → Page Rules → Create Page Rule.
URL pattern:
Setting: Cache Level = Bypass

Firewall rules: Cloudflare → Security → WAF → Custom rules → Create rule.
Rule 1, rate-limit single IPs: field IP source address; action Rate limiting; at most 30 requests per 10 seconds; challenge or block for 1 hour when exceeded.
Rule 2, block obvious scrapers: field User Agent; operator contains; value python-requests.
Enable Cloudflare Argo Smart Routing ($5/month) to route your traffic along optimal paths inside Cloudflare's network; for mainland users reaching an overseas server it gives a 30–50% speed boost. Recommended if the budget allows.

Testing and going live: test the API with curl, or open CherryStudio or ChatBox and fill in your API base URL and key. For monitoring, use Prometheus/Grafana or just the Baota panel to watch real-time CPU, memory, and traffic. If the sub2api container keeps maxing out the CPU, consider upgrading the server.
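The SSE advice in the walkthrough (long-lived connections, no buffering, no caching) maps onto a short Nginx fragment. A sketch only, assuming sub2api listening on 127.0.0.1:8080 as described; the location path and timeout values are typical choices, not the post's exact config:

```nginx
# SSE-friendly reverse proxy for sub2api; values are typical choices,
# not the post's exact config.
location /v1/ {
    proxy_pass http://127.0.0.1:8080;
    proxy_http_version 1.1;

    # Streaming (SSE) essentials: no buffering, no caching,
    # long-lived connections.
    proxy_buffering off;
    proxy_cache off;
    proxy_set_header Connection "";
    proxy_read_timeout 3600s;

    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}
```

With proxy_buffering left on, Nginx accumulates the upstream response before forwarding it, which is exactly the "stall, then dump all at once" behavior the post warns about.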