
Search results related to "Kimi"

Content containing "Kimi"
CodexBar 0.26.0 is live ⚡ Kiro, Antigravity, OpenRouter, Kimi 🧭 calmer menus + keyboard nav 📊 better Codex/Claude limits and cost scoping 📦 named macOS assets, CLI + Homebrew fixes
Meet Kimi Web Bridge - Kimi's browser extension. The agent can now interact with websites like a human: search, scroll, click, type, and complete tasks. Supports Kimi Code CLI, Claude Code, Cursor, Codex, Hermes, and more. Available now on the Chrome Web Store.
The Hermes Agent Creative Hackathon sponsored by @Kimi_Moonshot has ended! Finalists were selected by Nous and Kimi staff out of 227 submissions on creativity, usefulness and presentation. We were absolutely blown away by the creativity of the things you all built using Hermes. It was tough choosing winners, huge thank you to all who participated! Winners below:
FREEBUFF lets you run free coding agents using DeepSeek V4 Pro, Kimi K2.6, and MiniMax M2.7.
Alibaba, Meituan, and Tencent's combined paper gains on these three AI companies are, at a rough estimate, already over 100 billion RMB. They've made a killing on the investments, though next to the market value the three giants themselves have shed, it's still a huge net loss. And though all three invested in AI, their styles are completely different: Meituan invested the least and earned the most, Alibaba cast the widest net (partly paying in compute), and Tencent sits in between.

1. Zhipu first: Meituan's 45x crushes the field

Zhipu's market cap is now about 350 billion RMB (HKEX 02513; in early May the stock briefly broke HK$1,000 intraday, pushing total market cap above HK$400 billion). Zhipu's GLM-5.1 has climbed into the global top 3 among large models, and its coding plan sells out daily.

Meituan bet earliest. In the March 2023 B2 round it put 300 million RMB straight in at a 3.2 billion post-money valuation, for over 10% of the company. It never added to the position; after dilution from later rounds and the IPO, Meituan still holds 3.91%, worth 13.7 billion. 300 million became 13.7 billion: a net gain of 13.4 billion, a 45x return. That number is top-tier across China's entire primary market.

Tencent came later, entering only in the August 2024 B4 round with 200 million, by which point the post-money valuation had reached 7.2 billion, for 2.7%. After dilution it holds 1.58%, worth 5.5 billion, a 27x return. Not bad, but next to Meituan, arriving a year-plus later cut the multiple in half.

Alibaba's path is the most convoluted. Ant-affiliated Shanghai Yunjia (上海云玡) first subscribed 667,000 RMB of Zhipu's registered capital for 150 million in the B3 round; post-IPO it holds 1.54%, worth 5.4 billion, a 33x return. Alibaba itself did fine too. Ant, through Shanghai Yunjia and Shanghai Feijia (上海飞玡), holds a combined 3.66%, worth 12.8 billion at a cost of 490 million (after deducting the 110 million transferred to Alibaba), a 25x return. The Alibaba camp (Alibaba + Ant) holds a combined 5.2% of Zhipu, with paper gains of about 17.5 billion.

2. MiniMax: Alibaba heavy, Tencent following, Meituan absent

Alibaba bet heavily on MiniMax. Through Alisoft it holds 12.52%, worth about $3.7 billion at the current market cap. Alibaba joined the B round and the cornerstone round, but MiniMax never disclosed the exact amounts; at a rough estimate of $600 million in cost, that's roughly a 5.2x gain, about $3.1 billion (21 billion RMB).

Tencent holds 2.37%, worth about $700 million. It entered slightly earlier than Alibaba, so its cost should be lower; at an estimated ~$100 million, that's about a 6x gain. The absolute amount trails Alibaba's, but with better cost control the multiple is actually higher.

Meituan did not invest in MiniMax.

3. Kimi: Alibaba may be the biggest winner; Meituan is also betting big

Moonshot AI (月之暗面) hasn't listed yet. Its latest $2 billion round just closed at a post-money valuation above $20 billion (~140 billion RMB), led by Meituan Long-Z (美团龙珠).

Alibaba was Kimi's most important early financial investor. In 2024 it bought roughly 36% for $800 million, though part of that was settled in Alibaba Cloud compute, with cash of about $600 million. In February 2026 it joined a $700 million round, amount undisclosed. On a 36% stake, the position is now worth about $7.2 billion, a paper return of about 9x. But that multiple assumes Alibaba wasn't heavily diluted in later rounds and that the $20 billion valuation holds; both conditions are uncertain.

Tencent joined a $300 million round in August 2024 at a $3.3 billion valuation, and co-led the $700 million round in February 2026, but neither amount was disclosed, so precise paper gains can't be computed. Given Tencent's usual style, it was most likely following rather than taking a heavy position.

Meituan's exposure splits in two. Wang Huiwen personally invested about $70 million cumulatively (~490 million RMB), roughly a 5x return at the current valuation; he got in early and did fine. Meituan Long-Z, meanwhile, led the latest $2 billion round with over $200 million of its own; the valuation hasn't moved since, so there's no paper gain yet.

A few interesting points:

1. Meituan's return multiple crushes the rest: 300 million in, 13.4 billion of net gain out, 45x. The reason is simple: enter early, pay little. It was in by the B2 round.

2. Alibaba invested in all three (Zhipu, MiniMax, Kimi), has the largest total paper gains, put large amounts into each, and paid for a sizable share of it in compute.

3. Tencent is the most balanced: 27x on Zhipu, 6x on MiniMax, an estimated 6x on Kimi, and a blended return around 11x.

4. Kimi may be Alibaba's single biggest source of returns: a 36% stake at a $20 billion valuation is worth $7.2 billion, and if the eventual IPO prices higher, that number balloons further.

All three AI stocks are wildly volatile. Zhipu went from HK$131 on its first trading day to HK$840+, and MiniMax from HK$165 to HK$1,200+, but the pullbacks have been brutal too. Until realized, these gains are all paper wealth.

One more point worth making: none of the three giants invests in AI purely for financial returns. Alibaba needs compute customers, Tencent needs ecosystem positioning, Meituan needs a technology moat. The motives differ, but all three are investing so as not to be replaced.
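The return multiples in the post are simply paper gain divided by cost. A minimal sketch using the post's rough Zhipu figures (units: 100M RMB; the post rounds 44.7x to "45x", 26.5x to "27x", and so on):

```python
def paper_return(cost, current_value):
    """Return (net gain, multiple) for a stake still held on paper.

    The multiple here is gain / cost, matching the post's convention.
    """
    gain = current_value - cost
    return gain, gain / cost

# The post's rough estimates for Zhipu stakes (cost, current value).
stakes = {
    "Meituan": (3.0, 137.0),   # B2 round, March 2023
    "Tencent": (2.0, 55.0),    # B4 round, August 2024
    "Ant":     (4.9, 128.0),   # via Shanghai Yunjia / Feijia
}

for investor, (cost, value) in stakes.items():
    gain, mult = paper_return(cost, value)
    print(f"{investor}: +{gain:g} ({mult:.1f}x)")
```

These are paper multiples only; as the post notes, nothing is realized until the stakes are sold.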
Announcing the Artificial Analysis Coding Agent Index! Our new coding agent benchmarks measure how combinations of agent harnesses and models perform on 3 leading benchmarks, plus token usage, cost, and more. When developers use AI to code they're choosing a model, but also pairing it with a specific harness. It makes sense to benchmark that combination to understand and compare performance.

The Artificial Analysis Coding Agent Index includes 3 leading benchmarks that represent a broad spectrum of coding agent use:
➤ SWE-Bench-Pro-Hard-AA: 150 realistic coding tasks that frontier models struggle with, sampled from Scale AI's SWE-Bench Pro
➤ Terminal-Bench v2: 84 agentic terminal tasks from the Laude Institute, ranging from system administration and cryptography to machine learning. 5 tasks were filtered out due to environment incompatibility
➤ SWE-Atlas-QnA: 124 technical questions developed by Scale AI about how code behaves, root causes of issues, and more, requiring agents to explore codebases and give text answers

Analysis of results:
➤ Opus 4.7 and GPT-5.5 lead the Index: Opus 4.7 in Cursor CLI scores 61, followed closely by GPT-5.5 in Codex and Opus 4.7 in Claude Code at 60. GPT-5.5 in Cursor CLI follows at 58.
➤ Open-weights models are competitive but still trail the leaders: GLM-5.1 in Claude Code is the top open-weights result at 53, followed by Kimi K2.6 and DeepSeek V4 Pro in Claude Code at 50. These are strong results, but still meaningfully behind the top proprietary models.
➤ Gemini 3.1 Pro in Gemini CLI underperforms: it scores 43, well below where Gemini 3.1 Pro sits on our Intelligence Index, highlighting that Gemini's performance in Gemini CLI remains a relative weak spot for Google's offering.
➤ Cost per task (at API token pricing) varies >30x: Composer 2 in Cursor CLI is cheapest at $0.07/task, followed by DeepSeek V4 Pro in Claude Code at $0.35/task and Kimi K2.6 in Claude Code at $0.76/task. At the high end, GPT-5.5 in Codex costs $2.21/task, while GLM-5.1 in Claude Code costs $2.26/task. Both were driven by high token usage, and in GPT-5.5's case also by a relatively higher per-token cost.
➤ Token usage varies >3x: GLM-5.1 in Claude Code uses the most tokens at 4.8M/task, followed by Kimi K2.6 at 3.7M/task and DeepSeek V4 Pro at 3.5M/task. GPT-5.5 in Codex uses 2.8M tokens/task, substantially more than Opus 4.7 in Claude Code at 1.7M/task. In GLM-5.1's case, the higher token usage, cost, and execution time were partly driven by the model entering loops on some tasks.
➤ Cache hit rates remain high but vary materially: they range from 80% to 96% across combinations. Provider routing, harness prompt structure, and cache behavior can materially change the economics of running the same model, given that cached inputs are typically <50% of the API price of regular input tokens.
➤ Time per task varies >7x: Opus 4.7 in Claude Code is fastest at ~6 minutes/task, while Kimi K2.6 in Claude Code is slowest at ~40 minutes/task. Differences in average turns per task, token usage, and API serving speed all contribute; Opus 4.7 needed materially fewer turns to complete a task than every other model, while Kimi K2.6 needed the most.
➤ Cursor made real progress with Composer 2: Composer 2 in Cursor CLI scores 48, near the leading open-weights results, while being the cheapest combination measured at $0.07/task. Cursor has stated Composer 2 is built from Kimi K2.5, showcasing substantial post-training gains.

This is just the start. We are planning to add additional agents (both harnesses and models). Let us know what you would like to see added next.
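The cache-economics point can be made concrete: the effective input price is the cache hit rate blended across the cached and uncached per-token prices. A minimal sketch with hypothetical prices (the post only says cached input is typically <50% of the regular input-token price):

```python
def effective_input_price(full_price, cached_price, hit_rate):
    """Blended $ per million input tokens, weighted by cache hit rate."""
    return hit_rate * cached_price + (1.0 - hit_rate) * full_price

full = 2.00    # $/M regular input tokens (hypothetical)
cached = 0.50  # $/M cached input tokens (hypothetical, 25% of full)

# The post's observed range of cache hit rates: 80% to 96%.
for hit in (0.80, 0.96):
    blended = effective_input_price(full, cached, hit)
    print(f"hit rate {hit:.0%}: ${blended:.2f}/M input tokens")
```

Under these assumed prices, moving from an 80% to a 96% hit rate cuts the blended input price from $0.80 to $0.56 per million tokens, which is why routing and harness prompt structure materially change per-task cost for the same model.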
Another big advantage of OpenCode Go is that its speed is consistent: not necessarily the fastest, but very stable. Domestic coding plans and the likes of Kimi are wildly inconsistent in speed and constantly rate-limited. Now that everyone is piling in, though, whether that advantage holds is hard to say.
The "lobsters" most people are raising... are less capable than Kimi agent K2.6... Strongly recommend trying Kimi's agent mode, and agent swarm... Absolutely ferocious.
Kimi's agent mode is seriously strong. Given a random photo, it worked out the photographer's location.
The Beginning of Infinity (《无穷的开始:世界进步的本源》) puts forward two claims about this world: 1) problems are inevitable; 2) problems are solvable. It's a frightening yet immensely powerful view. Once a person believes that answers can be found in the direction they've committed to, they'll dive in headlong to find them. When lost, define the problem; once the problem is found, solve it. Kimi's founder Yang Zhilin has recommended this book too; he believes AGI can definitely be worked out, so nothing can stop his steps.