
Search results for "Anthropic"

Posts mentioning Anthropic
This is big recent news that many people haven't noticed. But Anthropic also faces some headwinds, as covered in this piece from Ramp Economics Lab:

Anthropic has overtaken OpenAI in enterprise adoption for the first time. According to data from tryramp, the latest Ramp AI Index shows 34.4% of businesses using Anthropic versus 32.3% for OpenAI. Over the past year, Anthropic's adoption rate has quadrupled, while OpenAI's grew just 0.3%.
A few words on how things have been going lately. I've been building a new product while also running an unattended agent that develops a SaaS on its own. It churns away for 20-plus hours at a stretch, committing one or two hundred times a day. The agent handles logic just fine, but its UI work still falls short, even with top-tier models and top-tier skills. I still can't stand the results. UI is ultimately subjective: if it looks wrong to me, I have to adjust it, so tuning the UI still eats a lot of time.

Then, when a tool feels clunky, I stop product development and go tinker with the tool instead. Tinkering with tools is addictive: once hooked, I keep pouring time in and gradually forget that my original goal was the product, not the tool. Today's models are strong enough that it's easy to vibe-code a tool like openclaw / hermes / slock for yourself. But there's a world of detail between vibe-coding something usable and building a product you can ship to other users and that they'll love, even when that user is just yourself. My conclusion: unless building the tool is itself my goal, I should just use what already exists, because the creators of existing tools made the tool their goal.

Also, Anthropic keeps rolling out new restrictions. If one new restriction can break my workflow or tooling, that setup is too fragile. I've always wanted to operate in an "antifragile" way: changes in the external environment shouldn't so easily disrupt how I do things day to day.

Lately, while building the product, I often slip into a state of building and doubting myself at the same time. That never happened before the AI era; now it comes easily. I keep asking myself: is there any point to this product?

One more thing: AI really has unleashed enormous productivity, yet I don't see many interesting products. Everyone is building roughly the same things. Of course, I can't think of anything interesting to build either; otherwise I wouldn't be second-guessing myself while building. Of what I can see, I find @turingou's products the most interesting, so sometimes when I'm on Twitter I open his profile just to check whether he has shipped something new.

I'm quite sure that industries far from the internet and AI hold many opportunities to combine with AI. But we're all trapped in the pure internet-and-AI bubble, so naturally we all build roughly the same things. Knowing the limitation doesn't free me from it, because those industries have nothing to do with my life. I'd love to learn about those distant fields, but sometimes I don't even know where the door is. I've dug through the categories on fiverr and g2; I could have AI research every item there and, based on what it knows about me, work out the vertical-industry product opportunity that fits me best. But without firsthand experience, concepts and language alone won't carry me far. That's exactly where the self-doubt comes from.

Consider: a US real-estate professional, tired of reshooting videos for every house sale, discovers AI can help and builds a SaaS that generates walkthrough/promo videos for US real estate. Who is more likely to make that SaaS succeed: that person, or me, an outsider working purely from concepts and language? I'd say the former, by a wide margin. Their life and lived experience will pull them along, while I'd probably lose interest at the MVP stage. The difference is countless details.

I think focus will matter more and more. With AI's help, many people may build this today and that tomorrow, constantly distracted, and end up shipping nothing. Don't just build. Ship.
In a recent article, Anthropic laid out two scenarios for US-China AI rivalry in 2028. In the second scenario, Chinese AI overtakes the US in the future, and an "authoritarian regime" sets the rules for AI and automates repression; the company frames its current mission as preventing China from pulling ahead. Anthropic argues that US frontier systems remain at least several months ahead of the top models from Chinese AI labs in intelligence, and that the US and its allies still hold advantages in chips and compute infrastructure. At the same time, it says China's "whole-of-nation" strategy is taking effect: China is leveraging distillation, its advantages in talent, energy, and data, and low-cost exports and influence to rise rapidly, and Anthropic worries that Chinese AI systems are being deployed far faster than in the West.
US vs China update. Stanford's AI Index put the US–China gap at 2.7%. Here's what two years of real-world use from the Text Arena shows. Gap three years ago: +278. Today: +29. @AnthropicAI's Claude Opus 4.6 Thinking vs. Baidu's @ErnieforDevs Ernie 5.1 at the top. The US has never lost #1, but the race keeps closing.
The TradeXYZ team is genuinely impressive, with a deep understanding of the market. Take this wave of major exchanges launching Pre-IPO pre-market contracts or new-listing plays: almost everyone stuck to the hottest names, SpaceX, OpenAI, and Anthropic, while XYZ chose to list Cerebras first. That set them apart. Cerebras is itself a hot chip-industry company, and its trading volume and open interest far exceed other platforms' Pre-IPO contracts; to some extent it even helps establish pricing ahead of the US IPO. The bar for participating in US IPO allocations is high and most people can't get in, whereas on XYZ there was a chance to build a position at 200-280 two weeks early. Respect to the @HyperliquidX @tradexyz teams and @sershokunin, one of the few projects this cycle that shows crypto's real value. The only open question about XYZ is whether there will be an airdrop.
Anthropic is paying $3,850 a week to people with no AI experience. No PhD required. No published papers. No prior research background. Just a strong technical mind and a genuine interest in making AI safe. This is the Anthropic Fellows Program, and it is one of the most underrated opportunities in technology right now. Here is exactly what it is.

The Anthropic Fellows Program is designed to accelerate AI safety research and foster research talent, providing funding and mentorship to promising technical talent regardless of previous experience. Fellows work for 4 months on empirical research questions aligned with Anthropic's overall research priorities, with the aim of producing public outputs like a paper. Four months. Full-time. Paid. Mentored by the researchers building the world's most advanced AI.

And the results from the first cohort were not small. Fellows developed agents that identified $4.6 million in blockchain smart contract vulnerabilities and discovered two novel zero-day exploits, demonstrating that profitable autonomous exploitation is now technically feasible. A year prior, an Anthropic fellow developed a method for rapid response to new ASL3 jailbreaks, techniques that block entire classes of high-risk jailbreaks after observing only a handful of attacks. This work became a key component of Anthropic's ASL3 deployment safeguards. Other fellows published the subliminal learning paper, the research proving AI models transmit behavioral traits through unrelated data, which landed in Nature. Others produced the agentic misalignment research showing frontier models resort to blackmail when facing replacement. Others open-sourced attribution graph tools that let researchers trace the internal thoughts of large language models.

Over 80% of fellows produced papers. Over 40% subsequently joined Anthropic full-time. 80% published. 40% hired. From a program that does not require any prior AI safety experience to enter.
Here is what the program looks like in practice. Anthropic mentors pitch their project ideas to fellows, who choose and shape their project in close collaboration with their mentors. You are not assigned busywork. You are not a research assistant. You own the project. You work alongside the people who built Claude, who designed its safety systems, who published the papers that define the field. The stipend is $3,850 USD per week, approximately $61,600 for the full 4 months, with access to a compute budget of approximately $10,000 per fellow per month for running experiments.

Here is what the 2026 program covers. Research areas include scalable oversight, adversarial robustness and AI control, model organisms, mechanistic interpretability, AI security, model welfare, economics and policy, and reinforcement learning. Something for every technical background, not just ML engineers: successful fellows have come from physics, mathematics, computer science, and cybersecurity. You do not need a PhD, prior ML experience, or published papers. The one requirement: work authorization in the US, UK, or Canada. Anthropic does not sponsor visas for fellows.

Here is the timeline you need to know. The next cohort begins July 20, 2026. Applications are reviewed on a rolling basis — earlier applications get more consideration. The process includes an initial application and reference check, technical assessments, interviews, and a research discussion. Applicants are encouraged to apply even if they do not meet every listed qualification. The program values potential, motivation, and research curiosity over rigid credential requirements.

This is the rarest kind of opportunity in technology. A company at the frontier of AI, one valued at over $900 billion, offering outsiders direct access to its research infrastructure, its mentors, and its most important open problems. Paying them generously to do it. And then hiring 40% of them afterward.
Most people who want to work on AI safety spend years trying to publish papers, get into the right PhD program, and find a way in. The Fellows Program is the door they did not know existed. It is open right now.
Anthropic's own team has demonstrated how to write prompts for Claude the right way. The whole thing takes just 24 minutes, it's completely free, and it's taught by Claude's developers themselves. Watch the workshop to the end, follow me, and bookmark this. Its practical value far exceeds those $300 courses you almost impulse-bought. You use Claude all the time, yet you probably have no idea it has these 40 practical prompting techniques.
Anthropic just launched Claude for Small Business, integrating AI directly into the tools small businesses use every day: QuickBooks, PayPal, HubSpot, Canva, DocuSign. Flip a switch in the Claude desktop app and you can launch 15 preset skills with one click: payroll checks, cash-flow forecasting, chasing overdue invoices, producing marketing assets, signing contracts, even fully automated new-hire onboarding.

Pricing is restrained: no extra charge, just the Claude subscription plus whatever the SaaS tools cost. Security is reassuring too: workflows must be initiated and approved by a human, Claude gets no permissions you don't already have, and Team and Enterprise user data is not used for model training by default.

Anthropic has been shipping fast lately: the finance edition last week, the legal-edition update this week, and now small business. The rationale is blunt: US small businesses account for 44% of GDP, yet no one has built AI products specifically for them.

Starting May 14, Anthropic will run free half-day trainings in ten cities including Chicago and Dallas, capped at 100 local small-business owners per session. There's also a free online course with PayPal to get owners up to speed on using AI.

This move isn't exactly friendly to traditional SaaS vendors, though. Claude turns QuickBooks, HubSpot, and the rest into back ends; users never need to open their interfaces. Over the past few months, shares of Salesforce, DocuSign, and similar companies have been sliding. Anthropic CEO Dario Amodei has even said that "individual SaaS vendors could rapidly lose market value, or even fail." The irony: the list of tools Claude now connects to includes several of the very companies he just named. Predicting their demise while relying on their tools...

Product page:
AI compute spending keeps surging into 2026.

Chinese giants:
- ByteDance: about $30B (RMB 200B).
- Alibaba: about $53-69B over three years (RMB 380-480B).
- Tencent: an estimated RMB 300B.

Foreign giants:
- Amazon: $200B (CapEx focused on AI data centers plus custom chips).
- Google: $180-190B (TPUs plus data centers, sharply raised).
- OpenAI: around $190B (Azure plus OpenAI/Stargate projects).
- Meta: $125-145B (in-house chips plus super-clusters).
- Anthropic: roughly $19-20B in direct spend (matching its revenue scale), but locking in massive compute through cloud commitments (AWS $100B+ over 10 years, Google Cloud $200B+ over 5 years), plus $50B of self-built US infrastructure.