
Search results related to "Enterprise"

Content containing "Enterprise"
In the AI era, how should semiconductor companies actually be valued?

I listened to @ShanghaoJin's Space yesterday and got a lot out of it. But I have some different thoughts on how to value the memory sector, and the semiconductor sector as a whole, against the backdrop of the current AI industrial-revolution supercycle, so I'm jotting them down here, and hoping herman will poke holes in them.

For a long time, semiconductors were a textbook cyclical industry. Profits exploded in upturns and evaporated in downturns; many companies traded at dozens of times earnings one year and posted outright losses the next. So the market never really trusted the persistence of semiconductor earnings and preferred P/B, replacement cost, and EV/EBITDA over PE, on the default assumption that those profits were cyclical rather than long-term.

But the AI era is changing all of this. HBM, CoWoS, AI networking, optical modules, advanced packaging, and power and data-center infrastructure are starting to show long-term supply-demand imbalances, and the valuation logic of the whole industry is shifting from an "asset mindset" to a "cash-flow mindset".

As of 2026, the industry is still in a strong AI-driven upcycle. According to SIA data, global semiconductor sales reached $791.7 billion in 2025, up 25.6% year over year, and are expected to approach $1 trillion in 2026. SEMI also expects equipment sales to keep growing through 2026 and 2027. In this environment, many stocks already price in high growth. What matters is the quality of that growth and where it sits in the cycle.

Many people only look at PE, forward PE, or PEG, but the problem with semiconductors is that cyclicality, high growth, heavy capex, and technology generations are all tangled together, so a single valuation multiple is easy to be fooled by. At the top of the cycle profits explode and PE looks deceptively cheap; at the bottom profits collapse and PE looks expensive or even meaningless. What matters is figuring out where in the cycle current earnings sit.

At its core, PE is:

PE = \frac{Market\ Cap}{Net\ Income}

It looks at the earnings that ultimately accrue to shareholders, so it is affected by interest, tax rates, depreciation, and capital structure. EV/EBITDA is closer to the earning power of the operating business itself:

EV/EBITDA = \frac{Enterprise\ Value}{EBITDA}

where:

EV = Market\ Cap + Debt - Cash

Many people wonder why cash is subtracted. Because EV is really asking what the true net cost of buying the whole company would be: you take on the debt, and the cash on the balance sheet becomes yours after the purchase, so cash lowers the real acquisition cost. The key is that EV measures what the operating business itself is worth, not how much cash is piled up on the balance sheet.

This is also why Apple, Alphabet, and Meta Platforms often show an EV below their market cap: they simply hold that much cash.

But the AI era raises a new problem. For many companies, cash is no longer "idle cash"; it is a GPU reserve, a data-center expansion reserve, an AI-infrastructure war chest. What matters is distinguishing excess cash, operating cash, and strategic cash. Some of that cash should arguably not be subtracted in full.

Another huge change of the AI era is that the industry has entered a period of extreme capital intensity. EUV keeps getting more expensive, High-NA more expensive, CoWoS expansion more expensive, HBM expansion more expensive, data-center infrastructure more expensive. Industry-wide depreciation and amortization (D&A) is rising fast. As a result, many companies show beautiful EBITDA but far less spectacular net income, because a large chunk of profit is eaten by depreciation. The key point is that the gap between PE and EV/EBITDA is now widening visibly.

The differences across sub-sectors are especially stark. Fabless companies such as NVIDIA, AMD, and Broadcom show the smallest gap: they don't build their own fabs, so depreciation pressure is low and EV/EBITDA is often only 20%-40% below PE.

Foundries are completely different, for example Taiwan Semiconductor Manufacturing Company, Samsung Electronics, and Intel. Their capex is enormous, depreciation is heavy, and fab equipment has very long lifecycles, so the gap between PE and EV/EBITDA widens markedly. For TSMC the typical picture right now is roughly 20-30x PE but only 12-18x EV/EBITDA. The key is to understand that much of that depreciation is really "growth investment".

Memory is even more extreme. Micron Technology and SK hynix were long the most textbook cyclical names, and the market barely believed in their earnings persistence. But HBM has changed part of that logic: the market is starting to treat some of those earnings as structural, and the sector is being repriced. The point is that HBM is forcing the market to re-evaluate the long-term earning power of the memory industry.

Semiconductor equipment companies are yet another case, for example ASML, Applied Materials, Lam Research, and KLA. They are more like "software companies in an industrial shell": high gross margins, high ROIC, strong FCF, extremely high capital efficiency. So the market increasingly prices them on PE, EV/EBITDA, EV/FCF, and ROIC.

The real question was never which metric is best, but which metric fits which sub-sector.

Trailing PE suits mature companies with stable earnings, but cyclicals look deceptively cheap on it at the peak and deceptively expensive at the trough. Forward PE matters more, because the market is buying the next 12-24 months of earnings. What matters is whether earnings estimates are still being revised upward, not a low forward PE in isolation.

PEG works well for steady high-growth companies but is very dangerous for cyclicals: EPS recovering from a trough often makes PEG look absurdly cheap. The key is judging whether the growth comes from secular expansion or merely a cyclical rebound.

EV/EBITDA fits equipment makers, IDMs, and memory, where capital structures differ widely. Ideally use mid-cycle EBITDA, otherwise it is easy to be misled at the top of the cycle.

Personally I prefer FCF yield and EV/FCF, because they force you to answer one question: can these profits ultimately turn into real cash?

EV/Sales only suits high-growth platform companies whose profits are temporarily suppressed by investment, and it should be read together with gross margin, operating leverage, and long-term margin potential.

Different sub-sectors deserve different metrics. AI/fabless chips: forward PE, EV/FCF, revenue growth, gross margin, customer concentration, and platform moats. Semiconductor equipment: EV/EBITDA, orders, backlog, and the WFE cycle. Memory: P/B, EV/EBITDA, inventory, and DRAM/NAND/HBM pricing. Foundry and IDM: utilization, capex, depreciation, ROIC, and FCF. Analog, power, and automotive: FCF yield, inventory cycles, and industrial demand. EDA/IP: EV/Sales, EV/FCF, and the certainty of long-term growth.

So don't buy semiconductor stocks on PE, forward PE, or PEG alone. Segment by sub-sector first, then combine multiple metrics.

My own framework is simpler. First, quality: gross margin, operating margin, ROIC, technology barriers, customer stickiness. Second, growth: does it come from structural demand or just a cyclical recovery? Third, cash flow: FCF margin, capex intensity, inventory changes, receivables changes. Only fourth comes valuation: forward PE, EV/EBITDA, EV/FCF, PEG, compared against peers and the company's own historical range. Last comes risk: customer concentration, export restrictions, inventory, overcapacity, and the risk of downward earnings revisions.

The single most important thing in semiconductors is not to be fooled by a low PE. What matters was never whether the stock is cheap today, but whether the cash flows and competitive position over the next 3-5 years can support today's valuation.

The biggest change of the AI era is, at bottom, the same question. The market used to worry about whether the next cycle would crash; now it is starting to ask whether these profits are cyclical or structural.

If the market decides they are merely cyclical, EV/EBITDA won't be awarded much of a multiple and PE won't keep expanding. If the market comes to believe that AI demand is long-term, the infrastructure build-out is long-term, the supply-demand imbalance is long-term, and the industry has entered structural shortage, then the whole valuation regime keeps upgrading, migrating upward from P/B → EV/EBITDA → PE → FCF.

The companies that ultimately command durably high valuations tend to be those whose ROIC keeps rising, whose capital efficiency keeps improving, who hold long-term pricing power, and who can keep converting AI demand into cash flow.
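As a quick worked illustration of the multiples discussed above, the following Python sketch runs the arithmetic on entirely invented numbers for a hypothetical capex-heavy chip maker; none of the figures refer to any real company.

# Toy illustration of the multiples discussed above.
# All inputs are invented for a hypothetical, capex-heavy chip maker.

market_cap = 800.0   # $bn, equity value
debt       = 60.0    # $bn, debt an acquirer would take over
cash       = 90.0    # $bn, cash that would belong to the acquirer

net_income = 35.0    # $bn, after interest, tax, and heavy depreciation
ebitda     = 70.0    # $bn, operating earnings before D&A
free_cf    = 25.0    # $bn, cash left after the huge capex bill

ev = market_cap + debt - cash          # EV = Market Cap + Debt - Cash
pe = market_cap / net_income           # PE = Market Cap / Net Income
ev_ebitda = ev / ebitda                # EV/EBITDA
fcf_yield = free_cf / market_cap       # FCF yield on the equity

print(f"EV        = {ev:.0f} bn")       # 770 bn: cash lowers the real takeover cost
print(f"PE        = {pe:.1f}x")         # ~22.9x
print(f"EV/EBITDA = {ev_ebitda:.1f}x")  # ~11.0x
print(f"FCF yield = {fcf_yield:.1%}")   # ~3.1%: does the profit turn into real cash?

In this toy example the gap between roughly 22.9x PE and 11.0x EV/EBITDA comes almost entirely from D&A, which is exactly the pattern described above for foundries and other capital-heavy sub-sectors.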
We first built Merciv to serve enterprise teams handling complex, fragmented data at scale. Now, we're bringing that same technology to individuals and emerging brands to help them make better, more confident decisions faster. Identify whitespace opportunities, conduct market research, synthesize reviews, build and talk to user personas, and validate and test ideas. We just moved Merciv's self-serve product into open beta. Link to sign up below.
Anthropic just launched Claude for Small Business, integrating AI directly into the tools small businesses use every day: QuickBooks, PayPal, HubSpot, Canva, DocuSign. Flip a switch in the Claude desktop app and you can launch 15 preset skills with one click: payroll, cash-flow forecasting, chasing overdue invoices, producing marketing assets, signing contracts, even fully automated new-hire onboarding.

Pricing is restrained: nothing extra on top, just the Claude subscription plus whatever you already pay for the SaaS tools. Security is also reassuring: workflows must be initiated and approved by a human, Claude gets no permissions you don't already have, and Team and Enterprise user data is not used to train models by default.

Anthropic's release cadence has been fast lately: the finance edition shipped last week, the legal edition was updated this week, and now it's small business's turn. The rationale is blunt: small businesses account for 44% of US GDP, yet nobody has built AI products specifically for them.

Starting May 14, Anthropic will run free half-day trainings in ten cities including Chicago and Dallas, capped at 100 local small-business owners per session. There is also a free online course with PayPal to get owners up to speed on using AI.

That said, this move is not exactly friendly to traditional SaaS vendors. Claude turns tools like QuickBooks and HubSpot into back ends whose interfaces you never need to open. Over the past few months, shares of Salesforce, DocuSign, and others have been sliding. Anthropic CEO Dario Amodei has even said: "Individual SaaS vendors could very quickly lose market cap, or even go under."

The irony is that the list of tools Claude now plugs into happens to include several of the companies he just named. Telling them they're headed for collapse while still relying on their tools... Product page:
We're open-sourcing the Seven Factors — a framework for safely governing AI agents in enterprise systems. This draws on our experience at Workato running agentic orchestrations for thousands of enterprise customers. We've distilled 7 principles at the intersection of AI agents and real business systems. The reasoning layer owns intent. The control plane owns consequences. This includes safe retries, deterministic mutations, recovery contracts & more. This is an open project — we're actively seeking community input and open dialogue. Feedback, issues, PRs, and discussions are very welcome.
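As a purely illustrative sketch of the split described above, where the reasoning layer only expresses intent and the control plane owns the consequences, the Python below shows one way safe retries and deterministic mutations can be combined. The `ControlPlane` class, its `execute` method, and the idempotency-key scheme are invented for this example and are not taken from the Seven Factors repository.

import hashlib
import json

class ControlPlane:
    """Hypothetical control plane: it owns consequences (mutations, retries),
    while the agent's reasoning layer only expresses intent."""

    def __init__(self):
        self._applied = {}  # idempotency key -> recorded result of the mutation

    def execute(self, intent: dict, mutate, max_retries: int = 3):
        # Deterministic key: the same intent can never be applied twice,
        # so a retry after a timeout cannot double-charge or double-write.
        key = hashlib.sha256(json.dumps(intent, sort_keys=True).encode()).hexdigest()
        if key in self._applied:
            return self._applied[key]          # safe retry: replay the recorded result
        last_error = None
        for _ in range(max_retries):
            try:
                result = mutate(intent)        # the only place side effects happen
                self._applied[key] = result    # record before acknowledging
                return result
            except Exception as err:           # transient failure: retry the same key
                last_error = err
        raise RuntimeError(f"mutation failed after {max_retries} attempts") from last_error

# The reasoning layer (an agent) only produces intent; it never touches the system directly.
plane = ControlPlane()
intent = {"action": "refund", "order_id": "A-1001", "amount": 42.50}
plane.execute(intent, mutate=lambda i: f"refunded {i['amount']} for {i['order_id']}")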
Culper Research has published a detailed short thesis on NVIDIA ($NVDA); here is a breakdown.

Culper Research is a US activist short seller. Its model is simple: build a short position first, then publish an investigative report attacking a listed company's financials, business, regulatory exposure, related-party transactions, or disclosures. So a Culper report is not neutral research; it is part of a trade.

Still, the real value of firms like this is digging up anomalous threads the market has not noticed. My own read is that the impact is mostly on the target's short-term price; over the long run it still comes down to the company's own results, especially for a company sitting at the center of the AI storm like NVIDIA.

Culper's short thesis on NVIDIA boils down to a single claim: NVIDIA's China business may not have truly gone to zero. The subtext is that NVIDIA may not be fully complying with US restrictions on China.

NVIDIA's public position is that after the US tightened export restrictions in April 2025, its compute business in China essentially went to zero. Jensen Huang has said repeatedly that NVIDIA's compute business in China fell from roughly 95% share to 0%.

The market therefore reasons: since the China business is already gone, any future easing of US-China tensions or of export restrictions would make China pure incremental upside for NVIDIA. Especially now that Jensen Huang joined the delegation visiting China at the last minute, which could help reopen NVIDIA's sales there.

Culper's judgment is the exact opposite. Culper argues that Chinese demand has not disappeared; it has merely shifted from direct sales to Southeast Asian transshipment, cloud compute rental, OEM supply, and purchases through intermediaries. In other words, what shows up in NVIDIA's financials may not be China revenue, yet the ultimate real demand may still come from Chinese customers.

The most important threads in Culper's report:

First, Megaspeed. Megaspeed is a Singapore-based AI compute cloud provider that, on the surface, buys NVIDIA servers in Southeast Asia and rents the compute out to customers. NVIDIA has vouched for Megaspeed, saying it has no Chinese shareholders and that no chip diversion has been found. But no Chinese shareholders does not mean no Chinese money. Megaspeed was still tiny at the end of 2023; by the end of 2024 its balance sheet had suddenly swollen to nearly $3 billion, mostly from $2.9 billion in refundable deposits. At the same time it carried nearly $2.9 billion in receivables from subsidiaries, with the money flowing on to its Malaysian subsidiary Speedmatrix.

Second, Speedmatrix and the Alibaba-linked funding chain. Culper points out that Speedmatrix pledged its business, equipment, and future assets to a Singapore company called Apex Enterprise Solutions. Singapore filings show that Apex's parent is Alibaba Group and that its stated business purpose includes procurement activities. Apex has over $4.1 billion of prepayments on its books, alongside roughly $4.2 billion of loans from Alibaba-related entities. Culper's inference is that Alibaba-linked money may be entering the procurement structure through Apex and then buying NVIDIA servers via the Megaspeed and Speedmatrix system.

Third, Aivres. From late 2024 to early 2026, Speedmatrix imported roughly $4.6 billion of product, of which about $4.0 billion came from Aivres Systems. Aivres is an NVIDIA Elite OEM compute partner responsible for assembling high-end NVIDIA servers. But Aivres is the former Inspur Systems, part of the Inspur group; after Inspur Group was placed on the US Entity List, Inspur Systems was renamed Aivres. Culper argues that Aivres is nominally a US company and a compliant OEM partner, but its relationship to Chinese demand is highly sensitive. If NVIDIA sells to Aivres, the revenue may show up as US-customer revenue. But if those servers ultimately serve Chinese customers via Malaysia, Singapore, Indonesia, and similar routes, then the regional revenue split the market sees could understate how dependent NVIDIA still is on real Chinese demand.

Fourth, the Supermicro / OBON case. In March 2026, the US Department of Justice indicted several individuals linked to Supermicro, alleging they smuggled at least $2.5 billion of NVIDIA chip servers into China through Southeast Asian intermediary entities. Culper leans heavily on this case because it shows that "Southeast Asian transshipment + fake data centers + real servers moved into China" is not a fantasy but a real case already in the courts.

Fifth, Malaysian data centers. Culper argues that Southeast Asian data centers are the key node for getting around export restrictions. What the US restricts is the direct export of high-end GPUs to China. But if the GPUs sit in data centers in Malaysia, Singapore, or Thailand and Chinese companies rent the compute remotely, it may not be a chip export in form, yet in substance it is still serving Chinese AI demand.

This is what makes NVIDIA's China problem so complicated. The question is not whether chips were physically shipped into China, but whether the compute is actually being used by Chinese customers. And this is Culper's most serious accusation against NVIDIA: that NVIDIA cannot possibly be entirely unaware.

In theory, NVIDIA could tell whether GPUs are really running where they are declared to be, through customer KYC, order sizes, how recently customers were incorporated, warranty records, server IPs, software updates, latency data, device heartbeat signals, and so on. If tens of thousands of GPUs are claimed to be deployed in Malaysia or Singapore but the actual usage patterns are anomalous, NVIDIA should not be completely blind to it. And if NVIDIA knew, acquiesced, or looked the other way, it becomes an export-control problem, a revenue-quality problem, and a management-credibility problem.

Of course, this is where things get murkier, because proving that NVIDIA was knowingly complicit sets a very high bar. "NVIDIA had the ability to know" and "NVIDIA already knew and deliberately let it happen" are two entirely different things.

So the real significance of the report is not whether Culper can convict NVIDIA, but whether regulators pick up the investigation. If the US Commerce Department, the DOJ, Singapore, and Malaysia keep digging into the Megaspeed, Speedmatrix, Aivres, YTL, and Novagate chains, then this stops being a mere short report and becomes a regulatory event.

Overall, Culper's core logic for shorting NVIDIA is this: the market believes China is NVIDIA's future potential upside, while Culper believes China has in fact been a hidden part of NVIDIA's existing business over the past year. If the China business really went to zero long ago, then any future loosening of restrictions is a positive. But if the China business has simply been hiding in Southeast Asian, OEM, and cloud compute channels, then as the US keeps tightening export controls and China pushes domestic substitution, what NVIDIA faces is not the loss of upside but the severing of a hidden existing revenue stream.

That is the real logic behind Culper's short on NVIDIA.
Claude Code weekly limits are increasing 50%, now through July 13. Live now for all Pro, Max, Team, and seat-based Enterprise users.
Want to (officially) use Codex at work? Send this post to your CTO to bring your team to Codex. Eligible enterprise customers who switch in the next 30 days get 2 free months of Codex usage for new users.
Has any blockchain actually given a convincing answer on how they'll be able to monetize any of the following: 1) Payments 2) RWAs 3) Enterprise Solutions 4) AI Agents
Dear ICP community, the Internet Computer has now been running strong for 5 years 👏👏👏 Here is a celebratory preview of ICP "cloud engines," the sovereign frontier cloud technology the network shall soon provide. Main points:

— Cloud engines enable anyone to spin up their own sovereign frontier cloud. The technology involves an extraordinary inventive step, in which cloud is created from a mathematically secure network of nodes. The nodes run as part of the Internet Computer network, but are selected and configured by the cloud engine's owner.

— The frontier cloud provided by engines is strongly focused on enabling AI agents to build and update online applications and services for us. The world is changing fast, and nearly all new online apps and services are already being built with the help of AI, and thus cloud engines target the future of cloud.

— Software hosted on cloud engines is tamperproof, which means that it is immune to infrastructure hacks, because it runs inside a mathematically secure network protocol, rather than on computers directly. This means that AI agents, and those building with them, don't need to have a security team in the loop, or to trust someone else's security team. This is crucial, because in the future, non-technical people will demand the freedom to build with full automation — where they just need to issue instructions to AI about what to build, and don't need to worry about anything or anyone else. Of course, apps and services running on engines are also vastly safer from the new breed of hacker being enabled by frontier AI. (The cloud engines themselves are also "tamperproof." Even if a hacker gains physical access to some portion of a cloud engine's nodes, and can make arbitrary changes, the computations and data of the hosted apps and services cannot be corrupted or interrupted so long as the network's fault bounds aren't exceeded. The recent hack of Vercel, a major cloud platform, which gave hackers access to the apps it hosted, provides additional perspective on the importance of this advantage.)

— Software hosted on cloud engines is guaranteed to run, so long as a sufficient number of the engine's nodes are running. This means that AI can build applications and services without the need to have a human systems admin team constantly tinkering with the underlying platform to keep it running, which is again crucial, because in the future, non-technical people will expect the freedom to use AI to build without the support of others.

— New frontier programming language technology, in the form of the Motoko language developed by Caffeine Labs, leverages seminal "orthogonal persistence" technology that unifies program logic and data to deliver further unlocks for AI (Motoko is the first computer language being developed that targets agents that are writing software, rather than human engineers per se). Nowadays, AI can build and update production apps at a prodigious rate, even at the speed of conversation. But it can also make mistakes, and there's a risk that an update it creates might be "lossy" in the sense that it causes some transformed data to be lost. Again, in this new world, it's both undesirable and impractical for everyone to have to have a systems admin team on hand to detect lossy updates and roll them back, but Motoko provides a solution: it can detect that new software updates are lossy before they are applied, reducing potentially catastrophic errors by AI to harmless coding retries.

— Software hosted on cloud engines is "serverless," but unlike traditional serverless software, it directly incorporates data through "orthogonal persistence." Another key purpose is to simplify backend software logic and fuel the modeling power of AI by increasing abstraction (sorry for the technical language!!!). Put simply, this enables AI to produce more sophisticated backends, faster, and at dramatically lower costs, as measured by the number of AI API tokens consumed during coding. (Tip for the technical: orthogonal persistence is a new paradigm where "the program is the database," and data lives inside program variables, which is possible because it's as if hosted software runs forever in persistent memory.)

— An expanding database of skills shall make it possible to develop and deploy apps and services to your cloud engines directly from Claude Code, Perplexity, Codex, and other AI platforms. Further, your account can be connected, so that new apps and updates created through conversation automatically appear hosted from your cloud engine. In the future, R&D is going to be very seamless. You converse with AI, and your secure and unstoppable apps or services are created or updated. Cloud engines are designed to directly support this "self-writing cloud" future where we can work hands-free.

— Tech sovereignty is becoming a huge issue worldwide, with governments and corporations seeking to create sovereign tech stacks owing to geopolitical tensions. Increasingly, people are realizing that tech provided by foreign nations can come with hidden backdoors and kill switches, from the base platform right up through hosted apps and services. ICP technology is open source, and those building on ICP using AI own their own source code. When you have the source code, you can verify that there are no backdoors, and when you own the source code thanks to AI, you can update it at will, freeing you from vendor lock-in. But cloud engines take sovereignty much further...

— You create a cloud engine by selecting the nodes that will be combined. You can choose the class of nodes used, and their number, but more importantly, you can choose who operates the nodes, and where they are located. Almost any configuration is possible, because the Internet Computer scales the security privileges afforded to hosted software within the network according to configuration (software hosted on cloud engines can directly interoperate with software on other engines and traditional subnets, but base restrictions are applied according to security rules). A cloud engine can be created within a region such as Europe, to comply with regs such as GDPR, or completely within a sovereign state like Switzerland or Pakistan. But cloud engines go further still...

— Sovereignty is also about freedom from vendor lock-in. Cloud engines are essentially ICP (Internet Computer Protocol) network configurations, and this means the underlying compute nodes they combine can be swapped out without interrupting their hosted apps and services. This is a big deal. In addition, cloud engines now support nodes that are instances running on Big Tech's clouds, in addition to nodes that are dedicated specialized hardware, as per the Gen I and Gen II nodes that dominate the Internet Computer today. For example, it is possible to have an engine running across different AWS data centers, say, and then reconfigure the engine to run across a mixture of AWS, Google, Azure and Hetzner for even more resilience, without the users of hosted apps and services noticing a thing. That's true freedom.

— Sovereign AI is becoming increasingly important too, and cloud engines allow special "AI nodes" to be added to them, so that hosted software can perform inference on hardware provisioned by the owner from a location the owner has selected. Even though the AI nodes are only accessible within the cloud engine, they can still benefit from the forthcoming Internet Intelligence Gateway (IG), which will make it possible to validate inference performed on key frontier open weights LLMs, even when the inference is performed on completely independent AI clouds. When the results of inference are received, this technology can verify that neither the prompt+context (input) nor the inference result (output) have been modified, and that the results were produced by the precise LLM expected. This ensures that AI clouds don't cheat by running inference on cheaper models than are being paid for, and bad actors aren't modifying the inputs or outputs to surreptitiously insert advertising into results, say, or change facts, or insert malware when code is being generated. What's super cool about this technology is that the cost of the verification is scalable: a very valuable additional layer of security can be achieved with only 1-2% extra cost.

— Scaling apps and services when they hit capacity limits is another thorny problem that cloud engines help the world address. Engines make scaling possible without rewriting or reconfiguring software. The query workload capacity of hosted software can be horizontally scaled simply by adding new nodes to an engine, and nodes can also be added in geographical proximity to demand. Meanwhile, update workload capacity can first be scaled up by swapping an engine's nodes out for the next class up, and then, when no larger class of node is available, horizontally scaled out by "splitting" the engine into two, which doubles available capacity. (Technical tip: horizontally scaling update capacity by splitting engines requires multi-canister architectures.)

— For those who have been following how Caffeine builds apps that can efficiently store large numbers of files, I should mention that apps built on cloud engines will also support the new ICP Blob Storage cloud network (since cloud engines currently have up to about 3 TB of memory, which apps storing large amounts of files can easily exceed). We are also working on allowing blob storage nodes to be added to cloud engines, to enable sovereign mass blob storage within an engine, similarly to how AI nodes can be added currently.

— Lastly, but certainly not least, I should mention that cloud engines are multi-blockchain capable, and ready for digital assets, thanks to the clever math at their core. For example, an e-commerce service built on a cloud engine can securely accept and custody stablecoin payments, or a multi-chain DEX could be hosted. Further, engines can support software autonomy (software orchestrated and controlled by other autonomous software, in a decentralized way) and can themselves be orchestrated by SNS technology, and thus run autonomously too.

Today, though, the focus is on *mainstream* cloud. This year, the cloud industry will generate approximately one trillion dollars in revenue. That number is already huge, but is expected to grow to two trillion dollars by 2030. After years of continuous development, which have seen more than $500m spent on R&D, the Internet Computer network is now tacking directly toward this mainstream cloud market with cloud engine technology.

In their first version, cloud engines are not meant to be a cloud panacea. For example, currently they are not ideal for working with big data; you should use something like Databricks for that. Cloud engines are carefully targeted at enabling AI to produce traditional online applications and services, including SaaS, in a safer and more productive way, which represents a new market segment with tremendous potential. Of course, DFINITY will continue to work relentlessly to push forward ICP's capabilities, so expect further developments.

It's worth mentioning that this cloud segment isn't just about creating new apps and services using AI; it's also about replacing legacy systems and apps built on super expensive SaaS services. Caffeine Labs is working to produce technology (Caffeine Snorkel) that can study an enterprise's legacy systems and apps built on SaaS, create replacement systems and apps, and migrate the data, while supporting key stakeholders through the process over email and chat, with full automation. Thus the legacy systems and SaaS markets shall also be addressed by cloud engines.

Zooming out, and reasoning in a more metaphysical way, we believe, as we always have, that there is room for a new kind of cloud created by mathematical networks, one that provides seminal advances in the fields of security and resilience, as well as true sovereignty and freedom from lock-in. The fact that this same technology, with the help of additional technologies like orthogonal persistence and Motoko, enables AI to build for us without the need for so much oversight, and to create more backend sophistication while consuming fewer AI API tokens, is what lets ICP bring game-changing advances to the world.

Cloud engines will work synergistically with the Intelligence Gateway, which will enable apps and services running on engines to seamlessly leverage AI, wherever that AI is running, while providing verifiability at extremely low cost for open weights frontier models. We believe that cloud engines represent an inflection point in the storied history of the Internet Computer project, and I'm very proud to be sharing the details with you on the network's fifth birthday 💪 I'll be back with more news soon!!
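The post does not spell out how the Intelligence Gateway's verification will work, so the following Python is only a conceptual sketch of the property it describes: binding a specific model, prompt, and output together so that tampering or silent model substitution becomes detectable. The `attest` and `verify` helpers and the shared `PROVIDER_KEY` are invented for illustration and assume a simple keyed-MAC attestation, which is presumably not the actual IG mechanism.

import hashlib
import hmac

# Invented shared key standing in for whatever attestation scheme a provider might use.
PROVIDER_KEY = b"example-provider-signing-key"

def attest(model_id: str, prompt: str, output: str) -> str:
    """What an honest inference provider could return alongside the result:
    a MAC binding the exact model, input, and output together."""
    digest = hashlib.sha256(f"{model_id}\n{prompt}\n{output}".encode()).hexdigest()
    return hmac.new(PROVIDER_KEY, digest.encode(), hashlib.sha256).hexdigest()

def verify(model_id: str, prompt: str, output: str, attestation: str) -> bool:
    """Client-side check: fails if the prompt or output was tampered with,
    or if a different (e.g. cheaper) model than expected produced the result."""
    expected = attest(model_id, prompt, output)
    return hmac.compare_digest(expected, attestation)

tag = attest("open-weights-llm-70b", "Summarize the report.", "The report argues ...")
assert verify("open-weights-llm-70b", "Summarize the report.", "The report argues ...", tag)
assert not verify("open-weights-llm-7b", "Summarize the report.", "The report argues ...", tag)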