
Search results for "Blockchain"

The Blockchain board
One keyword corresponds to one board; each board's path is unique across the site.
Create a board
Users
None found
Content containing "Blockchain"
It's not just BlackRock anymore: even JPMorgan has now chosen Ethereum as the main venue for its on-chain money market fund. JP Morgan initially built a private (permissioned) blockchain, but found its use cases limited. The reason is simple: other institutions simply won't use your private chain. One of the cores of tokenization is interoperability. Bank A's tokens must be able to move, settle, and be posted as collateral seamlessly with Bank B, Fund C, and stablecoin issuers. If everyone runs their own private chain, each has built an island, and no one wants to set foot on anyone else's. This is where the advantages of a neutral chain stand out.

A neutral, widely accepted, market-proven public blockchain is one everyone is willing to put assets on. And today the most secure, most decentralized public chain is Ethereum: the highest security, a high degree of decentralization, a mature ecosystem, and good liquidity. The entry of two Wall Street giants also sets a powerful example and will bring more players onto Ethereum.
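The interoperability argument can be made concrete with a toy sketch. This is pure illustration (no real protocol; all class and party names are invented): on a shared neutral ledger, any issuer's token can settle directly against any counterparty, whereas siloed private chains stop at each institution's boundary.

```python
# Toy illustration: a single shared ledger lets tokens from any issuer
# move between any parties, which is the "neutral chain" advantage the
# post describes. Names (BankA, FundC, FundToken) are hypothetical.
class SharedLedger:
    def __init__(self):
        self.balances = {}  # (holder, token) -> amount

    def mint(self, holder, token, amount):
        key = (holder, token)
        self.balances[key] = self.balances.get(key, 0) + amount

    def transfer(self, sender, receiver, token, amount):
        src = (sender, token)
        if self.balances.get(src, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[src] -= amount
        dst = (receiver, token)
        self.balances[dst] = self.balances.get(dst, 0) + amount

ledger = SharedLedger()
ledger.mint("BankA", "FundToken", 100)
# Bank A's token settles directly against Fund C -- no bridge needed,
# because both parties sit on the same neutral settlement layer.
ledger.transfer("BankA", "FundC", "FundToken", 40)
```

On per-institution private chains, the `transfer` above would have no meaning: Fund C simply has no account on Bank A's ledger, which is the "island" problem in a nutshell.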
Anthropic is paying $3,850 a week to people with no AI experience. No PhD required. No published papers. No prior research background. Just a strong technical mind and a genuine interest in making AI safe.

This is the Anthropic Fellows Program. And it is one of the most underrated opportunities in technology right now. Here is exactly what it is.

The Anthropic Fellows Program is designed to accelerate AI safety research and foster research talent, providing funding and mentorship to promising technical talent regardless of previous experience. Fellows work for 4 months on empirical research questions aligned with Anthropic's overall research priorities, with the aim of producing public outputs like a paper. Four months. Full-time. Paid. Mentored by the researchers building the world's most advanced AI.

And the results from the first cohort were not small. Fellows developed agents that identified $4.6 million in blockchain smart contract vulnerabilities and discovered two novel zero-day exploits, demonstrating that profitable autonomous exploitation is now technically feasible. A year prior, an Anthropic fellow developed a method for rapid response to new ASL3 jailbreaks: techniques that block entire classes of high-risk jailbreaks after observing only a handful of attacks. This work became a key component of Anthropic's ASL3 deployment safeguards. Other fellows published the subliminal learning paper, the research proving AI models transmit behavioral traits through unrelated data, which landed in Nature. Others produced the agentic misalignment research showing frontier models resort to blackmail when facing replacement. Others open-sourced attribution graph tools that let researchers trace the internal thoughts of large language models.

Over 80% of fellows produced papers. Over 40% subsequently joined Anthropic full-time. 80% published. 40% hired. From a program that does not require any prior AI safety experience to enter.

Here is what the program looks like in practice. Anthropic mentors pitch their project ideas to fellows, who choose and shape their project in close collaboration with their mentors. You are not assigned busywork. You are not a research assistant. You own the project. You work alongside the people who built Claude, who designed its safety systems, who published the papers that define the field. The stipend is $3,850 USD per week, approximately $61,600 for the full 4 months, with access to a compute budget of approximately $10,000 per fellow per month for running experiments.

Here is what the 2026 program covers. Research areas include scalable oversight, adversarial robustness and AI control, model organisms, mechanistic interpretability, AI security, model welfare, economics and policy, and reinforcement learning. Something for every technical background, not just ML engineers. Successful fellows have come from physics, mathematics, computer science, and cybersecurity. You do not need a PhD, prior ML experience, or published papers. The one requirement: work authorization in the US, UK, or Canada. Anthropic does not sponsor visas for fellows.

Here is the timeline you need to know. The next cohort begins July 20, 2026. Applications are reviewed on a rolling basis — earlier applications get more consideration. The process includes an initial application and reference check, technical assessments, interviews, and a research discussion. Applicants are encouraged to apply even if they do not meet every listed qualification. The program values potential, motivation, and research curiosity over rigid credential requirements.

This is the rarest kind of opportunity in technology. A company at the frontier of AI, one valued at over $900 billion, offering outsiders direct access to its research infrastructure, its mentors, and its most important open problems. Paying them generously to do it. And then hiring 40% of them afterward.
Most people who want to work on AI safety spend years trying to publish papers, get into the right PhD program, and find a way in. The Fellows Program is the door they did not know existed. It is open right now.
There is a clearly pessimistic mood in the Crypto world right now. Many OGs have moved over to US equities to play AI, mocking Crypto for having no new narrative while posting their huge recent stock-market wins.

I admit that Crypto has indeed entered a period of stagnant innovation since the DeFi revolution of 2021. Over the past few years, a large share of the so-called new narratives have essentially been old assets repackaged, old games moved to new chains, old Ponzis given new names. Very little has genuinely raised the industry's ceiling.

But technological progress has never been linear; it comes in leaps. After the Bitcoin whitepaper appeared, the market went through a long empty stretch; after Ethereum appeared, DeFi did not arrive right away either. Real paradigm shifts tend to happen quietly, accumulating infrastructure while most people think "there's nothing left, it's all a scam, this industry is finished," and then erupting at some point.

Blockchain itself was a revolution, and on-chain smart contracts and DeFi were revolutions too. Revolutions just don't happen every year. Most of the time in between is bubbles bursting, infrastructure being built, speculation receding, and consensus being rebuilt. Where the future opportunities lie, nobody knows today. It could be AI Agents and on-chain payments, global stablecoins and cross-border settlement, RWA and on-chain bond markets, prediction markets, or, after sovereign credit deteriorates, assets searching anew for a non-state credit anchor.

But I know one thing: Crypto is still the closest thing I have seen to the "truly open financial system" I have in mind. Assets can move 24/7, contracts execute by code rather than at the absolute interpretive discretion of a centralized institution, individuals can self-custody their assets, and users worldwide can trade and collaborate on the same settlement layer. It is not perfect, and it is full of garbage, scams, and bubbles, but it at least offers an alternative to the traditional financial system. That is why I will not write it off just because there has been no new narrative for a year or two.

This is not to say we have necessarily bottomed; there may be one last leg down. Cycle-wise, the correction does not seem to have run long enough either, and whether this round will be different because of ETF and other mainstream capital inflows is unknowable. But Crypto is a key part of my personal asset allocation and an industry I have been studying and working in seriously for the long term. On a multi-year horizon, I believe this year will be a good year to build positions and hold.

The pessimists are right, but the optimists get rich.

Of course, none of the above constitutes investment advice.
The CLARITY Act's definition of "decentralization" is remarkably clear — a clarity act that lives up to its name. It supports genuine decentralization rather than performative decentralization.

Previously, judging whether a crypto project counted as "decentralized" relied on a vague "common control" test, which let many "corporate chains" (performative decentralization) exploit a loophole: "we never promised anything at the start, so the SEC shouldn't regulate us now."

The CLARITY Act closes that loophole outright, replacing it with the stricter standard of a "mature blockchain system." Its core is the elimination of substantive control: no individual or small group may unilaterally influence the system's rules, upgrades, operation, or governance.

• The system must be genuinely public and permissionless (anyone can run a node; users cannot be arbitrarily banned);
• It must be purely rule-driven (transparent code running automatically, with no manual intervention in core functions);
• No one person (or coordinated group) may control more than 20% of the tokens / voting power;
• The system must be sufficiently autonomous (no one can unilaterally change features or rules at will);
• It must be economically independent, with value derived mainly from actual network usage rather than the ongoing efforts of the founders or team.

So what are the implications?

1. For corporate chains (performative decentralization): it will be much harder to fake "decentralization." Insider token sales and operations will be regulated more strictly as securities, with more disclosure and more rules to follow.

2. For today's L2s: the powers of a security council must be strictly limited; it cannot casually veto or intervene. A fast intervention like Arbitrum's in the KelpDAO incident will be hard to repeat.

3. For genuinely decentralized projects: a real positive. There is now a clear path: raise early funding under investment contracts (SEC jurisdiction), then, once the network matures, have the token treated as a digital commodity (CFTC jurisdiction), with clearer rules for fundraising, trading, and secondary markets, and no more fear of sudden SEC enforcement.
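As a purely illustrative sketch (not legal analysis), the 20% control-concentration criterion above could be approximated like this. The function names and the flat per-holder model are assumptions for illustration; the Act's real test also aggregates affiliated parties acting together.

```python
from typing import Dict

MAX_CONTROL_SHARE = 0.20  # the Act's 20% token/voting-power ceiling

def control_share(holdings: Dict[str, float]) -> float:
    """Largest single holder's share of total voting power.
    NOTE: a real legal test would aggregate affiliated holders."""
    total = sum(holdings.values())
    return max(holdings.values()) / total if total else 0.0

def passes_concentration_test(holdings: Dict[str, float]) -> bool:
    """True if no single holder exceeds the 20% control threshold."""
    return control_share(holdings) <= MAX_CONTROL_SHARE

# A "corporate chain" where the founding team holds 45% fails:
corporate = {"team": 45.0, "investors": 30.0, "public": 25.0}
# A widely distributed token (no holder above 20%) passes:
distributed = {"a": 15.0, "b": 12.0, "c": 18.0, "d": 20.0, "e": 17.0, "f": 18.0}
```

This captures only one of the five criteria, of course; permissionlessness, rule-driven operation, autonomy, and economic independence are qualitative tests that don't reduce to a threshold check.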
Today we announced progress toward our goal of advancing 24/7 collateral mobility. DTCC's Collateral AppChain, a shared infrastructure platform for collateral, will leverage the Chainlink Runtime Environment (CRE) and @chainlink data standard to enable near real-time collateral management across financial markets and blockchains. The integration will enable the seamless pairing of asset prices, valuations, and movement, with the aim of overhauling how market risk is managed globally and unlocking greater capital efficiency. This milestone reflects our broader vision to enable 24/7, near real-time collateral management across the global financial system. Read the full announcement:
Has any blockchain actually given a convincing answer on how they'll be able to monetize any of the following: 1) Payments 2) RWAs 3) Enterprise Solutions 4) AI Agents
Dear ICP community, the Internet Computer has now been running strong for 5 years 👏👏👏 Here is a celebratory preview of ICP "cloud engines," the sovereign frontier cloud technology the network shall soon provide. Main points:

— Cloud engines enable anyone to spin up their own sovereign frontier cloud. The technology involves an extraordinary inventive step, in which cloud is created from a mathematically secure network of nodes. The nodes run as part of the Internet Computer network, but are selected and configured by the cloud engine's owner.

— The frontier cloud provided by engines is strongly focused on enabling AI agents to build and update online applications and services for us. The world is changing fast, and nearly all new online apps and services are already being built with the help of AI, and thus cloud engines target the future of cloud.

— Software hosted on cloud engines is tamperproof, which means it is immune to infrastructure hacks, because it runs inside a mathematically secure network protocol rather than on computers directly. This means that AI agents, and those building with them, don't need a security team in the loop, or to trust someone else's security team. This is crucial, because in the future, non-technical people will demand the freedom to build with full automation — where they just need to issue instructions to AI about what to build, and don't need to worry about anything or anyone else. Of course, apps and services running on engines are also vastly safer from the new breed of hacker being enabled by frontier AI. (The cloud engines themselves are also "tamperproof." Even if a hacker gains physical access to some portion of a cloud engine's nodes, and can make arbitrary changes, the computations and data of the hosted apps and services cannot be corrupted or interrupted so long as the network's fault bounds aren't exceeded.
The recent hack of Vercel, a major cloud platform, which gave hackers access to the apps it hosted, provides additional perspective on the importance of this advantage.)

— Software hosted on cloud engines is guaranteed to run, so long as a sufficient number of the engine's nodes are running. This means that AI can build applications and services without a human systems admin team constantly tinkering with the underlying platform to keep it running, which is again crucial, because in the future, non-technical people will expect the freedom to use AI to build without the support of others.

— New frontier programming language technology, in the form of the Motoko language developed by Caffeine Labs, leverages seminal "orthogonal persistence" technology that unifies program logic and data to deliver further unlocks for AI (Motoko is the first computer language being developed that targets agents that are writing software, rather than human engineers per se). Nowadays, AI can build and update production apps at a prodigious rate, even at the speed of conversation. But it can also make mistakes, and there's a risk that an update it creates might be "lossy" in the sense that it causes some transformed data to be lost. Again, in this new world, it's both undesirable and impractical for everyone to have to have a systems admin team on hand to detect lossy updates and roll them back, but Motoko provides a solution: it can detect that new software updates are lossy before they are applied, reducing potentially catastrophic errors by AI to harmless coding retries.

— Software hosted on cloud engines is "serverless," but unlike traditional serverless software, it directly incorporates data through "orthogonal persistence." Another key purpose is to simplify backend software logic and fuel the modeling power of AI by increasing abstraction (sorry for the technical language!!!).
Put simply, this enables AI to produce more sophisticated backends, faster, and at dramatically lower costs, as measured by the number of AI API tokens consumed during coding. (Tip for the technical: orthogonal persistence is a new paradigm where "the program is the database," and data lives inside program variables, which is possible because it's as if hosted software runs forever in persistent memory.)

— An expanding database of skills shall make it possible to develop and deploy apps and services to your cloud engines directly from Claude Code, Perplexity, Codex and other AI platforms. Further, your account can be connected, so that new apps and updates created through conversation automatically appear hosted from your cloud engine. In the future, R&D is going to be very seamless. You converse with AI, and your secure and unstoppable apps or services are created or updated. Cloud engines are designed to directly support this "self-writing cloud" future where we can work hands-free.

— Tech sovereignty is becoming a huge issue worldwide, with governments and corporations seeking to create sovereign tech stacks owing to geopolitical tensions. Increasingly, people are realizing that tech provided by foreign nations can come with hidden backdoors and kill switches, from the base platform right up through hosted apps and services. ICP technology is open source, and those building on ICP using AI own their own source code. When you have the source code, you can verify that there are no backdoors, and when you own the source code thanks to AI, you can update it at will, freeing you from vendor lock-in. But cloud engines take sovereignty much further...

— You create a cloud engine by selecting the nodes that will be combined. You can choose the class of nodes used, and their number, but more importantly, you can choose who operates the nodes, and where they are located.
Almost any configuration is possible, because the Internet Computer scales the security privileges afforded to hosted software within the network according to configuration (software hosted on cloud engines can directly interoperate with software on other engines and traditional subnets, but base restrictions are applied according to security rules). A cloud engine can be created within a region such as Europe, to comply with regs such as GDPR, or completely within a sovereign state like Switzerland or Pakistan. But cloud engines go further still...

— Sovereignty is also about freedom from vendor lock-in. Cloud engines are essentially ICP (Internet Computer Protocol) network configurations, and this means the underlying compute nodes they combine can be swapped out without interrupting their hosted apps and services. This is a big deal. In addition, cloud engines now support nodes that are instances running on Big Tech's clouds, in addition to nodes that are dedicated specialized hardware, as per the Gen I and Gen II nodes that dominate the Internet Computer today. For example, it is possible to have an engine running across different AWS data centers, say, and then reconfigure the engine to run across a mixture of AWS, Google, Azure and Hetzner for even more resilience, without the users of hosted apps and services noticing a thing. That's true freedom.

— Sovereign AI is becoming increasingly important too, and cloud engines allow special "AI nodes" to be added to them, so that hosted software can perform inference on hardware provisioned by the owner from a location the owner has selected. Even though the AI nodes are only accessible within the cloud engine, they can still benefit from the forthcoming Internet Intelligence Gateway (IG), which will make it possible to validate inference performed on key frontier open-weights LLMs, even when the inference is performed on completely independent AI clouds.
When the results of inference are received, this technology can verify that neither the prompt+context (input) nor the inference result (output) have been modified, and that the results were produced by the precise LLM expected. This ensures that AI clouds don't cheat by running inference on cheaper models than are being paid for, and that bad actors aren't modifying the inputs or outputs to surreptitiously insert advertising into results, say, or change facts, or insert malware when code is being generated. What's super cool about this technology is that the cost of the verification is scalable: a very valuable additional layer of security can be achieved with only 1–2% of extra cost.

— Scaling apps and services when they hit capacity limits is another thorny problem that cloud engines help the world address. Engines make scaling possible without rewriting or reconfiguring software. The query workload capacity of hosted software can be horizontally scaled simply by adding new nodes to an engine, and nodes can also be added in geographical proximity to demand. Meanwhile, update workload capacity can first be scaled up by swapping an engine's nodes out for the next class up, and then, when no larger class of node is available, horizontally scaled out by "splitting" the engine into two, which doubles available capacity. (Technical tip: horizontally scaling update capacity by splitting engines requires multi-canister architectures.)

— For those who have been following how Caffeine builds apps that can efficiently store large numbers of files, I should mention that apps built on cloud engines will also support the new ICP Blob Storage cloud network (since cloud engines currently have up to about 3 TB of memory, which apps storing large amounts of files can easily exceed). We are also working on allowing blob storage nodes to be added to cloud engines, to enable sovereign mass blob storage within an engine, similarly to how AI nodes can be added currently.
— Lastly, but certainly not least, I should mention that cloud engines are multi-blockchain capable, and ready for digital assets, thanks to the clever math at their core. For example, an e-commerce service built on a cloud engine can securely accept and custody stablecoin payments, or a multi-chain DEX could be hosted. Further, engines can support software autonomy (software orchestrated and controlled by other autonomous software, in a decentralized way) and can themselves be orchestrated by SNS technology, and thus run autonomously too.

Today, though, the focus is on *mainstream* cloud. This year, the cloud industry will generate approximately one trillion dollars in revenue. That number is already huge, but is expected to grow to two trillion dollars by 2030. After years of continuous development, which have seen more than $500m spent on R&D, the Internet Computer network is now tacking directly toward this mainstream cloud market with cloud engine technology.

In their first version, cloud engines are not meant to be a cloud panacea. For example, currently they are not ideal for working with big data; you should use something like Databricks for that. Cloud engines are carefully targeted at enabling AI to produce traditional online applications and services, including SaaS, in a safer and more productive way, which represents a new market segment with tremendous potential. Of course, DFINITY will continue to work relentlessly to push forward ICP's capabilities, so expect further developments. It's worth mentioning that this cloud segment isn't just about creating new apps and services using AI; it's also about replacing legacy systems and apps built on super expensive SaaS services.
Caffeine Labs is working to produce technology (Caffeine Snorkel) that can study an enterprise's legacy systems and apps built on SaaS, create replacement systems and apps, and migrate the data, while supporting key stakeholders through the process over email and chat, with full automation. Thus the legacy systems and SaaS markets shall also be addressed by cloud engines.

Zooming out, and reasoning in a more metaphysical way, we believe, as we always have, that there is room for a new kind of cloud created by mathematical networks, one that provides seminal advances in the fields of security and resilience, as well as true sovereignty and freedom from lock-in. This same technology, with the help of additional technologies like orthogonal persistence and Motoko, enables AI to build for us without the need for so much oversight and to create more backend sophistication while consuming fewer AI API tokens, allowing ICP to bring game-changing advances to the world. Cloud engines will work synergistically with the Intelligence Gateway, which will enable apps and services running on engines to seamlessly leverage AI, wherever that AI is running, while providing verifiability at extremely low cost for open-weights frontier models.

We believe that cloud engines represent an inflection point in the storied history of the Internet Computer project, and I'm very proud to be sharing the details with you on the network's fifth birthday 💪 I'll be back with more news soon!!
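The inference verification described in the post (binding the model identity, the input, and the output together so tampering is detectable) can be sketched conceptually. The actual Intelligence Gateway design is not described here, so everything in this Python sketch is an assumption: a real system would use public-key signatures and hardware attestation, not the shared-secret HMAC used below for brevity.

```python
import hashlib
import hmac
import json

# Hypothetical shared verification key -- illustration only. A real
# gateway would rely on public-key signatures and attestation.
VERIFY_KEY = b"demo-key"

def attest(model_id: str, prompt: str, output: str) -> str:
    """Provider side: bind model identity, input, and output into one tag."""
    payload = json.dumps({
        "model": model_id,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
    }, sort_keys=True)
    return hmac.new(VERIFY_KEY, payload.encode(), hashlib.sha256).hexdigest()

def verify(model_id: str, prompt: str, output: str, tag: str) -> bool:
    """Client side: recompute and compare. Any change to the prompt, the
    output, or the claimed model id invalidates the tag."""
    return hmac.compare_digest(attest(model_id, prompt, output), tag)

tag = attest("frontier-llm-70b", "2+2?", "4")
ok = verify("frontier-llm-70b", "2+2?", "4", tag)        # unmodified: accepted
tampered = verify("frontier-llm-70b", "2+2?", "5", tag)  # modified output: rejected
model_swap = verify("cheap-llm-7b", "2+2?", "4", tag)    # wrong model: rejected
```

This captures the failure modes the post lists: a swapped cheaper model, a modified input, or a modified output all break the binding and are rejected by the verifier.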
1/ There's never been a more critical time to join Blockchain Association. Digital asset legislation is actively being shaped and debated in Washington right now – from market structure to taxation to developer protections. Join us:
Sentio Network testnet is live 🌐 A decentralized data layer for blockchains — run your own indexers, store data, query anywhere. The infra powering Sentio for years. Now yours to run. Docs → Explorer →
🚀SlowMist RWA Smart Contract Security Audit Service Officially Launched! RWA (Real World Assets) has become a major frontier where #Web3# meets traditional finance. Unlike traditional DeFi projects, #RWA# security involves far greater complexity — including ownership verification, compliance governance, and on-chain/off-chain consistency. Drawing on years of blockchain security expertise, SlowMist has officially launched a specialized RWA smart contract audit service, delivering comprehensive protection across compliance, permission systems, and on/off-chain consistency. Read full announcement👇 RWA project teams and institutions are welcome to contact us for collaboration! 🤗 📮team@slowmist.com