
Search results related to "arb"

arb tieba
Each keyword is its own tieba, with a site-wide unique path.

Users
Not found

Content containing "arb"
It's the off-season right now, and ETH's price gains are getting thrashed even by 土豆. The interesting part: Ethereum's average daily transaction volume is now far above its network activity at the peak of the 2021 bull market. Active addresses, new address creation, and smart contract calls have hit new highs in step. This isn't wash trading or spam; it's genuine user growth plus the pull of the L2 ecosystem (upgrades like Pectra and Fusaka cut fees and raised capacity, so the network is more active). Note: these are mainly Ethereum L1 mainnet figures. Counting L2s such as Arbitrum, Base, Optimism, and Lighter, total transaction volume and user activity are considerably higher still; the Ethereum ecosystem's "actual usage" is badly underestimated. Ethereum won't stay overlooked and buried forever. It is the king of infrastructure, security, and liquidity.
MES futures quietly solve a lot of headaches. No pattern day trader limits, so the bot can execute statistically-driven trades without hitting arbitrary roadblocks. Small account? Still scalable. Margins are accessible. Tax treatment is a real edge. Section 1256 gives 60/40 long-term/short-term rates on all trades, not just "investments." Over time, that difference compounds. Wash sale worries vanish. Accounting is cleaner. Just focus on the data. The bot isn't magic, it takes losses, but every entry is based on real statistical edge, not gut feel. Zero emotion. Discipline is systematized. MES lets me automate what should be automated, and step away. Passive, not effortless. It's definitely not free or easy money, but it's about building a process where the odds, and regulations, are finally on your side. You can start learning more about trading Micro Futures here:
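The Section 1256 60/40 split the post mentions is simple arithmetic: 60% of each gain is taxed at the long-term capital gains rate and 40% at the short-term (ordinary) rate, regardless of holding period. A minimal sketch, using assumed bracket rates of 15% long-term and 32% ordinary (your actual rates will differ):

```python
# Sketch of the Section 1256 "60/40" blended rate.
# The 15% long-term and 32% ordinary rates below are assumptions for
# illustration; actual rates depend on the taxpayer's bracket.

def blended_1256_tax(gain, lt_rate=0.15, st_rate=0.32):
    """60% of the gain taxed long-term, 40% short-term, per Section 1256."""
    return gain * (0.60 * lt_rate + 0.40 * st_rate)

def ordinary_tax(gain, st_rate=0.32):
    """Plain short-term treatment: the whole gain at the ordinary rate."""
    return gain * st_rate

gain = 10_000
print(round(blended_1256_tax(gain), 2))  # 2180.0
print(round(ordinary_tax(gain), 2))      # 3200.0
```

Under these assumed rates, the blended treatment saves about a third of the tax on the same gain, which is the "edge that compounds" the post refers to.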
The CLARITY Act's definition of "decentralization" is remarkably clear. It lives up to its name. It rewards genuine decentralization, not performative decentralization. Previously, judging whether a crypto project counted as "decentralized" relied on the vague "common control" test, which let many "company chains" (performatively decentralized) exploit the loophole, arguing "we never promised anything back then, so the SEC shouldn't regulate us now." The CLARITY Act closes that loophole outright, replacing it with a stricter "mature blockchain system" standard. Its core is the elimination of substantive control: no individual or small group may be able to unilaterally influence the system's rules, upgrades, operation, or governance.
• The system must be genuinely open and permissionless (anyone can run a node; users cannot be arbitrarily banned);
• It must be purely rule-driven (transparent code runs automatically, with no manual intervention in core functions);
• No person (or coordinated group) may control more than 20% of tokens/voting power;
• The system must be sufficiently autonomous (no one can unilaterally change features or rules);
• It must be economically independent, with value coming mainly from actual network usage rather than the founders'/team's ongoing efforts.
So what are the implications?
1. For company chains (performative decentralization): it becomes much harder to fake decentralization. Insider token sales and operations will be regulated more strictly as securities, with more disclosure and more rules to follow.
2. For today's L2s: security council powers must be strictly limited; no more casual vetoes or interventions. Rapid interventions like Arbitrum's in the KelpDAO incident will be much harder going forward.
3. For genuinely decentralized projects: a real tailwind. They get a clear path: raise early funding under investment contracts (SEC jurisdiction), then, once the network matures, the token converts to a digital commodity (CFTC jurisdiction), with clearer rules for fundraising, trading, and secondary markets, and no more fear of surprise SEC enforcement.
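As an illustration only, the five criteria listed above can be read as a checklist. The field names below are invented for this sketch; the Act's actual statutory tests are far more involved than a boolean pass/fail:

```python
# Hypothetical sketch of the post's five "mature blockchain system" criteria.
# Field names are invented for illustration; the real statutory tests are
# more nuanced than this boolean checklist.

from dataclasses import dataclass

@dataclass
class ChainProfile:
    permissionless: bool        # anyone can run a node, no arbitrary bans
    rule_driven: bool           # transparent code, no manual core intervention
    max_control_share: float    # largest holder/group's token or voting share
    autonomous: bool            # no unilateral feature or rule changes
    value_from_usage: bool      # value from network use, not team effort

def is_mature(chain: ChainProfile) -> bool:
    """All five criteria must hold, including the 20% control ceiling."""
    return (chain.permissionless
            and chain.rule_driven
            and chain.max_control_share <= 0.20
            and chain.autonomous
            and chain.value_from_usage)

# A "company chain": open code, but a 35% insider stake and unilateral control.
company_chain = ChainProfile(True, True, 0.35, False, False)
print(is_mature(company_chain))  # False: concentrated control fails the test
```

The point of the checklist framing is that the 20% threshold is a hard numeric line, unlike the old "common control" test the post describes as easy to game.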
🚨 New advisory was just published! A pre-auth remote code execution vulnerability was found in the CWMP implementation of ipTIME routers, allowing unauthenticated attackers to execute arbitrary code remotely. This vulnerability was found by Park Minchan from SSD Labs Korea:
Watch @saylor's latest thoughts on $BTC, $STRC, and $MSTR with @TheBonnieChang and @davidlin_TV at Consensus 2026. A Master Class.
0:00 - Strategy's Bitcoin sale controversy
0:15 - Why Strategy may sell Bitcoin
2:42 - "Never sell your Bitcoin" explained
4:19 - How Strategy buys more Bitcoin than it sells
6:12 - Michael Saylor's Bitcoin accumulation philosophy
7:45 - Using Bitcoin liquidity and market arbitrage
9:49 - Responding to Ponzi scheme criticism
12:07 - STRC trading patterns and Bitcoin buying
13:28 - What really drives Bitcoin's price
16:34 - Bitcoin, macro risks, and Fed policy
18:21 - Bitcoin as digital capital and digital credit
20:29 - Strategy's dominance in preferred stock issuance
21:45 - AI, digital credit, and Bitcoin's future
23:13 - Saylor's childhood inspiration and MIT story
Update on Arbitrum DAO proposal: On May 1, plaintiff-judgment creditors in an unrelated matter served a restraining notice on Arbitrum DAO targeting the ETH immobilized after the rsETH incident. A few days later, Aave LLC filed an emergency motion to vacate. On May 8, the court modified the restraining notice to permit an onchain Arbitrum vote and transfer of the immobilized ETH to Aave LLC. The restraining notice attaches to Aave LLC upon transfer. The amended Constitutional AIP preserves the recovery intent approved by Arbitrum DAO, and the ETH remains directed toward the rsETH recovery. Aave LLC will comply with all court obligations as proceedings continue.
With the Base chain already out there, is $CRCL building a Layer 1 redundant? My conclusion up front: no. It's not "a bit silly," as @PhyrexNi put it; it's not about grabbing a share of the pie, as teacher Talk says; it isn't even just "launching a rival chain before standing firm," as Talk also put it. If anything, there is some urgency to it.
┈➤ Base is a single-sequencer Layer 2, with centralization risk
Base is built on the Optimism Stack and, like Optimism, runs only one sequencer. Arbitrum and zkSync also currently run a single sequencer each. In other words, Base effectively has a single operating node, and that raises security concerns. For games, or even memecoin sniping, running on single-sequencer Base means lower cost and higher efficiency. But once x402 and USDC payments take off, the security requirements rise, and a single-sequencer Base may no longer fit. That, I believe, is the real reason $CRCL is building a Layer 1: ARC is a chain focused on USDC payments, with decentralization and security as the priorities...
┈➤ The KelpDAO/LayerZero attack raised the urgency
In the recent attack on KelpDAO via LayerZero, one contributing factor was that KelpDAO relied on LayerZero's single-node service. That incident may be exactly what triggered Circle's sense of urgency about this risk. And remember Arbitrum: it froze the hacker's assets on the Arbitrum chain and planned to compensate users, only for some New York court to order the funds paid to Americans persecuted by North Korea... Had it been Bitcoin or Ethereum rather than Arbitrum, no court could have ruled that way, because those networks are decentralized and there is no central entity to compel.
I bought a little $CRCL on Sunday to LP with. Not much, but I wanted to put in a word for it.
🚨SlowMist TI Alert🚨
💸 Loss: 140,180 USDT (140,180,175,562 tokens)
🔍 Root Cause: Missing access control in the addUsers (0x4777ff62) function of PayrollDistribution. Anyone can register users for an existing drop and set an arbitrary totalAmount.
📌 Attacker: 0x90b147592191388e955401af43842e19faa87ee2
📌 Victim: 0xa184af4b1c01815a4b57422a3419e4fb78a96ee4
📌 Vulnerable Contract: 0xef2c77f3b9b8aaa067239bc6b4588bae26433494
The attacker registered an exploit contract via addUsers in its constructor, flash-loaned a USDT deposit, and claimed an oversized payroll from drop #3. Powered by SlowMist AI.
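The root cause is a classic missing-access-control bug. A minimal sketch of the pattern, written in Python rather than the contract's actual Solidity (class and method names here only mirror the alert, they are not the real contract code):

```python
# Python sketch of the bug class described in the alert: a registration
# function with no caller check lets anyone grant themselves a payout.
# This mirrors, but is not, the actual Solidity contract.

class PayrollDistribution:
    def __init__(self, owner):
        self.owner = owner
        self.entitlements = {}  # user -> totalAmount claimable

    def add_users_vulnerable(self, caller, users):
        # BUG: no check that caller == self.owner, so any caller can
        # register arbitrary users with an arbitrary totalAmount.
        for user, total_amount in users:
            self.entitlements[user] = total_amount

    def add_users_fixed(self, caller, users):
        # FIX: restrict registration to the contract owner.
        if caller != self.owner:
            raise PermissionError("only owner may register users")
        for user, total_amount in users:
            self.entitlements[user] = total_amount

drop = PayrollDistribution(owner="deployer")
# Attacker self-registers an oversized entitlement, as in the incident.
drop.add_users_vulnerable(caller="attacker", users=[("attacker", 10**12)])
print(drop.entitlements["attacker"])
```

In the real exploit the registration happened from the attacker contract's constructor and the claim was funded via a flash loan, but the enabling flaw is exactly the unguarded registration call above.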
Dear ICP community, the Internet Computer has now been running strong for 5 years 👏👏👏 Here is a celebratory preview of ICP "cloud engines," the sovereign frontier cloud technology the network shall soon provide. Main points:

— Cloud engines enable anyone to spin up their own sovereign frontier cloud. The technology involves an extraordinary inventive step, in which cloud is created from a mathematically secure network of nodes. The nodes run as part of the Internet Computer network, but are selected and configured by the cloud engine's owner.

— The frontier cloud provided by engines is strongly focused on enabling AI agents to build and update online applications and services for us. The world is changing fast, and nearly all new online apps and services are already being built with the help of AI, and thus cloud engines target the future of cloud.

— Software hosted on cloud engines is tamperproof, which means that it is immune to infrastructure hacks, because it runs inside a mathematically secure network protocol, rather than on computers directly. This means that AI agents, and those building with them, don't need to have a security team in the loop, or to trust someone else's security team. This is crucial, because in the future, non-technical people will demand the freedom to build with full automation, where they just need to issue instructions to AI about what to build, and don't need to worry about anything or anyone else. Of course, apps and services running on engines are also vastly safer from the new breed of hacker being enabled by frontier AI. (The cloud engines themselves are also "tamperproof." Even if a hacker gains physical access to some portion of a cloud engine's nodes, and can make arbitrary changes, the computations and data of the hosted apps and services cannot be corrupted or interrupted so long as the network's fault bounds aren't exceeded.
The recent hack of Vercel, a major cloud platform, which gave hackers access to the apps it hosted, provides additional perspective on the importance of this advantage.)

— Software hosted on cloud engines is guaranteed to run, so long as a sufficient number of the engine's nodes are running. This means that AI can build applications and services without the need to have a human systems admin team constantly tinkering with the underlying platform to keep it running, which is again crucial, because in the future, non-technical people will expect the freedom to use AI to build without the support of others.

— New frontier programming language technology, in the form of the Motoko language developed by Caffeine Labs, leverages seminal "orthogonal persistence" technology that unifies program logic and data to deliver further unlocks for AI (Motoko is the first computer language being developed that targets agents that write software, rather than human engineers per se). Nowadays, AI can build and update production apps at a prodigious rate, even at the speed of conversation. But it can also make mistakes, and there's a risk that an update it creates might be "lossy," in the sense that it causes some transformed data to be lost. Again, in this new world, it's both undesirable and impractical for everyone to have to have a systems admin team on hand to detect lossy updates and roll them back, but Motoko provides a solution: it can detect that a new software update is lossy before it is applied, reducing potentially catastrophic errors by AI to harmless coding retries.

— Software hosted on cloud engines is "serverless," but unlike traditional serverless software, it directly incorporates data through "orthogonal persistence." Another key purpose is to simplify backend software logic and fuel the modeling power of AI by increasing abstraction (sorry for the technical language!!!).
Put simply, this enables AI to produce more sophisticated backends, faster, and at dramatically lower costs, as measured by the number of AI API tokens consumed during coding. (Tip for the technical: orthogonal persistence is a new paradigm where "the program is the database," and data lives inside program variables, which is possible because it's as if hosted software runs forever in persistent memory.)

— An expanding database of skills shall make it possible to develop and deploy apps and services to your cloud engines directly from Claude Code, Perplexity, Codex and other AI platforms. Further, your account can be connected, so that new apps and updates created through conversation automatically appear hosted from your cloud engine. In the future, R&D is going to be very seamless. You converse with AI, and your secure and unstoppable apps or services are created or updated. Cloud engines are designed to directly support this "self-writing cloud" future where we can work hands-free.

— Tech sovereignty is becoming a huge issue worldwide, with governments and corporations seeking to create sovereign tech stacks owing to geopolitical tensions. Increasingly, people are realizing that tech provided by foreign nations can come with hidden backdoors and kill switches, from the base platform right up through hosted apps and services. ICP technology is open source, and those building on ICP using AI own their own source code. When you have the source code, you can verify that there are no backdoors, and when you own the source code thanks to AI, you can update it at will, freeing you from vendor lock-in. But cloud engines take sovereignty much further...

— You create a cloud engine by selecting the nodes that will be combined. You can choose the class of nodes used, and their number, but more importantly, you can choose who operates the nodes, and where they are located.
Almost any configuration is possible, because the Internet Computer scales the security privileges afforded to hosted software within the network according to configuration (software hosted on cloud engines can directly interoperate with software on other engines and traditional subnets, but base restrictions are applied according to security rules). A cloud engine can be created within a region such as Europe, to comply with regs such as GDPR, or completely within a sovereign state like Switzerland or Pakistan. But cloud engines go further still...

— Sovereignty is also about freedom from vendor lock-in. Cloud engines are essentially ICP (Internet Computer Protocol) network configurations, and this means the underlying compute nodes they combine can be swapped out without interrupting their hosted apps and services. This is a big deal. Cloud engines also now support nodes that are instances running on Big Tech's clouds, in addition to nodes that are dedicated specialized hardware, as per the Gen I and Gen II nodes that dominate the Internet Computer today. For example, it is possible to have an engine running across different AWS data centers, say, and then reconfigure the engine to run across a mixture of AWS, Google, Azure and Hetzner for even more resilience, without the users of hosted apps and services noticing a thing. That's true freedom.

— Sovereign AI is becoming increasingly important too, and cloud engines allow special "AI nodes" to be added to them, so that hosted software can perform inference on hardware provisioned by the owner from a location the owner has selected. Even though the AI nodes are only accessible within the cloud engine, they can still benefit from the forthcoming Internet Intelligence Gateway (IG), which will make it possible to validate inference performed on key frontier open weights LLMs, even when the inference is performed on completely independent AI clouds.
When the results of inference are received, this technology can verify that neither the prompt+context (input) nor the inference result (output) has been modified, and that the results were produced by the precise LLM expected. This ensures that AI clouds don't cheat by running inference on cheaper models than are being paid for, and that bad actors aren't modifying the inputs or outputs to surreptitiously insert advertising into results, say, or change facts, or insert malware when code is being generated. What's super cool about this technology is that the cost of the verification is scalable: a very valuable additional layer of security can be achieved for only 1-2% extra cost.

— Scaling apps and services when they hit capacity limits is another thorny problem that cloud engines help the world address. Engines make scaling possible without rewriting or reconfiguring software. The query workload capacity of hosted software can be horizontally scaled simply by adding new nodes to an engine, and nodes can also be added in geographical proximity to demand. Meanwhile, update workload capacity can first be scaled up by swapping an engine's nodes out for the next class up, and then, when no larger class of node is available, horizontally scaled out by "splitting" the engine into two, which doubles available capacity. (Technical tip: horizontally scaling update capacity by splitting engines requires multi-canister architectures.)

— For those who have been following how Caffeine builds apps that can efficiently store large numbers of files, I should mention that apps built on cloud engines will also support the new ICP Blob Storage cloud network (since cloud engines currently have up to about 3 TB of memory, which apps storing large amounts of files can easily exceed). We are also working on allowing blob storage nodes to be added to cloud engines, to enable sovereign mass blob storage within an engine, similarly to how AI nodes can be added currently.
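The inference-verification idea described above can be sketched as committing to a digest over the model identity, the input, and the output. This is only an illustrative hash-commitment analogy; the post does not specify the Intelligence Gateway's actual protocol, and the model names below are made up:

```python
# Illustrative hash-commitment analogy for verified inference; the actual
# Intelligence Gateway protocol is not described in the post, and the
# model identifiers below are invented.

import hashlib
import json

def digest(obj):
    """Deterministic SHA-256 digest of a JSON-serializable object."""
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

def attest(model_id, prompt, output):
    # The attesting side commits to model, input, and output together.
    return digest({"model": model_id, "in": prompt, "out": output})

def verify(attestation, model_id, prompt, output):
    # Any change to the prompt, the output, or the claimed model
    # breaks the commitment, so substitutions are detectable.
    return attestation == attest(model_id, prompt, output)

a = attest("frontier-70b", "2+2?", "4")
print(verify(a, "frontier-70b", "2+2?", "4"))  # True
print(verify(a, "cheap-8b", "2+2?", "4"))      # False: model substituted
```

Because verification only recomputes a digest, its cost stays small relative to the inference itself, which is consistent with the low overhead the post claims.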
— Lastly, but certainly not least, I should mention that cloud engines are multi-blockchain capable, and ready for digital assets, thanks to the clever math at their core. For example, an e-commerce service built on a cloud engine can securely accept and custody stablecoin payments, or a multi-chain DEX could be hosted. Further, engines can support software autonomy (software orchestrated and controlled by other autonomous software, in a decentralized way) and can themselves be orchestrated by SNS technology, and thus run autonomously too.

Today, though, the focus is on *mainstream* cloud. This year, the cloud industry will generate approximately one trillion dollars in revenue. That number is already huge, but is expected to grow to two trillion dollars by 2030. After years of continuous development, which have seen more than $500m spent on R&D, the Internet Computer network is now tacking directly toward this mainstream cloud market with cloud engine technology. In their first version, cloud engines are not meant to be a cloud panacea. For example, currently they are not ideal for working with big data; you should use something like Databricks for that. Cloud engines are carefully targeted at enabling AI to produce traditional online applications and services, including SaaS, in a safer and more productive way, which represents a new market segment with tremendous potential. Of course, DFINITY will continue to work relentlessly to push forward ICP's capabilities, so expect further developments. It's worth mentioning that this cloud segment isn't just about creating new apps and services using AI; it's also about replacing legacy systems and apps built on super expensive SaaS services.
Caffeine Labs is working to produce technology (Caffeine Snorkel) that can study an enterprise's legacy systems and apps built on SaaS, create replacement systems and apps, and migrate the data, while supporting key stakeholders through the process over email and chat, with full automation. Thus the legacy systems and SaaS markets shall also be addressed by cloud engines.

Zooming out, and reasoning in a more metaphysical way, we believe, as we always have, that there is room for a new kind of cloud created by mathematical networks, one that provides seminal advances in the fields of security and resilience, as well as true sovereignty and freedom from lock-in. That this same technology, with the help of additional technologies like orthogonal persistence and Motoko, lets AI build for us without the need for so much oversight, and create more backend sophistication while consuming fewer AI API tokens, enables ICP to bring game-changing advances to the world. Cloud engines will work synergistically with the Intelligence Gateway, which will enable apps and services running on engines to seamlessly leverage AI, wherever that AI is running, while providing verifiability at extremely low cost for open weights frontier models.

We believe that cloud engines represent an inflection point in the storied history of the Internet Computer project, and I'm very proud to be sharing the details with you on the network's fifth birthday 💪 I'll be back with more news soon!!
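As a loose analogy to the "orthogonal persistence" tip earlier in the post (where "the program is the database" and data lives in program variables), a Python sketch can emulate a variable that survives process restarts by backing it with a key-value store. This is an analogy only: on ICP/Motoko, ordinary variables persist automatically with no explicit storage layer like the one below, and the class here is invented for illustration:

```python
# Loose analogy only: emulate a program variable that survives restarts
# by backing it with a shelve store. In Motoko on ICP, ordinary variables
# persist automatically, with no explicit database layer like this.

import shelve

class PersistentCounter:
    def __init__(self, path="counter_state"):
        # Opens (or creates) an on-disk store backing the "variable".
        self._db = shelve.open(path)

    @property
    def value(self):
        return self._db.get("count", 0)

    def increment(self):
        # Writing through to the store is what explicit persistence costs;
        # orthogonal persistence removes this layer from the programmer's view.
        self._db["count"] = self.value + 1

    def close(self):
        self._db.close()

c = PersistentCounter()
c.increment()
print(c.value)  # survives process restarts, like a persistent variable
c.close()
```

Reopening the same path in a new process yields the previous count, which is the property the post claims lets AI-written backends skip a separate database tier entirely.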
robots dancing / fighting is such old meta in china lol, somehow catching wind in the states

one can simply bring what's popular in china 6 months ago, and score a win

growth marketing = arbitrage