
Search results for 「NODE」

Content containing NODE
🚨 SlowMist TI Alert 🚨 MistEye has received critical threat intelligence regarding an active supply chain attack compromising node-ipc, a foundational Node.js library. The malicious releases have been identified as versions 9.1.6, 9.2.3, and 12.0.1. Threat actors injected an obfuscated credential-stealing payload into the CommonJS bundle. Once loaded, it silently harvests over 90 categories of developer data, including AWS, Azure, GCP, SSH, and K8s tokens and Terraform state, and exfiltrates it to attacker-controlled infrastructure. We have synchronized this IOC with our clients immediately.

Detection & Remediation. Please urgently audit your environments for exposure:
• Dependencies: Run npm ls node-ipc --all to identify direct or transitive inclusions.
• Lockfiles: Search package-lock.json, yarn.lock, or pnpm-lock.yaml for the affected version ranges.
• CI/CD: Review pipeline jobs executed after May 14, 2026, that may have pulled loose semver updates (~9.1.x, ^12, etc.).

⚠️ Critical Action: If a compromised version was installed, assume compromise is certain. Do not wait for exfiltration confirmation. Downgrade to a known safe version immediately and aggressively rotate all credentials, tokens, and environment secrets present on the affected machine or CI runner. As always, stay vigilant!
🚨 node-ipc is compromised again. Three new malicious versions just dropped: 9.1.6, 9.2.3, and 12.0.1. Socket's AI scanner flagged them as malware within three minutes of publication.

The attack vector: a dormant maintainer account (atiertant) was likely taken over via an expired email domain. The attacker registered the lapsed domain, triggered an npm password reset, and gained publish rights to a package with millions of historical downloads.

The payload is a credential stealer embedded in the CommonJS entrypoint (node-ipc.cjs). It activates on require("node-ipc"), not through a postinstall script. Here's what it does:
• Fingerprints the host (OS, arch, hostname, uname)
• Harvests 113-127 credential file patterns depending on platform (AWS, GCP, Azure, SSH keys, Kubernetes configs, npm tokens, .env files, shell histories, macOS Keychain databases, and more)
• Dumps the entire process.env, capturing every CI secret and cloud credential in memory
• Builds a gzip archive in a temp directory
• Exfiltrates everything over DNS TXT queries to bt[.]node[.]js, using a bootstrap resolver at sh[.]azurestaticprovider[.]net:443 (a deliberate lookalike of Microsoft's Azure Static Web Apps domain)

The DNS exfiltration is chunked. A 500 KB archive generates roughly 29,400 TXT queries. The body is XOR-encrypted with a SHA-256 keystream, base64-encoded, alphabet-substituted, and split into 31-character chunks before hex-encoding into DNS labels. Header, data, and footer queries use xh, xd, and xf prefixes respectively.

The malware forks a detached child process (env var __ntw=1) so credential theft runs silently in the background. It also exposes a __ntRun export, meaning any downstream code that calls require("node-ipc").__ntRun() can trigger a second collection/exfiltration cycle.

Per the reviewed package metadata, ESM-only consumers using the import path are not affected. CommonJS consumers are. This is the same package involved in the 2022 protestware incident. It has a history.
If you use node-ipc:
• Do not install 9.1.6, 9.2.3, or 12.0.1
• Audit your lockfiles for these versions
• If you loaded the CommonJS entrypoint, treat all environment variables, SSH keys, cloud credentials, npm tokens, and local secrets as compromised. Rotate immediately.
• Hunt for DNS TXT queries to bt[.]node[.]js and sh[.]azurestaticprovider[.]net in your network logs
• Check for temp files matching /nt-/.tar.gz

Credit to Ian Ahl (@TekDefense) for first publicly identifying the expired-domain account takeover vector. Developing story. Full technical breakdown and IOCs on the Socket blog:
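The query volume described in the thread can be sanity-checked with simple arithmetic. A minimal sketch under stated assumptions: pure base64 expansion (4 output characters per 3 input bytes) and 31-character chunks at one chunk per query, ignoring the encryption, alphabet substitution, and xh/xf framing queries. Under these assumptions a 500 KB archive needs about 22,000 data queries; the reported ~29,400 figure presumably also covers framing queries, retries, and encoding overhead not modeled here:

```javascript
// Back-of-envelope estimate of TXT query count for chunked DNS exfiltration.
// Assumptions (not the malware's exact pipeline): base64 expansion only,
// 31-character chunks, one chunk per query, no framing or retries.
function estimateTxtQueries(archiveBytes, chunkChars = 31) {
  const base64Chars = Math.ceil(archiveBytes / 3) * 4; // base64 size in chars
  return Math.ceil(base64Chars / chunkChars);          // data queries only
}

console.log(estimateTxtQueries(500 * 1024)); // prints 22022
```

Either way, the takeaway for defenders is the same: tens of thousands of TXT queries to one domain in a short window is a loud network signal worth alerting on.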
Shipping 🌐 Proxyline 0.2.0. Process-global proxy routing for Node.js: one explicit policy for node:http, node:https, fetch/undici, WebSocket, and CONNECT. Much lighter than global-agent, which we used in OpenClaw before. 12 sub-dependencies GONE!
🚨 BREAKING: node-ipc compromised. Again. Three malicious versions of node-ipc (9.1.6, 9.2.3, 12.0.1) were published today carrying an identical credential-stealing payload. This package has 10M+ weekly downloads.

Here's what happened: An attacker injected an 80KB obfuscated IIFE into the CommonJS bundle. It fires on every require('node-ipc') call. No special config needed; just importing the package is enough.

What it steals:
→ AWS, Azure, GCP credentials
→ SSH private keys
→ Kubernetes configs
→ Docker tokens
→ GitHub CLI tokens
→ AI tool configs (including Claude)
→ Terraform state
→ 90+ credential file patterns in total

Everything gets gzipped and exfiltrated to an attacker-controlled domain (sh[.]azurestaticprovider[.]net) via DNS TXT queries and HTTPS POST, designed to look like normal traffic.

The attacker published across two major version lines simultaneously (9.x and 12.x) to maximize blast radius. Semver ranges like ^9, ~9.1.x, ~9.2.x, ^12, and ~12.0 all resolve to compromised versions automatically on the next install or lockfile refresh.

Key details: Only the CommonJS bundle (node-ipc.cjs) is affected. ESM imports are clean. The 9.x releases are fabricated; the 9.x line never shipped a .cjs bundle before this attack. This is a different actor from the 2022 peacenotwar incident, with a purely financial, credential-theft motivation.

If you installed any of these versions, assume all secrets on that machine are compromised. Rotate everything. Our full technical breakdown covers the attack chain stage by stage, IOCs, and how to check if you're affected:
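Since the post warns that loose semver ranges silently pick up the malicious releases, one practical defense is to detect non-exact node-ipc ranges in package.json and pin them. A minimal sketch; the helper name is hypothetical, the dependency field names are standard npm conventions, and "exact" is naively defined as a plain `x.y.z` string:

```javascript
// Sketch: report node-ipc entries in package.json that use a loose semver
// range (^9, ~9.1.x, etc.), since those auto-resolve to new releases.
// Exact pins of the form "x.y.z" are considered safe from range drift.
function looseNodeIpcRanges(pkgJson) {
  const loose = [];
  for (const field of ["dependencies", "devDependencies", "optionalDependencies"]) {
    const range = (pkgJson[field] || {})["node-ipc"];
    if (range && !/^\d+\.\d+\.\d+$/.test(range)) {
      loose.push({ field, range });
    }
  }
  return loose;
}
```

Anything this flags should be replaced with an exact version you have independently verified as clean; exact pins do not protect transitive dependents, so lockfile auditing is still required.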
Dear ICP community, the Internet Computer has now been running strong for 5 years 👏👏👏 Here is a celebratory preview of ICP "cloud engines," the sovereign frontier cloud technology the network shall soon provide. Main points:

— Cloud engines enable anyone to spin up their own sovereign frontier cloud. The technology involves an extraordinary inventive step, in which a cloud is created from a mathematically secure network of nodes. The nodes run as part of the Internet Computer network, but are selected and configured by the cloud engine's owner.

— The frontier cloud provided by engines is strongly focused on enabling AI agents to build and update online applications and services for us. The world is changing fast, and nearly all new online apps and services are already being built with the help of AI, and thus cloud engines target the future of cloud.

— Software hosted on cloud engines is tamperproof, which means that it is immune to infrastructure hacks, because it runs inside a mathematically secure network protocol rather than on computers directly. This means that AI agents, and those building with them, don't need to have a security team in the loop, or to trust someone else's security team. This is crucial, because in the future, non-technical people will demand the freedom to build with full automation, where they just need to issue instructions to AI about what to build, and don't need to worry about anything or anyone else. Of course, apps and services running on engines are also vastly safer from the new breed of hacker being enabled by frontier AI. (The cloud engines themselves are also "tamperproof." Even if a hacker gains physical access to some portion of a cloud engine's nodes, and can make arbitrary changes, the computations and data of the hosted apps and services cannot be corrupted or interrupted so long as the network's fault bounds aren't exceeded.
The recent hack of Vercel, a major cloud platform, which gave hackers access to the apps it hosted, provides additional perspective on the importance of this advantage.)

— Software hosted on cloud engines is guaranteed to run, so long as a sufficient number of the engine's nodes are running. This means that AI can build applications and services without the need for a human systems admin team constantly tinkering with the underlying platform to keep it running, which is again crucial, because in the future, non-technical people will expect the freedom to use AI to build without the support of others.

— New frontier programming language technology, in the form of the Motoko language developed by Caffeine Labs, leverages seminal "orthogonal persistence" technology that unifies program logic and data to deliver further unlocks for AI (Motoko is the first computer language developed to target agents that write software, rather than human engineers per se). Nowadays, AI can build and update production apps at a prodigious rate, even at the speed of conversation. But it can also make mistakes, and there's a risk that an update it creates might be "lossy," in the sense that it causes some transformed data to be lost. Again, in this new world, it's both undesirable and impractical for everyone to have a systems admin team on hand to detect lossy updates and roll them back, but Motoko provides a solution: it can detect that a new software update is lossy before it is applied, reducing potentially catastrophic errors by AI to harmless coding retries.

— Software hosted on cloud engines is "serverless," but unlike traditional serverless software, it directly incorporates data through "orthogonal persistence." Another key purpose is to simplify backend software logic and fuel the modeling power of AI by increasing abstraction (sorry for the technical language!!!).
Put simply, this enables AI to produce more sophisticated backends, faster, and at dramatically lower cost, as measured by the number of AI API tokens consumed during coding. (Tip for the technical: orthogonal persistence is a new paradigm where "the program is the database," and data lives inside program variables, which is possible because it's as if hosted software runs forever in persistent memory.)

— An expanding database of skills shall make it possible to develop and deploy apps and services to your cloud engines directly from Claude Code, Perplexity, Codex, and other AI platforms. Further, your account can be connected, so that new apps and updates created through conversation automatically appear hosted from your cloud engine. In the future, R&D is going to be very seamless: you converse with AI, and your secure and unstoppable apps or services are created or updated. Cloud engines are designed to directly support this "self-writing cloud" future, where we can work hands-free.

— Tech sovereignty is becoming a huge issue worldwide, with governments and corporations seeking to create sovereign tech stacks owing to geopolitical tensions. Increasingly, people are realizing that tech provided by foreign nations can come with hidden backdoors and kill switches, from the base platform right up through hosted apps and services. ICP technology is open source, and those building on ICP using AI own their own source code. When you have the source code, you can verify that there are no backdoors, and when you own the source code thanks to AI, you can update it at will, freeing you from vendor lock-in. But cloud engines take sovereignty much further...

— You create a cloud engine by selecting the nodes that will be combined. You can choose the class of nodes used, and their number, but more importantly, you can choose who operates the nodes, and where they are located.
Almost any configuration is possible, because the Internet Computer scales the security privileges afforded to hosted software within the network according to configuration (software hosted on cloud engines can directly interoperate with software on other engines and traditional subnets, but base restrictions are applied according to security rules). A cloud engine can be created within a region such as Europe, to comply with regulations such as GDPR, or entirely within a sovereign state like Switzerland or Pakistan. But cloud engines go further still...

— Sovereignty is also about freedom from vendor lock-in. Cloud engines are essentially ICP (Internet Computer Protocol) network configurations, and this means the underlying compute nodes they combine can be swapped out without interrupting their hosted apps and services. This is a big deal. In addition, cloud engines now support nodes that are instances running on Big Tech's clouds, in addition to nodes that are dedicated specialized hardware, as per the Gen I and Gen II nodes that dominate the Internet Computer today. For example, it is possible to have an engine running across different AWS data centers, say, and then reconfigure the engine to run across a mixture of AWS, Google, Azure, and Hetzner for even more resilience, without the users of hosted apps and services noticing a thing. That's true freedom.

— Sovereign AI is becoming increasingly important too, and cloud engines allow special "AI nodes" to be added to them, so that hosted software can perform inference on hardware provisioned by the owner from a location the owner has selected. Even though the AI nodes are only accessible within the cloud engine, they can still benefit from the forthcoming Internet Intelligence Gateway (IG), which will make it possible to validate inference performed on key frontier open-weights LLMs, even when the inference is performed on completely independent AI clouds.
When the results of inference are received, this technology can verify that neither the prompt+context (input) nor the inference result (output) has been modified, and that the results were produced by the precise LLM expected. This ensures that AI clouds don't cheat by running inference on cheaper models than are being paid for, and that bad actors aren't modifying the inputs or outputs to surreptitiously insert advertising into results, say, or change facts, or insert malware when code is being generated. What's super cool about this technology is that the cost of the verification is scalable: a very valuable additional layer of security can be achieved for only 1-2% of extra cost.

— Scaling apps and services when they hit capacity limits is another thorny problem that cloud engines help the world address. Engines make scaling possible without rewriting or reconfiguring software. The query workload capacity of hosted software can be horizontally scaled simply by adding new nodes to an engine, and nodes can also be added in geographical proximity to demand. Meanwhile, update workload capacity can first be scaled up by swapping an engine's nodes out for the next class up, and then, when no larger class of node is available, horizontally scaled out by "splitting" the engine into two, which doubles available capacity. (Technical tip: horizontally scaling update capacity by splitting engines requires multi-canister architectures.)

— For those who have been following how Caffeine builds apps that can efficiently store large numbers of files, I should mention that apps built on cloud engines will also support the new ICP Blob Storage cloud network (since cloud engines currently have up to about 3 TB of memory, which apps storing large amounts of files can easily exceed). We are also working on allowing blob storage nodes to be added to cloud engines, to enable sovereign mass blob storage within an engine, similarly to how AI nodes can be added currently.
— Lastly, but certainly not least, I should mention that cloud engines are multi-blockchain capable, and ready for digital assets, thanks to the clever math at their core. For example, an e-commerce service built on a cloud engine can securely accept and custody stablecoin payments, or a multi-chain DEX could be hosted. Further, engines can support software autonomy (software orchestrated and controlled by other autonomous software, in a decentralized way) and can themselves be orchestrated by SNS technology, and thus run autonomously too.

Today, though, the focus is on *mainstream* cloud. This year, the cloud industry will generate approximately one trillion dollars in revenue. That number is already huge, but it is expected to grow to two trillion dollars by 2030. After years of continuous development, which have seen more than $500m spent on R&D, the Internet Computer network is now tacking directly toward this mainstream cloud market with cloud engine technology.

In their first version, cloud engines are not meant to be a cloud panacea. For example, they are currently not ideal for working with big data; you should use something like Databricks for that. Cloud engines are carefully targeted at enabling AI to produce traditional online applications and services, including SaaS, in a safer and more productive way, which represents a new market segment with tremendous potential. Of course, DFINITY will continue to work relentlessly to push forward ICP's capabilities, so expect further developments. It's worth mentioning that this cloud segment isn't just about creating new apps and services using AI; it's also about replacing legacy systems and apps built on super expensive SaaS services.
Caffeine Labs is working to produce technology (Caffeine Snorkel) that can study an enterprise's legacy systems and apps built on SaaS, create replacement systems and apps, and migrate the data, while supporting key stakeholders through the process over email and chat, with full automation. Thus the legacy systems and SaaS markets shall also be addressed by cloud engines.

Zooming out, and reasoning in a more metaphysical way, we believe, as we always have, that there is room for a new kind of cloud created by mathematical networks, one that provides seminal advances in the fields of security and resilience, as well as true sovereignty and freedom from lock-in. This same technology, with the help of additional technologies like orthogonal persistence and Motoko, enables AI to build for us without the need for so much oversight, and to create more backend sophistication while consuming fewer AI API tokens, allowing ICP to bring game-changing advances to the world. Cloud engines will work synergistically with the Intelligence Gateway, which will enable apps and services running on engines to seamlessly leverage AI, wherever that AI is running, while providing verifiability at extremely low cost for open-weights frontier models.

We believe that cloud engines represent an inflection point in the storied history of the Internet Computer project, and I'm very proud to be sharing the details with you on the network's fifth birthday 💪 I'll be back with more news soon!!
My vote for the project that has strung its community along the most goes to Tabi. Any objections? It was born with every advantage: Binance investment in 2023 plus a major bull market. Five or six years of waiting have produced one new narrative after another, chasing whatever track is hot, mastering none of them, and ending up as neither fish nor fowl. Count the hot narratives this project has ridden: NFT, BSC, L1, GameFi, infrastructure, Cosmos, EVM, payments, stablecoins, RWA, AI.

Summary:
21-23: NFT marketplace phase (Treasureland → Tabi)
24: L1 chain + Gaming/Consumer infrastructure phase
25: Consumer Finance (TabiPay) deep-dive phase
26: Consumer AI execution layer (current core narrative)

Detailed project timeline:
20-21: early NFT platform
23.5: L1 chain announced
24.2-3: Tabi Chain announced (Cosmos SDK + EVM-compatible modular chain)
24.4-6: roadmap published: Public Sale → Captain Node Sale → Airdrop → TGE
25: Devnet v3/v4 rolled out, Genesis Deposit, Testnet v3, TabiPay launched; expansion into payments, stablecoins, RWA
26 Q1: Mainnet launch originally planned (still in progress)
26.4-5: TabiPay sunset (gradual shutdown); full pivot to Consumer AI positioning
Architects are going to lose their minds over this. Someone just open-sourced a fully featured 3D architecture editor that runs entirely in the browser. No AutoCAD, no Revit, no $5,000-per-year license. It's called Pascal Editor. It's built on React Three Fiber and WebGPU, which means it renders directly on your GPU at near-native speed. Core features:
→ A complete building/floor/wall/zone hierarchy with real-time editing
→ An ECS-style architecture where every object is updated through GPU-driven systems
→ Zustand state management with full undo/redo
→ A Next.js frontend, so it deploys as a web app rather than a desktop install
→ Dirty node tracking: only the parts that changed are re-rendered, not the whole scene

The most impressive part: you can stack, explode, or isolate individual building floors. Select a zone, drag a wall, reshape a floor slab, all in 3D, all in the browser. Architecture firms pay $50,000+ per seat for BIM software that enables this workflow. This one is free and 100% open source.
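The dirty-node-tracking idea mentioned above is simple to illustrate. A minimal sketch of the general pattern, not Pascal Editor's actual implementation; the class and method names here are hypothetical:

```javascript
// Minimal sketch of dirty-node tracking: between frames, edits mark nodes
// dirty; at flush time only those nodes are re-rendered, then the set clears.
// This illustrates the pattern described in the post, nothing more.
class DirtyTracker {
  constructor() {
    this.dirty = new Set(); // node ids touched since the last flush
  }

  markDirty(nodeId) {
    this.dirty.add(nodeId); // duplicates collapse automatically
  }

  // Calls render(nodeId) once per dirty node, clears the set,
  // and returns the list of nodes that were re-rendered.
  flush(render) {
    const rendered = [...this.dirty];
    rendered.forEach(render);
    this.dirty.clear();
    return rendered;
  }
}
```

The payoff is that a scene with thousands of walls and slabs only pays render cost for the handful of objects the user actually touched since the last frame.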
Cloudflare Mesh is here. Ready to connect your devices, servers, and agents to a single private network 🔐 And with Workers VPC, your Workers, Agents, and Durable Objects running on Cloudflare can now reach your private MCPs, APIs, and databases directly Oh, and it's 50 nodes + 50 users free on every account
@evilcos Upgraded OpenClaw to 3.28 this morning and already got hit:
• axios@1.14.1 at the global path ~/.npm-global/lib/node_modules/openclaw/node_modules/axios (used by OpenClaw's internal dependencies).
• axios@0.30.4 was not found in the global npm install or the current workspace.
Recommendation: feed the following prompt to all of your agents (including OpenClaw) to check whether you were hit by this wave of axios poisoning:

Using the method below, thoroughly check our environment for the poisoned axios@1.14.1 and axios@0.30.4 and the malicious module plain-crypto-js. Miss nothing; make the sweep exhaustive:

Check for the malicious axios versions in your project:
npm list axios 2>/dev/null | grep -E "1\.14\.1|0\.30\.4"
grep -A1 '"axios"' package-lock.json | grep -E "1\.14\.1|0\.30\.4"

Check for plain-crypto-js in node_modules:
ls node_modules/plain-crypto-js 2>/dev/null && echo "POTENTIALLY AFFECTED"

If setup.js already ran, the package.json inside this directory will have been replaced with a clean stub. The presence of the directory is sufficient evidence that the dropper executed.

Check for RAT artifacts on affected systems:
# macOS
ls -la /Library/Caches/com.apple.act.mond 2>/dev/null && echo "COMPROMISED"
# Linux
ls -la /tmp/ld.py 2>/dev/null && echo "COMPROMISED"
# Windows (cmd.exe)
dir "%PROGRAMDATA%\wt.exe" 2>nul && echo COMPROMISED