
Search results for "hacker"

Content containing "hacker"
100+ places to launch your startup:
1. Product Hunt
2. BetaList
3. TrustMRR
4. Uneed
5. TinyLaunch
6. Indie Hackers
7. Hacker News
8. Tiny Startup
9. PeerPush
10. SideProjectors
11. DevHunt
12. Launching Next
13. Microlaunch
14. Launch Directories
15. StartupBase
16. ShowMeBestAI
17. Trendy Startups
18. Software Advice
19. There's an AI for that
20. AlternativeTo
21. OpenAlternative
22. SaaSHub
23. Toolfolio
24. LibHunt
25. SaaS Genius
26. FoundrList
27. Stacker News
28. PitchWall
29. API List
30. MakerPad
31. Dan Recommends
32. Startup Buffer
33. AppSumo
34. SEO Wins
35. RocketHub
36. StackSocial
37. SaaS Mantra
38. SaaS Warrior
39. LTD Hunt
40. KEN Moo
41. Prime Club
42. SaaSZilla
43. Fazier
44. Peerlist
45. Next Gen Tools
46. Sustainability Softwares
47. Saas Baba
48. PromptZone
49. Futurepedia
50. Toolkitly
51. LaunchIgniter
52. Firsto
53. Indie Tools
54. Manta
55. Indie Deals
56. PayOnceUseForever
57. Slocco
58. ToolFame
59. GPTStore
60. AlterOpen
61. SaaS Gallery
62. Aura Plus Plus
63. That AI Collection
64. BasedTools
65. SaaS Pirate
66. Product Canyon
67. Deal Mirror
68. Dealify
69. Goodfirms
70. AI Agent Store
71. BroUseAI
72. Altern
73. BestWebDesignTools
74. MadGenius
75. BotsFloor
76. AIDir Wiki
77. Look AI Tools
78. The AI Generation
79. Waild World
80. Wavel
81. Indie Products
82. Invent List
83. Hack the Prompt
84. Startup Heroes
85. AI Marketing Directory
86. RankYourAI
87. EarlyHunt
88. Tekpon
89. Dokey AI
90. Appscribed
91. Open Tools
92. SEOFAI
93. Startups FYI
94. AI Tool Trek
95. Powerusers
96. AI Parabellum
97. Serchen
98. RobinGood
99. Affiliate Watch
100. IndieHunt
101. Reviano
102. Nocode List
103. Software World
104. AIxploria
105. Ctrlalt
106. AI Hunter
107. Public APIs
Dear ICP community, the Internet Computer has now been running strong for 5 years 👏👏👏 Here is a celebratory preview of ICP "cloud engines," the sovereign frontier cloud technology the network shall soon provide. Main points:

— Cloud engines enable anyone to spin up their own sovereign frontier cloud. The technology involves an extraordinary inventive step, in which a cloud is created from a mathematically secure network of nodes. The nodes run as part of the Internet Computer network, but are selected and configured by the cloud engine's owner.

— The frontier cloud provided by engines is strongly focused on enabling AI agents to build and update online applications and services for us. The world is changing fast, and nearly all new online apps and services are already being built with the help of AI, so cloud engines target the future of cloud.

— Software hosted on cloud engines is tamperproof, which means it is immune to infrastructure hacks, because it runs inside a mathematically secure network protocol rather than on computers directly. This means that AI agents, and those building with them, don't need a security team in the loop, or to trust someone else's security team. This is crucial, because in the future non-technical people will demand the freedom to build with full automation, where they just issue instructions to AI about what to build and don't need to worry about anything or anyone else. Of course, apps and services running on engines are also vastly safer from the new breed of hacker being enabled by frontier AI. (The cloud engines themselves are also "tamperproof." Even if a hacker gains physical access to some portion of a cloud engine's nodes and can make arbitrary changes, the computations and data of the hosted apps and services cannot be corrupted or interrupted so long as the network's fault bounds aren't exceeded.
The recent hack of Vercel, a major cloud platform, which gave hackers access to the apps it hosted, provides additional perspective on the importance of this advantage.)

— Software hosted on cloud engines is guaranteed to run so long as a sufficient number of the engine's nodes are running. This means AI can build applications and services without a human systems admin team constantly tinkering with the underlying platform to keep it running, which is again crucial, because in the future non-technical people will expect the freedom to use AI to build without the support of others.

— New frontier programming-language technology, in the form of the Motoko language developed by Caffeine Labs, leverages seminal "orthogonal persistence" technology that unifies program logic and data to deliver further unlocks for AI. (Motoko is the first computer language being developed that targets agents writing software, rather than human engineers per se.) Nowadays AI can build and update production apps at a prodigious rate, even at the speed of conversation. But it can also make mistakes, and there's a risk that an update it creates is "lossy," in the sense that it causes some transformed data to be lost. Again, in this new world it's both undesirable and impractical for everyone to have a systems admin team on hand to detect lossy updates and roll them back. Motoko provides a solution: it can detect whether new software updates are lossy before they are applied, reducing potentially catastrophic errors by AI to harmless coding retries.

— Software hosted on cloud engines is "serverless," but unlike traditional serverless software, it directly incorporates data through "orthogonal persistence." Another key purpose is to simplify backend software logic and fuel the modeling power of AI by increasing abstraction (sorry for the technical language!!!).
Put simply, this enables AI to produce more sophisticated backends, faster, and at dramatically lower cost, as measured by the number of AI API tokens consumed during coding. (Tip for the technical: orthogonal persistence is a new paradigm where "the program is the database," and data lives inside program variables, which is possible because it's as if hosted software runs forever in persistent memory.)

— An expanding database of skills shall make it possible to develop and deploy apps and services to your cloud engines directly from Claude Code, Perplexity, Codex, and other AI platforms. Further, your account can be connected, so that new apps and updates created through conversation automatically appear hosted from your cloud engine. In the future, R&D is going to be very seamless: you converse with AI, and your secure and unstoppable apps or services are created or updated. Cloud engines are designed to directly support this "self-writing cloud" future, where we can work hands-free.

— Tech sovereignty is becoming a huge issue worldwide, with governments and corporations seeking to create sovereign tech stacks owing to geopolitical tensions. Increasingly, people are realizing that tech provided by foreign nations can come with hidden backdoors and kill switches, from the base platform right up through hosted apps and services. ICP technology is open source, and those building on ICP using AI own their own source code. When you have the source code, you can verify that there are no backdoors, and when you own the source code thanks to AI, you can update it at will, freeing you from vendor lock-in. But cloud engines take sovereignty much further...

— You create a cloud engine by selecting the nodes that will be combined. You can choose the class of nodes used, and their number, but more importantly you can choose who operates the nodes, and where they are located.
Almost any configuration is possible, because the Internet Computer scales the security privileges afforded to hosted software within the network according to configuration (software hosted on cloud engines can directly interoperate with software on other engines and on traditional subnets, but base restrictions are applied according to security rules). A cloud engine can be created within a region such as Europe, to comply with regulations such as GDPR, or completely within a sovereign state like Switzerland or Pakistan. But cloud engines go further still...

— Sovereignty is also about freedom from vendor lock-in. Cloud engines are essentially ICP (Internet Computer Protocol) network configurations, which means the underlying compute nodes they combine can be swapped out without interrupting their hosted apps and services. This is a big deal. In addition, cloud engines now support nodes that are instances running on Big Tech's clouds, alongside nodes that are dedicated specialized hardware, as per the Gen I and Gen II nodes that dominate the Internet Computer today. For example, it is possible to have an engine running across different AWS data centers, say, and then reconfigure the engine to run across a mixture of AWS, Google, Azure, and Hetzner for even more resilience, without the users of hosted apps and services noticing a thing. That's true freedom.

— Sovereign AI is becoming increasingly important too, and cloud engines allow special "AI nodes" to be added, so that hosted software can perform inference on hardware provisioned by the owner, in a location the owner has selected. Even though the AI nodes are only accessible within the cloud engine, they can still benefit from the forthcoming Internet Intelligence Gateway (IG), which will make it possible to validate inference performed on key frontier open-weights LLMs, even when the inference is performed on completely independent AI clouds.
When the results of inference are received, this technology can verify that neither the prompt+context (input) nor the inference result (output) has been modified, and that the results were produced by the precise LLM expected. This ensures that AI clouds don't cheat by running inference on cheaper models than are being paid for, and that bad actors aren't modifying the inputs or outputs to surreptitiously insert advertising into results, say, or change facts, or insert malware when code is being generated. What's super cool about this technology is that the cost of the verification is scalable: a very valuable additional layer of security can be achieved for only 1-2% of extra cost.

— Scaling apps and services when they hit capacity limits is another thorny problem that cloud engines help the world address. Engines make scaling possible without rewriting or reconfiguring software. The query workload capacity of hosted software can be horizontally scaled simply by adding new nodes to an engine, and nodes can also be added in geographical proximity to demand. Meanwhile, update workload capacity can first be scaled up by swapping an engine's nodes out for the next class up, and then, when no larger class of node is available, horizontally scaled out by "splitting" the engine into two, which doubles available capacity. (Technical tip: horizontally scaling update capacity by splitting engines requires multi-canister architectures.)

— For those who have been following how Caffeine builds apps that can efficiently store large numbers of files, I should mention that apps built on cloud engines will also support the new ICP Blob Storage cloud network (since cloud engines currently have up to about 3 TB of memory, which apps storing large numbers of files can easily exceed). We are also working on allowing blob-storage nodes to be added to cloud engines, to enable sovereign mass blob storage within an engine, similarly to how AI nodes can be added today.
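One plausible way to get strong verification for only 1-2% of extra cost is statistical spot-checking. The sketch below is my own illustration under that assumption (the post does not describe the Intelligence Gateway's actual mechanism): re-run a small random sample of inferences against a trusted reference and compare output hashes.

```python
import hashlib
import random

def spot_check(records, reference_infer, sample_rate=0.02, seed=0):
    # Hypothetical sketch (names and design are assumptions, not the IG's
    # actual protocol): re-verify a random ~sample_rate fraction of results
    # against a trusted reference model, so verification adds only a small
    # fraction of the total inference cost.
    rng = random.Random(seed)
    checked, bad = 0, []
    for rec in records:
        if rng.random() >= sample_rate:
            continue                       # skip: not in the audited sample
        checked += 1
        expected = hashlib.sha256(
            reference_infer(rec["prompt"]).encode()
        ).hexdigest()
        if rec["output_hash"] != expected: # tampered output or wrong model
            bad.append(rec["prompt"])
    return checked, bad
```

A cheating provider is caught with probability roughly proportional to the sample rate times the fraction of tampered results, which is why even a 1-2% audit rate deters systematic substitution of cheaper models.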
— Lastly, but certainly not least, I should mention that cloud engines are multi-blockchain capable and ready for digital assets, thanks to the clever math at their core. For example, an e-commerce service built on a cloud engine can securely accept and custody stablecoin payments, or a multi-chain DEX could be hosted. Further, engines can support software autonomy (software orchestrated and controlled by other autonomous software, in a decentralized way) and can themselves be orchestrated by SNS technology, and thus run autonomously too.

Today, though, the focus is on *mainstream* cloud. This year the cloud industry will generate approximately one trillion dollars in revenue. That number is already huge, but it is expected to grow to two trillion dollars by 2030. After years of continuous development, which have seen more than $500m spent on R&D, the Internet Computer network is now tacking directly toward this mainstream cloud market with cloud engine technology. In their first version, cloud engines are not meant to be a cloud panacea. For example, they are currently not ideal for working with big data; you should use something like Databricks for that. Cloud engines are carefully targeted at enabling AI to produce traditional online applications and services, including SaaS, in a safer and more productive way, which represents a new market segment with tremendous potential. Of course, DFINITY will continue to work relentlessly to push forward ICP's capabilities, so expect further developments. It's worth mentioning that this cloud segment isn't just about creating new apps and services using AI; it's also about replacing legacy systems and apps built on super-expensive SaaS services.
Caffeine Labs is working to produce technology (Caffeine Snorkel) that can study an enterprise's legacy systems and apps built on SaaS, create replacement systems and apps, and migrate the data, while supporting key stakeholders through the process over email and chat, with full automation. Thus the legacy-systems and SaaS markets shall also be addressed by cloud engines.

Zooming out, and reasoning in a more metaphysical way, we believe, as we always have, that there is room for a new kind of cloud created by mathematical networks, one that provides seminal advances in security and resilience, as well as true sovereignty and freedom from lock-in. That this same technology, with the help of additional technologies like orthogonal persistence and Motoko, enables AI to build for us without the need for so much oversight, and to create more backend sophistication while consuming fewer AI API tokens, lets ICP bring game-changing advances to the world. Cloud engines will work synergistically with the Intelligence Gateway, which will enable apps and services running on engines to seamlessly leverage AI, wherever that AI is running, while providing verifiability at extremely low cost for open-weights frontier models. We believe that cloud engines represent an inflection point in the storied history of the Internet Computer project, and I'm very proud to be sharing the details with you on the network's fifth birthday 💪 I'll be back with more news soon!!
Weekend reading: in Freedom of Money I reached pages 180-182, about Binance's May 8, 2019 hack in which 7,000 BTC were stolen. It resonated with me. Coincidentally, one month before I joined Trust Wallet, the extension wallet was compromised over Christmas through a small human error. @cz_binance tweeted that Trust Wallet would reimburse users even before the issue was fully diagnosed. The investigation showed the incident was similar to 2019: hackers used malware to infiltrate several employees' computers and injected malicious code into extension wallet v2.68 (now deprecated). The core components managing seed phrases and private keys were never breached; only users who opened the extension wallet in the few hours before the fix were affected. Today, @TrustWallet has hardened its security, daily active users are up rather than down, and compensation for affected users is 95% complete. For the long-tail victims who missed the claims deadline, CZ believes Trust Wallet should still accept claims backed by valid proof and "do the right thing." Funds are SAFU: never just a slogan, but a promise backed by action.
✅ Recovery Update #2 - Volo Vaults. Sharing another significant development: we successfully intercepted and blocked the hacker's attempt to bridge 19.6 WBTC, putting those funds out of the hacker's reach. These funds are no longer under hacker control. We are now working with ecosystem partners to determine the best path to return them to Volo, and will share the outcome of those discussions as soon as a plan is confirmed. More updates to follow.
Tether froze 3.29M USDT tied to the hackers. Tether cares.
Hacker Group Plans To Publish Stolen GTA Data Online. Stunned 😳 Monday's Insider Today kicks off now 🗓️
A major security incident happened just today, and Karpathy himself posted a warning.

litellm poisoned: a textbook supply-chain attack

Today (March 24), litellm, a Python library widely used by AI developers, had malicious code planted in it on PyPI. Version 1.82.8, published to PyPI at 10:52 UTC, contained a malicious file named litellm_init.pth that executes automatically every time a Python process starts. You don't need to call the library at all; merely installing it is enough to be compromised.

What is litellm? It's a Python library that provides a unified interface for calling the APIs of different large-model vendors, with over 40k GitHub stars and more than 95 million monthly downloads. Many AI toolchains depend on it, including DSPy, MLflow, and Open Interpreter; in total, more than 2,000 packages list it as a dependency. In other words, you may never have installed litellm manually, but some tool you use installed it for you.

The malicious code systematically harvests sensitive data from the host: SSH keys, AWS/GCP/Azure cloud credentials, Kubernetes secrets, environment-variable files, database configs, even cryptocurrency wallets. It then encrypts and packages the loot and sends it to an attacker-controlled domain. If it detects a Kubernetes environment, it also uses the service-account token to deploy privileged Pods on every node of the cluster for lateral movement.

How was it discovered? The attackers wrote a bug into their own malware. The discovery is rather ironic: FutureSearch's Callum McMahon used an MCP plugin in the Cursor editor, and that plugin indirectly depends on litellm. The malicious .pth file fires on every Python startup; child processes trigger the same .pth again, producing an exponential fork bomb that blew out his machine's memory. As Karpathy put it clearly in his tweet: if the attackers hadn't made this bug while writing the malicious code, the poisoning might have gone unnoticed for days or even weeks.

The attack chain: a security tool became the entry point. The root cause is that litellm's CI/CD pipeline used Trivy (a vulnerability scanner), and Trivy itself had already been compromised on March 19 by the same attack group, TeamPCP. Through the tainted Trivy, the attackers stole litellm's PyPI publishing token and pushed poisoned versions straight to PyPI. litellm 1.82.7 was published at 10:39 UTC and 1.82.8 at 10:52; both contain the malicious code.

The fuller timeline: TeamPCP compromised Trivy on March 19, Checkmarx KICS on March 23, and litellm on March 24. Wiz security researcher Gal Nagli's assessment: the open-source supply chain is suffering a cascading collapse. The Trivy breach led to the litellm breach, credentials from tens of thousands of environments fell into attackers' hands, and those credentials become the ammunition for the next attack.

The attackers also tried to silence the report. After a community member filed a GitHub issue about the incident, the attackers used 73 stolen accounts to post 88 spam comments within 102 seconds to drown out the discussion, then used a stolen maintainer account to close the issue. The community had to open a new issue and move the discussion to Hacker News.

Karpathy used the incident to restate his wariness about software dependencies: supply-chain attacks are the scariest threat in modern software, and every dependency you install can pull a poisoned package in deep within the dependency tree. He is now increasingly inclined to have large models generate code for simple functionality directly, rather than pull in external dependencies.

If litellm is in your environment, run pip show litellm immediately to check the version. 1.82.6 is the last clean version. If you unfortunately installed 1.82.7 or 1.82.8, assume all credentials have leaked and rotate them now.
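To see why a .pth file is such an effective foothold, here is a harmless demo of the mechanism (the file name and payload below are mine, not the actual malware): Python's site module exec()s every line of a *.pth file that starts with "import", each time an interpreter starts, so installing a package that ships such a file is enough to run its code.

```python
import os
import site
import sys
import tempfile

# Write a demo .pth file into a fresh directory. A benign payload that just
# sets a flag stands in for the real payload, which harvested credentials.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "demo.pth"), "w") as f:
    f.write("import sys; sys.pth_demo_ran = True\n")

# addsitedir() simulates the .pth processing that normally happens
# automatically at interpreter startup for site-packages directories.
site.addsitedir(tmp)
print(getattr(sys, "pth_demo_ran", False))   # → True
```

The fork bomb described in the post followed from exactly this trigger: every child Python process re-ran the same .pth payload, and the buggy payload cascaded until memory was exhausted.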
Because every single person is so very different, I've been learning from Shenyu @bitfish in the group and practicing bio hacking together with @nake13: export your core WeGene data to your computer, then hand it to an AI for analysis (I'm using Claude Code; Mr. Pan is trying Codex). The results are excellent! 😺 It tells you which supplements to take, down to fine details; for example, I have to take folate in its methylated form, the kind of detail that's easy to miss if you only follow other people's so-called experience. Every new supplement, every new food you want to try, can first get a round of targeted scientific analysis: what the mechanism is, where the risk points are, how it might conflict with my own metabolic pathways, and it can even pull up the relevant papers for comparison. I hope this practice keeps spreading, so more people can upgrade from "taking supplements blindly" to bio hacking with evidence, a pathway, and feedback.
Quick new post: Auto-grading decade-old Hacker News discussions with hindsight. I took all 930 frontpage Hacker News articles + discussions from December 2015 and asked the GPT 5.1 Thinking API to do an in-hindsight analysis identifying the most and least prescient comments. This took ~3 hours to vibe code and ~1 hour and $60 to run. The idea was sparked by yesterday's HN article where Gemini 3 was asked to hallucinate the HN front page one decade forward. More generally: 1. in-hindsight analysis has always fascinated me as a way to train your forward prediction model, so reading the results is really interesting, and 2. it's worth contemplating what it looks like when the LLM megaminds of the future can do this kind of work a lot cheaper, faster, and better. Every single bit of information you contribute to the internet can (and probably will) be scrutinized in great detail if it is "free". Hence also my earlier tweet from a while back: "be good, future LLMs are watching". Congrats to the top 10 accounts pcwalton, tptacek, paulmd, cstross, greglindahl, moxie, hannob, 0xcde4c3db, Manishearth, and johncolanduoni: GPT 5.1 Thinking found your comments to be the most insightful and prescient of all HN comments from December 2015. Links: a lot more detail in my blog post; the GitHub repo of the project if you'd like to play; and the actual results pages for your reading pleasure.
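The data-collection step of a project like this can be approximated with the public Algolia HN Search API. The post's actual pipeline and grading prompt aren't shown, so treat this as a sketch under that assumption; it builds the query URL for all stories from a given month (front-page filtering, e.g. by points, would be a further step).

```python
import datetime as dt

def algolia_month_query(year: int, month: int, page: int = 0) -> str:
    # Epoch bounds for the month, in UTC, as Algolia's created_at_i expects.
    start = int(dt.datetime(year, month, 1, tzinfo=dt.timezone.utc).timestamp())
    nxt_y, nxt_m = (year + 1, 1) if month == 12 else (year, month + 1)
    end = int(dt.datetime(nxt_y, nxt_m, 1, tzinfo=dt.timezone.utc).timestamp())
    # search_by_date returns stories in chronological order, 100 per page.
    return ("https://hn.algolia.com/api/v1/search_by_date"
            f"?tags=story&numericFilters=created_at_i>={start},created_at_i<{end}"
            f"&hitsPerPage=100&page={page}")

print(algolia_month_query(2015, 12))
```

Each hit carries an objectID that can then be fed to Algolia's /api/v1/items/{id} endpoint to pull the full comment tree for the model to grade.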
Two months after launch, users have passed 30k. After 22 versions of iteration, almost anything a browser can open, Elmo Chat can read first for you to save time:
- Close-read papers: Arxiv / PDF
- Skim news: Hackernews / BBC / NYT
- Summarize videos: Youtube / Bilibili
- Review resumes: PDF / HTML / Google Doc
Unlimited free use + no registration required + no page data logged + zero background processes