注册并分享邀请链接,可获得视频播放与邀请奖励。

与「Cybersecurity」相关的搜索结果

Cybersecurity 贴吧
一个关键词就是一个贴吧,路径全站唯一。
创建贴吧
用户
未找到
包含 Cybersecurity 的内容
Anthropic is paying $3,850 a week to people with no AI experience. No PhD required. No published papers. No prior research background. Just a strong technical mind and a genuine interest in making AI safe. This is the Anthropic Fellows Program. And it is one of the most underrated opportunities in technology right now. Here is exactly what it is. The Anthropic Fellows Program is designed to accelerate AI safety research and foster research talent providing funding and mentorship to promising technical talent regardless of previous experience. Fellows work for 4 months on empirical research questions aligned with Anthropic's overall research priorities, with the aim of producing public outputs like a paper. Four months. Full-time. Paid. Mentored by the researchers building the world's most advanced AI. And the results from the first cohort were not small. Fellows developed agents that identified $4.6 million in blockchain smart contract vulnerabilities and discovered two novel zero-day exploits, demonstrating that profitable autonomous exploitation is now technically feasible. A year prior, an Anthropic fellow developed a method for rapid response to new ASL3 jailbreaks, techniques that block entire classes of high-risk jailbreaks after observing only a handful of attacks. This work became a key component of Anthropic's ASL3 deployment safeguards. Other fellows published the subliminal learning paper, the research proving AI models transmit behavioral traits through unrelated data which landed in Nature. Others produced the agentic misalignment research showing frontier models resort to blackmail when facing replacement. Others open-sourced attribution graph tools that let researchers trace the internal thoughts of large language models. Over 80% of fellows produced papers. Over 40% subsequently joined Anthropic full-time. 80% published. 40% hired. From a program that does not require any prior AI safety experience to enter. Here is what the program looks like in practice. Anthropic mentors pitch their project ideas to fellows, who choose and shape their project in close collaboration with their mentors. You are not assigned busywork. You are not a research assistant. You own the project. You work alongside the people who built Claude, who designed its safety systems, who published the papers that define the field. The stipend is $3,850 USD per week, approximately $61,600 for the full 4 months with access to a compute budget of approximately $10,000 per fellow per month for running experiments. Here is what the 2026 program covers. Research areas include scalable oversight, adversarial robustness and AI control, model organisms, mechanistic interpretability, AI security, model welfare, economics and policy, and reinforcement learning. Something for every technical background. Not just ML engineers. Successful fellows have come from physics, mathematics, computer science, and cybersecurity. You do not need a PhD, prior ML experience, or published papers. The one requirement: work authorization in the US, UK, or Canada. Anthropic does not sponsor visas for fellows. Here is the timeline you need to know. The next cohort begins July 20, 2026. Applications are reviewed on a rolling basis — earlier applications get more consideration. The process includes an initial application and reference check, technical assessments, interviews, and a research discussion. Applicants are encouraged to apply even if they do not meet every listed qualification. The program values potential, motivation, and research curiosity over rigid credential requirements. This is the rarest kind of opportunity in technology. A company at the frontier of AI, one valued at over $900 billion offering outsiders direct access to its research infrastructure, its mentors, and its most important open problems. Paying them generously to do it. And then hiring 40% of them afterward. Most people who want to work on AI safety spend years trying to publish papers, get into the right PhD program, and find a way in. The Fellows Program is the door they did not know existed. It is open right now.
显示更多
0
156
3.6K
450
转发到社区
A compelling conversation from the @CNBC stage at the Davos Tech Talk with Emma Crosby, reflecting on a journey that began after my time at the United Nations and evolved into building global technology companies focused on trust, security, and human-centered innovation. The discussion explores the creation and growth of WISeKey, a pioneer in Digital Identity and cybersecurity infrastructures, and SEALSQ, advancing secure semiconductors and post-quantum technologies. The next chapter now extends into space with WISeSat and the ambition to bring a third company to @Nasdaq in 2026, dedicated to secure and sovereign space communications. From Digital Identity and cybersecurity to AI, robotics, and the quantum transition — including a surprise appearance by WiseRobot — the conversation focused on one essential question: How do we protect humanity in an era defined by intelligence, automation, and technological convergence? This vision is at the heart of TransHumanCode, an initiative advocating for the integration of human wisdom, ethics, and cultural values into emerging technologies, and the HUMAN-AI-T initiative, created to ensure that AI evolves in alignment with human dignity, trust, and responsibility. Innovation is accelerating rapidly. The real challenge is ensuring humanity remains at the center of progress. 🎥 Watch the full conversation here:
显示更多
0
0
80
16
转发到社区
🎉 Socket is proud to be named to the Rising in Cyber 2026 list by @notablecap, recognizing 30 private cybersecurity startups selected by nearly 150 practicing CISOs and cybersecurity executives.
显示更多
0
7
193
26
转发到社区
Big News! 📣 @Ripple is now contributing high-confidence DPRK threat data through Crypto ISAC helping security teams move from awareness to action. The reality is North Korean threat actors aren’t just attacking crypto, they’re infiltrating it. The latest wave of attacks is shifting away from traditional exploits and toward something harder to detect: trusted access gained through social engineering, recruitment, and long-term deception. In our new blog with Ripple, we break down: - How these campaigns operate “from the inside out” - Why traditional indicators aren’t enough to catch them - And how shared, enriched threat intelligence is changing the equation Because in this environment, no single company can see the full picture alone. Read more 👇 #CryptoSecurity# #ThreatIntelligence# #DPRK# #Cybersecurity# #DigitalAssets# #CryptoISAC#
显示更多
0
10
282
72
转发到社区
we're starting rollout of GPT-5.5-Cyber, a frontier cybersecurity model, to critical cyber defenders in the next few days. we will work with the entire ecosystem and the government to figure out trusted access for cyber; we want to rapidly help secure companies/infrastructure.
显示更多
0
1K
12.9K
824
转发到社区
Here's my update to the broader community about the ongoing incident investigation. I want to give you the rundown of the situation directly. A Vercel employee got compromised via the breach of an AI platform customer called that he was using. The details are being fully investigated. Through a series of maneuvers that escalated from our colleague’s compromised Vercel Google Workspace account, the attacker got further access to Vercel environments. Vercel stores all customer environment variables fully encrypted at rest. We have numerous defense-in-depth mechanisms to protect core systems and customer data. We do have a capability however to designate environment variables as “non-sensitive”. Unfortunately, the attacker got further access through their enumeration. We believe the attacking group to be highly sophisticated and, I strongly suspect, significantly accelerated by AI. They moved with surprising velocity and in-depth understanding of Vercel. At the moment, we believe the number of customers with security impact to be quite limited. We’ve reached out with utmost priority to the ones we have concerns about. All of our focus right now is on investigation, communication to customers, enhancement of security measures, and sanitization of our environments. We’ve deployed extensive protection measures and monitoring. We’ve analyzed our supply chain, ensuring Next.js, Turbopack, and our many open source projects remain safe for our community. The recommendation for all Vercel customers is to follow the Security Bulletin closely ( My advice to everyone is to follow the best practices of security response: secret rotation, monitoring access to your Vercel environments and linked services, and ensuring the proper use of the sensitive env variables feature. In response to this, and to aid in the improvement of all of our customers’ security postures, we’ve already rolled out new capabilities in the dashboard, including an overview page of environment variables, and a better user interface for sensitive env var creation and management. As always, I’m totally open to your feedback. We’re working with elite cybersecurity firms, industry peers, and law enforcement. We’ve reached out to Context to assist in understanding the full scale of the incident, in an effort to protect other organizations and the broader internet. I also want to thank the Google Mandiant team for their active engagement and assistance. It’s my mission to turn this attack into the most formidable security response imaginable. It’s always been a top priority for me. Vercel employs some of the most dedicated security researchers and security-minded engineers in the world. I commit to keeping you updated and rolling out extensive improvements and defenses so you, our customers and community, can have the peace of mind that Vercel always has your back.
显示更多
0
447
7.2K
1K
转发到社区
In Formula 1, data is one of the most valuable assets a team has. Protecting it means enabling innovation, faster decisions and the confidence to push performance further. Bitdefender supports @ScuderiaFerrari as its Exclusive Cybersecurity Partner.
显示更多
0
46
2.4K
79
转发到社区
Here are 2️⃣ ways we’re making sure Gemini is being developed safely and responsibly: ⚠️ By using new reinforcement learning methods to improve how models handle sensitive topics. 🛡️ Red teaming to assess security risks posed by indirect prompt injection - a type of cybersecurity attack.
显示更多