Elon Musk joined President Trump and top American business leaders in China to strengthen the U.S. position on trade and technology.
During the official welcome ceremony, Musk — along with the rest of the delegation — stood respectfully with his hand over his heart for America’s national anthem.
No division by wealth, status, or politics in that moment. Just Americans united for their country. 🇺🇸
(Video: AI)
President Trump toured Beijing’s Temple of Heaven alongside Chinese President Xi Jinping following a two-hour bilateral meeting during his four-day state visit to China.
Xi took the opportunity to enlighten Trump about Chinese philosophy and the Communist Party’s role in preserving it. Xi told Trump that ancient Chinese rulers held sacrificial ceremonies at the 600-year-old site to pray for national prosperity, peace and favorable weather, according to an official Chinese statement.
In March, we released benchmark numbers alongside SpacetimeDB 2.0. After a thorough investigation, we discovered a serious error in SpacetimeDB which caused our results to be misleading. We would like to sincerely apologize to our community.
Read the full article:
Anthropic is paying $3,850 a week to people with no AI experience.
No PhD required. No published papers. No prior research background.
Just a strong technical mind and a genuine interest in making AI safe.
This is the Anthropic Fellows Program. And it is one of the most underrated opportunities in technology right now.
Here is exactly what it is.
The Anthropic Fellows Program is designed to accelerate AI safety research and foster research talent by providing funding and mentorship to promising technical candidates, regardless of previous experience. Fellows work full-time for four months on empirical research questions aligned with Anthropic's overall research priorities, with the aim of producing public outputs such as a paper.
Four months. Full-time. Paid. Mentored by the researchers building the world's most advanced AI.
And the results from the first cohort were not small.
Fellows developed agents that identified blockchain smart-contract vulnerabilities worth $4.6 million and discovered two novel zero-day exploits, demonstrating that profitable autonomous exploitation is now technically feasible. A year earlier, an Anthropic fellow developed a method for rapid response to new ASL-3 jailbreaks: techniques that block entire classes of high-risk jailbreaks after observing only a handful of attacks. That work became a key component of Anthropic's ASL-3 deployment safeguards.
Other fellows published the subliminal learning paper — research showing that AI models can transmit behavioral traits through seemingly unrelated data — which landed in Nature. Others produced the agentic misalignment research showing that frontier models resort to blackmail when facing replacement. Still others open-sourced attribution graph tools that let researchers trace the internal computations of large language models.
Over 80% of fellows produced papers. Over 40% subsequently joined Anthropic full-time.
80% published. 40% hired. From a program that does not require any prior AI safety experience to enter.
Here is what the program looks like in practice.
Anthropic mentors pitch their project ideas to fellows, who choose and shape their project in close collaboration with their mentors. You are not assigned busywork. You are not a research assistant. You own the project. You work alongside the people who built Claude, who designed its safety systems, who published the papers that define the field.
The stipend is $3,850 USD per week, roughly $61,600 over the full four months, plus a compute budget of approximately $10,000 per fellow per month for running experiments.
Here is what the 2026 program covers.
Research areas include scalable oversight, adversarial robustness and AI control, model organisms, mechanistic interpretability, AI security, model welfare, economics and policy, and reinforcement learning.
Something for every technical background. Not just ML engineers.
Successful fellows have come from physics, mathematics, computer science, and cybersecurity. You do not need a PhD, prior ML experience, or published papers.
The one requirement: work authorization in the US, UK, or Canada. Anthropic does not sponsor visas for fellows.
Here is the timeline you need to know.
The next cohort begins July 20, 2026. Applications are reviewed on a rolling basis — earlier applications get more consideration. The process includes an initial application and reference check, technical assessments, interviews, and a research discussion.
Applicants are encouraged to apply even if they do not meet every listed qualification. The program values potential, motivation, and research curiosity over rigid credential requirements.
This is the rarest kind of opportunity in technology.
A company at the frontier of AI, valued at over $900 billion, offering outsiders direct access to its research infrastructure, its mentors, and its most important open problems. Paying them generously to do it. And then hiring 40% of them afterward.
Most people who want to work on AI safety spend years trying to publish papers, get into the right PhD program, and find a way in.
The Fellows Program is the door they did not know existed.
It is open right now.
Elon Musk was seen with his son, X Æ A-Xii, at the Great Hall of the People in Beijing during President Trump’s summit with Xi Jinping. Musk was among several top U.S. tech executives, alongside Tim Cook and Jensen Huang, who met with Chinese Premier Li Qiang on the sidelines of the summit.