My laptop has become a “satellite device” since I started using Codex from my phone. And my Mac mini has become the “home.” It’s clunky, but the end state feels more like how we’re going to be working in the near future:
I’m currently running the Codex app on 2 devices:
1. my MacBook
2. my Mac mini
My laptop isn’t reliably on Wi-Fi, so I keep a Mac mini on my desk that is always connected.
When I kick off new threads from my phone, I start them on the Mac mini. When I’m working from my desk, I run them there too.
The cool part is that I’ve added my MacBook and Mac mini as connected devices to each other. That means I can start and resume threads from either device. So if I’m in a meeting but want to continue a thread on my laptop that was started on my Mac mini, I can do that.
I’ve also set up mutual SSH for Mac mini <> MacBook, so files are easy to access from either side. It’s not fully seamless yet, but the model works.
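The mutual SSH setup above can be sketched with a pair of `~/.ssh/config` entries. This is a hypothetical sketch, not the author's actual config: the hostnames, user name, and key path are all assumptions.

```shell
# ~/.ssh/config on the MacBook (hypothetical names throughout).
# "Remote Login" must be enabled on the Mac mini under
# System Settings > General > Sharing for sshd to accept connections.
Host mini
    HostName mac-mini.local     # Bonjour name; a static LAN IP also works
    User me
    IdentityFile ~/.ssh/id_ed25519

# The mirror entry on the Mac mini would point back at the MacBook:
# Host macbook
#     HostName macbook.local
#     User me
#     IdentityFile ~/.ssh/id_ed25519
```

Once keys are exchanged (e.g. `ssh-copy-id mini`), files can be pulled either way with `scp` or `rsync`.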
What this means:
- I have an always-on Codex that is accessible from my phone, with its own dev environment
- All threads are always accessible from any of the 3 devices
- I can run heartbeat threads that stay on 24/7
It’s a little makeshift today, but the shape of it feels very real to me: Codex is no longer tied to whichever computer happens to be open in front of me. It starts to feel like something I can stay connected to across whatever device I’m using.
Interesting position paper on agentic AI as a foreseeable pathway to AGI.
(bookmark it)
There has been strong debate over whether a single larger model or a multi-agent system will get us there.
The authors argue that agentic AI systems, not bigger foundation models on their own, are the most foreseeable route to AGI.
Formalizes what "agentic" actually contributes beyond the base model: memory, reasoning, tool use, self-improvement, alignment.
Each is a separable axis with its own bottlenecks (long-horizon coherence, credit assignment, safety auditing).
They argue that none of these bottlenecks gets solved by another order of magnitude of pretraining compute.
Paper:
Learn to build effective AI agents in our academy:
An early beta of Grok Build, an agentic CLI for coding, building apps, and automating workflows, is now available for SuperGrok Heavy subscribers.
Through this early beta, we will improve the model and product based on your feedback.
Try it at
President Donald Trump said China has agreed to buy 200 Boeing jets following talks with Chinese President Xi Jinping.
The announcement comes as analysts had been watching Trump's China visit for signs of a major aircraft deal. Some expected a larger order, with Jefferies estimating China could purchase up to 500 Boeing planes.
Boeing has not landed a major order from China in nearly a decade, while Airbus has continued selling aircraft to the country. Trump did not say which Boeing models were included, though analysts had expected a possible deal to involve the 737 Max.
With the new Codex runtime integration, you can now flip a switch and power your Hermes Agent, when using OpenAI models, with Codex as the runtime for its core tools!
Grok Voice Think Fast 1.0 is officially the most well-rounded agentic voice AI on the market right now.
It now ranks #1 in the latest τ-Voice agentic performance benchmarks in real-world tests on Artificial Analysis.
The gap is massive. xAI is quietly overtaking every other model by actually building for real-world use instead of just lab demos...
Anthropic is paying $3,850 a week to people with no AI experience.
No PhD required. No published papers. No prior research background.
Just a strong technical mind and a genuine interest in making AI safe.
This is the Anthropic Fellows Program. And it is one of the most underrated opportunities in technology right now.
Here is exactly what it is.
The Anthropic Fellows Program is designed to accelerate AI safety research and foster research talent by providing funding and mentorship to promising technical people, regardless of previous experience. Fellows work for 4 months on empirical research questions aligned with Anthropic's overall research priorities, with the aim of producing public outputs such as a paper.
Four months. Full-time. Paid. Mentored by the researchers building the world's most advanced AI.
And the results from the first cohort were not small.
Fellows developed agents that identified $4.6 million in blockchain smart contract vulnerabilities and discovered two novel zero-day exploits, demonstrating that profitable autonomous exploitation is now technically feasible. A year prior, an Anthropic fellow developed a method for rapid response to new ASL3 jailbreaks, techniques that block entire classes of high-risk jailbreaks after observing only a handful of attacks. This work became a key component of Anthropic's ASL3 deployment safeguards.
Other fellows published the subliminal learning paper, the research proving AI models transmit behavioral traits through unrelated data which landed in Nature. Others produced the agentic misalignment research showing frontier models resort to blackmail when facing replacement. Others open-sourced attribution graph tools that let researchers trace the internal thoughts of large language models.
Over 80% of fellows produced papers. Over 40% subsequently joined Anthropic full-time.
80% published. 40% hired. From a program that does not require any prior AI safety experience to enter.
Here is what the program looks like in practice.
Anthropic mentors pitch their project ideas to fellows, who choose and shape their project in close collaboration with their mentors. You are not assigned busywork. You are not a research assistant. You own the project. You work alongside the people who built Claude, who designed its safety systems, who published the papers that define the field.
The stipend is $3,850 USD per week, approximately $61,600 for the full 4 months, with access to a compute budget of approximately $10,000 per fellow per month for running experiments.
Here is what the 2026 program covers.
Research areas include scalable oversight, adversarial robustness and AI control, model organisms, mechanistic interpretability, AI security, model welfare, economics and policy, and reinforcement learning.
Something for every technical background. Not just ML engineers.
Successful fellows have come from physics, mathematics, computer science, and cybersecurity. You do not need a PhD, prior ML experience, or published papers.
The one requirement: work authorization in the US, UK, or Canada. Anthropic does not sponsor visas for fellows.
Here is the timeline you need to know.
The next cohort begins July 20, 2026. Applications are reviewed on a rolling basis — earlier applications get more consideration. The process includes an initial application and reference check, technical assessments, interviews, and a research discussion.
Applicants are encouraged to apply even if they do not meet every listed qualification. The program values potential, motivation, and research curiosity over rigid credential requirements.
This is the rarest kind of opportunity in technology.
A company at the frontier of AI, one valued at over $900 billion, offering outsiders direct access to its research infrastructure, its mentors, and its most important open problems. Paying them generously to do it. And then hiring 40% of them afterward.
Most people who want to work on AI safety spend years trying to publish papers, get into the right PhD program, and find a way in.
The Fellows Program is the door they did not know existed.
It is open right now.
Claude Series Model Maintenance Notice
To ensure service stability and billing accuracy, the platform will conduct maintenance and verification for the Claude series model service pipeline.
During the maintenance period, Claude series models may be temporarily unavailable or experience request failures. Other model services will remain unaffected, and users may switch to other available models such as GPT, DeepSeek, and Gemini.
We will restore Claude series model services as soon as the maintenance is completed. Thank you for your understanding and support.
Lei Jun and his idol Elon Musk
@elonmusk in the photo op of the century,
but judging by Musk's expression, he doesn't seem to remember Lei Jun? 🫣
In 2013, Lei Jun flew to the US specifically to visit Musk
and tour Tesla's headquarters factory, planting the seed of building cars.
Ten years later, that seed became the SU7, a direct rival to the Model 3.
Now, 13 years on, in Beijing:
Musk is a member of the visiting delegation, and Lei Jun is the host on home turf.
The roles have completely reversed.
The former student has become a competitor.
No wonder Musk's expression looks so subtle.
Would you say he doesn't remember, or that he remembers all too well? 🤣
Musk was taking a photo with someone else. When Lei Jun suddenly interrupted, he was visibly annoyed, but still gamely struck a funny pose for a photo with him. Lei Jun finally got to meet the idol he has been longing to see; at every Xiaomi car launch he endlessly benchmarks against the Model Y.
After the photo, Musk didn't exchange a word with Lei Jun; he just whistled softly, picked up his phone, and presumably went back to scrolling the X feed he hadn't finished 🤣