My laptop has become a “satellite device” since I started using Codex from my phone. And my Mac mini has become the “home.” It’s clunky, but the end state feels more like how we’re going to be working in the near future:
I’m currently running the Codex app on 2 devices:
1. my MacBook
2. my Mac mini
My laptop isn’t on Wi-Fi reliably enough, so I keep a Mac mini on my desk that is always connected.
When I kick off new threads from my phone, I start them on the Mac mini. When I’m working from my desk, I run them there too.
The cool part is that I’ve added my MacBook and Mac mini as connected devices to each other. That means I can start and resume threads from either device. So if I’m in a meeting but want to continue a thread on my laptop that was started on my Mac mini, I can do that.
I’ve also set up mutual SSH for Mac mini <> MacBook, so files are easy to access from either side. It’s not fully seamless yet, but the model works.
What this means:
- I have an always-on Codex that is accessible from my phone, with its own dev environment
- All threads are always accessible from any of the 3 devices
- I can run heartbeat threads that stay on 24/7
It’s a little makeshift today, but the shape of it feels very real to me: Codex is no longer tied to whichever computer happens to be open in front of me. It starts to feel like something I can stay connected to across whatever device I’m using.
The @Chiefs will host the @Broncos to kick off Monday Night Football in 2026.
NFL Schedule Release — Thursday 8pm ET on ESPN/NFLN
PSA: we are hiring for our DevX team to kick-start our India presence 🇮🇳! If you want to help builders get the most out of Gemini in India, please ping me via DM or email.
India is our largest market by AI Studio users, and I'm very excited to visit later this year as well!!
WE'RE ANNOUNCING MY OPPONENT!!!! Live on YouTube, Twitch, Kick, and TikTok.
don’t forget to follow my new page too btw(: it’s gonna be my main page for posting photos and whatnot. Make sure to have notifications on! I’ll be using this one more as my personal page, where we can kick the shits and whatnot <3 okay.. ily goodnight!!
Three days ago I left autoresearch tuning nanochat for ~2 days on a depth=12 model. It found ~20 changes that improved the validation loss. I tested these changes yesterday and all of them were additive and transferred to larger (depth=24) models. Stacking up all of these changes, today I measured that the leaderboard's "Time to GPT-2" drops from 2.02 hours to 1.80 hours (~11% improvement); this will be the new leaderboard entry. So yes, these are real improvements and they make an actual difference. I am mildly surprised that my very first naive attempt already worked this well on top of what I thought was already a fairly well manually tuned project.
This is a first for me, because I am very used to doing the iterative optimization of neural network training manually. You come up with ideas, you implement them, you check if they work (better validation loss), you come up with new ideas based on that, you read some papers for inspiration, and so on. This has been the bread and butter of my daily work for two decades. Seeing the agent do this entire workflow end-to-end, all by itself, as it worked through approx. 700 changes autonomously is wild. It really looked at the sequence of experiment results and used them to plan the next ones. It's not novel, ground-breaking "research" (yet), but all the adjustments are "real": I hadn't found them manually before, and they stack up and actually improved nanochat. Among the bigger things, e.g.:
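The workflow described above is, at its core, greedy hill-climbing on validation loss: propose a change, run the experiment, keep the change only if the loss improves. A minimal sketch of that loop, with a toy quadratic standing in for an actual training run (every name and number here is a made-up illustration, not nanochat's code):

```python
import random

def run_experiment(config):
    # Stand-in for a real training run: validation "loss" is a toy
    # quadratic with a minimum at lr=0.02, wd=0.1, plus a little noise.
    lr, wd = config["lr"], config["wd"]
    return (lr - 0.02) ** 2 + (wd - 0.1) ** 2 + random.uniform(0, 1e-4)

def propose(config):
    # One "idea" = a multiplicative nudge to a single hyperparameter.
    key = random.choice(list(config))
    candidate = dict(config)
    candidate[key] *= random.choice([0.8, 1.25])
    return candidate

def autoresearch(config, budget=300, seed=0):
    random.seed(seed)
    best_loss = run_experiment(config)
    for _ in range(budget):
        candidate = propose(config)
        loss = run_experiment(candidate)
        if loss < best_loss:  # keep only changes that improve val loss
            config, best_loss = candidate, loss
    return config, best_loss

best, loss = autoresearch({"lr": 0.05, "wd": 0.3})
```

The real version plans experiments based on the history of results rather than proposing blind perturbations, but the accept-if-better backbone is the same.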
- It noticed an oversight: my parameterless QKnorm didn't have a scale multiplier attached, so my attention was too diffuse. The agent found multipliers that sharpen it, pointing to future work.
- It found that the Value Embeddings really like regularization and I wasn't applying any (oops).
- It found that my banded attention was too conservative (I forgot to tune it).
- It found that AdamW betas were all messed up.
- It tuned the weight decay schedule.
- It tuned the network initialization.
This is on top of all the tuning I've already done over a good amount of time. The exact commit is here, from this "round 1" of autoresearch. I am going to kick off "round 2", and in parallel I am looking at how multiple agents can collaborate to unlock parallelism.
All LLM frontier labs will do this. It's the final boss battle. It's a lot more complex at scale, of course: you don't just have a single train.py file to tune. But doing it is "just engineering" and it's going to work. You spin up a swarm of agents, you have them collaborate to tune smaller models, you promote the most promising ideas to increasingly larger scales, and humans (optionally) contribute on the edges.
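That promotion pipeline is close in spirit to successive halving: score many candidate ideas cheaply at small scale, keep the best fraction, and re-score the survivors at progressively larger (less noisy, more expensive) scales. A minimal sketch under a made-up noise model, where every function name and constant is hypothetical:

```python
import random

def proxy_val_loss(idea_quality, scale, rng):
    # Hypothetical proxy: better ideas score lower, and evaluations at
    # larger scale are assumed less noisy (and implicitly costlier).
    return 1.0 - idea_quality + rng.gauss(0, 0.05 / scale)

def promote(ideas, scales=(1, 4, 16), keep=0.5, seed=0):
    rng = random.Random(seed)
    survivors = list(ideas)
    for scale in scales:
        # Re-evaluate all survivors at this scale, keep the best fraction.
        survivors.sort(key=lambda q: proxy_val_loss(q, scale, rng))
        survivors = survivors[: max(1, int(len(survivors) * keep))]
    return survivors

ideas = [i / 10 for i in range(10)]  # candidate "qualities" 0.0 .. 0.9
winner = promote(ideas)[0]
```

The cheap small-scale rounds filter out obviously bad ideas, so the expensive large-scale budget is spent only on candidates that already look promising.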
And more generally, *any* metric you care about that is reasonably efficient to evaluate (or that has more efficient proxy metrics such as training a smaller network) can be autoresearched by an agent swarm. It's worth thinking about whether your problem falls into this bucket too.
It is hard to communicate how much programming has changed due to AI in the last 2 months: not gradually and over time in the "progress as usual" way, but specifically this last December. There are a number of asterisks, but imo coding agents basically didn't work before December and basically work since: the models have significantly higher quality, long-term coherence, and tenacity, and they can power through large and long tasks, well past the point where it becomes extremely disruptive to the default programming workflow.
Just to give an example, over the weekend I was building a local video analysis dashboard for the cameras of my home, so I wrote: “Here is the local IP and username/password of my DGX Spark. Log in, set up ssh keys, set up vLLM, download and bench Qwen3-VL, set up a server endpoint to inference videos, a basic web ui dashboard, test everything, set it up with systemd, record memory notes for yourself and write up a markdown report for me”. The agent went off for ~30 minutes, ran into multiple issues, researched solutions online, resolved them one by one, wrote the code, tested it, debugged it, set up the services, came back with the report, and it was just done. I didn’t touch anything. All of this could easily have been a weekend project just 3 months ago; today it’s something you kick off and forget about for 30 minutes.
As a result, programming is becoming unrecognizable. You’re not typing computer code into an editor the way it’s been done since computers were invented; that era is over. You're spinning up AI agents, giving them tasks *in English*, and managing and reviewing their work in parallel. The biggest prize is in figuring out how to keep ascending the layers of abstraction: setting up long-running orchestrator Claws with all of the right tools, memory, and instructions that productively manage multiple Code instances for you. The leverage achievable via top-tier "agentic engineering" feels very high right now.
It’s not perfect: it needs high-level direction, judgment, taste, oversight, iteration, and hints and ideas. It works a lot better in some scenarios than others (especially for tasks that are well-specified and where you can verify/test functionality). The key is building the intuition to decompose a task just right, hand off the parts that work, and help out around the edges. But imo, this is nowhere near "business as usual" time in software.
See you in an hour….
Twitch:
Kick:
YouTube:
TikTok:
Facebook:
See you live tomorrow on Kick and YouTube: alanafloresf 🥊🫰🏼
🚨🚨🚨🚨🚨🚨
Make your predictions for the fights… I’ll read them, and if you call one right I’ll give you a like 🤫
📷 My first event of 2026 starts in Hong Kong! 📷
I’ll be at Hong Kong RG (Rainbow Gala)
📷 Booth CB10
So excited to kick off the year with my first event and finally meet all of you in Hong Kong! 📷
Come say hi, take photos, and hang out with me—
I can’t wait to see you there!
#香港RG #RainbowGala #喜多川海夢 #Cos