
Search results for "Linnea"

Linnea forum
(One keyword is one forum; each path is unique site-wide.)
Users
None found
Content containing "Linnea"
Want to come to class together, classmate? 🌟 Since getting to know Tee, my life has become so much more exciting XD Two nights in a row staying up all night like a young person again 🤣 Here's a lovely photo of the two of us to wish everyone a wonderful New Year holiday! Eat well, sleep well, feel great! 💕 Happy New Year~ #鳴潮# #WutheringWaves# #愛彌斯# #aemeath# #琳奈# #Linnea# #Cos#
Introducing Mirage, a unified virtual filesystem for AI agents! 6 weeks. 1.1M+ lines of code. We rewrote bash from the ground up so cat, grep, head, and pipes work across heterogeneous services.

S3, Google Drive, Slack, Gmail, GitHub, Linear, Notion, Postgres, MongoDB, SSH, and more, all mounted side-by-side as one filesystem. Bash that AI agents already know works on every format: cat, grep, head, and wc parse .parquet, .csv, .json, .h5, even .wav. One pipe can stitch S3, Drive, GitHub, Slack, and Linear together, with the same Unix semantics throughout.

Workspaces are versioned too: snapshot, clone, and roll back the whole thing with one API call. A two-layer cache turns repeated reads into local lookups, so agent loops stay fast and cheap.

Drop a Workspace into FastAPI, Express, or a browser app. Wire it into OpenAI Agents SDK, Vercel AI SDK, LangChain, Mastra, or Pi. Run it alongside Claude Code and Codex.

Site: GitHub: #AIAgents# #OpenSource# #AgenticAI# #Strukto# #Filesystem# #VFS#
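The "two-layer cache turns repeated reads into local lookups" idea can be sketched in a few lines. This is a hypothetical illustration, not Mirage's actual implementation; all class and function names here are mine.

```python
import os
import tempfile

class TwoLayerCache:
    """Illustrative two-layer read cache: process memory backed by
    local disk, with the remote service hit only on a full miss."""
    def __init__(self, cache_dir):
        self.mem = {}               # layer 1: in-process memory
        self.cache_dir = cache_dir  # layer 2: local disk
        os.makedirs(cache_dir, exist_ok=True)

    def _disk_path(self, key):
        return os.path.join(self.cache_dir, key.replace("/", "_"))

    def get(self, key, fetch):
        if key in self.mem:                       # layer-1 hit
            return self.mem[key]
        path = self._disk_path(key)
        if os.path.exists(path):                  # layer-2 hit
            with open(path, "rb") as f:
                data = f.read()
            self.mem[key] = data
            return data
        data = fetch(key)                         # miss: one remote round-trip
        with open(path, "wb") as f:               # then populate both layers
            f.write(data)
        self.mem[key] = data
        return data

calls = []
def fake_remote_read(key):
    calls.append(key)             # stand-in for a slow S3/Drive/Slack read
    return b"contents of " + key.encode()

cache = TwoLayerCache(tempfile.mkdtemp())
for _ in range(3):
    cache.get("s3/bucket/report.csv", fake_remote_read)
print(len(calls))  # prints 1: only the first read leaves the machine
```

An agent loop that re-greps the same mounted file repeatedly would, under a scheme like this, pay the network cost once.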
Here's the #1 thing most people don't know about Warren Buffett: there is nothing special about Buffett's stock picking.

That doesn't mean Buffett wasn't a great investor. He was! Buffett was, by far, the greatest investor in history. Over the 486 months between October 1976 and March 2017, roughly 40 years, Berkshire Hathaway's Class A stock earned an average excess return of 18.6% per year above U.S. T-bills. Annualized volatility was 23.5%. Sharpe ratio: 0.79. That 0.79 is roughly 1.6 times the broad U.S. stock market's Sharpe ratio of 0.49 over the same period. Among all large-cap U.S. stocks and mutual funds with 30-plus-year continuous track records, those numbers are unmatched.

A dollar invested in Berkshire on October 31, 1976, was worth more than $3,685 by March 31, 2017. A dollar invested in the S&P 500 with dividends reinvested over the same period was worth approximately $76. Buffett beat a passive index by a multiple of 48. But he didn't do it with stock picking!

Three researchers at AQR Capital Management, Andrea Frazzini, David Kabiller, and Lasse Heje Pedersen, dissected Berkshire's 50 years of investments through 2013, then expanded and republished their findings in 2018 in the Financial Analysts Journal, one of the most respected journals in the industry. Their work won the Graham and Dodd Award for the best published paper of the year. The paper is called "Buffett's Alpha."

They found that, after accounting for cheap leverage (from the insurance float) and exposure to a handful of publicly documented factor premiums, Buffett's investment skill, the portion of his returns that cannot be explained by any mechanical strategy, is 0.3% per year. That's statistically indistinguishable from zero. In other words, the alpha Berkshire enjoyed for 50 years, as it compounded capital at roughly 23% a year, wasn't due to Buffett's stock picking. So how did he do it?
He did it by gaining access, for free, to a huge amount of investment capital that he did not own. Buffett's track record was built on leverage. That's a dirty word for most investors, but it's the secret behind Berkshire.

The AQR researchers had access to something most Buffett commentators do not: 40 years of Berkshire's audited financial statements and the full quarterly history of its public 13F stock portfolio. They asked a specific question: if you take Berkshire's monthly stock returns from October 1976 through March 2017 and run a linear regression against a set of well-documented risk factors (market beta, size, value, momentum, and two newer factors called Betting-Against-Beta and Quality-Minus-Junk, detailed below), how much of Buffett's performance can the factors explain? And after the factors have been stripped out, how much excess return remains?

The data show clearly that a few qualities drove Berkshire's results. First, Buffett has always preferred large-cap stocks, contrary to the popular image of him as a small-cap value investor. He buys elephants. Second, no surprise, Buffett buys cheap: Berkshire is almost six standard deviations away from neutral on the value axis. So far the picture is ordinary; every large-cap value manager in America loads positively on size and on value. Buffett's genius lies in the last two factors.

These last two factors are a little complicated, but please stick with me. The first, which like value and size characterizes Buffett's strategy, is called Betting-Against-Beta ("BAB"): intentionally investing in stocks with very low volatility. The BAB factor captures the excess return that accrues to investors who own low-beta stocks, which have historically earned higher risk-adjusted returns than high-beta stocks. Financial theory teaches that higher beta (higher risk) should mean higher return. But it doesn't; in fact, the opposite occurs.
And Buffett was one of the very first people to figure this out. Why does this factor persist? In an efficient market, once a factor is known to investors, they should bid up the prices of low-beta stocks until it no longer provides an edge. The explanation, per Frazzini and Pedersen's theory, is that ordinary investors, who do not use leverage but seek high returns, create persistent excess demand for more volatile stocks. (Having worked with retail investors for 30 years, I can assure you that is true.) But an investor with access to cheap leverage, Warren Buffett for instance, can exploit the mispricing by owning the low-beta names and levering them up to produce market-beating returns.

The last factor that matters to Buffett is quality. Buffett buys companies with high returns on invested capital. Quality-Minus-Junk ("QMJ") is a factor described by Cliff Asness, Frazzini, and Pedersen, all of AQR, in a 2019 paper in the Review of Accounting Studies. The QMJ factor captures the return to owning stocks of high-quality companies (profitable, growing, safe, with high payout ratios) over stocks lacking those characteristics. QMJ has been positive and statistically significant in every major developed equity market in which it has been measured. Berkshire's QMJ loading is 0.37, with a t-statistic of 4.6, meaning it is highly significant to Berkshire's results.

In plain English: Buffett buys only large, high-quality, low-volatility stocks. But Berkshire's stock selections were not, in any way, unusual; any investor buying the same kinds of stocks would have earned the same returns, about 16% a year over time. So how did Berkshire compound at 23% a year? To figure that out, AQR's researchers built a Berkshire replica.
They constructed a simple, rules-based, publicly investable portfolio that mechanically tilts toward large-cap, cheap, low-beta, high-quality stocks, levered 1.6-to-1 to match Berkshire's insurance-float leverage. The replica's returns tracked Berkshire's almost exactly. The authors' conclusion is unambiguous: "In summary, we find that Buffett has developed a unique access to leverage that he has invested in safe, high-quality, cheap stocks and that these key characteristics can largely explain his impressive performance."

Berkshire's cost of insurance float has averaged almost three percentage points below the Treasury-bill rate across fifty years of data. In roughly two-thirds of all years, Berkshire has been paid to hold other people's money. That is not an investment strategy; that is a financing miracle. It is also the living, breathing heart of Berkshire Hathaway. It's what Buffett built, starting in 1967 when he paid $8.6 million for National Indemnity and its $19.4 million of float. And it is the factor that every retail investor admiring Berkshire's returns overlooks.

The 1.6-to-1 leverage that AQR measured over the full period, financed at this negative cost, explains the dollar magnitude of Berkshire's returns. How do we know? An unleveraged version of the same stock portfolio, which you can approximate by looking at the 13F holdings alone, earned an average excess return of 12% per year. It's Berkshire's leverage that magnifies this excess return to 18.6%.

How does this square with Berkshire's reported gains? Berkshire's 18.6% excess return, plus the T-bill rate that averaged roughly 4.7% over 1976-2017, gives a total nominal return of roughly 23% per year, which is the figure usually quoted for Berkshire's historical performance. The 23% tells you what Berkshire returned.
The 18.6% tells you how much of that return was compensation for taking investment risk, as opposed to the baseline yield every lender to the U.S. government was earning anyway. Combine Berkshire's two "edges", systematic factor exposure to cheap, high-quality, low-volatility stocks and roughly 1.6-to-1 leverage delivered through insurance float, and you get Berkshire Hathaway's roughly 23% annual gains over four decades. It's the structure that's genius, not the stock picking. And that's very important, because it means the original Berkshire formula can work for any investor. I show you exactly how in my new book.
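As a sanity check, the post's headline numbers are mutually consistent. A quick back-of-the-envelope in Python, using only figures quoted in the post:

```python
# Cross-check the post's quoted figures against each other.
excess_return = 0.186   # Berkshire's annual excess return over T-bills
volatility    = 0.235   # annualized volatility
tbill_rate    = 0.047   # average T-bill rate, 1976-2017

sharpe = excess_return / volatility
print(round(sharpe, 2))              # 0.79, the quoted Sharpe ratio

print(round(sharpe / 0.49, 1))       # ~1.6x the market's 0.49 Sharpe

total_return = excess_return + tbill_rate
print(round(total_return * 100, 1))  # ~23.3%, the "23% a year" headline

print(round(3685 / 76))              # ~48x the passive index, as stated
```

Each line reproduces one of the post's claims from the others, which is a good sign the numbers were transcribed faithfully from the underlying study.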
Workspace agents can work across tools—pulling context from docs, email, chats, code, and systems, and taking approved actions like updating @Linear issues, creating docs, or sending messages. In @SlackHQ, agents can jump into a thread, understand what’s needed, pull the right context, help resolve the issue, and update the right systems. They can also keep going in the background or on a schedule, while you’re away.
A UI screenshot plus a single sentence produced a high-quality product promo image; GPT img-2 has genuinely cracked image-making. Prompt: Create a premium 4:3 presentation cover slide introducing Chronicle, the AI-native presentation platform from Style: elegant, minimal, modern, premium startup aesthetic. Similar to high-end brand guideline covers (like Apple / Linear / Notion style). Soft gradient background with subtle depth, clean whitespace, refined typography, polished editorial layout. Main title: CHRONICLE Subtitle: AI PRESENTATION PLATFORM Body copy (small elegant text): Turn raw ideas into polished, high-impact presentations. Start from notes, docs, links, or existing decks. Generate beautiful, on-brand slides with AI. Edit freely on a flexible canvas. Export to PPT, PDF, or publish as a website. Feature highlights (small premium labels): STORY-FIRST ON-BRAND DESIGN AI EDITING FREEFORM CANVAS PPT EXPORT TEAM COLLABORATION Bottom-right elegant logo text: chronicle Visual feeling: business-class premium, strategy deck quality, consulting-grade presentation, slightly futuristic but highly professional. Composition: clean editorial balance, asymmetrical layout, strong whitespace, presentation software hero shot feeling. Aspect ratio: 4:3 Language: English only
justin sun vs wlfi. the original token deployed sep 2024 had no blacklist and no seizure, but it was upgradable. the blacklist was added in v2 on aug 24, 2025, 11 months after sun invested and one week before trading opened. on nov 19, 2025, another upgrade added batch reallocation, essentially seizure, justified as saving phished funds. whatever the paper contract said, the code for the vesting contract supports cliff dates, linear schedules, and up to 8 segments per category. wlfi used none of these to restrict sun. they chose a 20% instant lump-sum unlock, then punished him for using a fraction of it. the remaining 80% has no vesting schedule at all; 7+ months later, claimable() returns 0. the vesting contract has per-category schedules to enforce token lockups. what's interesting is that wlfi carved out a special category 3 specifically for justin sun; he's the only user in it. the other 519 investors are in category 1. 14 minutes before sun activated his wallet, wlfi's own 3-of-5 multisig configured category 3 to release 20% of his 3b allocation as freely transferable tokens at trading start. over the next 3 days sun transferred out 55m. a single guardian eoa (also on the multisig) blacklisted him. that address is also the sole owner of a second guardian safe with threshold 1. so one person can freeze anyone, while seizing requires 3-of-5. meanwhile, the same multisig is using 5b wlfi as collateral on dolomite to borrow $250m in stablecoins. they represent 98% of all wlfi on dolomite and 86% of the protocol's entire borrow volume. two safes with the same five signers, running a usd1/usdc loop that recycles borrowed usd1 as collateral to borrow usdc and feed it back.
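For readers unfamiliar with vesting mechanics: a cliff-plus-linear schedule of the kind the post says the contract supports can be modeled in a few lines. This is an illustrative sketch with made-up timestamps, not WLFI's actual contract code; the function name `claimable` just mirrors the getter mentioned above.

```python
# Hypothetical model of a cliff + linear vesting schedule (not WLFI's code).
def claimable(total: int, start: int, cliff: int, end: int, now: int,
              instant_pct: int = 0) -> int:
    """Tokens claimable at time `now` (arbitrary time units)."""
    instant = total * instant_pct // 100      # lump sum unlocked at start
    if now < start:
        return 0
    if now < cliff:
        return instant                        # before the cliff, only the lump sum
    if now >= end:
        return total                          # fully vested
    # remainder vests linearly between cliff and end
    vested = (total - instant) * (now - cliff) // (end - cliff)
    return instant + vested

total = 3_000_000_000                         # a 3b allocation, per the post
start, cliff, end = 0, 100, 1_100             # made-up schedule points

print(claimable(total, start, cliff, end, now=0, instant_pct=20))    # 600000000
print(claimable(total, start, cliff, end, now=600, instant_pct=20))  # 1800000000
print(claimable(total, start, cliff, end, now=2000, instant_pct=20)) # 3000000000
```

The point of the sketch: a schedule like this is trivial to configure, which is why "20% instant, 80% with no schedule at all" reads as a deliberate choice rather than a technical limitation.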
I finally had to sort out my wallets recently. My money! Only then did I discover that ZeroLend, the protocol I used for Linea staking, has already shut down 😂 The funds in it can no longer be withdrawn. I asked surf, and the only option is to open a ticket in their Discord; I don't know whether that will even do any good.
Open-sourcing Multica: an agent + human collaboration platform built for AI-native teams.

Why build Multica? Multica started as a way to solve our own team's problems:

1. Knowledge can't be shared across the team. Everyone uses a coding agent, but the context it produces is scattered across individual agent sessions. A finishes something and B doesn't know; an agent finishes a run and only the person who kicked it off sees the result. Team knowledge becomes a cluster of islands.

2. Multi-person, multi-agent collaboration has no hub. When a team has several agents running tasks at once, there is no single place to see who is doing what, how far along it is, or whether it is stuck. Between people and agents, and between agents themselves, there is no shared collaboration surface.

What is Multica? In one sentence: manage tasks like Linear, but with AI agents as first-class citizens. You can assign an issue to an agent the same way you would assign a task to a colleague. The agent automatically claims the task, executes code on your local machine, submits results, updates status, and posts comments, all on the same board, visible to everyone in real time.

The core idea is simple: each person registers their coding agent (Claude Code / Codex) with the team workspace, and from then on tasks can be assigned to agents just like to colleagues. The agent executes automatically, updates status, and comments, visible to everyone in real time.

Who is it for?
- AI-native teams of 1-10 people
- Teams using coding agents heavily but lacking a collaboration hub
- Teams that want agents woven into daily workflows rather than used as standalone tools

Website:
Stars, trial runs, issues, and PRs are all welcome.
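The assign-claim-execute-comment loop described above can be sketched minimally like this. All names here (Issue, Agent, the status strings) are hypothetical illustrations, not Multica's actual API:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Issue:
    """One card on the shared board; everyone sees its state in real time."""
    title: str
    assignee: Optional[str] = None
    status: str = "todo"
    comments: list = field(default_factory=list)

class Agent:
    """A registered coding agent; `run` stands in for the local executor."""
    def __init__(self, name, run):
        self.name, self.run = name, run

    def claim(self, issue):
        issue.assignee = self.name
        issue.status = "in_progress"   # status change is visible to the team

    def execute(self, issue):
        result = self.run(issue.title)  # runs on the owner's local machine
        issue.comments.append(f"{self.name}: {result}")
        issue.status = "done"

issue = Issue("Fix flaky login test")
agent = Agent("claude-code", run=lambda task: f"completed '{task}'")
agent.claim(issue)    # assigned like a colleague would be
agent.execute(issue)  # executes, comments, and updates status
print(issue.status, issue.comments[0])
```

The design point is that the agent mutates the same board state a human teammate would, so there is nothing special about agent-held context: it lands in the shared hub.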
I always lost performance when I tried to use silu/gelu activations in my RL value networks, and I finally understand why. If the pre-activation values are small, the smooth curve through zero is basically a linear activation, destroying the representation power of the network. You need a batch/layer/rms norm on the preactivations to put them in the range the smooth activations are designed for. Internal norms generally hurt performance on our RL tasks, but combining them with a smooth activation at least works basically as well as a raw relu (but slower). So, not actually a win, but the lightbulb of understanding was good!
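The near-linearity claim is easy to verify numerically: silu(x) = x * sigmoid(x), and for small pre-activations sigmoid(x) is close to 0.5, so silu(x) is close to 0.5 * x, a scaled linear function (and a stack of linear maps is still linear). A quick check:

```python
import math

def silu(x):
    # silu(x) = x * sigmoid(x)
    return x * (1.0 / (1.0 + math.exp(-x)))

# relative deviation of silu from the linear approximation 0.5 * x
dev_small = abs(silu(0.01) - 0.5 * 0.01) / (0.5 * 0.01)
dev_large = abs(silu(2.0) - 0.5 * 2.0) / (0.5 * 2.0)

print(dev_small < 0.01)  # True: under 1% deviation at x=0.01, effectively linear
print(dev_large > 0.5)   # True: at x=2.0 the nonlinearity clearly matters
```

This matches the post's fix: a pre-activation norm pushes inputs into the range where the curve actually bends.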
On DeepWiki and the increasing malleability of software. This starts partly as an appreciation post for DeepWiki, which I routinely find very useful and which I think more people should know about. I went through a few iterations of use. Their first feature was auto-building wiki pages for GitHub repos (e.g. nanochat here) with quick Q&A: just swap "github" to "deepwiki" in the URL for any repo and you can instantly Q&A against it. For example, yesterday I was curious about "how does torchao implement fp8 training?" I find that in *many* cases library docs are spotty, outdated, or bad, but directly asking questions of the code via DeepWiki works very well. The code is the source of truth, and LLMs are increasingly able to understand it. But then I realized that in many cases it's even more powerful not to be the direct (human) consumer of this information, but to give your agent access to DeepWiki via MCP. So e.g. yesterday I faced some annoyances using the torchao library for fp8 training and suspected the whole thing really shouldn't be that complicated (wait, shouldn't this be a Function like Linear, except with a few extra casts and 3 calls to torch._scaled_mm?), so I tried: "Use DeepWiki MCP and Github CLI to look at how torchao implements fp8 training. Is it possible to 'rip out' the functionality? Implement nanochat/fp8.py that has identical API but is fully self-contained." Claude went off for 5 minutes and came back with 150 lines of clean code that worked out of the box, with tests proving equivalent results, which allowed me to delete torchao as a repo dependency. And for some reason I still don't fully understand (I think it has to do with the internals of torch compile), this simple version runs 3% faster. The agent also found a lot of tiny implementation details that actually do matter, which I might naively have missed otherwise and which would have been very hard for maintainers to keep docs about: tricks around numerics, dtypes, autocast, meta device, and torch compile interactions. So I learned a lot from the process too. This is now the default fp8 training implementation for nanochat.

Anyway, TL;DR: I find this combo of DeepWiki MCP + GitHub CLI quite powerful for "ripping out" any specific functionality from any GitHub repo and targeting it at the very specific use case you have in mind, and it actually kind of works now in some cases. Maybe you don't download, configure, and take a dependency on a giant monolithic library; maybe you point your agent at it and rip out the exact part you need. Maybe this informs how we write software more generally, to actively encourage this workflow: e.g. building more "bacterial code", code that is less tangled, more self-contained, more dependency-free, more stateless, and much easier to rip out of the repo. There are obvious downsides and risks to this, but it is fundamentally a new option that was not possible or economical before (it would have cost too much time), and now, with agents, it is. Software might become a lot more fluid and malleable. "Libraries are over, LLMs are the new compiler" :). And does your project really need its 100MB of dependencies?
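The URL swap from the appreciation part of the post, written out as a one-liner. The helper name is mine, not an official API:

```python
def deepwiki_url(repo_url: str) -> str:
    """Swap 'github' for 'deepwiki' in a repo URL, per the post's tip,
    to reach the auto-built wiki for that repo."""
    return repo_url.replace("github", "deepwiki", 1)

print(deepwiki_url("https://github.com/karpathy/nanochat"))
# -> https://deepwiki.com/karpathy/nanochat
```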