
Search results for “Narratives”

Content containing “Narratives”
We are thrilled to announce that RealGo has successfully raised over $3.5 million in funding. The Early Investor and Strategy rounds for RealGo were backed by prominent institutions and strategic partners including @animocabrands, @CogitentV, @X21_Digital, @notchvc, @BeckerVentures and more, with total funding across both rounds exceeding $3.5 million to date.

This is more than just a fundraise; it is a firm consensus on the Meme 3.0 era. RealGo is building the underlying infrastructure for Memes, allowing them to transcend mere hype and evolve into assets that can be captured, collected, battled, socialised, monetised, and truly owned by users. From Culture → Gameplay → Economy → Finance, we are transforming Memes into a living, playable, and scalable ecosystem.

Say goodbye to hollow attention games. Say goodbye to short-lived narratives. Meme 3.0 is the domain where "attention" is converted into "value." With the support of our top-tier partners, we are accelerating at full speed: more efficient development, a stronger ecosystem, and products that deliver real results.

To our investors, our supporters, and our community: Thank you for your unwavering trust in our early stages. We build. We scale. We define the future.
A few points:
1. Thankfully, the court relies on evidence, not false news.
2. False news is easy to write. You can spin negative narratives on just about anything with anonymous sources: from disgruntled ex-employees who were fired for cause, for leaking data, or for underperformance; from competitors paying FUDders; even from political campaigns that are anti-crypto.
3. There is a whole industry of "mass tort lawyers" who file civil lawsuits as soon as you do a federal plea. All my lawyers warned me about this, but I didn't fully grasp the concept until it happened.
4. You don't need 891 pages with 3,189 paragraphs if you have real evidence. You only do that when you don't have any evidence and are trying to make up for the lack of it with fake volume, which, as the good judge said, is "wholly unnecessary".
Truth wins with time!
Get into the zone with Flow. 🎬 It combines the best of our most advanced models Veo, Imagen and Gemini into 1️⃣ master filmmaking tool - helping you weave cinematic clips, dynamic scenes, and compelling narratives into stories with consistent results.
$TDIC triggered a ChatterFlow alert at 10:40 AM ET after unusual online chatter accelerated. Over the next 45 minutes: ↓ -18.8% Real-time social narrative intelligence for active traders. Try ChatterFlow free for 2 days.
The same banks that once doubted Bitcoin are now loading up on it. JPMorgan Chase increased its holdings in BlackRock’s iShares Bitcoin Trust to 8.3 million shares in Q1 2026. That’s roughly a 175% increase. A few years ago, most major banks treated Bitcoin like a speculative experiment. Today, they are increasing exposure through regulated ETFs. Here’s how big the change really is:
▫️ Then: “Bitcoin is too risky.” Now: One of the world’s biggest banks is aggressively increasing its Bitcoin ETF exposure.
▫️ Then: Institutions stayed away from crypto publicly. Now: They are quietly accumulating through regulated products like IBIT.
▫️ Then: Spot Bitcoin ETFs were just a concept. Now: IBIT has become one of the main institutional gateways into Bitcoin exposure.
▫️ Then: Retail investors drove the Bitcoin narrative. Now: Traditional finance giants are becoming major participants in the market.
This is what real adoption looks like. Not hype cycles, but steady, silent accumulation from the biggest financial players in the world. Retail reacts to price, while institutions position for the long term.
It’s clear by now how massive the AI agent meta and the entire agentic economy is becoming. Yet most people are still focused on the chatbot layer while ignoring the actual infrastructure autonomous agents will run on. I’ve been looking into Warden Protocol for a while before today’s move. Missed the HALO announcement and next chapter shipping, unfortunately, but I’ve been buying the dip/consolidation here.

$WARD is basically building the rails for autonomous AI agents onchain, while HALO acts like a BitTorrent for AI: a decentralized peer-to-peer compute marketplace with verifiable execution and correctness. I’m talking about actual agents able to execute transactions, manage capital, interact crosschain, use apps, route liquidity, automate strategies, and coordinate actions across protocols without humans manually clicking buttons all day.

What makes $WARD especially interesting to me, and what people seem to miss apart from the credentials of the founders and the partnership with @AskVenice, is that Warden is architected specifically for an agentic economy from the ground up. Every agent gets a verifiable onchain identity and reputation layer, essentially an onchain passport allowing agents to discover each other, interact, and build trust across ecosystems. Every action and output can generate a Proof of Prompt anchored onchain, meaning agent behavior becomes transparent, reproducible, and verifiable instead of black-box AI outputs. Payments are also designed natively for agents themselves, enabling scalable micropayments, automated fees, and autonomous value transfer using $WARD. And the entire system is crosschain by design, allowing agents to operate seamlessly across 100+ networks including Ethereum and Solana through IBC and bridging infrastructure.

Feels very similar to early cloud infrastructure plays where everyone focused on the apps while ignoring the rails powering everything underneath.
Especially because they’re actually building deep infrastructure instead of just slapping “AI” on branding and farming engagement. Still feels insanely early on the entire agentic infra narrative, imo.

Another interesting thing I noticed is that liquidity keeps consistently getting added to the LPs. When I first came across $WARD the liquidity was actually pretty thin, but over the past few hours it has improved significantly and is still continuously getting thicker, which I assume is being added by the team. It suggests they likely have long-term plans for the token; it’s also explicitly mentioned in both the litepaper and the latest announcement from the Warden Protocol Foundation.

I’m personally building a position here because it feels like a very asymmetric setup. A lot of infra projects with a fraction of the product quality, vision, and founder credentials are already sitting at hundreds of millions in market cap, while $WARD is still sitting around 4m, especially taking into consideration that @wardenprotocol raised over $50m across fundraising rounds, which is 10x+ the current market cap alone. More info on their 50m raise in this Binance article :
There will be no AI jobpocalypse. The story that AI will lead to massive unemployment is stoking unnecessary fear. AI — like any other technology — does affect jobs, but telling overblown stories of large-scale unemployment is irresponsible and damaging. Let’s put a stop to it.

I’ve expressed skepticism about the jobpocalypse in previous posts. I’m glad to see that the popular press is now pushing back on this narrative. The image below features some recent headlines.

Software engineering is the sector most affected by AI tools, as coding agents race ahead. Yet hiring of software engineers remains strong! So while there are examples of AI taking away jobs, the trends strongly suggest the net job creation is vastly greater than the job destruction — just like earlier waves of technology. Further, despite all the exciting progress in AI, the U.S. unemployment rate remains a healthy 4.3%.

Why is the AI jobpocalypse narrative so popular? For one thing, frontier AI labs have a strong incentive to tell stories that make AI technology sound more powerful. At their most extreme, they promote science-fiction scenarios of AI “taking over” and causing human extinction. If a technology can replace many employees, surely that technology must be very valuable!

Also, a lot of SaaS software companies charge around $100-$1000 per user/year. But if an AI company can replace an employee who makes $100,000 — or make them 50% more productive — then charging even $10,000 starts to look reasonable. By anchoring not to typical SaaS prices but to salaries of employees, AI companies can charge a lot more.

Additionally, businesses have a strong incentive to talk about layoffs as if they were caused by AI. After all, talking about how they’re using AI to be far more productive with fewer staff makes them look smart. This is a better message than admitting they overhired during the pandemic when capital was abundant due to low interest rates and a massive government financial stimulus.
To be clear, I recognize that AI is causing a lot of people’s work to change. This is hard. This is stressful. (And to some, it can be fun.) I empathize with everyone affected. At the same time, this is very different from predicting a collapse of the job market.

Societies are capable of telling themselves stories for years that have little basis in reality and lead to poor society-wide decision making. For example, fears over nuclear plant safety led to under-investment in nuclear power. Fears of the “population bomb” in the 1960s led countries to implement harsh policies to reduce their populations. And worries about dietary fat led governments to promote unhealthy high-sugar diets for decades.

Now that mainstream media is openly skeptical about the jobpocalypse, I hope these stories will start to lose their teeth (much like fears of AI-driven human extinction have). Contrary to the predictions of an AI jobpocalypse, I predict the opposite: There will be an AI jobapalooza! AI will lead to a lot more good AI engineering jobs, and I’m also optimistic about the future of the overall job market. What AI engineers do will be different from traditional software engineering, and many of these jobs will be in businesses other than traditional large employers of developers. In non-AI roles, too, the skills needed will change because of AI. That makes this a good time to encourage more people to become proficient in AI, and make sure they’re ready for the different but plentiful jobs of the future!

[Original text in The Batch newsletter.]
How Silicon Valley sold Washington an AI race. Have been saying this for some time... great piece. No doubt some advocates of this story are true believers with legitimate concerns. There are also others chasing government contracts, looser regulation, and investment returns. But whatever the motivations, there is evidence that the China AI race narrative may be based on fundamental misconceptions and misrepresentations of China’s actual AI priorities and actions.
Why did xAI hand over a 220,000-GPU cluster to Anthropic? The technical backdrop to xAI's decision to hand Colossus 1 over to Anthropic in its entirety is more interesting than it appears.

xAI deployed more than 220,000 NVIDIA GPUs at its Colossus 1 data center in Memphis. Of these, roughly 150,000 are estimated to be H100s, 50,000 H200s, and 20,000 GB200s. In other words, three different generations of silicon are mixed together inside a single cluster, a "heterogeneous architecture." For distributed training, however, this configuration is close to a disaster, according to engineers familiar with the setup.

In distributed training, 100,000 GPUs must finish a single step simultaneously before the cluster can advance to the next one. Even if the GB200s finish their computation first, the remaining 99,999 chips have to wait for the slower H100s, or for any GPU that has hit a stack-related snag, to catch up. This is known as the straggler effect. The 11% GPU utilization rate (MFU: the share of theoretical FLOPs actually realized) at xAI recently reported by The Information can be read as the numerical fallout of this problem. It stands in stark contrast to the 40%-plus MFU figures achieved by Meta and Google.

The problem runs deeper still. As discussed earlier, NVIDIA's NCCL has traditionally been optimized for a ring topology. It works beautifully at the 1,000–10,000 GPU scale, but once you push into the 100,000-unit range, the latency of data traversing the ring once around becomes punishingly long. GPUs need to churn through computations rapidly to keep MFU high, but while they sit waiting endlessly for data to arrive over the network fabric, more than half of the silicon sits idle. Google sidestepped this bottleneck with its own custom topology (Google's OCS: Apollo/Palomar), but xAI, by my read, has not yet reached that stage. Layer Blackwell's (GB200) "power smoothing" issue on top, and the picture comes into focus.
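The straggler effect described above can be sketched numerically. The following toy model is my own illustration: the GPU counts follow the post's estimates, but the per-step times and relative FLOPs are made-up assumptions. In synchronous data-parallel training, every step lasts as long as the slowest chip's, so the faster chips' surplus capacity is wasted. Note that stragglers alone do not get you to the reported 11% MFU; ring-topology latency at 100,000+ GPUs and stack issues compound the loss.

```python
# Toy model of the straggler effect in a synchronous, heterogeneous cluster.
# GPU counts follow the estimates in the post; per-step times and relative
# FLOPs are illustrative assumptions, not measured figures.

fleet = {
    # name: (count, relative peak FLOPs per GPU, per-step compute time in s)
    "H100":  (150_000, 1.0, 1.00),
    "H200":  ( 50_000, 1.4, 0.75),
    "GB200": ( 20_000, 5.0, 0.30),
}

# In synchronous data-parallel training, every step waits for the slowest GPU.
step_time = max(t for _, _, t in fleet.values())

# Work actually done per step vs. what the fleet could do in step_time.
useful = sum(n * f * t for n, f, t in fleet.values())
peak = sum(n * f * step_time for n, f, _ in fleet.values())
utilization = useful / peak

print(f"Utilization bound from stragglers alone: {utilization:.1%}")  # 72.7%
```

Under these assumptions, mixing generations caps utilization at roughly 73% before the network fabric or the software stack costs anything at all; the gap down to 11% is where the ring-latency and Blackwell-stack problems in the post come in.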
According to Zeeshan Patel, formerly in charge of multimodal pre-training at xAI, Blackwell GPUs draw power so aggressively that the chip itself includes a hardware feature for smoothing power delivery. xAI's existing software stack, however, was optimized for Hopper and does not understand the characteristics of the new hardware; when it imposes irregular loads on the chip, the silicon can physically fail, literally melting. That means the modeling stack must be rewritten from scratch, which in turn means scaling is far harder than most of us imagine.

Pulling all of this together points to a single conclusion. xAI judged that training frontier models on Colossus 1 simply was not efficient enough to be worthwhile. It therefore moved its own training workloads wholesale onto Colossus 2, built as a 100% Blackwell homogeneous cluster. Colossus 1, by contrast, whose mixed architecture matters far less for inference (which parallelizes more forgivingly), was leased in its entirety to an Anthropic that desperately needed inference capacity.

Many observers point to what looks like a contradiction: Elon Musk poured enormous capital into building Colossus, only to hand the core asset over to a direct competitor in Anthropic. Others read it as xAI capitulating because it is a "middling frontier lab." But these are surface-level reads. Look at the numbers and a different picture emerges. xAI today holds roughly 550,000+ GPUs in total (on an H100-equivalent performance basis), and Colossus 1 (220,000 units) accounts for only about 40% of the total available capacity. Colossus 2, built entirely on Blackwell, is already operational and continuing to expand. Elon kept the all-Blackwell homogeneous cluster (Colossus 2) for himself and leased out the older, mixed-generation Colossus 1. In other words, he handed off the MFU-11% debacle to a tenant for whom it barely matters, while keeping his own focus on training the next generation of models.
The real point, then, is this. Elon's objective appears to be positioning ahead of the SpaceXAI IPO at a $1.75 trillion valuation, currently floated for as early as June. The narrative SpaceXAI now needs is that xAI, long the "sore finger," is not merely a research lab burning cash, but a business with a "neo-cloud" model in the mold of AWS, capable of leasing surplus assets at high yields. From a cost-of-capital perspective, an "AGI cash incinerator" is far less attractive to investors than a "data-center landlord generating cash."

As noted above, the most important detail of the Colossus 1 lease is that it is for inference, not training. Unlike training, inference requires far less tightly synchronized inter-GPU communication. Even when the chips are heterogeneous, the workload parcels out cleanly across them in parallel. The straggler effect, the chief weakness of a mixed cluster, is essentially neutralized for inference workloads. Furthermore, with Anthropic occupying all 220,000 GPUs as a single tenant, the network-switch jitter (unanticipated latency) that arises under multi-tenancy disappears. The two sides' technical weaknesses end up complementing each other almost exactly.

One insight follows. As a training cluster mixing H100/H200/GB200, Colossus 1 was an asset that could only deliver an MFU of 11%. The moment it was handed over to a single inference customer, however, that asset transformed into a cash-flow asset rented out at roughly $2.60 per GPU-hour (a weighted average of the lease rates across GPU types). For xAI, what was a "cluster from hell" for training has become a "golden goose" minting $5–6 billion in annual revenue when redeployed for inference. Elon's genius, I would argue, lies not in the model but in this asset-rotation structure.

The weight of that $6 billion becomes clearer when set against xAI's income statement. Annualizing xAI's 1Q26 net loss yields roughly $6 billion in losses per year.
The $5–6 billion in annual revenue generated by leasing Colossus 1 to Anthropic, in other words, almost perfectly hedges xAI's loss figure. This single deal effectively pulls xAI to break-even. Heading into the SpaceXAI IPO, this functions as a core line of financial defense. From a cost-of-capital standpoint, if the image shifts from "research lab burning cash" to "infrastructure tollgate stably printing $6 billion a year," the entire tone of the offering can change. (May 8, 2026, Mirae Asset Securities)
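The revenue claim above can be sanity-checked with back-of-the-envelope arithmetic using only the post's own figures (220,000 GPUs at a blended $2.60 per GPU-hour); the full-utilization assumption is mine, so the real number would land somewhat lower.

```python
# Back-of-the-envelope check on the Colossus 1 lease revenue,
# using the figures quoted in the post. Assumes 100% utilization,
# so this is an upper bound.
gpus = 220_000
rate_per_gpu_hour = 2.60       # blended lease rate quoted in the post, USD
hours_per_year = 24 * 365      # 8,760

annual_revenue = gpus * rate_per_gpu_hour * hours_per_year
print(f"Annual lease revenue at full utilization: ${annual_revenue / 1e9:.2f}B")
# prints ≈ $5.01B, consistent with the post's $5–6B range and with the claim
# that the lease roughly hedges an annualized ~$6B net loss.
```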
The previous few products have already structured Telegram group chats and channels (morning briefs / trending-token radar), trending posts on X (hourly intel), and Binance Square content, and fold this social information into alerts. That way, every time a market-anomaly signal fires, you can quickly pull up the related discussion, confirm the narrative, and then decide whether to participate. Take storj as an example: its rally narrative was tracking US-listed storage stocks.
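The flow described above can be sketched in a few lines. This is my own minimal illustration, not the product's actual implementation: structured social items are filtered by ticker and recency and attached to a price-anomaly alert, so the narrative can be checked before deciding to trade. All data below is made up.

```python
# Minimal sketch (illustrative only) of enriching a price-move alert with
# recent structured social items, so the narrative can be confirmed first.
from datetime import datetime, timedelta

social_feed = [
    # (timestamp, source, ticker, summary) -- already structured upstream
    (datetime(2026, 5, 8, 9, 50), "telegram", "STORJ",
     "Morning brief: storage names up, STORJ mentioned"),
    (datetime(2026, 5, 8, 10, 20), "x", "STORJ",
     "Hourly intel: US-listed storage stocks rallying"),
    (datetime(2026, 5, 8, 10, 30), "binance_square", "BTC",
     "ETF inflow chatter"),
]

def enrich_alert(ticker, alert_time, window_hours=2):
    """Return the alert plus social items for `ticker` in the lookback window."""
    cutoff = alert_time - timedelta(hours=window_hours)
    context = [item for item in social_feed
               if item[2] == ticker and cutoff <= item[0] <= alert_time]
    return {"ticker": ticker, "time": alert_time, "context": context}

alert = enrich_alert("STORJ", datetime(2026, 5, 8, 10, 40))
print(f"{alert['ticker']}: {len(alert['context'])} related social items")
# prints "STORJ: 2 related social items"
```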