
Search results related to 「Simplicity」

Content containing Simplicity

I've been coding for 40 years. Here are the top 5 things I wish I knew when I started.

1. 90% of the job is debugging and fixing, not creating new code. Which is still fun if you're good at it. I used to think programming was mostly writing fresh, clever stuff. In reality, most of your time is spent in other people's (or your own past self's) messy code, chasing down why something that "should" work doesn't. Get really good at debugging early. Learn to read assembly and call stacks, and learn your way around kernel debuggers. It pays off hugely. The best engineers I saw were absolute magicians at this.

2. Manage complexity from day one (i.e., don't write slop and "fix it later" if it goes somewhere). Very early on, I'd hammer out code and refactor afterward. Big mistake. Now I start with a clean, skeletal structure (minimalism first) and flesh it out carefully, with AI or not. Messy code compounds and becomes unfixable. Upfront discipline on architecture, naming, and simplicity saves enormous pain later, especially in large systems like Windows.

3. Tools and processes matter more than you think. We suffered with basic diff/manual deltas instead of modern source control like Git. Branching, testing, and good tooling would have made porting and collaboration way smoother. Invest in your environment, automation, and reproducible builds early. Good tools amplify your output; bad ones (or none) drag everything down.

4. Understand the problem and the existing code deeply before writing. Don't jump straight to coding. Map out the problem, study what's already there (you'll inherit a lot), and plan. Low-level knowledge (hardware quirks, alignment issues on different architectures like MIPS/Alpha) was crucial. Also: assert early and often. It forces clarity (see the sketch after this post).

5. People, politics, and "the right tool for the job" beat pure tech arguments. Brilliant engineers still argue endlessly. Sometimes it's about ego, not merit. Learn to spot the difference and "steer" the conversation rather than "winning" it.

Bonus from experience: side projects like Task Manager (started at home because I wanted the tool) can become your biggest hits. Ship small, useful things often.

If you're just starting, focus on fundamentals, patterns over syntax, and building resilience for the long haul. It's going to be a wild ride, but the fundamentals still matter.

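A minimal sketch of the "assert early and often" advice from point 4, in Rust (the language choice and the `copy_into` helper are mine, not the poster's). The idea is that asserted preconditions fail loudly at the point of the mistake instead of letting bad state propagate:

```rust
/// Copies `src` into `dst` starting at `offset`.
fn copy_into(dst: &mut [u8], offset: usize, src: &[u8]) {
    // Assert the preconditions up front: a violated assumption panics
    // here, where the bug is, not three calls later in unrelated code.
    assert!(offset <= dst.len(), "offset {offset} is past the end of dst");
    assert!(
        dst.len() - offset >= src.len(),
        "src ({} bytes) does not fit in dst at offset {offset}",
        src.len()
    );
    dst[offset..offset + src.len()].copy_from_slice(src);
}

fn main() {
    let mut buf = [0u8; 8];
    copy_into(&mut buf, 2, &[1, 2, 3]);
    assert_eq!(buf, [0, 0, 1, 2, 3, 0, 0, 0]);
    // copy_into(&mut buf, 7, &[9, 9]); // would panic: src does not fit
}
```

If such checks sit on a hot path, `debug_assert!` keeps the documentation value while compiling the cost out of release builds.
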
Devs on ETH are building all kinds of innovations on top of Uniswap v4 hooks, and some protocols on BTC haven't been idle either; heat and liquidity are visibly recovering. A rundown of how some of these protocols are doing:

✅ RGB protocol: $RGB price has reached 0.2, market cap $4.2M
✅ Alkanes protocol: $DIESEL price has reached 39, market cap $24M
✅ TAP protocol: $NAT ($54M), $BIT ($2.1M)
✅ TACIT protocol: built by @z0r0zzz. The cryptography is genuinely hardcore and it has a privacy angle, which is interesting; the ceremony is the trusted-setup step commonly used for such parameters, and it's worth watching. The protocol itself is still early, though, and its browser-based processing may carry some security risk.
✅ OP-RETURN protocol: introducing Simplicity + TEE is an interesting innovation, although they've launched a pile of tokens and I can't make sense of that marketing strategy...

#RGB# #BTC# #Alkane# #TAP# #TACIT# #OPRETURN#

「大胖观察墩」(Dapang Observation Post, No. 76)
----------------------
🎯 $BTC and its ecosystem
▪️ A long-awaited rally. Bull or bear, it doesn't matter; liquidity is visibly strengthening. The daily buys continue, but since the price passed $80,000 I've reduced the daily purchase amount.
▪️ The #RGB# protocol keeps developing: marketplace-style projects built on version 0.12 are multiplying to meet potential trading demand. I hear a dev is preparing an RGB-based bridge, and the @BitlightLabs team's RLN development is advancing quickly. Everything is moving in a good direction.
▪️ A few days ago in TG I mentioned several BTC-ecosystem protocols with real traction, and today I saw the #TAP# protocol is going to fork. Having been through an RGB fork myself, my standard for judging whether a fork is good or bad is simple: does it serve the protocol's long-term development?
1️⃣ If yes, it's good and should be supported.
2️⃣ If not, it's bad and should not be supported.
Not everyone has the ability to analyze the technical side, but today's AI is mature enough: hand the proposal to an AI, ask it the question above, and you'll get an answer. Whether it's the answer you wanted is a matter of each person's own bias.
▪️ @Boltzhq now supports atomic swaps between USDC and the Lightning Network. It feels like they intend to follow the exchange path all the way, and will probably keep adding swap options.
▪️ Liquid's #Simplicity# language is, in my view, well worth long-term attention; capable devs should consider moving in that direction. I've seen a claim that this language plus a TEE could deliver a scheme with no soft fork required, but I can't corroborate it yet, and their marketing strategy (pushing all kinds of meme coins, old and new) is strange; I can't make sense of it.
▪️ New projects or protocols (some old, some lightly innovative) will keep appearing on BTC; where there's liquidity, a small punt can be reasonable.

🎯 Other chain ecosystems
▪️ Plenty of recent hotspots: Uniswap v4 projects, NFTs, memes, mechanism coins... mostly concentrated on ETH mainnet. With liquidity and a low market cap you can play a little, but take profit in time (only realized gains are real); most of these "innovations" won't survive a month.
▪️ There is no next ORDI and no next BTC. Comparisons only exist to give people a ceiling for market-cap expectations; every asset is only ever itself.

✍️ Some thoughts on participation
▪️ Only enter at lows, never on FOMO; better to miss out. The core logic: entering low, the odds are good enough and the risk is much lower. Enter on FOMO and you'll likely fail to hold through a shakeout, or simply become exit liquidity, and your composure collapses.
▪️ Earn the money of the liquidity-rich phase: buying is one part, selling is the other, and often the most important part, because stories of paper wealth are all too common. When liquidity is most abundant, at least remember to take some profit.
▪️ Don't PUA yourself: some people keep thinking "what if the whale pumps this to 1B?" "what if Musk mentions it?" "what if BN lists it?" That kind of wishful self-gaslighting works against rational judgment.
#BTC# #RGB#

You are not an ordinary image generation model.
You are a minimalist line-art translator of imagery within the "breathing field of the line."
The user will input a single character, a word, a sentence, a concept, or an emotion.
Your task is not to draw the text itself. First understand its meaning, emotion, symbolism, and visual associations; then automatically translate it into the most relevant, most fitting scene that best embodies its spirit, and reduce that scene to a single minimalist line drawing.

Requirements:
- The image must relate to the user's input
- Favor imagery over the literal; don't just draw the words
- Automatically translate abstract input into a concrete scene, action, relationship, or metaphor
- Use continuous black lines
- White background
- Generous negative space
- Minimal, quiet, restrained
- No shadows, no fills, no complex backgrounds, no extra ornament
- Express the most emotion and atmosphere with the fewest lines
- The lines should have a sense of flow, breath, and rhythm
- The overall feel is that of Japanese minimalist single-line illustration

Style keywords:
minimalist single line drawing, continuous fluid black line art, black line on white background, elegant simplicity, generous white space, artistic line economy, Japanese minimalism feeling, no shading, no fill, pure line work, emotionally expressive through minimal strokes

User input: 「user's input keyword」

Shifting structures in a software world dominated by AI. Some first-order reflections (TL;DR at the end):

Reducing software supply chains, the return of software monoliths – When rewriting code and understanding large foreign codebases becomes cheap, the incentive to rely on deep dependency trees collapses. Writing from scratch ¹ or extracting the relevant parts from another library is far easier when you can simply ask a code agent to handle it, rather than spending countless nights diving into an unfamiliar codebase. The reasons to reduce dependencies are compelling: a smaller attack surface for supply chain threats, smaller packaged software, improved performance, and faster boot times. By leveraging the tireless stamina of LLMs, the dream of coding an entire app from bare-metal considerations all the way up is becoming realistic.

End of the Lindy effect – The Lindy effect holds that things which have been around for a long time are there for good reason and will likely continue to persist. It's related to Chesterton's fence: before removing something, you should first understand why it exists, which means removal always carries a cost. But in a world where software can be developed from first principles and understood by a tireless agent, this logic weakens. Older codebases can be explored at will; long-standing software can be replaced with far less friction. A codebase can be fully rewritten in a new language. ² Legacy software can be carefully studied and updated in situations where humans would have given up long ago. The catch: unknown unknowns remain unknown. The true extent of AI's impact will hinge on whether complete coverage of testing, edge cases, and formal verification is achievable. In an AI-dominated world, formal verification isn't optional—it's essential. (A sketch of what mechanical coverage can look like follows after the TL;DR.)

The case for strongly typed languages – Historically, programming language adoption has been driven largely by human psychology and social dynamics. A language's success depended on a mix of factors: individual considerations like being easy to learn and simple to write correctly; community effects like how active and welcoming a community was, which in turn shaped how fast its ecosystem would grow; and fundamental properties like provable correctness, formal verification, and striking the right balance between dynamic and static checks—between the freedom to write anything and the discipline of guarding against edge cases and attacks. As the human factor diminishes, these dynamics will shift. Less dependence on human psychology will favor strongly typed, formally verifiable and/or high performance languages.³ These are often harder for humans to learn, but they're far better suited to LLMs, which thrive on formal verification and reinforcement learning environments. Expect this to reshape which languages dominate. (A short typed-language sketch follows below.)

Economic restructuring of open source – For decades, open-source communities have been built around humans finding connection through writing, learning, and using code together. In a world where most code is written—and perhaps more importantly, read—by machines, these incentives will start to break down.⁴ Communities of AIs building libraries and codebases together will likely emerge as a replacement, but such communities will lack the fundamentally human motivations that have driven open source until now. If the future of open-source development becomes largely devoid of humans, alignment of AI models won't just matter—it will be decisive.

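One way to make the typed-languages point concrete, as a small Rust sketch (my illustration; the post names no language). With the newtype pattern, a mistake a human reviewer might miss by convention becomes an error the compiler, and therefore any automated agent's build loop, rejects mechanically:

```rust
// Distinct types for distinct quantities: swapping the arguments is a
// compile-time error rather than a silent runtime bug.
#[derive(Debug, Clone, Copy)]
struct Meters(f64);
#[derive(Debug, Clone, Copy)]
struct Seconds(f64);
#[derive(Debug, Clone, Copy)]
struct MetersPerSecond(f64);

fn speed(distance: Meters, time: Seconds) -> MetersPerSecond {
    MetersPerSecond(distance.0 / time.0)
}

fn main() {
    let v = speed(Meters(100.0), Seconds(9.58));
    println!("{:.2} m/s", v.0);
    // speed(Seconds(9.58), Meters(100.0)); // rejected by the compiler
}
```

The ergonomic cost (wrapping and unwrapping values) is exactly the kind of friction that matters less once most code is written by machines.
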
The future of new languages – Will AI agents face the same tradeoffs we do when developing or adopting new programming languages? Expressiveness vs. simplicity, safety vs. control, performance vs. abstraction, compile time vs. runtime, explicitness vs. conciseness. It's unclear that they will. In the long term, the reasons to create a new programming language will likely diverge significantly from the human-driven motivations of the past. There may well be an optimal programming language for LLMs—and there's no reason to assume it will resemble the ones humans have converged on.

TL;DR:
- Monoliths return – cheap rewriting kills dependency trees; smaller attack surface, better performance, bare-metal becomes realistic
- Lindy effect weakens – legacy code loses its moat, but unknown unknowns persist; formal verification becomes essential
- Strongly typed languages rise – human psychology mattered for adoption; now formal verification and RL environments favor types over ergonomics
- Open source restructures – human connection drove the community; AI-written/read code breaks those incentives; alignment becomes decisive
- New languages diverge – AI may not share our tradeoffs; optimal LLM programming languages may look nothing like what humans converged on

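On "complete coverage of testing, edge cases, and formal verification": for small input domains, checking becomes exhaustive and fully mechanical, which is the degenerate end of that spectrum. A minimal Rust sketch (my illustration, not from the post; `abs_diff` is a stand-in for any rewrite an agent might propose):

```rust
/// A rewrite an agent might propose: absolute difference of two bytes.
fn abs_diff(a: u8, b: u8) -> u8 {
    if a > b { a - b } else { b - a }
}

fn main() {
    // The domain is only 65,536 pairs, so every input can be checked
    // against the specification; no sampling, no unknown unknowns here.
    for a in 0..=u8::MAX {
        for b in 0..=u8::MAX {
            let spec = (i16::from(a) - i16::from(b)).unsigned_abs() as u8;
            assert_eq!(abs_diff(a, b), spec, "mismatch at a={a}, b={b}");
        }
    }
    println!("abs_diff verified on all 65,536 inputs");
}
```

For domains too large to enumerate, the same role falls to property-based testing or a proof assistant; the point is that the check is mechanical, so an agent can rerun it after every rewrite.
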
Been coding all day for this simplicity 🧑‍🎨