Hacker News | tremarley's comments

This is due to the Cloudflare outage.


Don’t entrust your sensitive data to any AI platform that you do not own & control right now.

They’re all vulnerable.

There is an abundance of unpatched RAG exploits out in the wild.


A 400 MB+ install of bloat will upset many people.

This needs to be justified ASAP to help people understand and reconsider installing it.


Strangely, it's the actual binary's .text section that's about 400 MB. Time to dive in!
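The per-section breakdown is easy to check with binutils; a sketch, using /bin/sh as a stand-in path for the Zed binary:

```shell
# Print the size of the .text section of an ELF binary; .text holds
# the machine code. /bin/sh here stands in for the Zed executable.
size -A -d /bin/sh | awk '$1 == ".text" { print ".text bytes:", $2 }'
```

`size -A` lists every section, so dropping the awk filter shows where the rest of the bytes live (`.rodata`, `.data`, debug sections, etc.).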


The Rust compiler tends to produce quite large binaries compared to other languages. I notice there's a (closed) issue on the Zed GitHub [https://github.com/zed-industries/zed/issues/34376]:

> At this time, we prioritize performance and out-of-the-box functionality over minimal binary size. As is, this issue isn't very actionable, but if you have concrete optimization ideas that don't compromise these priorities, we'd be happy to consider them in a new issue.


Welcome to static linking of large applications.

The world moved to dynamic linking in the 1980s for a reason.

It's easy to advocate for a return to static linking when it's just basic CLI tools.


All those beautiful DLLs will sit comfortably in the same folder as your "dynamically" linked executable on Windows anyway.


They might be, or not.


I think there should be a best-of-both-worlds type of linking: during compilation, the linker places a statically compiled library at a fixed address but doesn't include it in the binary. Then, at startup, the OS maps the same library to that address (sharing the pages between processes). This would improve both memory use and startup time, avoiding the cost of dynamic linking. Of course, you'd need to match exact versions between the executable and the dependency, but that should be best practice anyway.


Static linkers generally don't include a "full copy" of the library, just the code paths the compiled application uses. The compiler may also have made optimizations based on the application's usage patterns.


I say "strangely" because honestly it just seems large for any application. I thought they might not be doing LTO or something but they do thin LTO. It's just really that much code.


> The world moved into dynamic linking in the 1980's for a reason.

Reasons that no longer exist. Storage is cheap, update distribution is free, time spent debugging various shared lib versions across OSes is expensive.


Yet everyone is complaining in this thread about Zed's distribution size, go figure.

They should shut up and just buy bigger drives. Ah, they can't on their laptops, bummer.

Also, try developing mobile apps with that mentality:

https://www.abbacustechnologies.com/why-your-app-keeps-getti...


Tbh, the rights and wrongs aside, I suspect "everyone" is complaining about it because it's the easiest thing to talk about. Much like how feature discussions tend towards bikeshedding.


Precisely. It seems like the people who say storage is cheap assume everyone is using desktop PCs.


Storage is cheap and upgradeable on all but very very few Windows laptops.


> Storage is cheap

My /usr is 15G already, and /var/lib/docker isn't that far off despite people's obsession with Alpine images. If more people dismiss storage as cheap, it'll quickly become expensive, just not per GiB.

> update distribution is free

I wouldn't be surprised if at some point GitHub started restricting asset downloads for very popular projects simply because of how much traffic they generate.

Also, there's still plenty of places on the planet with relatively slow internet connectivity.


Storage doesn't really feel cheap. I'm considering buying a new laptop, and Apple charges $600 per TB. Sure, it's cheaper than it was in the '80s, but wasting a few gigabytes here and a few gigabytes there is quickly enough to at least force you to go from a 500GB drive to a 1TB drive, which costs $300.


That's more of an Apple problem? Storage is under $50/TB.


It's the reality of storage pricing. The general statement "storage is cheap" is incorrect. For some practically relevant purposes, such as Apple laptops, it's $600/TB. For other purposes, it's significantly below $50/TB.

You could say "just don't buy Apple products". And sure, that might be a solution for some. But the question of what laptop to buy is an extremely complicated one, where storage pricing is just one of many, many, many different factors. I personally have landed on Apple laptops, for a whole host of reasons which have nothing to do with storage. That means that if I have to bump my storage from 1TB to 2TB, it directly costs me $600.


If you're buying Apple then you should expect inflated prices. I got a 4TB NVMe SSD for around €350; a 2TB one goes from €122 to €220 depending on read/write speeds.

I don't check the installation size of applications anymore.


I'm just saying that $600/TB is a real storage price that lots of people deal with. Storage isn't universally cheap.

This feels especially relevant since we're discussing Zed here, the Mac-focused developer tool, and developers working on Mac are the exact people who pay $600/TB.


A 2TB SSD for the Framework 13 cost me 200 euros. But I agree that it's not cheap, files are getting bigger, games are big, apps are huge, and then you need backups and external storage and always some free space as temp storage so you can move files around.


Bro, with this mentality, you won't get far in the Apple universe.

Accept that your wallet will be owned by Apple. Then you can continue.

Sorry, but people buying Apple products are a different breed :D


I don't need to "get far in the Apple universe", I need a laptop. My current MacBook Pro cost about the same as the Dell XPS I was using before it; I like nice laptops.


RAM isn't cheap (it may be for your tasks and wallet depth, but generally it isn't, especially since DDR5). Shared objects also get "deduplicated" in RAM, not just on disk.


What objects is the Zed process using that would even be shared with any other process on my system? Language support is mostly via external language servers. It uses its own graphics framework, so the UI code wouldn't be shared. A huge amount of the executable size is tree-sitter related.


I 100% agree. As soon as you step outside of the comfort of your Linux distributions' package manager, dynamic linking turns into dependency hell. And the magic solution to that problem our industry has come up with is packaging half an OS inside of a container...


> Storage is cheap

I'll be very grateful if you stopped using all my RAM for two buttons and a scrollbar thank you.


OSes don't load the full executable into physical RAM, only the pages in the working set. Most of the Zed executable's size is tree-sitter code for all the supported languages, and only needs to page in if those languages are being used in a project.


Maybe for this particular case but the comment shows a certain mindset...


Big sigh. I wish we still had pride in our field, rather than this race to the bottom mentality.


I really like this article "How Swift Achieved Dynamic Linking Where Rust Couldn't" https://faultlore.com/blah/swift-abi


I was a little sus, so I checked: https://imgur.com/a/AJFQjfL

897MB! But it appears to have installed itself twice for some reason. Maybe one is an 'update' which it didn't clean up...? I'm not sure.

Edit: I just opened it and it cleaned itself up. 408MB now. I guess it was in the process of upgrading.


So the upgrades are not delta diffs either?


Even if it’s delta, it cannot patch itself when running on Windows. So it runs the updater, creates a new exec and switches to it after relaunch. Same as Chrome or Firefox.


OS deficiency. And maybe programs shouldn't be allowed to update themselves.


Is it? On Linux, you can overwrite the file, but the underlying inode will still be open, and the 'invisible' old version will linger around; you don't have any easy way, short of restarting everything, to make sure the new version is being used.
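The lingering-inode behaviour is easy to reproduce in a shell; a sketch with made-up file names:

```shell
# An atomic replace (rename) gives the name a new inode, but any open
# file descriptor keeps the old inode alive until it is closed.
echo v1 > app.bin
exec 3< app.bin          # hold the original inode open on fd 3
echo v2 > app.new
mv app.new app.bin       # "update": app.bin now names a new inode
cat <&3                  # prints v1, read from the old, unlinked inode
exec 3<&-                # closing the fd finally frees the old inode
rm app.bin
```

A running process's executable and mapped libraries are held open the same way, which is exactly why the stale-version problem described above occurs.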

And with Chromium this directly leads to crashes: when you update the browser while it's open, new tabs will open with the new version of the binary while the old ones still use the old binary, which usually leads to a crash.

I prefer 'you cannot do X' instead of 'we allow you to do it, but it might misbehave in unpredictable ways'.


I don't use Chromium. I never had issues with Apache, MySQLd, Firefox, Thunderbird, etc. You can even swap out the Linux kernel under userspace and everything keeps running.


> maybe programs shouldn't be allowed to update themselves.

Honestly I'd be all for this if the OS had a good autoupdate mechanism for 3rd-party applications. But that's not the world we live in. Certainly not on Windows, which is too busy adding antivax conspiracy articles to the start menu.


Will it though? I mean it's a lot for a "text editor", but much less than a classical IDE. And 400M is pretty negligible if you're on Windows, where your OS takes up dozens of GB for no reason.


Yeah, I don't think 400 MB is really that big a deal. My `.emacs.d/` dir weighs in at over 1 GB and I've never thought twice about it.

For people who are serious about their text editors, 400 MB is a small price to pay for something that works for you.


If the OS is already bloated, that leaves LESS space for your editor!


From first-hand experience, every British millionaire I grew up with or know has left the UK for a country that treats them better.

The only ones that haven't moved are those who are considering it, can't move because their business depends on the UK, or whose family/kids need them in the UK.

If your income doesn’t require you to stay in the UK, why would you stay there?


Why would anyone stay in Britain after the rich and powerful asset-stripped the country and ran it into the ground?

Good question, and I guess “to fix the mess they caused” doesn’t usually come up.


And yet none of the richer people I know have left[0], despite some of them making a lot of noise about it.

Sometimes an anecdote is just an anecdote I guess.

[0] or at least haven't spent any less time in the UK, and are still resident, as some already spend a lot of time in a number of different countries


This website seems to be broken. The page is completely grey until you scroll about halfway down.


Yes


And how many people consume the long form content compared to YT? Does it span all age categories?


There were a few spelling mistakes


The website is broken


Sorry to hear that! I'm happy to investigate if you let me know your browser/platform.


Make it compatible with more browsers please


The more constructive way of putting this would be to mention which browser you are using. Judging from the comments, it works in plenty of browsers


If you wanted to know more about a new programming language named “Frob” or a plane crash that happened today, couldn’t you use an LLM like grok?

Or any other LLM that’s continuously trained on trending news?


How do I know the LLM isn't lying to me? AIs lie all the time, it's impossible for me to trust them. I'd rather just go to the actual source and decide whether to trust it. Odds are pretty good that a programming language's homepage is not lying to me about the language; and I have my trust level for various news sites already calibrated. AIs are garbage-in garbage-out, and a whole boatload of garbage goes into them.


They could provide verbatim snippets surrounded by explanations of relevance.

Instead of the core of the answer coming from the LLM, it could piece together a few relevant contexts and just provide the glue.


They do this already, but the problem is it takes me more time to verify if what they're saying is correct than to just use a search engine. All the LLMs constantly make stuff up & have extremely low precision & recall of information


I don't understand how that's an improvement over a link to a project homepage or a news article. I also don't trust the "verbatim snippet" to actually be verbatim. These things lie a lot.


> How do I know the LLM isn't lying to me?

How do you know the media isn't lying to you? It's happened many times before (think pre-war propaganda).


We’re talking about the official website for a programming language, which has no reason to lie.


>Odds are pretty good that a programming language's homepage is not lying to me about the language

Odds are pretty good that, at least for less popular projects, the homepages themselves will soon be produced by some LLM and left at that, warts and all...


None of the LLMs (not even Grok) are "continuously trained" on news. A lot of them can run searches for questions that aren't handled by their training data. Here's Grok's page explaining that: https://help.x.com/en/using-x/about-grok

> In responding to user queries, Grok has a unique feature that allows it to decide whether or not to search X public posts and conduct a real-time web search on the Internet. Grok’s access to real-time public X posts allows Grok to respond to user queries with up-to-date information and insights on a wide range of topics.


I can also use my human brain to read a webpage from the source, as the authors intended. Not EVERY question on this planet needs to be answered by a resource-intensive LLM. Energy isn't free, you know. :)

Other considerations:

- Visiting the actual website, you’ll see the programming language's logo. That may be a useful memory aid when learning.

- The real website may have diagrams and other things that may not be available in your LLM tool of choice (grok).

- The ACT of browsing to a different web page may help some learners better “compartmentalize” their new knowledge. The human brain works in funny ways.

- I have zero concerns about hallucination when reading docs directly from the author/source. Unless they also jumped on the LLM bandwagon lol.

Just because you have a hammer in your hand doesn't mean you should start trying to hammer everything around you, friend. Every tool has its place.


It's just a different kind of data. Even without LLMs, sometimes I want a tutorial, sometimes I want the raw API specification.

For some cases I absolutely prefer an LLM, like discoverability of certain language features or toolkits. But for the details, I'll just google the documentation site (for the new terms that the LLM just taught me about) and then read the actual docs.


Search is best viewed as a black box to transform {user intention} into {desired information}.

I'm hard-pressed to construct an argument where, with widely accessible LLM/LAM technology, that still looks like:

   1. User types in query
   2. Search returns hits
   3. User selects a hit
   4. User looks for information in hit
   5. User has information
Summarization and deep-indexing are too powerful and remove the necessity of steps 2-4.

F.ex. with the API example, why doesn't your future IDE directly surface the API (from its documentation)? Or your future search directly summarize exactly the part of the API spec you need?


I don't know the exact word for this case, but sometimes you want the information surrounding what you're looking for. Often I skim documentation, books, articles,... not in search of a specific answer but to get an overview of what they discuss. I don't need a summary of a table of contents. But it's a very good tool for quickly locating some specific information. Something like

  Language Implementation Patterns (the book) |> Analyzing Languages (the part) |> Tracking and Identifying Program Symbols (the chapter) |> Resolving Symbols (the section)
or

  Unit Testing: Principles, Practices, and Patterns (the book) |> Making your tests work for you (the part) |> Mocks and test fragility (the chapter) |> The relationship between mocks and test fragility (the section) |> Intra-system vs. inter-system communications
or

  Python 3.13.3 Documentation (docs.python.org) |> The Python Standard Library |> Text Processing Services |> string


Could never understand that obsession with summarization. Sure, it may be useful for long-form articles or low-quality content farms, but most of the time you are not reading those.

And if you are reading technical docs, especially good ones, each word is there for a reason. LLMs throw some of that information away, but they don't have your context to know whether the stuff they throw away is useful or not. The text the summary omitted may well contain an important caveat or detail you really should have known before starting to use that API.


And if you go to a nicely formatted doc page (laravel) or something with diagrams (postgres), it throws all of these away too.


Yes, you can use grok but you could also use a search engine. Their point is that grok would be less convenient than a search engine for the use case of finding Frob's website's homepage.


Perplexity solves this problem perfectly for me. It does the web search, reads the pages, and summarizes the content it found related to my question. Or if it didn't find it, it says that.

I recently configured Chrome to only use Google if I prefix my search with "g ".

