Hacker News | tbrockman's comments

There's nothing that compels you to "purity spiral" other than attempting to appease cynics who insist that all decisions must be completely binary and consistent, with no room for nuance or practicality, and that anything else is virtue signaling (which is somehow less defensible than enabling harm in the first place).

Reducing harm where feasible is still meaningful, and certainly better than no attempt at all.


Please don't try to spread the idea that it "does not work", it's incorrect and discourages one of the most effective non-violent mechanisms consumers have for driving change in market economies. It may not necessarily be sufficient (coordinated boycotts, for instance, are much more effective than individual decisions), it may not always be an option (particularly when there aren't viable alternatives), it may not work immediately, there may not be enough people who "vote" a certain way, and there may be insufficient information to make informed decisions--but consumers absolutely decide which products and companies live and die, and every single dollar you spend allocates power.


Valve is a "flat" organization, where your compensation is determined based on peer review.

Rockstar, and owner Take-Two (largely owned by institutional investors--well known for their historical championing of workers' rights and fondness for unions), both seem to have your typical corporate hierarchies, where executives are fairly and correctly compensated for being more productive than over 200 software engineers combined.


If you think one person can be as productive as 200, you're sniffing glue, my friend.

Executives make more money because they are the only ones with the power to set wages. Workers do not have the power to set wages.


Oh, I don't, I was being very sarcastic, but I appreciate that your response was so measured under the assumption that I was serious, haha.


It seems like there are a lot of people who are desperate for reality to conform to their cynicism, out of fear that they (and their world view) are part of the problem.

For whatever reason, for some it's more gratifying to see others fail to prosper if it confirms their beliefs than it is to watch others succeed and have their beliefs challenged (even if it's to their own detriment).

In many cases, I imagine those who would see themselves as "good" use their world view as a way of absolving themselves of guilt for their actions. If I believe that there was never enough for most to lead dignified lives and that society rewards only self-interest, I don't have to regret taking more than necessary, and I can justify my apathy to the suffering of others. "It is the way of things," I can think to myself, "anything else would be foolish and naive." In this way I can find satisfaction even in inequality, comforted by its inevitability--and my own cleverness in understanding it.


This is exactly it, beautifully put. It's very easy and tempting to hide behind an ideology that makes broad-stroke descriptions of the entire world. "It's just how the world works", "it's simply basic economics" (really, nearly all claims about "basic" anything with no further nuance). But for people who are entrenched in their opinions, I don't think there's a lot of absolution or guilt going on. For the most dedicated believers, the belief has become a part of their identity, so vindicating it is almost integral to their being. Of course they'd want anyone who disagrees to suffer - they don't just view that as an inevitability, but also as the morally correct thing, the appropriate punishment for people who don't accept their obvious truth.


If you have sufficient knowledge in the subject matter you're questioning ChatGPT about, you can fairly reliably discern complete bullshit from something plausibly true that warrants additional investigation (which I'd say is more useful than your typical horoscope). In isolation it seems worth the gamble to me, so long as you don't view it as much more than consulting the tea leaves.


You could also spend 5 minutes thinking instead of using a 10th of the Texas power grid to do it for you. Remember when we used to not know shit, and it would stay with you for days - before Google happened? The same is about to happen with mental acuity and AI. Use it or lose it - and I hope every fanboy's brain turns to mush before they witness irreversible AI-caused climate catastrophes solely to be able to speak natural English to a search engine. If you outsource your thinking process, don't come bitching when it's gone.


Yeah, I’m actually fairly sympathetic to your perspective, that’s why I said “in isolation”. I might think it’s possible to balance use in a way that’s not as detrimental to myself personally or others (self-hosted models, judicious use, etc.), but I definitely don’t disagree that it seems likely to do more harm than good currently.


Do you have a link to where he mentions this?


Given that Go can already be compiled to WebAssembly (with the ability to use TinyGo if you want to trade off some language features for efficiency), is there anything that would make this more attractive than the alternatives? That it's written in Rust and can be used as a library by Rust code?


The Go-in-Go compiler was significantly slower than the Go-in-C compiler that it replaced, although most users didn't notice because the new compiler contained many algorithmic improvements that were deliberately not backported to the old compiler in order to make the transition smoother. A compiler written in Rust could conceivably be faster than the current Go compiler.


If the Go compiler was twice as fast, I wouldn't really notice.

If the Go linker was twice as fast, that would be a minor convenience, sometimes.

I wouldn't expect much more than twice, maybe thrice at the very outside. And it'd be a long journey to get there, with bugs and such to work through. The blow-your-socks-off improvements come when you start with scripting languages. Go may be among the slower compiled languages, but it's still a compiled language with performance in the compiled-language class; there's not a factor of 10 or 20 sitting on the table.

But having another implementation could be useful on its own merits. I haven't heard much about gccgo lately, though the project [1] seems to be getting commits still. A highly compatible Go compiler that also did a lot of compile-time optimizations could be interesting; that's the sort of code that may be more fun and somewhat safer to write in Rust (though I'd say the challenge of such code is for the optimizations themselves to be correct, rather than for the optimization process not to crash, and Rust's ability to help with that is marginal). The resulting compiler would be slower but might be able to create much faster executables.

[1]: https://github.com/golang/gofrontend


The Go compiler is already ridiculously fast. As far as I know the garbage collector usually doesn't even activate for short-lived programs, which compilation usually is. Turning garbage collection off entirely doesn't have much of an impact on build times.

What significant opportunities exist for performance with a Rust implementation that aren't possible in Go?


Yes, and with time improvements were made.

Compilation speed is not something I worry about in Go, versus Rust, which I seldom bother with nowadays, compilation speed being one of the reasons.


It really puzzles me that people complain about compilation speed in Rust these days: I've worked on pretty big Rust code bases with lots of dependencies, and cargo check has always been pretty much instant for me, including when I'm traveling and using my mid-range laptop from 2012! (My main desktop is from 2018; I bought it because my previous desktop, from 2009, struggled to compile Servo, mostly due to having too little RAM.)

Debug builds take a bit longer (a few seconds) on the desktop, while still staying below a minute on the laptop (remember, I'm talking about a 12-year-old Clevo laptop, not a recent Macbook). It's definitely not worse than TypeScript compilation or even JavaScript bundling, yet we pretty much never hear complaints about TypeScript's compile times being too long.

Yes, it could be faster with a different compiler architecture, especially on clean release builds and that would be nice, but it's a very minor annoyance (I don't do a full release build unless I've updated my compiler version, which only happens a few times a year).

The contrast between the discourse and my day-to-day experience on near obsolete hardware is very striking.

(Compilation artifacts eating up hundreds of GB of my hard drive are a much, much bigger nuisance in practice, yet nobody seems to talk about that here on HN.)


> I don't do a full release build unless I've updated my compiler version, which only happens a few times a year

That's probably part of the difference. I do tens of these every single day.

GUI apps can be quite slow in debug mode, and as you say, the compilation artifacts build up quickly, which requires a cargo clean and then a fresh build.


> I do tens of these every single day.

Tens of clean builds? I'm very curious: why? (because obviously that puts you in a completely different situation compared to someone who can rely on incremental builds)

> GUI apps can be quite slow in debug mode

Full debug mode, definitely, but in that case I've always found that building the dependencies in release mode was enough, though YMMV. But then that's what incremental rebuilds are about.

> and as you say, the compilation artifacts build up quickly, which requires a cargo clean and then a fresh build.

I've mostly experienced the PITA when working with multiple code bases over time or in parallel, but surely it doesn't happen every day, let alone multiple times per day, does it?


> Tens of clean builds? I'm very curious: why?

It's partly a privilege of being able to. I have an M1 Pro MacBook with 10 cores, so clean release builds are tolerable. The slowest project I work on regularly is Servo, and I can do a clean release build of that in 3-4 minutes. Most of the other projects I work on are more like 30s to 2m max.

It's also a disk space thing. Between working on multiple different projects (I have 200 projects in total in my "open source repos" directory, most of which I only interact with very occasionally, but 5-10 in a day wouldn't be particularly unusual for me) and switching between branches within projects, I can build up 10s of GBs of data in the target dir within a few hours. And I don't have the largest SSD, so that can be a problem! So it's become habit to cargo clean reasonably regularly.

Finally, sometimes I am explicitly testing compile time performance (which requires a clean build each time) or binary size (which involves using additional cargo profiles, exacerbating the disk space issues).


It's still my number one complaint about Rust, even though it has definitely gotten better over time. Partly my fault - I'm stuck on a slightly underpowered Windows machine at work. My Macs at home compile significantly faster. But as soon as I add certain crates like serde, tokio, windows, and some others, the compile times grow quickly. It also means that tasks Rust isn't necessarily designed for but can be used for (like web backends) become frustrating enough to dissuade me from using it as a do-it-all language despite certain aspects of the language being really nice. Even a 30-45 second tweak-test loop becomes annoying after a while. Again more of a personal problem than anything, but the point is I personally am constantly frustrated with the compile times.


This sample code took 12 minutes on a clean build on my travel netbook, now dead.

https://github.com/pjmlp/gwc-rs

Maybe nowadays it is faster, I have not bothered since I made the RIR exercise.

Get the community editions of Delphi, FreePascal, or D and see what a fast build means.

Better yet, take the latest version of Turbo Pascal for MS-DOS, meaning 7, and try it out on FreeDOS.


> This sample code took 12 minutes on a clean build on my travel netbook, now dead.

Clean builds are slow indeed. But they also happen once every six weeks at most, if you switch to the latest compiler at every release.

> Get the community editions of Delphi, FreePascal, or D and see what a fast build means.

Honestly, who cares about the difference between 1s vs 100ms vs 10ms for a build though? Rust compilation isn't optimal by any means, and it wouldn't have been workable at all in the 90s, but computers are so fast today (even 13-year-old computers) that it rarely matters in practice IMHO.


The Roc team does; that was one of the reasons they dropped Rust for Zig, even though Zig has yet to reach 1.0.

As do many of us who know how fast builds can be with complex languages; e.g. add OCaml to the list of toolchains that compile faster than Rust while having an ML type system.


> Honestly, who cares about the difference between 1s vs 100ms vs 10ms for a build though?

I definitely do. Not necessarily because of the 10ms vs 1s. But because of the later stage when it becomes 600ms vs 60s.


> But because of the later stage when it becomes 600ms vs 60s.

What later stage, though? As I said, I've worked with big code bases on old hardware without issues.

I'm simply not convinced that there exists a situation where an incremental rebuild of the crate you're working on is going to take 60s, at all, especially if you're using hardware from this decade.


I must be doing something wrong because incremental builds regularly take 30-60 seconds for me. Much more if I add a dependency. And I try to keep my crates small.


As a sibling comment points out, it's likely to be mostly link time, not compilation time.

The most recent Rust version ships with `lld` so it shouldn't be the case anymore (afaik `lld` is a bit slower than the `mold` linker, but it's close, much closer than the system linker that was previously being used by default).


PSA: try https://github.com/rui314/mold

(Not affiliated with the project. Just switched to it and never looked back.)
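(For anyone trying it: a minimal `.cargo/config.toml` sketch of the usual setup, assuming mold and clang are installed and you're on x86-64 Linux -- paths and target triple may differ on your machine:)

```toml
# ~/.cargo/config.toml (or a per-project .cargo/config.toml)
[target.x86_64-unknown-linux-gnu]
linker = "clang"
rustflags = ["-C", "link-arg=-fuse-ld=mold"]
```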


I have a fairly small go project (100k loc) and it takes ~30sec to build.

I'd be thrilled to have it build in 300ms.

(Using a macbook pro 2019)


That's strange. Humongous k8s Go projects (>500k LOC) build in a third of that time. Do you have lots of deps in your `go.mod`? Is the project abusing codegen?


61 dependencies, no codegen. Nothing special really.


On a Macbook M2 Pro, on a project with loads of services, 210k loc, a full rebuild takes 70 seconds. Incremental takes 36s. For one service, full rebuild in 16s and incremental 0.6s. It's not blazing fast but considering the scale of the project, it's not that bad, especially since I rarely rebuild every service at the same time.


Consider upgrading your hardware if/when you get a chance to (obviously this is expensive). My builds (Rust not Go, but it might well be similar?) got 10x faster when I upgraded from a 2015 MBP to an M1. I suspect 2019 to M4 might be similar.


> I have a fairly small go project (100k loc) and it takes ~30sec to build.

Wait, aren't Go builds supposed to be fast?


Are you using CGo? C compilation slowness doesn't count.


Apparently Bevy doesn't need that big tutorial on how to customise Rust toolchains for usable builds then.


Come on.

There's no “big tutorial” though. There's a section about compilation time performance[1], but it's arguably not “big”, and the most impactful parts of it are about linking time, not compilation time. And half of the section is now obsolete since Rust uses `lld` by default.

[1] https://bevy.org/learn/quick-start/getting-started/setup/#en...


Only if you happen to be on a platform where lld is supported.


Which platforms does that description exclude?

Edit: oh I get it you probably meant “where lld is set as default ” which is currently Linux only.

lld is supported on the other platforms though, so you can just copy-paste the three lines of configuration given on the Bevy page and call it a day.


> This sample code took 12 minutes on a clean build on my travel netbook, now dead.

> Maybe nowadays it is faster, I have not bothered since I made the RIR exercise.

Took me 18 seconds on a M4 Pro.

Please stop spreading FUD about Rust. Compile times are much better now than they were, and are constantly improving. Maybe it will never be as fast as one of those old languages that you like that nobody uses anymore, but it's plenty usable.


Do you have M4 Pros to offer everyone using Rust?

I would gladly take one.

And the Roc team as well, maybe they would revert back their decision on moving away from Rust to Zig due to compile times.


> Do you have M4 Pros to offer everyone using Rust?

> I would gladly take one.

Do you have 10-year-old netbooks to give to everyone? Because that seems to be what's required to have slow compile times in Rust.

> And the Roc team as well, maybe they would revert back their decision on moving away from Rust to Zig due to compile times.

More cherry-picked examples; you sure love those.

Like what's the point of bringing this up? Do you want me to show you the thousands of software projects that do use Rust as a counter-example?

Obviously no programming language is one size fits all.


> Do you have 10 year old netbooks to give to everyone? because this seems to be required to have slow compile times in Rust.

Unfortunately not all of us have an economic situation that allows us to sponsor Trump gifts every couple of years.

How many of those thousands of software projects that do use Rust can be shown as counter-examples to slow compilation times on hardware that common people usually buy and keep around?

Especially in those countries outside tier 1 of the world economy, getting computers from whatever the West no longer considers usable for its daily tasks.

Maybe they can afford to wait.


> Took me 18 seconds on a M4 Pro.

M4 pro isn't your average computer though.

But as I said, clean builds aren't the most common experience either.


A 10-year-old netbook is also not the average computer, and yet we are to believe that 12-minute compile times for some small hobby project are the norm and Rust sucks.


It is when people have more important things to spend money on.

It is also not normal to expect people to spend 2,000 euros to enjoy fast compilation times, when other programming languages have delivered faster compilation times on cheaper budgets ever since MS-DOS, on hardware that's lousy by today's standards.

You don't care, other people do, and whoever cares most drives adoption.


Just because someone made a terrible argument shouldn't be taken as an invitation to pile on with your own terrible argument though…


The default serial build of V (tcc backend) takes 0.6 seconds (with the production compiler) to 1.3 seconds (with the tcc-backend compiler).

The production (clang backend) parallel build of the V language takes about 3.2 seconds. All on an M1 Mac. Even the Go compiler seems slow in comparison.


The original port was slower because it was a near-straight transpilation of the original C compiler. They didn't do anything to try to speed things up; they went for correctness first. Then in subsequent releases they worked on speed improvements.


> A compiler written in Rust could conceivably be faster than the current Go compiler.

Is that really relevant, though? A compiler written in Rust is unlikely to be that much faster than a compiler written in Go. Most users might not notice a tiny difference in build times.


Semi-related: there is an active proposal for a Go OS target of "none" (or noos (no-OS)).

https://github.com/golang/go/issues/73608

Sounds like they want to maybe include https://github.com/usbarmory/tamago in the compiler.


Not to comment on the rest of the article or the author's goals, but it's absolutely possible to use a content script (dynamically injected into the `main` world, as opposed to the default `isolated`, for example: https://github.com/tbrockman/browser-extension-for-opentelem...) and Proxies (https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...) to transparently hook (most? if not all) JavaScript being executed in the webpage.

For some functionality, that would have been a bit more portable and involved less effort.
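For the curious, a minimal sketch of the Proxy approach (illustrative names only, not the extension's actual code -- `calls` and `logCall` are stand-ins for whatever observation you need):

```javascript
// Assumes this runs in the page's `main` world before any page code.
const calls = [];
function logCall(name, args) {
  calls.push({ name, args });
}

// Wrap any function in a Proxy that records invocations, then forwards
// them untouched via Reflect.apply so behavior is preserved.
function hook(target, name) {
  return new Proxy(target, {
    apply(fn, thisArg, args) {
      logCall(name, args);
      return Reflect.apply(fn, thisArg, args);
    },
  });
}

// Example: transparently observe every JSON.parse call on the page.
JSON.parse = hook(JSON.parse, 'JSON.parse');

const result = JSON.parse('{"a": 1}');
console.log(result.a);        // 1
console.log(calls.length);    // 1
// Property reads forward to the original, so the hook is hard to spot:
console.log(JSON.parse.name); // "parse"
```

The same wrapper works for `fetch`, `XMLHttpRequest.prototype.open`, and so on, as long as your script runs first; otherwise page code could capture the originals before you replace them.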


I have a project (in my rather long project backlog) that involves hooking JS APIs to download YouTube videos. I'm worried that if my extension (or a similar extension) gained enough popularity, YouTube would start inspecting the relevant JS objects to see if they'd been replaced with proxy instances.

Aside from playing a hooking/patching game of cat and mouse, I don't think this is fully solvable without modifying the browser engine itself - then you can hook things in a way that's completely transparent to the JS in webpages.


Was just about to comment this. I've played that exact cat-and-mouse game before. There's also another fun way to hook that I used to like: doing something like Object.defineProperty on Object.prototype to globally hook onto something. You can do lots of stuff with that; it's pretty useful in user scripts.


Thanks for sharing some examples! Someone shared a similar project in the other thread. I didn’t realize this at the time of writing haha.

FWIW I still think modifying the browser has some positives wrt stealth and hooking out of process frames (could be wrong on the second part, haven’t actually tested!)

Still good to know though, will leave a note in the article :-)


Yeah, there's a pretty overwhelming amount of browser APIs and functionality which isn't always (well-)documented to learn about. If I recall correctly Proxies wouldn't be detectable (seems to be supported by https://exploringjs.com/es6/ch_proxies.html#sec_detect-proxi...) so long as your injected content script runs first (otherwise other code could presumably override the Proxy constructor). You should also be able to hook any embedded frames by setting `target: { ..., allFrames: true }`.
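For reference, a hedged sketch of what the manifest side of that can look like in an MV3 extension (`inject.js` is a placeholder name; note that declaring `"world": "MAIN"` directly in the manifest is only supported in relatively recent Chrome versions, so older setups inject via `chrome.scripting` instead):

```json
{
  "content_scripts": [
    {
      "matches": ["<all_urls>"],
      "js": ["inject.js"],
      "run_at": "document_start",
      "world": "MAIN",
      "all_frames": true
    }
  ]
}
```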


To note, there are undocumented ways to detect even Proxies, for example using the `in` operator in V8 (such as `proxiedFunc in 1` for some proxied function). Really cool to see a project like this.


How do you use `in` in V8 to detect proxies? I assume it's a difference in the exception, but the message and the cause were the same for both the direct and proxied `x in 1`.


Ah wow, good catch--yeah, you're right, this technique seems to have been patched.


> SDS was a C string I developed in the past for my everyday C programming needs, later it was moved into Redis where it is used extensively and where it was modified in order to be suitable for high performance operations. Now it was extracted from Redis and forked as a stand alone project.


Also recently found it unnecessarily difficult to do profiling of page workers using Playwright's CDPSession wrapper (and they don't seem to have any plans to improve it: https://github.com/microsoft/playwright/issues/22992#issueco...), whereas it was pretty painless in Puppeteer.

So, definitely more useful if you care about more than just your main thread.

