It is worth noting that although the result here is visually impressive as erosion aesthetics go, it is not practical for generating physically-plausible lakes and rivers. Proper hydrological simulation is required because non-local information is crucial, and that's something this shader technique doesn't attempt to simulate. Without it, you're likely to end up with rivers that flow uphill, lakes that don't properly overflow at valley passes, and suchlike.
Source: I'm a core dev for Veloren, which uses a very detailed hydrological simulation for its world generation. More info here: https://veloren.net/blog/devblog-43/
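To make the non-local point concrete, here's a hedged sketch (not Veloren's actual code) of priority-flood depression filling, one of the standard hydrology tricks: it raises every lake basin to the elevation of its lowest spill pass, which is exactly the kind of global information a purely local shader pass can't see.

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

// Priority-flood depression filling (in the style of Barnes et al.):
// flood inward from the map border in order of increasing elevation,
// raising each cell to at least the level of the cell it was reached
// from. Every pit ends up filled exactly to its lowest spill pass.
fn fill_depressions(height: &mut Vec<Vec<i32>>) {
    let (h, w) = (height.len(), height[0].len());
    let mut heap = BinaryHeap::new();
    let mut seen = vec![vec![false; w]; h];
    // Seed with border cells: water can always drain off the map edge.
    for y in 0..h {
        for x in 0..w {
            if y == 0 || y == h - 1 || x == 0 || x == w - 1 {
                seen[y][x] = true;
                heap.push(Reverse((height[y][x], y, x)));
            }
        }
    }
    // Pop cells lowest-first, so `z` is the spill level of the path
    // by which water first reaches each neighbour.
    while let Some(Reverse((z, y, x))) = heap.pop() {
        for (dy, dx) in [(0i32, 1i32), (0, -1), (1, 0), (-1, 0)] {
            let (ny, nx) = (y as i32 + dy, x as i32 + dx);
            if ny < 0 || nx < 0 || ny as usize >= h || nx as usize >= w {
                continue;
            }
            let (ny, nx) = (ny as usize, nx as usize);
            if seen[ny][nx] {
                continue;
            }
            seen[ny][nx] = true;
            // A cell inside a depression is raised to the spill level.
            height[ny][nx] = height[ny][nx].max(z);
            heap.push(Reverse((height[ny][nx], ny, nx)));
        }
    }
}

fn main() {
    // Interior pit of 1s whose only outlet is the pass of height 2 on the rim.
    let mut height = vec![
        vec![9, 9, 2, 9, 9],
        vec![9, 1, 1, 9, 9],
        vec![9, 9, 9, 9, 9],
        vec![9, 9, 9, 9, 9],
        vec![9, 9, 9, 9, 9],
    ];
    fill_depressions(&mut height);
    // The pit is filled up to the spill level, and never above it.
    assert_eq!(height[1][1], 2);
    assert_eq!(height[1][2], 2);
    assert_eq!(height[2][2], 9);
    println!("pit filled to spill level: {:?}", height[1]);
}
```

After this pass, routing water downhill can never get stuck in a pit: every lake has a well-defined overflow point.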
Thanks, I'm glad to hear it :) Unfortunately I've not had much time to work on it recently for personal reasons, but the project is still very much active and receives regular updates!
No, nano is not my daily driver. It's what I use when I want to quickly edit a file with root access because, funnily enough, I'm not in the habit of running my primary editor with superuser permissions :) Nano was low-hanging fruit: the first of many tools I've gradually massaged the editor into replacing.
Nice work! And yes, that gradual acceleration of productivity, where past fixes and tweaks compound into your ability to get things done in the future, is a great feeling.
A thing that shocked me as I was working on the text editor was how capable modern terminal emulators are when you account for ANSI extensions. First-class clipboard access, mouse events, precise text styling, focus tracking, system notifications, key press/release events, etc. are all possible with a modern terminal emulator. There's not really anything else you need to build a very competent, ergonomic editor UI.
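For anyone curious, most of these capabilities are just escape sequences written to stdout. A minimal sketch in Rust (the sequences below are the standard xterm ones; exactly which extensions are honoured varies by emulator):

```rust
// Mode toggles: write these to stdout to opt in, and the terminal
// starts reporting events back to you on stdin as escape sequences.
const MOUSE_ON: &str = "\x1b[?1000;1006h"; // button events, SGR encoding
const MOUSE_OFF: &str = "\x1b[?1000;1006l";
const FOCUS_ON: &str = "\x1b[?1004h"; // focus-in/focus-out reporting
const FOCUS_OFF: &str = "\x1b[?1004l";

// Precise text styling: 24-bit "truecolor" foreground via SGR,
// i.e. ESC [ 38;2;<r>;<g>;<b> m ... ESC [ 0 m to reset.
fn truecolor(r: u8, g: u8, b: u8, text: &str) -> String {
    format!("\x1b[38;2;{r};{g};{b}m{text}\x1b[0m")
}

fn main() {
    // In a real editor, the mode toggles would bracket the event loop.
    let _ = (MOUSE_ON, MOUSE_OFF, FOCUS_ON, FOCUS_OFF);
    println!("{}", truecolor(255, 128, 0, "precise text styling"));
}
```

Clipboard access (OSC 52), system notifications (OSC 9/777), and key release events (the kitty keyboard protocol) work the same way: you emit a sequence, and capable emulators act on it while older ones ignore it.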
You can even use tools like trolley to wrap the entire application up in a ghostty-powered shim that presents the application as a native UI application: https://github.com/weedonandscott/trolley
I appreciate this, but I'm not concerned with the capabilities of the terminal or the GUI. What would be unhelpful, to me, would be to build a TUI because then if I wanted to send the actual app state to - for instance, a web browser which runs the library in WASM - the only way would be to pipe the terminal output across the shared buffer, instead of just blitting the app/editor state into it (or the relevant messages, like CRDTs).
Contrast that with a library: I could capture the inputs from any source - browser, native app, network, etc - work with the data using the single library, and then render the result in whatever client (or as many clients) as I wanted.
Yes, absolutely. I've since switched to rope-backed buffers, but I don't think the rope itself is actually adding much from a performance standpoint, even for very large files.
We talk a lot about big-O complexity for things like this, but modern machines are scarily good at copying around enormous linear buffers of data. Shifting hundreds of megabytes of text might not even show up in your benchmark profiles, if done right.
When benchmarking, I discovered that the `to_pos`/`to_coord` functions, which translate between buffer byte positions and screen coordinates, were by far the heaviest operation. I could have solved that problem entirely by maintaining a sorted list of line offsets and binary-searching through it.
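For the curious, that fix is tiny. A sketch (not the actual zte code; `line_starts`/`to_coord` are names I've made up here) of the line-offset approach in Rust:

```rust
// Build the byte offset of the start of each line; recompute (or
// patch incrementally) whenever the buffer is edited.
fn line_starts(text: &str) -> Vec<usize> {
    std::iter::once(0)
        .chain(
            text.bytes()
                .enumerate()
                .filter(|&(_, b)| b == b'\n')
                .map(|(i, _)| i + 1),
        )
        .collect()
}

// Byte position -> (line, byte column), via binary search: O(log n)
// per lookup instead of rescanning the buffer from the top.
fn to_coord(line_starts: &[usize], pos: usize) -> (usize, usize) {
    let line = match line_starts.binary_search(&pos) {
        Ok(line) => line,      // pos is exactly at a line start
        Err(next) => next - 1, // pos falls inside the previous line
    };
    (line, pos - line_starts[line])
}

fn main() {
    let starts = line_starts("hello\nworld\n");
    assert_eq!(starts, vec![0, 6, 12]);
    assert_eq!(to_coord(&starts, 8), (1, 2)); // the 'r' in "world"
    println!("ok");
}
```

The reverse direction (`to_pos`) is just an index into the same vector plus a column offset, so one cached `Vec<usize>` serves both translations.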
Unmentioned in the post, but I have since switched to a third-party rope library (crop, not ropey). At some point I'd like to implement one myself, but for now this does the job.
That is certainly true! If your target is end users, use the off-the-shelf solution that has been inspected by many eyeballs. The best part of building tools for yourself or a small community is that you only need to cover the relatively tiny subset of functionality that you actually use.
- Software is simpler than you think when you boil it down. There's a massive incentive to over-sell the complexity of the problem a solution is trying to solve, to pull in users. This is true both for proprietary products and, to a lesser degree, FOSS. You can probably replace most of the tools you use day-to-day in a weekend or two - provided you keep practising the art of just building stuff. I'm not saying that you should, but it's worth keeping in the back of your head if a tool is driving you mad.
- You can achieve 80% of the functionality with 20% of the work required to build an off-the-shelf solution. In a surprising number of cases, you can do the same with 20% of the integration cost of an off-the-shelf solution. A lot of software is - to put it quite bluntly - shit (I include a lot of my own libraries in this list!). There are probably only a few hundred really valuable reusable software components out there.
- Aggressively chase simplicity and avoid modularity if you want to actually achieve anything. The absolute best way to never get anything useful out of a project is to start off by splitting it into a dozen components/crates/repositories. You will waste 75% of your time babysitting the interfaces between the components rather than making the thing work.
- Delete code, often. If you look at the repo activity (https://git.jsbarretto.com/zesterer/zte/activity/code-freque...) you'll see that I'm deleting code almost as much as I'm adding it, especially now that I've got the core nailed down. This is not wasted effort: your first whack at solving a problem is usually filled with blunders, so favour throwaway code that's small enough to keep in your head when the time comes to bin it and make it better.
- It is absolutely critical that you understand the fundamental mode of operation of the code you've already written if you want to maintain development velocity. As Peter Naur said, programming is theory-building and the most important aspect of a program is the ineffable model of it you hold in your head. Every other effort must be in deference to maintaining the mental model.
Just wanted to thank you for sharing your thoughts here and on your website. The article about making your own text editor, the one about how "toy software" is a joy, another about language models, and this comment... I've been programming since I was a child, and have gone through ups and downs in the industry as well as personally in how I relate to computing - in the context of that experience, I've appreciated your insight. I often find myself nodding in agreement, glad to see the ideas articulated well.
If notation is a tool of thought, and programming is theory-building, the way you're communicating your experience in words is a kind of knowledge transfer to an audience of indefinite scale, a public service that contributes to collective understanding.
Frankly, I spend a lot of time feeling similarly uncomfortable about my relationship with computers and the industry at large. I think, perhaps surprisingly, I'd call myself a 'technophobe' for this reason.
I think there's a parallel universe out there in which the arc of technology bends toward a future I actually want to live in, but I'm fairly sure we aren't in that universe today. But perhaps if we talk more about how to use the darned things in a manner that enhances the human experience rather than detracts, we can get closer to it.
Yeah, it's a pretty blatant cult masquerading as a consensus: they're all singing from the same hymn sheet in lieu of any actual evidence to support their claims. A lot of it is heavily quasi-religious and falls apart under examination from external perspectives.
We're gonna die, but it's not going to be AI that does it: it'll be the oceans boiling and C3 carbon fixation flatlining.