Hacker News | bobajeff's comments

I still use vscodium because it has some features that the Theia IDE lacks (namely inserting links and images into markdown through drag and drop *).

I like the idea of Theia as a platform. Maybe it would be worth it for me to make a Theia based IDE and implement the drag and drop markdown links feature myself? I am worried about the lack of use of the platform by projects though.

Edit: Actually, it does seem to be used by other projects, though not very well known ones. So it might not be that bad of an idea to use as the basis for mine.

Edit 2: Then again some language servers and debug adapters don't work fully with Theia **.

* https://code.visualstudio.com/Docs/languages/markdown#_inser...

** https://discourse.julialang.org/t/compatibility-between-thei...
** https://github.com/eclipse-theia/theia/issues/8472


I wonder what psychological effect having little or no privacy would have on people. Are we all going to be paranoid schizophrenics? How would a world of paranoid schizophrenics work? How insane are world events going to be from that point on?

China is an example of this: somewhere that, according to the UN's data, executed "undesirable" people with such gusto that it incidentally drove organ-transplant waitlist times so low that they couldn't be explained by any other factor.

"Perfect" security is only attainable with zero dissent, zero individuality, zero privacy, and zero freedom.


"Involuntary organ harvesting[3][4][5] was once legal on criminals, but outlawed in 2015"

https://en.wikipedia.org/wiki/Organ_transplantation_in_China


You think you have privacy?

At best, you go back and forth between no privacy and a heavily conditioned privacy. At best.

Let’s take privacy back, but that’s a big process.

If you haven’t internalized surveillance, start working on it!



> Are we all going to be paranoid schizophrenics?

Paranoid, maybe. Schizophrenics? No. Firstly, "paranoid schizophrenia" is an outdated diagnosis. Paranoia is a common symptom of schizophrenia, but schizophrenics exhibiting paranoia are not considered to have a separate mental illness from those who are not. Secondly, schizophrenia is not caused simply by psychological stress, and is associated with a large cluster of positive and negative symptoms, with paranoia being only one of them.


About garbage collection:

Are there a lot of Unity/Godot devs unaware that their engines are using GC? I would assume they'd have accepted the cost of GC already.

Unreal devs I can understand having an issue with it though.


GDScript in Godot doesn't use GC, it uses reference counting and doesn't "stop the world".

Other languages that bind into the engine do this too (C++, SwiftGodot, Rust-Godot).

C# obviously does; Miguel de Icaza actually started SwiftGodot because he (ironically) ended up hating GC pauses after promoting C# for so long.


Go does surprisingly well at keeping GC freezes to a minimum in a way that you're unlikely to notice... C# has gotten a lot better since the core split as well. That said, a lot comes down to how a developer creates a game.

I was added late to a project working on a training simulation engine, similar to games, where each avatar in the game was a separate thread... man, the GC pauses on the server would sometimes freeze for literally 10-15s, and it was not good at all. I refactored it to use an event-loop model and only 2 other threads, which ran much better overall. Even though it wasn't strictly a game itself, the techniques still matter. Funny how running through a list of a few hundred things is significantly better than a few hundred threads each with their own timers, etc.
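The refactor described above can be sketched in a few lines. This is an illustrative toy, not the original project's code: the `Avatar` class, tick cadence, and intervals are all invented for the example.

```python
# Illustrative sketch of the event-loop refactor: one loop walks a plain
# list of avatars each tick and fires whichever ones are due, instead of
# giving every avatar its own thread and timer. Times are integer
# milliseconds to keep the bookkeeping exact.
class Avatar:
    def __init__(self, interval_ms):
        self.interval_ms = interval_ms  # how often this avatar updates
        self.next_due = 0
        self.updates = 0

    def update(self, now):
        self.updates += 1
        self.next_due = now + self.interval_ms

def tick(avatars, now):
    # One pass over a few hundred list entries is cheap and allocation-free;
    # a few hundred threads with their own timers is not.
    for avatar in avatars:
        if now >= avatar.next_due:
            avatar.update(now)

avatars = [Avatar(100), Avatar(200), Avatar(300)]
for now in range(0, 500, 100):  # five 100 ms ticks of the event loop
    tick(avatars, now)
print([a.updates for a in avatars])  # [5, 3, 2]
```

The scheduling logic stays identical; only the concurrency model changes, which is why the GC (and the scheduler) has so much less to fight with.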


> C# has gotten a lot better since the core split as well.

It has improved but the majority of games using C# are using Unity which does not use .NET (Core). It uses Mono or IL2CPP specifically with the Boehm GC so it performs significantly worse than .NET and even standalone Mono (SGen GC).


> Funny how running through a list of a few hundred things is significantly better than a few hundred threads each with their own timers, etc.

State machines are not in fashion. Exposed event loops are not in fashion. Most frameworks do their damnedest to hide those components.

As for GC freezes, if you're doing a game like project you can always just allocate a few huge arrays at startup and reuse those with no deallocation/allocation in most garbage collected environments.
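A minimal sketch of that "preallocate and reuse" pattern, in Python as a stand-in for whatever GC'd language the game actually uses; the `BufferPool` class and sizes are invented for illustration:

```python
# Sketch of startup preallocation: a fixed pool of buffers is allocated
# once, then checked out and recycled each frame, so the steady state
# performs no new allocations for the collector to chase.
class BufferPool:
    def __init__(self, count, size):
        self._free = [bytearray(size) for _ in range(count)]  # allocated once

    def acquire(self):
        return self._free.pop()  # raises IndexError if the pool is exhausted

    def release(self, buf):
        # Zero the buffer and return it to the pool instead of leaving
        # garbage behind for the collector.
        for i in range(len(buf)):
            buf[i] = 0
        self._free.append(buf)

pool = BufferPool(count=4, size=1024)
buf = pool.acquire()
buf[0] = 255       # use the buffer for this frame's work
pool.release(buf)  # recycle rather than allocate again next frame
```

The same idea appears in game engines as object pooling; the GC still runs, but has essentially nothing to do during gameplay.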


Reference counting is a GC algorithm from CS point of view, as looking into any worthwhile reference will show.

It's not what people mean when they say GC though, especially in reference to games, where you care about your peak frame time more than about your average frame time.

Reference counting can also have very bursty performance. Consider what happens when you decrement the last reference to an object which is the sole remaining reference to an entire large tree of other objects. This will trigger a whole cascade of subsequent decrements and deallocations, which can be arbitrarily large.

Of course, you might say, "Well, sure, but your reference counting implementation doesn't need to eagerly deallocate on the final decrement." That's true! You can write a ref counter that defers some of those deallocations or amortizes them across multiple operations.

And when you do that, now you really do have a garbage collector.

See: https://web.eecs.umich.edu/~weimerw/2008-415/reading/bacon-g...
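A toy sketch of that deferral idea (this is not the linked paper's algorithm, and `Node`, `dec_ref`, and `drain` are invented names): dead objects go on a queue instead of being freed recursively, and the queue is drained a bounded number of steps per "frame".

```python
# Amortized reference-count teardown: when the last reference to an object
# drops, the object is queued rather than freed immediately, so a large
# tree is dismantled across several frames instead of in one burst.
class Node:
    def __init__(self, children=()):
        self.refcount = 1
        self.children = list(children)

pending = []  # objects whose refcount hit zero, awaiting teardown

def dec_ref(node):
    node.refcount -= 1
    if node.refcount == 0:
        pending.append(node)  # defer the cascade instead of recursing now

def drain(budget):
    """Free at most `budget` objects; children are decremented lazily."""
    freed = 0
    while pending and freed < budget:
        node = pending.pop()
        for child in node.children:
            dec_ref(child)  # may enqueue more work for a later frame
        node.children.clear()
        freed += 1
    return freed

chain = Node()
for _ in range(4):
    chain = Node([chain])  # a 5-node chain with one external reference
dec_ref(chain)             # drop the last reference to the head
print(drain(budget=2))     # frees 2 objects this frame: 2
print(drain(budget=100))   # the remaining 3 on a later frame: 3
```

Which is exactly the point above: once you bound and schedule the work like this, you have built a (small) garbage collector.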


People should learn their subjects properly, not from street knowledge.

You should watch some of the more recent Gamers Nexus videos... the average frame pacing counts for a lot, and they're making a concerted effort to show this, as it does represent the level of "jank" in games very well.

Got a link? I can't work out which ones you're referring to.

most recently https://www.youtube.com/watch?v=qDnXe6N8h_c on why FPS is flawed specifically for GPU benchmarking

most specifically, an ongoing attempt to understand and debunk frame generation (DLSS, etc.) as a performance gain, since it introduces latency despite high FPS: https://www.youtube.com/watch?v=Nh1FHR9fkJk, https://www.youtube.com/watch?v=GDvfIbRIb3U

More broadly than frame pacing, https://www.youtube.com/watch?v=Fj-wZ_KGcsg is a recent example of one of _many_ interviews going back years on why frame times and frame rates are both flawed for explaining why some games feel smoother or lag more than others (there are GN videos dating back to 2016 on the subject)


I haven't dug deep enough into C# to say this with certainty, but I believe later C# versions allow you to do enough manual allocation to "almost" get around the garbage collector, along with new calls to nudge the GC away from hot paths.

You need to be very disciplined to pull this off, though. LINQ is basically off limits, for example. And of course, Godot's C# is likely much older than these modern techniques to begin with.


Godot's C# is fairly recent, C#12/.NET 8.

Yes, as long as you're not using Godot 3.x. Some still use 3.x (Mono) because 4.x (.NET) does not support web exports.

That's good to know. So it probably has the capability if you really wanted to dig in.

But that effort on an active engine would take quite a long time to comb through. It really comes down to whether a highly invested contributor wants to push it through and gets the go-ahead.


Unreal devs have the Unreal C++ dialect with GC, Blueprints and soon Verse to worry about.

The times of pure manual memory management game engines, across all the layers required to draw a frame, are long gone.

Naturally someone is going to point out some game engine using compiled C dynamic libraries for scripting, those are the exception.


> The times of pure manual memory management game engines, across all the layers required to draw a frame, are long gone.

That's what makes me curious about Rust engines like Bevy. Could it truly pull it off and bring that kind of thinking back to game development? It's not "pure manual memory management", but the mindset of Rust requires that kind of thinking.

It will definitely be niche for some time to come, since most (non-AAA) games simply aren't going to worry about performance. But it might carve out a solid community for those concerned with optimization.


Thing is, FPS doesn't make games fun; what makes games fun is great design, delivered in a way that overall performance doesn't hinder the experience.

That is why games like Minecraft, Roblox, Celeste, and Balatro make it big. None of them would have happened if the creators had followed the advice that 100% C coding (or C++ for that matter) was the only way, and yet their design is what made them shine.


You're not wrong. But consider a different lens:

Celeste isn't a game that would need to worry about performance in 2018. It's 2d sprites with box collisions and relatively minimal particle effects. Your toaster can run Celeste.

But a game like Factorio, with heavy simulations and complex interactions and pathing, absolutely needs to consider performance to pull off a seamless experience.

Those are the kinds of games I'd hope engines like Bevy could enable farther down the line. Design is still key, but some game types are a larger technical challenge than others.


The issue isn't game devs, it's non-game devs backseat programming.

If you spend a week in these engines you're well aware of the garbage collector.


In my experience, when using Unity, I became acutely aware of creating garbage and how to avoid it. I used to see a lot of object pooling which is basically a makeshift arena allocator.

Can you explain? AFAIK Godot uses C++ under the hood which does not have garbage collection. Other languages such as C# and GDScript use bindings.

Most people using Godot will be using GDScript or C# to make their games.

Funnily enough whilst trying to Google gdscript and godot, I found this post I wrote in 2018 (subcomments mention gdscript and gc).

https://news.ycombinator.com/item?id=16673751


I've been keeping an eye on Mojo/Max as it's one of the only things out there that doesn't depend on the CUDA library.

Blockers for me right now are:

1) still closed source

2) it doesn't support my low-end hardware (just because I don't have a 5090 or whatever doesn't mean I shouldn't be able to do GPU compute things)

I'm guessing if those barriers are ever removed it'll probably be years from now. But hopefully it'll inspire other languages/ecosystems to take on their own CUDA replacement.


1. It'll become open-source with 1.0, as the article mentions.

2. Just requires someone (or you) to write a kernel for your GPU, which is done in Mojo itself. I'd double check the supported GPUs or if someone else has already done it.


I understand. Until it is open source it's still a blocker for me though.

I watched a community video for the roadmap and it sounds like hardware is not the focus until sometime after 1.0 release. So I think I can assume it'll be a while (if ever) before I can even think about using it.


I hope these really do make a dent in Nvidia's market control. It sucks that it's come to rooting for Broadcom and Google. I hope at least other companies can license the TPU architecture and supply chips.


I love those demo games!


I'm curious as someone who's thinking of making a blender plug-in that will need to use some native-ish (not C++ though) libraries/modules for performance. What are the issues with using a Python interface instead of a dedicated C++ SDK?


The Python API is limited by Python itself. You're restricted to a GIL environment, so your ability to maximize throughput and reduce latency will be limited. For small/average scenes this may not matter for your addon, but larger scenes will suffer. There are a few popular options for developing Blender functionality:

1. Extend Blender itself. This will net you the maximum performance, but you essentially need to maintain your own custom fork of Blender. Generally not recommended outside of large pipeline environments with dedicated support engineers.

2. Native Python addon. This is what 99% of addons are, just accessing scene data via Blender's Python interface. Drawbacks mentioned above, though there are some helper utilities to batch process information to regain some performance.

3. Hybrid Python Addon. You use the Python API as a glue layer to pass information between Blender and a natively compiled library via Python's C Extension API. With the exception of extracting scene data info, this will give you back the compute performance and host resource scalability you'd get from building on Blender directly. Being able to escape the GIL opens a lot of doors for parallel computation.
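A minimal sketch of the hybrid pattern in option 3. Note the assumptions: `FakeMesh` stands in for Blender's mesh data and `native_centroid` for a compiled extension function; in a real addon you'd fill the flat buffer with bpy's batched `foreach_get` and the heavy loop would run in a C extension that releases the GIL.

```python
# Glue-layer pattern: extract scene data in one batched copy into a flat
# buffer, then hand that single buffer across the native boundary, instead
# of making one Python API call per vertex.
from array import array

class FakeMesh:
    """Stand-in for bpy mesh data: N vertices, 3 floats each."""
    def __init__(self, coords):
        self._coords = coords

    def foreach_get(self, buf):
        # Mirrors bpy's batched accessor: one call fills a flat float
        # buffer, avoiding a per-vertex Python loop across the API boundary.
        buf[:] = array("d", self._coords)

def native_centroid(flat, n):
    # Placeholder for the natively compiled side; only the flat buffer
    # crosses the boundary, so the expensive loop would run outside Python.
    return tuple(sum(flat[i::3]) / n for i in range(3))

mesh = FakeMesh([0.0, 0.0, 0.0, 2.0, 4.0, 6.0])
buf = array("d", [0.0] * 6)
mesh.foreach_get(buf)          # one batched copy out of the "scene"
print(native_centroid(buf, 2))  # (1.0, 2.0, 3.0)
```

The design point is that the Python side only marshals data; everything per-element happens in native code, which is where the GIL escape and parallelism pay off.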


I'm always on the lookout for an alternative to vscodium. This one is surprisingly feature complete for something meant to embed in 3D renderers. I'd like to see what people do with it.


Have you tried zed? I've tried many alternatives and zed is a really cool project.


I have recently tried it again and it seems to have gotten worse than what I remembered. The interface was really hard on my eyes. I spent a little time trying to tweak the theme, but I think the problem was the font rendering rather than the theme. I also noticed some weird layout bugs in the tab buttons. So not as smooth as I previously remembered.

I do appreciate that they have the option to disable the big sign-in button; however, there is other account-related UI I still can't disable.

The main feature that keeps me from moving to any other editor is the vscode/vscodium feature of clicking and dragging files to insert links and embed images in markdown files. Weird that no other editor has this.


It's good to see more effort toward making things not device-specific, but I only see benchmarks for NVIDIA B200 and AMD MI350X. Also, what's the experience of using one of these Python DSLs like? Are the tools good enough to make code completion, jump to definition, setting breakpoints, watching variables, copying as expression, etc. nice?


Generally you are unlikely to get Python-level debugging for code that is going to run on GPUs.


That's Mojo's selling point.

https://www.modular.com/mojo


I use this all the time when editing markdown in vscodium. It's fast enough for the side preview and supports all the LaTeX commands I need so far. When I need a PDF Pandoc handles the conversion well enough for me. I've tried using Quarto's preview but it's so slow in comparison.

