I think it's a generic programming problem: pointers are easier because the type of the pointee is easy to get (a deref) and so is its location (memory). With index-based handles into containers, you can no longer say that given a handle `H` (type H = u32) I can get a value of type `T`. Worse, you've also introduced the notion of "where": even if for each type `T` there exists a unique handle type `H`, you don't know which container instance a given handle belongs to. What you need is a unique handle type per container instance, so that "Handle of Pool<T>" != "Handle of Pool<T>" unless both Pools are bound to the same variable.
As far as I know no language allows expressing that kind of thing.
But from what I understand (I'm no expert on Scala), this scheme actually causes a lot of problems. I think I've even heard that it adds more undecidability to the type system? So I'm exploring ways of managing context that don't depend on inferring backward from the type.
> What you need is a unique handle type per container instance.
You can do this with path-dependent types in Scala, or more verbosely with modules in OCaml. The hard part is keeping the container name in scope wherever these handle types are used: many type definitions will need to reference the container handle types. I'm currently trying to structure code this way in my pet compiler written in OCaml.
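For what it's worth, here is a minimal Scala sketch of the path-dependent-type approach (the `Pool` and `Handle` names are hypothetical, chosen to match the discussion above): each `Pool` instance carries its own `Handle` type, so a handle minted by one pool does not type-check against another.

```scala
class Pool[T] {
  // Path-dependent type: for a value p, the type p.Handle is
  // distinct from q.Handle for any other Pool instance q.
  final case class Handle private[Pool] (index: Int)

  private val items = scala.collection.mutable.ArrayBuffer.empty[T]

  def add(x: T): Handle = { items += x; Handle(items.size - 1) }
  def get(h: Handle): T = items(h.index)
}

object Demo {
  val p1 = new Pool[String]
  val p2 = new Pool[String]

  val h: p1.Handle = p1.add("hello")
  val ok: String   = p1.get(h)
  // p2.get(h)  // does not compile: found p1.Handle, required p2.Handle
}
```

The trade-off mentioned above shows up immediately: any type that wants to store a `p1.Handle` has to be able to name `p1`, which is why keeping the container in scope becomes the hard part.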
Off topic, but Notion is a perfect example of how badly you can abuse web standards. This webpage, which is a document with some markup and links (the very thing the web was made for) takes ~600MB RAM, about 10 seconds load, and lags terribly. Just unusable.
Wow, it is horrible. I clicked the link to load it, tabbed back to this comment page and read this comment, then went back to see how it was doing and got a 99% blank page. I scrolled for a solid 10 seconds and, just as I was about to come back and say the page was broken for me, it popped up a proper scroll bar for a window about 1/3 of my browser size. I scrolled through about 5-6 pages' worth of that, which still looked broken; then the window finally resized and images started popping in, but it still took another 7 seconds or so for those to load an actual image instead of just a placeholder icon, while everything shifted around like mad.
Whenever I use Notion I can feel the PM working there being pushed to ship a new feature this quarter; you can almost hear the engineers asking “why are we building this?”
A few years back Notion was excellent modulo a few small UX things that could be improved.
Now those small things still haven’t been improved but there’s way more clutter worsening the UX notably over time.
Notion sites aren’t my favorite, and this website has some annoying quirks (like scrolling to the top after fully loading).
But if this is what it takes for someone to generously share so much information with us for free then I really don’t care if I have to wait a couple extra seconds for a page load or if a tab takes up 600MB of RAM. I know this thinking makes the web purists angry, but the majority of people who visit these sites to learn aren’t going to be impeded or even bothered. Even on my older iPhone on non-5G cellular it loads in a couple of seconds.
If MIT were responsible, sure! But Notion is a $10b company that shouldn't be shitting up my device's free memory just to show a basic webpage. Very much the same deal with FB marketplace which is probably the worst offender.
It's more than a few seconds. On a desktop with a stable fast connection, it takes up to 1GB of RAM and ~17 seconds to finish loading, including around 2.5 seconds of processing time.
On the network side it makes ~650 requests.
That's an exceptionally resource hungry way to load the content.
This varies more than I'd guess by machine. On my Comcast Wifi, it takes 37 seconds to load on my tablet but only 4 seconds to load on my work M3 Macbook Pro with 64 GB of RAM. Maybe Notion developers are issued the latter.
It was such a breath of fresh air in the beginning, when it was simple, elegant, and focused. Shame they had to cram it to the gills with half-considered cruft.
That’s Notion in a nutshell really. Nice UX when it works, but no attention to quality. I’ve lost count of the number of big, obvious bugs I’ve tripped over that they seemingly have no interest in fixing.
Even the app itself is getting slower. I loved it, and I've been using it for about three years, mostly for taking D&D notes - and it's getting slower and slower on my laptop (though it's still quite good on the phone.)
I have a slightly old iPhone and boy, the Notion app is crazy laggy. You are right: I saw a video where the team spent literally everything on UI design, and even that video had comments saying they should have invested in an efficient backend.
Not to disagree with you, but even C++ is going to great lengths to improve compile times through C++20 modules and C++23 standard library modules (import std;). Although no compiler fully supports both, you can get an idea of how they can improve compile times with clang and libc++:
$ # No modules
$ clang++ -std=c++23 -stdlib=libc++ a.cpp # 4.8s
$ # With modules
$ clang++ -std=c++23 -stdlib=libc++ --precompile -o std.pcm /path/to/libc++/v1/std.cppm # 4.6s but this is done once
$ clang++ -std=c++23 -stdlib=libc++ -fmodule-file=std=std.pcm b.cpp # 1.5s
a.cpp and b.cpp are equivalent, but b.cpp does `import std;` while a.cpp #includes every standard C++ header file (the same set as import std; you can find them listed in libc++'s std.cppm).
Notice that this is an extreme example, since we're importing the whole standard library, which is actually discouraged [^1]. Instead, you can get through the day with just these flags: `-stdlib=libc++ -fimplicit-modules -fimplicit-module-maps` (plus -std=c++20 or later). No extra files or commands required, but you're restricted to `import <vector>;` and the like; no `import std`.
[^1]: Non-standard headers like `bits/stdc++.h`, which do the same thing (#including the whole standard library), are what is actually discouraged, because (a) they're non-standard and (b) compile times. But I can see `import std` solving both of those and being encouraged once it's widely available!
Makes you wonder how many webpages are dependent on such services. The Web has always been brittle, but it's a little sad seeing a website unable to survive ~50k users on its first day online.
Even one of the least offensive hosts, GitHub Pages, has broken links before [0].
>>> import json, sys
>>> s = "1" + "0"*4300
>>> json.loads(s)
...
ValueError: Exceeds the limit (4300 digits) for integer string conversion:
value has 4301 digits; use sys.set_int_max_str_digits() to increase the limit
>>> sys.set_int_max_str_digits(5000)
>>> json.loads(s) == 10**4300
True
This was done to prevent DoS attacks 3 years ago and has been backported to at least CPython 3.9, as it was considered a CVE.
It has kind of been made into a movie! The Heptapods [0] in Arrival (2016): their written script is a circular shape, with each subsection of the shape conveying a different meaning and the whole ultimately representing a concept or thought. A quote from the movie:
> Like their ship or their bodies, their written language has no forward or backward direction. Linguists call this "nonlinear orthography", which raises the question: Is this how they think?
The movie explores other philosophical questions as well, and does a quite beautiful job of it; actual linguistics experts helped make it, and it has been praised for its accuracy. I suggest you give it a go.
I'm not quite sure why a non-technical person would be engaging in a technical matter such as compiling LLVM; they say they are involved with some Arch Linux derivative, but the question persists.
I disagree, many businesses that put their software in maintenance mode (fix/upgrade on breakage) will be losing money in the long run.
Consider a hospital: many statistics can be collected to provide insights and drive immediate decisions; faster algorithms, and new ones for problems we couldn't solve before, keep being discovered; the UI/UX can always be improved for productivity; etc. All of that makes money.
Software customers aren't, and shouldn't be, one-time shoppers; there's always room for improvement and new needs pop up all the time.
None of that really makes money for a hospital. Most of what hospitals do is direct, hands-on patient care. Software improvements can at best deliver some small cost savings or slight reductions in clinical errors. And many hospitals are non-profit or government run, so there's not even a direct management incentive to improve financial performance.
Do you know what makes money for a hospital? Buying a new MRI machine. They can directly charge customers for scans. In budget planning cycles any proposed IT upgrades have to compete against stuff like that.