> It never made sense to me how such a ruthless and inhumane culture is sustainable in the long run.
It’s pretty simple, actually. Once such a dominant market position is achieved, you can get away with almost anything, whether with customers or employees. This is true of all the BigTech companies.
I think there's more to it. When you're dominant, you make money whatever you do. Think of Amazon et al. as huge spigots of money. Now it becomes optimal to fight for more of that money coming your way. It's like the resource curse for countries: nobody gains from growing the pie; they gain from stealing the pie. At some point, parasites and parasitic behaviours invade.
Really? Premature stabilization is a much more recognisable problem. This is after all quietly why there's so much Rust adoption. Safety is nice, but perf is $$$.
C++ has had this unhealthy commitment to stabilization, and it means that its "Don't pay for what you don't use" doctrine has always had so many asterisks it's practically Zalgo text, or like you're reading an advert for prescription medicine. You're paying for that stabilization, everywhere, all the time, and too bad if you didn't need it.
Unwinding stabilizations you regret is a lot more work even in Rust. Consider the Range types: landing improved Range types is considerable work, and then they'll need an Edition to change which types you get from syntax sugar like 1..=10.
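To make that concrete, here is a minimal sketch of why the sugar is pinned down: the concrete type 1..=10 produces today is observable and stable, so changing it is an edition-level affair (any "improved" replacement type is hypothetical here).

```rust
fn main() {
    // Today the sugar 1..=10 produces a concrete std::ops::RangeInclusive<i32>.
    let r: std::ops::RangeInclusive<i32> = 1..=10;
    assert!(r.contains(&10));
    assert_eq!(r.clone().count(), 10);
    // Code like the annotation above names the concrete type directly, so
    // swapping in an improved range type silently would break it; hence the
    // need for an Edition boundary to change what the sugar means.
}
```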
Or an example which happened a long time ago: the associated constants. There are whole stdlib sub-modules in Rust which exist either primarily or entirely as a place for constants of a type. std::f32::MAX is just f32::MAX, but the associated constant f32::MAX only finally stabilized in 1.43, so older code uses std::f32::MAX. Nevertheless, if you learned to write std::f32::MAX, you may only finally get deprecation messages years from now, and they're stuck with keeping the alias forever, of course, because it's stable.
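A small illustration of the two spellings coexisting (both compile today; newer toolchains may warn about the module-level path):

```rust
fn main() {
    // Older code spells it via the module-level constant; newer toolchains
    // may emit a deprecation warning for this path, but it still compiles.
    let old_style = std::f32::MAX;

    // Since Rust 1.43 the associated constant lives directly on the type.
    let new_style = f32::MAX;

    assert_eq!(old_style, new_style); // same value, two stable spellings
}
```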
This is probably my favorite programming language; I would like to use it if it had more backing. Their reference capabilities in particular seem like a very good match for the actor model. Alas, it does not appear to have a large corporation behind it pushing it forward, nor a compelling niche use case (e.g. it is still GC'd).
The guy behind the language - Sylvan Clebsch - seems to have a very solid background and current professional situation. He works at Microsoft Research Cambridge in the Programming Language Principles group.
My point is - sure, it doesn't have a handful of massive companies stewarding it like Rust. But, on the other hand, it's made by a guy with really serious chops, who has a solid programming language related job. So while not being as industry-sanctified as Rust, or Java, it seems nonetheless like a language that could go places!
This is, alas, the chicken-and-egg scenario, and the most common reason I hear for people not wanting to invest the time in pony.
The vast majority of people I discuss it with understand the value and the problems it is designed to solve, but they either don't have domain problems that require pony's approach more than any other language, or the lack of supporting libraries makes them nervous about adoption.
As a pony developer for 5+ years, it can be frustrating - but I do understand.
I'm not sure I understand what you mean by "real-time safe GC algorithms", but being a "real-time system" is not one of pony's design goals.
This pony paper here describes the pony GC algorithms and compares their performance under various scenarios to other language GCs.
Real-time safe GC algorithms and data structures provide strong guarantees that any interruptions to the running program caused by GC activity will have a bounded duration, with strong guarantees on the upper bound of any pause. This is important in latency sensitive software and real-time software that needs to meet deadlines.
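To make "bounded pause" concrete, here is a minimal, hypothetical sketch (in Rust, and unrelated to ORCA's actual design) of an incremental mark step that caps how long it can keep the mutator waiting:

```rust
use std::time::{Duration, Instant};

// Hypothetical incremental mark phase: each call does at most `budget`
// worth of tracing, so the mutator is never paused for much longer than
// that budget (plus the cost of one trace step).
struct IncrementalGc {
    work_queue: Vec<usize>, // object ids still to trace (illustrative only)
}

impl IncrementalGc {
    fn step(&mut self, budget: Duration) {
        let start = Instant::now();
        while let Some(obj) = self.work_queue.pop() {
            self.trace(obj);
            if start.elapsed() >= budget {
                break; // yield back to the mutator; resume on the next step
            }
        }
    }

    fn trace(&mut self, _obj: usize) {
        // mark the object and enqueue its children (omitted)
    }
}

fn main() {
    let mut gc = IncrementalGc { work_queue: (0..1000).collect() };
    // A GC slice that returns control to the mutator within roughly 1 ms.
    gc.step(Duration::from_millis(1));
}
```

A real real-time collector has to make this guarantee rigorous (including allocation and write-barrier costs), but the core idea is the same: GC work is chopped into slices with a hard upper bound on each pause.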
Thanks for the ORCA link. I'll have to study it more closely but from Fig 17 it looks to have quite unpredictable jitter up to at least 20ms. Which is obviously fine for many things but not useful for other things (e.g. processing AVB streaming audio packets at line rate every 125 us).
Yes, drastically. This means I'll have to wear Zuck's glasses I think, because the AI currently doesn't know what was discussed at the coffee machine or what management is planning to do with new features. It's like a speed typing goblin living in an isolated basement, always out of the loop.
I have been using it for coding for some time, but I don't think I'm getting much value out of it. It's useful for some boilerplate generation, but for more complex stuff I find that it's more tedious to explain to the AI what I'm trying to do. The issue, I think, is lack of big picture context in a large codebase. It's not useless, but I wouldn't trade it for say access to StackOverflow.
My non-technical friends are essentially using ChatGPT as a search engine. They like the interface, but in the end it's used to find information. I personally still just use a search engine, and I almost always go straight to Wikipedia, where I think the real value is. Wikipedia has added much more value to the world than AI, but you don't see it reflected in stock market valuations.
My conclusion is that the technology is currently very overhyped, but I'm also excited for where the general AI space may go in the medium term. For chat bots (including voice) in particular, I think it could already offer some very clear improvements.
That might be a good thing after all, at least in a certain sense. Stack Overflow has been dying for the last ten years or so. In the first years there were a lot of good questions that were interesting to answer, but that changed with popularity and it became an endless sea of low-effort do-my-homework duplicates that were not interesting to answer and annoying to moderate. If that traffic now gets handled by large language models, it could maybe become similar to the beginning again: only the questions that are not easily answerable by looking into the documentation or asking a chat bot will end up on Stack Overflow, and it could be fun again to answer questions there. On the other hand, if nobody looks things up on Stack Overflow, it will be hard to sustain the business, maybe even when downscaled accordingly.
Is it really dying? There's only so much growth possible without having it flooded with low effort stuff. They try to instead grow by introducing more topics, but that's limited as well.
I personally didn't use it much (meaning: writing content) because it always felt a bit over-engineered. From what I remember, the only possible entry point is writing questions that get upvoted. You're not allowed to even write comments or vote, and of course not allowed to answer questions.
Maybe that's not correct but that has always been my impression.
In general Stack Exchange seems to be a great platform. I think "it's dying" has an unfortunate connotation. It's not like content just vanishes; it's just that the amount of new stuff is shrinking.
Maybe it has changed, but you could ask and answer questions right after signing up; only commenting, voting, and moderation rights required a certain amount of points. I only answered questions and was among the top 100 contributors at my peak activity, but that was at least a decade ago.
I stopped contributing when the question quality fell off a cliff. Existing contributors got annoyed by the low-effort questions, new users got annoyed because their questions got immediately closed, and it was no longer fun. There were a lot of discussions on meta about how to handle the situation, but I just left.
So admittedly things might have changed again, I do not really know much about the development in the last ten or so years.
To be honest, I used Stack Overflow less and less over the years. Not sure that was because I learned more. I just think most times I went there I was looking for snippets to save time with boilerplate. As better frameworks, tools, packages, and languages came along, I just had less need to go to Stack Overflow.
But yes, AI put the nail in its coffin. Sadly, ironically, AI trained off it. I mean AI quite literally stole from Stack Overflow and others and somehow got away with it.
That's why I don't really admire people like Sam Altman or the whack job doomer at Anthropic whatever his name is. They're crooks.
>I have been using it for coding for some time, but I don't think I'm getting much value out of it.
I find this perspective so hard to relate to. LLMs have completely changed my workflow; the majority of my coding has been replaced by writing a detailed textual description of the change I want, letting an LLM make the change and add tests, then just reviewing the code it wrote and fixing anything stupid it did. This saves so much time, especially since multiple such LLM tasks can be run simultaneously. But maybe it's because I'm not working on giant, monolithic code bases.
> I find this perspective so hard to relate to. LLMs have completely changed my workflow; the majority of my coding has been replaced by writing a detailed textual description of the change I want, letting an LLM make the change and add tests, then just reviewing the code it wrote and fixing anything stupid it did.
I use it in much the same way you describe, but I find that it doesn't save me that much time. It may save some brain processing power, but that is not something I typically need to save.
I extract more from the LLM by asking it to write code I find tedious to write (unit tests, glue code for APIs, scaffolding for new modules, that sort of thing). Recently I started asking it to review the code I write, suggest improvements, try to spot bugs and so on (which I also find useful).
Reviewing the code it writes to fix the inevitable mistakes and making adjustments takes time too, and it will always be a required step due to the nature of LLMs.
Running tasks simultaneously doesn't help much unless you are giving it instructions so general that they take a long time to execute - and the bottleneck will be your ability to review all the output anyway. I also find that the broader the scope of what I need it to do, the less precise it tends to be. I achieve the most success by being more granular in what I ask of it.
My take is that while LLMs are useful, they are massively overhyped, and the productivity gains are largely overstated.
Of course, you can also "vibe code" (what an awful term) and not inspect the output. I find that unacceptable in professional settings, where you are expected to release code with some minimum quality.
>Reviewing the code it writes to fix the inevitable mistakes and making adjustments takes time too, and it will always be a required step due to the nature of LLMs.
Yep but this is much less time than writing the code, compiling it, fixing compiler errors, writing tests, fixing the code, fixing the compilation, all that busy-work. LLMs make mistakes but with Gemini 2.5 Pro at least most of these are due to under-specification, and you get better at specification over time. It's like the LLM is a C compiler developer and you're writing the C spec; if you don't specify something clearly, it's undefined behaviour and there's no guarantee the LLM will implement it sensibly.
I'd go so far as to say if you're not seeing any significant increase in your productivity, you're using LLMs wrong.
> I'd go so far as to say if you're not seeing any significant increase in your productivity, you're using LLMs wrong.
It's always the easy cop-out for whoever wants to hype AI. You can preface it with "I'd go so far as to say", but that is just a silly cover for the actual meaning.
Properly reviewing code, if you are reviewing it meaningfully instead of just glancing through it, takes time. Writing good prompts that cover all the ground you need in terms of specificity, also takes time.
Are there gains in terms of speed? Yeah. Are they meaningful? Kind of.
GPs main point is that you (need to) learn to specify (and document) very, very well. That has always been a huge factor for productivity, but due to the fucked up nature of how software engineering is often approached on an organisational level we're collectively very much used to 'winging it': Documentation is an afterthought, specs are half baked and the true store of knowledge on the codebase is in the heads of several senior devs who need to hold the hands of other devs all the time.
If you do software engineering the way you learned you were supposed to do it long, long ago, the process actually works pretty well with LLMs.
> GPs main point is that you (need to) learn to specify (and document) very, very well. That has always been a huge factor for productivity
That's the thing, I do. At work we keep numerous diagrams, dashboards and design documents. It helps me and the other developers understand and have a good mental model of the system, but it does not help LLMs all that much. The LLMs won't understand our dashboards or diagrams. They could read the design documents, but that wouldn't stop them from making the mistakes they make when coding, and it definitely would not reduce my need to review the code they produce.
I said it before, and I'll say it again - I find it unacceptable in a professional setting to not properly review the code LLMs produce, because I have seen the sort of errors they make (and I have access to the latest Claude and Gemini models, which I understand to be the top models as of now).
Are they useful? Yeah. Do they speed me up? Sort of, especially when I have to write a lot of boring code (as mentioned before, glue code for APIs, unit tests, scaffolding for new modules, etc). Are the productivity gains massive? Not really, due to the nature of how it generates output, and mainly due to the fact that writing code is only part of my responsibilities, and frequently not the one that takes up most of my time.
Do you have any example prompts of the level of specificity and task difficulty you usually do? I oscillate between finding them useful and finding it annoying to get output that is actually good enough.
How many iterations does it normally take to get a feature correctly implemented? How much manual code cleanup do you do?
If you ever end up working on large complicated code bases you'll likely have an easier time relating to the sentiment. LLMs are vastly better at small greenfield coding than for working on large projects. I think 100% of the people I've heard rave about AI coding are using them for small isolated projects. Among people who work on large projects sentiment seems to range from mildly useful to providing negative value.
Fun test: ask ChatGPT to find where Wikipedia is wrong about a subject. It does not go well, proving that it is far less trustworthy than Wikipedia alone.
(Most AI will simply find where Twitter disagrees with Wikipedia and spout ridiculous conspiracy junk.)
It is overhyped for sure. This is among the biggest hype cycles we've seen yet. When it bursts, it'll be absolutely devastating. Make no mistake. Many companies will go out of business and many people will be affected.
However, it doesn't mean AI will go away. AI is really useful. It can do a lot, actually. Adoption is slow because it's somehow not the most intuitive thing to use. I think that may have a lot to do with tooling and human communication style - or the way we use it.
Once people learn how to use it, I think it'll just become ubiquitous. I don't see it taking anyone's job. The doomers who like to say that are people pushing their own agenda, trolling, or explaining away mass layoffs that were happening BEFORE AI. The layoffs are a result of losing a tax credit for R&D, over-hiring, and the economy. Forgetting the tax thing for a moment, is anyone really surprised that companies over-hired?? I mean come on. People BARELY do any work at all at large companies like Google, Apple, Amazon, etc. OK, that's not quite fair. Don't get me wrong, SOME people there do. They work their tails off and do great things. That's not all of the company's employees though. So what do you expect is going to happen? Eventually the company prunes. They go and mass hire again years later, see who works out, and then they prune again. This strategy is why hiring is broken. It's a horrible grind.
Sorry, back to AI adoption. AI is now seen by some caught in this grind as the "enemy." So that's another reason for slow adoption. A big one.
It does work though. I can see how it'll help and I think it's great. If you know how everything gets put together, then you can provide the instructions for it to work well. If you don't, then you're not going to get great results. Sorry, but you won't if you don't know how software is built, what good code looks like, AND how to "rub it the right way" - or, as people say, "prompt engineering."
I think for writing blog posts and getting info, it's easier. Though there are EXTREME dangers with it for other use cases. It can give incredibly dangerous medical advice. My wife is a psychiatrist and she's been keeping an eye on it, testing it, etc. To date, AI has done more to harm people than it has helped them in terms of mental health. It's also too inaccurate to use for mental health. So that field isn't adopting it so quickly, BUT they are trying and experimenting. It's just going to take some time, and rightfully so. They don't want to rush into using something that hasn't been tested and validated. That's an understaffed field though, so I'm sure they will love any productivity gain and help they can get.
All said, I don't know what "slow" means for adoption. It feels like it's progressing quickly.
I used to donate to Wikipedia, but it has been completely overrun by activists pushing their preferred narrative. I don't trust it any more.
I guess it had to happen at some point. If a site is used as ground truth by everyone while being open to contributions, it has to become a magnet and a battleground for groups trying to influence other people.
LLMs don't fix that of course. But at least they are not as much a single point of failure as a specific site can be.
> at least they are not as much a single point of failure
Yes, network effects and hyper scale produce perverse incentives. It sucks that Wikipedia can be gamed. That said, you'd need to be actively colluding with other contributors to maintain control.
Imagining that AI is somehow more neutral or resistant to influence is incredibly naive. Isn't it obvious that they can be "aligned" to favor the interests of whoever trains them?
>Imagining that AI is somehow more neutral or resistant to influence is incredibly naive
The point is well taken. I just feel that at this point in time the reliance on Wikipedia as a source of objective truth is disproportionate and increasingly undeserved.
As I said, I don't think AI is a panacea at all. But the way in which LLMs can be influenced is different. It's more like bias in Google search. But I'm not naive enough to believe that this couldn't turn into a huge problem eventually.
Not convinced that would be any better, but I did just have a go and got the response:
> I'm currently unable to fetch data directly from YouTube due to a tool issue. This means I can't access the transcript, timestamps, or metadata from the video at this time.
I thought about giving examples because I understand why people would ask for them, but I decided very deliberately not to give any. It would inevitably turn into a flame war about the politics/ethics of the specific examples and distract from the reasons why I no longer trust Wikipedia.
I understand that this is unsatisfactory, but the only way to "prove" that the motivations of the people contributing to Wikipedia have shifted would be to run a systematic study for which I have neither the time nor the skills nor indeed the motivation.
Perhaps I should say that I am a politically centrist person whose main interests are outside of politics.
Let me guess: you hold some crank views that aren't shared by the people who maintain Wikipedia, and you find that upsetting? That's not a conspiracy, it's just people not agreeing with you.
Not in a technical sense. What I mean is that Wikipedia is very widely used as an authoritative source of objective truth. Manipulating this single source regarding some subject would have an outsize influence on what is considered to be true.
Note that many companies are pretending to hire in order to look successful/growing. They might be willing to hire if some unicorn candidate comes along, but in practice the job ad is just marketing.
Saw that article a while back where even Glassdoor was admitting something like a third of their job posts were probably ghost jobs. I think that's hurting everyone though, all for the same reasons.
I think it's basically become de rigueur, standard HR advice at this point. The amount of fake job listings is off the charts, to the point where you should assume a listing is fake until proven otherwise (on this side of the pond anyway).
It is funny how 10 years ago, this situation would be perfect for some startup to disrupt the market but they just don't come anymore. Innovation is dead.
The problem is that now you have to package for N distros. And the people who run the distro may not want to spend time on it, so you have to do it yourself.
It doesn't have to be gated by "the people who run the distro". I started packaging a few pieces of software for a distro I use because I wanted to use that software, and I don't "run" the distros in any capacity. Package maintainers aren't born that way, they become that way by volunteering, just like most everything in Linux.
If you don't have even one user willing to do that for the distro they use, you probably weren't going to have users on that distro anyway.
> Package maintainers aren't born that way, they become that way by volunteering, just like most everything in Linux.
I feel like there's a constant tug of war on this issue. If you leave it up to app developers then they have to package their app for N distros. If you leave it up to the distro maintainers then they have to compile N apps for their distro. I don't envy either group given how different distros are and how varied apps are in quality, methodology, etc.
I look at Podman. In my opinion it could be (could have been?) a huge disruptor, but its Red Hat (or Fedora or CentOS or whatever the hell those guys do now) versions are way ahead of the versions for other distributions, which creates for me (just a home user) an interoperability problem between all my different Linux boxes. Red Hat, if anybody, has the resources to fix this, but I guess they'd rather use it as a way to force adoption of their distro? I don't even know.
Both the apps and the distros are volunteer-heavy. App packaging is a big job for either side. I'm still hopeful that Flatpak can help with that job.
If you are unwilling to use tools like Flatpak, then that limits what distros you can make. e.g., in a world without Flatpak, only distros with X users can exist. In a world with Flatpak, distros with X/10 users can exist.
Another way to think about it: if you want to make/use your own distro, then using Flatpak will cut down the amount of work you have to do by some large multiple. You're free to not use it, just like you're free to install custom electrical sockets in your house and make custom adaptors for every single appliance you buy.
Standardization/centralization exists for a reason.
You're saying the exact opposite of the original point, which is: you should not package for distros, distros should package for themselves. You just distribute your sources.
You are a good candidate to package for your distro, so there's that. And then for a random distro, if nobody feels like packaging for it, then it's just not there. Either there is not enough interest in your project, or there is not enough interest in the distro itself.
> distros should package for themselves. You just distribute your sources.
Is Devault basically saying that the application developer should just throw their source code over the wall and hope that other parties notice and figure out how to build it correctly? I would find that model of software distribution unsatisfying as a developer because merely distributing a source tarball and leaving the rest to middlemen makes it difficult for me to predict how my users will experience the final product. Even if my product is fully open source and free to fork, it's my reputation on the line when things don't work as intended. I would prefer to establish a more direct relationship with my users; to personally build and test my software in all environments that I support; and to hear directly from users whenever they experience problems.
> Even if my product is fully open source and free to fork, it's my reputation on the line when things don't work as intended
I think that everyone who is worrying about that wants to apply corporate thinking to the open source model, meaning they want to be a special thing where everything is supposed to be interchangeable. Just yesterday, I was compiling a program that hard-depends on the GNU C library for just 2 functions, and not even critical ones. To be fair, the author said that they only test on Debian.
While the Linux world may be fragmented, the true differences are mostly minimal (systemd vs other init systems, glibc vs musl, the network manager, …), so it's possible to decouple yourself from these concerns if you want to. But often the developer hard-depends on decisions made by their preferred distro team, and creates a complicated build script that only works there.
I don't know what Devault says, but here is my opinion: do not ship something you don't understand/test/use yourself.
Distros should not package random open source projects they don't use/understand, and developers should not package their project for distros they don't use/understand. For both, it's like shipping untested code and the conclusion is always going to be "you should all run the same system I do" or "we should all have the exact same system, let's implement Flatpak".
Developers should package their project for the distros they support (often that's just Ubuntu). Random people should package the open source projects they want to use in their distro of choice (the more popular the distro, the higher the chance that someone else has done it already). All that under the supervision of distro maintainers.
> distros should package for themselves. You just distribute your sources.
That's how you ended up with Erlang being split into 20+ packages on Ubuntu/Debian in the past. Because it was packaged by people who knew little about Erlang, and probably had too much time on their hands.
And that is the main issue: you want distro maintainers to compile and package every single piece of software under the sun, but they can't possibly know every piece of software, how it works, or how it's supposed to work. Multiply that by the number of distros.
> you want distro maintainers to compile and package every single piece of software under the sun
No. I want people who will actually use the package to package the software they need, and distro maintainer to supervise that.
> Because it was packaged by people who know little about erlang
Yep, people who won't use Erlang shouldn't package Erlang. But on the other hand, developers who won't use Erlang on platform X shouldn't package Erlang on platform X.
The "we absolutely need flatpak because otherwise it fundamentally doesn't work" philosophy is, to me, very close to saying "we must consolidate everything under one single OS. Everybody should use the exact same thing otherwise it doesn't work". That's not what I want. I want to have freedom, and the cost of it is that I may have to package stuff from time to time.
If you don't want to contribute to your distro, choose a super popular distro where everything is already packaged (and used!). Or use macOS. Or use Windows. You don't get to complain about Alpine Linux not having a package you want: you chose Alpine, that was part of the deal.
Alpine is a great litmus test for programs that unnecessarily depend on glibc and systemd. More often than not, it's easy to take the Arch build script and create a package for Alpine. When that fails, it's usually for the above reason.
> I want people who will actually use the package to package the software they need, and distro maintainer to supervise that.
Erm... Your original comment said "you should not package for distros, distros should package for themselves. You just distribute your sources."
> Yep, people who won't use Erlang shouldn't package Erlang. But on the other hand, developers who won't use Erlang on platform X shouldn't package Erlang on platform X.
So... Who's gonna package it if you say that distros should package it?
> The "we absolutely need flatpak because otherwise it fundamentally doesn't work" philosophy is, to me, very close to saying "we must consolidate everything under one single OS.
Bullshit.
What you advocate for is "why bother with ease of use and convenience, everyone should learn how to compile and package everything from scratch"
> If you don't want to contribute to your distro
The user of a package doesn't necessarily know how to package something, and shouldn't need to.
> Erm... Your original comment said "you should not package for distros, distros should package for themselves. You just distribute your sources."
Yes. I said "distros", not "the distro maintainers". The distro is the maintainers + the packagers, and packagers can be random contributors (I package stuff for my distro when needed, but I am not a distro maintainer).
> So... Who's gonna package it if you say that distros should package it?
People who will use Erlang on that particular distro. Under the supervision of the distro maintainers. There is typically some kind of hierarchy where there are the "community" packages that are just "untested" (sometimes they can get promoted to a more trusted level), and the "core" packages that are handled by the distro maintainers.
> What you advocate for is "why bother with ease of use and convenience, everyone should learn how to compile and package everything from scratch"
Not at all, but it seems like you don't know how it currently works in traditional distros, and you don't understand what I'm saying (probably I'm not being clear, that's on me).
What I advocate seems like absolute common sense: "the package maintainer(s) should understand and use the package on the distro for which it is packaged".
The vast majority (probably almost the totality of) users of Ubuntu or Arch have never had a need to package anything, because everything is already there. Because those distros are very popular. Depending on your choice of distro, it may happen that a package hasn't been contributed or even that it doesn't compile (e.g. if you use musl). In that case, if you want it, you need to contribute it. But if you use musl, you implicitly accept this and are supposed to know what you are doing.
> The user of a package doesn't necessarily know how to package something, and shouldn't need to.
That's your opinion. I would say that a Gentoo user is expected to have some idea about compiling packages, otherwise they should not use Gentoo. Ubuntu is targeting people who don't want to know how it works, and that's fine too. Diversity is good.
What I don't like, is Windows-minded people ("I shouldn't have to understand how my computer works") who come to Linux and push for everybody to become like them. "We should all use systemd and Flatpak, and pay one team of 50 people who know how that works, and the rest of us should just use it and not know about it" -> I disagree with that. Those who think that should just use Ubuntu/Windows/macOS and leave me alone. And for those who use Ubuntu, they should remember that they don't pay for it next time they say "it's shit because it doesn't do exactly what I want".
So who's going to maintain the packages? Who's going to test them against other packages? Against distro upgrades? Who's going to fix issues?
> Not at all, but it seems like you don't know how it currently works in traditional distros
I do. A small number of people are doing the thankless job of packaging, maintaining, fixing, testing a multitude of packages.
And their efforts are needlessly duplicated across several packaging systems.
> What I don't like, is Windows-minded people ("I shouldn't have to understand how my computer works") who come to Linux and push for everybody to become like them
What I don't like is people assuming ill intent behind "you know what would be great? If we didn't assume that every user has to package their own packages across 15 different incompatible packaging systems".
> So who's going to maintain the packages? Who's going to test them against other packages? Against distro upgrades? Who's going to fix issues?
I feel like you're not reading what I'm writing. The community.
That's how open source works: if you use an open source project and it has a bug, you can fix it and open an MR. If the upstream project doesn't want your fix, you can fork. Nothing forces the upstream project to accept your contributions. When they do, they take the responsibility for them (to some extent, as in: it is now part of their codebase).
If your distribution doesn't have a package you want, you can make it for yourself, locally. You can contribute it to a community repo (most distros have that). Maybe at some point, the distro maintainers will decide to take over your package in a more official repo, maybe not. Even if you are not the official maintainer of a package, if you use it and see a problem, you can contribute a fix.
In the open source world, most people are freeriders. A (big) subset of those feel entitled and are simply jerks. And a minority of people are not freeriders and actually contribute. That's the deal.
> And their efforts are needlessly duplicated across several packaging systems.
No! No no no no! If they don't want to put efforts into that, they don't have to. They could use Ubuntu, or Windows, or macOS. If they contribute to, say, Alpine or Gentoo, that's because they want to. I am not on Gentoo in the hope that it will become Ubuntu, that would be weird. But you sound like you want to solve "my Gentoo problems" by making it look more like Ubuntu (in the idea). Don't use Gentoo if you don't want to, and leave me alone! Don't try to solve my problems, you're not even a Gentoo user.
Funny how in reality that's not how open source works. Packages are packaged and maintained en masse by a very small number of maintainers doing a thankless job. Not by some "community" where "a person who uses the package" suddenly wakes up and says "you know, I'm going to package this piece of software".
This is literally the reason for my example with Erlang in my original comment.
> In the open source world, most people are freeriders.
I'm getting tired of your rants and tangents
> No! No no no no! If they don't want to put efforts into that, they don't have to. They could use Ubuntu
You're deliberately missing and/or ignoring the point.
How many package managers and package formats are there? Packaging some code for each of them is wasted/duplicated effort, because it's doing the same thing (packaging) for the same code (for example, Erlang) for literally the same operating system (Linux), just because someone has a very subjective view of "the one true correct way".
So now you have someone packaging, say, Erlang, for dpkg, Flatpak, nix, pacman, rpm, snap and probably a few others because "people are not freeloaders" or "non-windows-minded people" or some other stream of consciousness.
> Don't use Gentoo if you don't want to, and leave me alone! Don't try to solve my problems, you're not even a Gentoo user.
I've said all I had to say. You deliberately chose to talk only to the voices in your head. Sorry, I'm not privy to those voices.
> Funny how in reality it's not how open source works.
Let me copy the full sentence, with the part that you conveniently truncated: "That's how open source works: if you use an open source project and it has a bug, you can fix it and open an MR. If the upstream project doesn't want your fix, you can fork. Nothing forces the upstream project to accept your contributions. When they do, they take the responsibility for them (to some extent, as in: it is now part of their codebase)."
Can you explain to me how this is wrong?
> I'm getting tired of your rants and tangents
How is that a rant? That's almost by design: I make my code open source so that people can benefit from it for free under some conditions. Take the billions of computers running Linux. Which proportion of those are run by people who actually contribute to Linux, do you think? As a good approximation, it's ~0%. Almost all users of Linux don't contribute to Linux. It's a fact, not a rant.
Nowhere did I say that people should contribute.
> How many package managers and package formats are there?
Who cares? If I want to create a new package manager with a new package format, why would you forbid me from doing it? That's my whole point: people are free to do whatever they want. Are you saying that I must use Flatpak instead of my favourite package manager because you have decided that it was better for everybody?
Why do you stop at package managers? In your view, isn't having different OSes wasted/duplicated effort? Should we all just use Windows because it's your favourite and you don't understand why other people may have other preferences?
> Sorry, I'm not privy to those voices.
My point is that whenever somebody says "it's stupid, we should all use X", my answer is always "If Y, Z, A, B, C, ... exist, it's because other people, for some reasons, don't want X. Because you like X doesn't mean that everybody should like X. I see how it would be convenient for you if everybody used exactly your favourite system, but the reality is that we can't all love exactly the same things. Hence there are alternatives. Diversity is good".
Diversity is good. I don't say that Flatpak should not exist. I just say that whoever wants me to use Flatpak is fundamentally missing something.
Please do it. It will not solve the problem (which is SS/Medicare spending), but will guarantee that they lose the midterms, which will grind the rest of the MAGA agenda to a halt.
Right. Scary stuff. I'm not excited to drive a cheap second-hand ICE car, but the fanciness stops at AC and 3.5 mm AUX-jack on the stereo, and that's pretty nice. If I wanted to I could do service and repairs myself.
You can also just have a dumb EV and thus do a favor to both your own safety and the survivability of the planet. An EV does not automatically entail AI assist.
I'm not very well informed in this area, but I suspect there are no serious alternatives. Ignoring price, are there EVs that can travel at least 500-600 kilometers on a charge but only weigh 1500 kilos, and hence are rather simple to lift with consumer or improvised tooling? Are there EVs without remote control and 'upgrades'? Can I change lamps and swap tires on such EVs? Do they fit at least two child seats, or is that amount of space more of a premium feature?
The existing EVs' specs are more than enough, don't worry.
And yes most of these EVs are still pretty dumb, so you'd like them. It's just that Tesla got the hype.
By the way, when you realize that you actually never drive for 500 km straight without e.g. several 30-minute pauses to rest, which can be used for charging if you have an EV, you discover that you have many more options than you'd think when choosing a model to buy. (Most governments officially advise such pauses, as skipping them would endanger the lives of everybody on the road.)
Seems it's rather common that they're Internet connected when I look at some ads. Do you have an example?
30-minute pauses? Why? I stop for five to ten minutes every four to five hours when I need to pee, and I can't rely on there being charging stations along the way.
At least all the Toyota and Mitsubishi electrics prior to 2018 or so. And I did not even look up anything in particular; it's just well known that none of those had any kind of remote upgrade.
I would advise you to do longer pauses when you are on such a long trip (not only me, the road safety specialists too). It's not for the bladder, it's for the fatigue, particularly of the central nervous system.
The risk is not being killed by bladder shrapnel, it's switching off your attention and being crushed by an unnoticed lorry.
About road safety: yes. Seriously, lol, please put a bit of your own energy into searching this.
And it's not "to make the specialists happy", it's to increase safety according to the knowledge accumulated thanks to such specialists.
(ok I see "neo-luddite" in your profile, but usually that does not entail "not trusting scientists/specialists")
You have a flesh brain like everybody else, with biological limitations that lead to fatigue and decreased attention after a long drive without sufficient pause.
Image segmentation is almost a solved problem. There is no reason why it should get confused even with a vision-only system. Their problem is most likely that they don't have enough compute to process a history of frames and instead process a single image at a time, leading to jumps in the segmentation results, and those random jumps cause unpredictable braking.
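To illustrate why a history of frames helps, here is a minimal, hypothetical sketch (not how any vendor's stack actually works) of smoothing per-pixel segmentation confidences over time, so that a single noisy frame can't flip the decision on its own:

```rust
// Hypothetical temporal smoothing of per-pixel segmentation confidences.
// An exponential moving average damps single-frame jumps, so one noisy
// frame (e.g. a shadow briefly classified as an obstacle) is unlikely to
// cross the braking threshold by itself.
fn smooth_frame(state: &mut [f32], current: &[f32], alpha: f32) {
    assert_eq!(state.len(), current.len());
    for (s, &c) in state.iter_mut().zip(current) {
        *s = alpha * c + (1.0 - alpha) * *s;
    }
}

fn main() {
    let mut state = vec![0.1_f32; 4];           // smoothed "obstacle" confidence
    let noisy_frame = vec![0.9, 0.1, 0.1, 0.1]; // one pixel spikes for a single frame
    smooth_frame(&mut state, &noisy_frame, 0.2);
    assert!(state[0] < 0.5); // a lone spike stays below a 0.5 decision threshold
}
```

Processing frame-by-frame with no such state is exactly what produces the jumpy outputs the comment above describes.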
In my experience it's almost always shadows. I can't be sure of course, but I've definitely noticed a correlation: both shadows from overpasses and shadows from semis, and this happens more often when the sun is low too.
It never happens at night though, which in my mind makes the shadows hypothesis weaker.