Cachy pushed a Limine update last weekend without any testing.
It broke everyone with secure boot signing.
Head Proton builds are great, but games tend to turn into a laggy mess after a couple of hours and need regular restarts.
It's decent, but it's not all roses at all, and I wouldn't inflict it on non-techies yet.
That's not a statement from a lawyer, and it's confused. There is one true thing in there, which is that, at least under US law, the LLM output may not be copyrightable due to insufficient human involvement, but the rest of the implications are poorly extrapolated.
There are lots of portions of code today, prior to AI authorship, that are already not copyrightable due to the way they are produced. The existence of such code does not decimate the copyright of an overall collective work.
I got a SARS virus flying to Udon Thani in 2019. We were seated next to two Thai guys who were so sick they could barely sit up straight. We offered them help and treats because they looked like they were about to vomit.
Plane lands, next day I'm sick. I was laid up for 2 weeks with fever, the shits, and I had a weird spontaneous cough for over 1 month after I got better.
I bet most of that plane got sick, and it was so damn avoidable.
The problem is there can be huge penalties for not flying when you've booked. You might not be able to rebook your flight, hotel, or days off, so you're stuck either getting everyone sick, being out thousands of dollars, or not going on vacation at all.
Only if that occurs, and it's a substantial enough body of output that it is itself copyrightable and not covered by fair use. The confluence of those conditions is intentionally rare.
Deployments like Bedrock have nowhere near SOTA operational efficiency, 1-2 OOM behind. The hardware is much closer, but pipeline, scheduling, cache, recomposition, routing, etc. optimizations blow naive end-to-end architectures out of the water.
Many techniques are documented in papers, particularly those coming out of Asian teams. I know of similarly advanced work going on at Western providers. In short, read the papers.
Look, forget the details, step back and consider the implications of the principle.
Someone should not be able to write a semi-common core utility, provide it as a public good, abandon it for over a decade, and yet continue to hold the rest of the world hostage just because of provenance. That’s a trap and it’s not in any public interest.
The true value of these things only comes from use. The ideologically extreme positions might be nice at times, but, for example, we still don't have public access to printer firmware. Most of this ideology has failed at its key originating goals and continues to cause headaches.
If we’re going to share, share.
If you don’t want to share, don’t.
But let's not set up terminal traps; no one benefits from that.
If we flip this back around though, shouldn't this all be MPL and Netscape Communications'? (Edit: turns out they had an argument about that in the past on their own issue tracker: https://github.com/chardet/chardet/issues/36)
LGPL applies to the LGPL’d code, not to every piece of code someone might add to the repository or under the same name implicitly.
The claim being made is that because some prior implementation was licensed one way, all other implementations must also be licensed as such.
AIUI the code has provenance in Netscape, prior to the chardet library, and the Netscape code has provenance in academic literature.
Now the question of what constitutes a rewrite is complex, and maybe somewhat more so with AI involvement, but if we take the current maintainer's story as honest, they almost certainly passed the bar of independence for the code.
It's not a good idea, and that's where I'd really start with the dated commentary here, rather than focusing on the polling mechanism. It depends on the application, but if the buffers are large (>=64 KB), as in a common TCP workload, then uring won't necessarily help that much. You'll gain a lot of scalability, regardless of polling mechanism, by making sure you can utilize RSS and XPS optimizations.
It's been a while, but why is uring not helpful for larger buffers? I'd think the zero-copy I/O capabilities would make it more helpful for larger payloads, not less.
uring supports zero-copy, but it is not a copy-reduction mechanism; it is a syscall-reduction mechanism. Large buffers mean fewer syscalls to start with, so less benefit.
Exactly this. The kernel-allocated buffers can help, but if that was a primary concern you're in driver territory. For anything still in the userspace optimization domain, the syscall portion of a large-buffer, buffered flow is heavily amortized and not overly relevant.
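The amortization argument above can be sketched with back-of-envelope numbers. This is an illustrative model, not a benchmark: the per-syscall overhead and per-byte cost are assumed values, and the point is only how the fixed syscall cost shrinks as a fraction of each transfer when buffers grow.

```python
# Back-of-envelope model: why syscall reduction matters less as buffers grow.
# Both constants below are illustrative assumptions, not measurements.

SYSCALL_OVERHEAD_NS = 500   # assumed fixed cost per read()/write() syscall
PER_BYTE_COST_NS = 0.05     # assumed cost to move one byte through the stack

def syscall_share(buffer_bytes: int) -> float:
    """Fraction of a single buffered transfer spent on syscall overhead."""
    data_cost = buffer_bytes * PER_BYTE_COST_NS
    return SYSCALL_OVERHEAD_NS / (SYSCALL_OVERHEAD_NS + data_cost)

for size in (4 * 1024, 64 * 1024, 1024 * 1024):
    print(f"{size // 1024:>5} KiB buffer: syscall overhead is "
          f"{syscall_share(size):.1%} of transfer cost")
```

With these assumed numbers, syscall overhead is the majority of a 4 KiB transfer but only about 1% at 1 MiB, which is why eliminating syscalls (uring's main win) pays off far more for small-buffer workloads.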
The human driver of the project has a comment reporting that the project has no structural overlap, as analyzed by a plagiarism-analysis tool. Were comments excluded from that analysis? Is your comment here based on the data in the repo?