Apparently Uruguay is LGBT-friendly and a destination for some Americans (LGBT and straight) fleeing the Trump/Republican regime. I'd imagine either of those things would annoy Dear Leader.
It was common, on the left (i.e., not Liberals and not so-called Democrats), to call Obama the "Deporter in Chief".
Democratic voters always circle the wagons to protect the administration, regardless of the administration's actions, when one of their own is POTUS. The Republican voters do the exact same thing.
Accessibility is apparently a big problem with Wayland. For example, the most popular (possibly only) app that supports hardware eye trackers on Linux does not work with Wayland, and its developers state it likely never will, as Wayland does not provide what they need to add support (it is also the most popular app for voice/noise control). Even basic things like screen readers are apparently still an issue with Wayland. Without a strong accessibility story, systems running Wayland would have been banned at my last employer (a college).
Personally, I have a 3200x2400 e-ink monitor whose bezel covers the outer few columns of pixels. I use a custom modeline to exclude those columns from use, plus fractional scaling of 0.603x0.5 on the resulting 3184x2400 mode to get an effective 1920x1200 resolution (roughly what I do under X is sketched below). I have zero idea how to accomplish this with Wayland; I do not think it is possible, but if anyone knows a way, I am all ears.
I ran into at least ten issues without solutions or workarounds (like the one with my monitor) when I tried to switch this year, after getting a new laptop. I reverted to a functional, and productively familiar, setup with X.
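For reference, here is a rough sketch of what that X setup amounts to, assuming an Xorg session with cvt and xrandr available. The connector name DP-1 is a placeholder, and the exact timings are whatever cvt prints on your machine.

```
#!/usr/bin/env python3
# Hypothetical sketch of the X-only setup described above: define a 3184x2400
# mode (panel width minus the bezel-covered columns), then scale it so the
# effective desktop is roughly 1920x1200. The connector name is an assumption.
import re
import subprocess

OUTPUT = "DP-1"                    # placeholder: check `xrandr` for the real connector name
WIDTH, HEIGHT = 3184, 2400         # panel resolution minus the hidden columns
TARGET_W, TARGET_H = 1920, 1200    # desired effective resolution

# Ask cvt for a modeline, e.g.:  Modeline "3184x2400_60.00"  <clock and timings...>
cvt_out = subprocess.run(["cvt", str(WIDTH), str(HEIGHT), "60"],
                         capture_output=True, text=True, check=True).stdout
name, timings = re.search(r'Modeline\s+"(\S+)"\s+(.+)', cvt_out).groups()

subprocess.run(["xrandr", "--newmode", name, *timings.split()], check=True)
subprocess.run(["xrandr", "--addmode", OUTPUT, name], check=True)

# --scale multiplies the mode size to get the desktop area: ~0.603 x 0.5 here.
sx, sy = TARGET_W / WIDTH, TARGET_H / HEIGHT
subprocess.run(["xrandr", "--output", OUTPUT, "--mode", name,
                "--scale", f"{sx:.3f}x{sy:.3f}"], check=True)
```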
If your laptop is using a recent AMD Ryzen-based SoC:
The ACPI C4 power state (for powering down more of the SoC during S0ix suspend) is not yet supported on Linux for recent (last couple of years) AMD processors.
Patches submitted for 6.18 were described as "laying the foundation for AMD C4 support", so it may not be fully supported until 6.19 or even later. Sorry, I haven't followed up to see what has actually landed.
Debian requires that packages be buildable entirely offline.
> Debian guarantees every binary package can be built from the available source packages for licensing and security reasons. For example, if your build system downloaded dependencies from an external site, the owner of the project could release a new version of that dependency with a different license. An attacker could even serve a malicious version of the dependency when the request comes from Debian's build servers. [1]
This is such a wonderful guarantee to offer to users. In most cases, I trust the Debian maintainers more than I trust the upstream devs (especially once you take supply chain attacks into account).
It's sad how much Linux stuff is moving away from apt to systems like snap and flatpak that ship directly from upstream.
So do Gentoo and Nix, yet they have packaging separate from the source code. The source is fetched, but the build is sandboxed from the network during the configure, build and install phases. So it's technically possible.
Nix definitely does not allow most things to be built offline (at least in the way Debian means it).
With Nix, any fetcher will download the source. It does so in a way that guarantees the checksum of what is fetched matches the hash declared in the package, and if you already have something in the Nix store with that checksum, it won't have to fetch it again.
However, with just a mirror of the debian source tree, you can build everything without hitting the internet. This is assuredly not true with just a mirror of nixpkgs.
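Conceptually (this is not Nix's real code, just a minimal Python sketch of the idea), a fixed-output fetch behaves something like this; the URL and hash below are placeholders:

```
import hashlib
import os
import urllib.request

def fetch_fixed_output(url: str, expected_sha256: str, store: str = "/tmp/fake-store") -> str:
    """Download `url`, but only accept it if its sha256 matches the hash pinned
    in the package; reuse a previous download with the same hash if present."""
    os.makedirs(store, exist_ok=True)
    out_path = os.path.join(store, expected_sha256)
    if os.path.exists(out_path):          # already "in the store": no network needed
        return out_path
    data = urllib.request.urlopen(url).read()
    actual = hashlib.sha256(data).hexdigest()
    if actual != expected_sha256:         # upstream changed, or the download was tampered with
        raise RuntimeError(f"hash mismatch: expected {expected_sha256}, got {actual}")
    with open(out_path, "wb") as f:
        f.write(data)
    return out_path

# fetch_fixed_output("https://example.org/foo-1.0.tar.gz", "<pinned sha256>")  # placeholder values
```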
OK, I see how the Debian idea differs from the Portage/Nix/etc. idea. For Portage and Nix it is enough that the build proper be offline, but the source code is fetched at the beginning of the package build. Not only do I find this sufficient, I prefer it because IMO it makes the package easier to work with (since you're only wrangling the packaging code, not upstream's).
There are probably still ways to maintain a source archive with a ports system. Just analyze the sources used by builds, create a mirror, and redirect fetches to use the mirror. It's not that crazy. The packaging would still be a separate affair.
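A minimal sketch of that redirection, assuming a local mirror directory keyed by the original URL (the paths here are made up); the checksum recorded in the packaging keeps the mirror honest:

```
import hashlib
import pathlib
import urllib.parse
import urllib.request

MIRROR_ROOT = pathlib.Path("/srv/source-mirror")   # assumption: local archive of upstream tarballs

def fetch_via_mirror(url: str, expected_sha256: str) -> bytes:
    """Serve the source from the local mirror if present; otherwise fetch it
    from upstream, verify the pinned checksum, and add it to the mirror."""
    mirrored = MIRROR_ROOT / urllib.parse.quote(url, safe="")
    data = mirrored.read_bytes() if mirrored.exists() else urllib.request.urlopen(url).read()
    if hashlib.sha256(data).hexdigest() != expected_sha256:
        raise RuntimeError(f"checksum mismatch for {url}")
    MIRROR_ROOT.mkdir(parents=True, exist_ok=True)
    mirrored.write_bytes(data)
    return data
```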
This is correct; in Nix lingo these are referred to as "fixed output derivations". For some other interesting context, see this Nix forum post from last year in which they discussed deleting some stuff from cache.nixos.org to save money, but were clear that they'd keep all fixed output derivations and only delete other things that aren't derivable from those: https://discourse.nixos.org/t/upcoming-garbage-collection-fo...
Nix, and specifically nixpkgs, is IMO very bad at this. It's not a distro: it's a collection of random links whose contents in many cases now only exist in cache.nixos.org. The tarball server frequently doesn't have the content, can't represent some content at all (recursive hash types), and some links have rotted away completely (Broadcom driver zips referencing a domain which is now advertising online gambling).
Nix isn't functional: it's a functional core that moved every bit of the imperative part to an even less parseable stage, labelled it "evaluation" and then ignored any sense of hygiene about it.
No: your dependency tree for packaging should absolutely not include an opaque binary from a cache server, or a link to a years old patch posted on someone else's bugzilla instance (frequently link rotted as well).
Nothing has made me appreciate the decisions of mainstream distributions more than dealing with an alternative like this.
The heavy asterisk here is that none of this actually makes using NixOS impossible, because it obviously still works. But when you get into the problem I'm working on - which touches one of its major purported benefits, reproducibility and traceability - this is a pretty serious issue.
So long as the NAR files in cache.nixos.org exist, everything will work - that's not a problem. But if you actually choose to exercise that traceability - which is what I've been working on - suddenly you start finding all this stuff. The problem is nixpkgs doesn't expose or archive the code: it archives a reference to code that existed somewhere at some time, and worse it obfuscates what the code was - I can obviously still go get it from the NAR files, but I can't get any of the context surrounding it.
By contrast, things like the Fedora and Debian patching systems have - crucially - actual archives of what they're building, the patches they're building them with, and the commit messages or other notes on why those patches are being applied, plus the change record of them. With NixOS you get a bunch of hashes that terminate in "wefu123r23hjcowiejcwe.nar", and you don't know what that is until nixpkgs happens to evaluate and calculate it, which means it's impossible to even know up-front what's going to be pulled in.
Then of course you get to practical matters: just because you can exactly specify dependencies doesn't mean you should - we all realized with containers that having a couple dozen versions of libraries kicking around is a bad idea (and lo and behold, that's what traditional distro packaging tries to minimize) - and that's where all those calculated paths burn you anyway. Nix is a fairly freeform programming language, so it's nigh impossible to stop some snowflake package from pulling in a different version of a compiler or library even if I can see it happening (an example I currently have: 5 different versions of Rust and 5 different versions of Golang, where the invariant I want is "no, it's this version and you deal with it" - but there are a lot of ways Nix will let you end up there that are very resistant to static analysis or automated correction).
This doesn't say what you think it does. It says that every binary package should only depend on its declared source packages. It does not say that source packages must be constructed without an upstream connection.
What the OP was referring to is that Debian's tooling stores the upstream code along with the Debian build code. There is support tooling for downloading new upstream versions (uscan) and for incorporating the upstream changes into Debian's version control (uupdate) to manage this complexity, but it does mean that Debian effectively mirrors the upstream code twice: in its source management system (mostly salsa.debian.org nowadays) and in its archive, as Debian source archives.
All that is required for this to work (building offline) and to be immune to all the bad things you describe is that the package's build definition contain a checksum of the source archive, and that the source archive be mirrored.
The TV news show "60 Minutes" tested several companies that provided polygraph testing services.
The show claimed someone stole some equipment. For each testing service, the show said they suspected a different employee. In every case, the polygraph operator claimed to detect deception in the person the show had told them was already under suspicion. Polygraph is total bullshit, just used to add a pseudo-scientific shine to prejudice.
I think you meant to say /dev/random, not /dev/urandom.
/dev/random on Linux used to stall while waiting for entropy from sources of randomness like network jitter, mouse movement, and keyboard typing; /dev/urandom has always been fast on Linux.
Today, Linux's /dev/random mainly uses a CSPRNG after initial seeding (the BSDs always did this). On my laptop, I get over 500 MB/s from it (kernel 6.12).
IIRC, on modern Linux kernels, /dev/urandom is now essentially an alias for /dev/random, kept for backward compatibility.
There's no reason for normal userland code not part of the distribution itself ever to use /dev/random, and getrandom(2) with GRND_RANDOM unset is probably the right answer for everything.
Both Linux and BSD use a CSPRNG to satisfy /dev/{urandom,random} and getrandom, and, for future secrecy / compromise protection, they continually update their entropy pools with hashed high-entropy events (there's essentially no practical cryptographic reason a "seeded" CSPRNG ever needs to be rekeyed, but there are practical systems-security reasons to do it).
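As a rough, Linux-only Python sketch of both points (getrandom with GRND_RANDOM unset, and how fast the modern device is); the throughput number will obviously vary by machine and kernel:

```
import os
import time

# getrandom(2) with flags=0 (GRND_RANDOM unset): draws from the kernel CSPRNG
# and only blocks until the pool has been initialized at early boot.
key = os.getrandom(32)
print("32-byte key:", key.hex())

# Rough throughput check of /dev/random on a modern kernel.
CHUNK, ITERS = 1 << 20, 256          # 256 reads of 1 MiB
total = 0
with open("/dev/random", "rb", buffering=0) as f:
    start = time.perf_counter()
    for _ in range(ITERS):
        total += len(f.read(CHUNK))
    elapsed = time.perf_counter() - start
print(f"{total / (1 << 20) / elapsed:.0f} MiB/s from /dev/random")
```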
This does nothing to alleviate my privacy concerns, as a bystander, about someone rudely pointing a recording camera at me. The only thing that alleviates these concerns about "smart" glasses wearers recording video is not having "smart" glasses wearers, i.e., not having people rudely walking around with cameras strapped to their faces, recording everyone and everything around them. I can't know or trust that there is some tech behind the camera that will protect my privacy.
A lot of privacy invasions have become normalized and accepted by the majority of the population. But I think (and hope) a camera strapped to someone's face and shoved into other people's faces will be a tough sell. Google Glass wearers risked having the camera ripped off their faces or being punched in the face. I expect this will continue.
Perhaps your tech would have use in a more controlled business/military environment? Or, to post-process police body camera footage, to remove images of bystanders before public release?
A tool like this is laughably inadequate against companies like Google or Meta. With just their phone apps and OS-level control, given a video like the one shown, those companies could know exactly who each person in the video is, what they are doing, and who they are with, and record that information forever.
In the video I see three young women, another woman near the zebra crossing, a young woman with a young man, a woman walking with two men at her sides, and another young couple. I can tell their heights, whether they are heavy or slim, and their hair type, so an AI could know that too; with that information and a little more, such as one member of a group having location services enabled, a computer can automatically work out the rest.
If enough people wear those stupid glasses, it means that in a city everyone is surveilled in real time with second and centimetre accuracy, including indoors in places like restaurants or malls.
This is more power than any company or institution should have. If Meta or Google have the ability to do that, they will be required by the US government to hand that information over automatically, with some excuse like "national security".
I feel quite uneasy about footage being recorded and routinely sent to big corporations from cameras strapped to random strangers' faces. I'm much more bothered by the fact that this gets sent to a central location and processed than by the mere fact of being recorded without consent.
However, even with this uneasy feeling, one has to recognise that a street is a public space, and I don't see how one can have a reasonable expectation of complete privacy there. There is nothing rude about recording what you can see.
The privacy expectation I have is not that my picture will not be captured, but that such recordings from many unrelated people will not be aggregated to trace my movements.
So in summary, I think everyone has a right to photograph or record whatever they like in a public space, but the action of pooling all such recordings, and running face tracking on them to track individual people (or build a database of movements, whatever) is breaching these people's privacy and there should be laws against it.
Seriously. There has been so much progress in the area of non-consensual recording and processing of data, and so little in the area of countermeasures. You can do a web search for the former and find tons of hardware and software that will help you spy on folks; searching for the latter (adversarial design) mostly turns up scientific papers. It implies that there is little-to-no measurable demand for privacy (at least of this sort) in the marketplace.
I agree with all you said, but I don't believe there is any way you could protect yourself from being recorded.
The only way for this to work are legal regulations.
But those can easily be dismissed as impossible to implement. So this is a good PoC to show what is possible and a way to discover how it could function. Without such an implementation, I don't believe you can convince anybody to start working on such regulations.
Perhaps "ungodly" explains current refusal, but original reason U.S. does not use metric is pirates stole the metric standards as they were being shipped over from France.