The Intel stack is more mature right now, as someone pointed out in the comments above. Down the road AMD will get better, with better drivers and newer chips. If WiFi speeds matter to someone and they're upgrading from an older Intel board, then keeping the old WiFi card makes sense for now.
No one is arguing that it's not ready for production use. It's just that it's a lot easier to configure clients in dual stack than to do a 6-to-4 translation.
Positive or negative? Positive comments don't generate interaction. If a comment was positive I'd like it and move on, not comment. Negative comments, on the other hand, generate a lot of interaction. If I were to guess, Google as a marketing company loves interaction.
I've had an awful lot of engagement that's produced little value. (As I get older, I try to avoid that, increasingly. Not always successfully.)
I've had tremendous value from some very brief engagements, often one-liners or casual remarks, though also when someone shares a deep knowledge of a subject or a truly insightful personal experience.
Those are all exceptionally valuable, but in terms of "engagement metrics" such as replies, time-on-site, etc., they're often negatives. To turn a phrase: feed a person endless questions and challenges, and you'll keep them on site for a day. Provide them the answer or tools they need, and they disappear forever.
(Dating / matchmaking sites face a related version of this problem.)
With time, people who do value useful information come to realise the timesuck nature of high-engagement, low-value sites (o hai redditz) and avoid them like the plague.
Side observation: I'm playing with FastGPT (largely because it doesn't require registration to use, so I can't compare it to ChatGPT or other registration-required generative AIs), and one of the things that's useful about it is that it gives specific answers to specific questions, rather than sending me off on an endless quest through low-grade online sources.
Or even relatively high-grade ones such as Wikipedia, which might answer the immediate question but tend to prompt more. Curiosity ain't necessarily bad, except for cats....
What generative AI does that General Web Search does not is actually quench the thirst. Which is useful from a personal value perspective, though possibly a shock to the system for both online search and content providers.
I believe their point was that we no longer see as much nonsensical hate in YouTube comments - or at least that rings true to me, and I've actually wondered about it for a while
Speaking as both the original author and someone who's been studiously pseudonymous online for well over a decade (after several decades of generally-public disclosure): anonymity and pseudonymity are exceedingly challenging.
I know I've left trails, and that if this were something my life absolutely depended on I'd probably not be writing this now. There are any number of ways to determine who a person is, or even to narrow down the probable set of individuals, often with only the thinnest of data. Given the prevalence of sensors, tracking, and physical-space monitoring (facial recognition, device tracking by WiFi and Bluetooth sensors, license plate readers, purchase and credit card data, and more), odds are pretty good that an online persona could be narrowed down to a few score of potential targets reasonably quickly by a motivated entity. Doing that at scale might be more challenging, but seems to be at least roughly possible by some state-level actors.
And that level of surveillance may well not be necessary, only the threat of such actions.
For discussion, semantic analysis, time(s) of activity, correlation with other known factors (travel or commute patterns, power or communications outages correlating with non-active periods, and the like), there's a lot of data to go on.
The biggest protections seem to me to be far less technical measures such as encryption, obfuscation, and pseudonymity, than they are strong privacy laws, civil rights, legal protections, rule of law, and civil institutions which are strong, robust, highly-trusted and trustworthy, effective, and dedicated to their mission.
That's not to say that technical protections aren't necessary; I absolutely believe that they are. However they are not sufficient, and often prove to be highly brittle: affording strong protection until at some point, whether due to a technical fault or lapse in tradecraft, they aren't. At that point the jig is up, and absent the social institutions in my previous 'graph, vulnerability is absolute.
We cannot live without trust. An absolute faith in anonymity is the false belief that we can.
A while back I stumbled upon Google Chrome's privacy settings and found that things like the serial ports on your computer can be accessed by websites. It turns out Google has thrown everything into the mix, probably because they want their Chromebook users, like children in school, to be able to use motion sensors on convertibles to, say, play games in a browser. Websites are just taking advantage of these things. The Chrome browser has ruined the internet.
The browser is essentially the operating system for most computing today so access to peripherals is reasonable.
My current job uses USB security keys, and I assumed I'd have to configure them in the OS before the browser was aware of them. Nope! Chrome knows when the key is in the USB port and can interact with it with my approval, which is exactly right.
The leap from USB access to serial access is minimal, as long as the right permission checks are in place.
> The browser is essentially the operating system for most computing today
You're right, and it's such a bummer. I often think about how interesting it would be if we hadn't ended up with the Chrome/Safari browser duopoly, the Windows/macOS duopoly on the desktop, and the Android/iOS duopoly on mobile. How cool would it be to see what the Amiga, Atari ST, Spectrum, OS/2, BeOS, etc. could have become with another couple of decades of development? Even Windows and macOS would probably be different if they had to compete in a healthy, diverse ecosystem.
Instead, further concentration is probably going to happen once Apple allows alternate browsers. At that point, there isn't much to stop Google's Chrome from becoming the only application platform that really matters.
If we didn't have an OS duopoly, we'd have a programming language duopoly, GUI library duopoly or something of that sort.
It's just not reasonable to expect every company to maintain more than two or three completely different versions of their apps, and most would vastly prefer to maintain just one, hence Electron and React Native.
It would be a constant incompatibility hell, and most code would be littered with #ifdefs and polyfills.
You can argue that companies producing software tools would specialize, so you would use Microsoft image editing tools on Windows, Foo's image editing tools on Amiga, Adobe's image editing tools on Mac etc, but that argument breaks down when it comes to banks, movie and music streaming companies, games etc.
I think as software matures, we will settle on free software. We more or less already did server side.
Then it will be up to the OS maintainers to make sure the software is compatible with their operating system, like how it works with FOSS systems already.
Working in the streaming media space, I can tell you what happens when there isn't a duopoly. It sucks.
Making an app means:
Android (with Android TV being more work), iOS, web, Roku, Fire TV, Tizen, Vizio, WebOS (LG), and multiple set-top box vendors, all of which have horribly underpowered CPUs.
Some companies try to do cross-platform, and that sort of works, but it is janky, and customers complain about the sorts of UX issues that always pop up with cross-platform apps, and for any decent functionality you end up writing per-platform shims. Also, some platforms (Roku) require a separate app anyway because the platform requires using a custom language. Other platforms (set-top boxes) are so underpowered that you can't really run anything resembling modern code on them.
It sucks. It is a huge waste of engineering effort for no real gain. Most customers don't choose a smart TV based on its OS; a large percentage choose based on what is on sale at Costco, and another demographic chooses whatever they are told is "the best" by reviewers.
Mobile app developers dealing with a duopoly have it easy, but even that dramatically increases barrier to entry compared to the 90s where you just had to write one app for Windows and so long as you only used documented APIs, Microsoft would move heaven and earth to make sure your app kept working between major OS updates.
> Instead, further concentration is probably going to happen once Apple allows alternate browsers.
Not if the DoJ forces Google to abandon Chrome. Which they should.
Apple and Google should lose their app store monopolies (including first party default preference), Google should lose the Chrome monopoly. These are incredibly harmful to technology and competition.
Each company has plenty of money, attached user base, and engineering headcount to continue to be wildly successful and profitable without operating in a way that damages the rest of the tech sector.
> So the market decided they were not what was wanted.
No, the competition decided what they wanted, by using shady-as-shit (as well as outright illegal) tactics to squash everyone else.
People _loved_ their Amigas, STs, Be boxes, etc. They loved them so much that there are still some nutjobs out there trying to keep Amiga alive! Do you think there'd be that kind of devotion for Windows 40 years later, if it died around 3.1?
No, the users didn't choose. A loose hand on monopoly law did.
How much of that was the market's decision, and how much was the illegal anticompetitive practices that got Microsoft in trouble a few decades ago? Paying manufacturers that used them while penalizing those that made other OSes available, amongst other practices. Hell, the only reason Apple is around today is that Microsoft bailed them out so they could point at Apple in court, claim there was a competitor, and therefore argue they were not a monopoly back in the 2000s.
People using invisible-hand / the-market-decided arguments gloss over the fact that modern capitalism has yet to produce truly fair markets without corruption.
If only it was as simple as letting buying power decide.
Is it really? Many people today seem to be living in a world almost purely of apps. Besides using The Google to find a piece of trivia, I hardly see anyone living in the browser to the extent that they are treating it like an OS in and of itself. If anything, the browser is seen as antiquated. The decision of browser makers to expose so many non-document APIs seems to not be closely connected to direct consumer demand for them.
Kind of doesn't matter since such wrappers routinely use native code or "plugins" to allow for behavior nonstandard to browsers, although your point is totally fair.
The only viable alternative to iOS/Android apps are web apps.
Apple fights it by limiting the number of features you can use in the browser on mobile phones and by not allowing alternative browsers.
Google, by saying: OK, go with it; you will be using tech that we control anyway.
The number of hacks currently needed to make native desktop apps compatible across even the same operating system, but different versions, is kinda scary.
Pretty sure it's a similar situation for mobile apps too.
Huh? But there's so much diversity in the desktop space. You have Windows/Mac, but then Debian/RHEL, Free/Net/OpenBSD, SteamOS, ChromeOS, Tails, NixOS, Qubes, the Solaris family, ReactOS, and that's just the ones I've actually seen people use at conferences.
The browser space has never been more diverse either. Most of them use Chromium under the hood, but who cares; Chrome was WebKit, which was KHTML, when it started too. A browser's success is only somewhat related to its engine. Having a base you can build on that guarantees all current and future websites will work and be performant has allowed for crazy levels of experimentation.
> most of them use Chromium under the hood but who cares
We should all care, because people start writing apps that work on Safari and Chrome only rather than to a standard. The web wasn't meant to be controlled by two companies, the idea was using standards anybody can implement.
Use Firefox and see which of the sites you use regularly don't work because they are Chrome-specific.
I've been Firefox only for more than a decade now (although tbf not on iOS) and I've still yet to find a site that straight up doesn't work. I've had some sites where I've had to tell it I'm using Chrome because of poor user agent sniffing but it's been a long time since that was necessary. Ahh Netflix when it still used Silverlight.
That first one "works on my computer", and the second one also works (but is purported not to).
I've long been confused when reading how Firefox doesn't work everywhere. Now I'm even more confused, because you posted an example that doesn't work on your computer, but does on mine. Do I have some kind of Ultra Firefox or something?
On Firefox (I am using the most recent version for macOS) I see only about a paragraph of text, plus header and sidebar.
In Safari I also see pull-down menus for "Kommun" and "Tidsperiod för bokningsbara tider", plus other input items, and a description like "Antal mottagningar: 23" plus a list of locations.
I see menus at the top, a heading, a paragraph of text, some dropdowns like "kommun" and "Tidsperiod för bokningsbara tider", checkboxes under "Visa mottagningar som har", and then what I think are a bunch of boxes for locations for appointments? When I click on one, I go to that location.
Got the same response with Firefox, temporarily disabled uMatrix and it no longer complained.
It's just a shitty website by a shitty corporation for shitty ends, I guess... I never have such issues with sites that benefit me when I visit them :P
I don't have uMatrix or other ad blocker installed.
I turned off all "Enhanced Tracking Protection" and still see nothing in Firefox.
I don't know what you mean by "by a shitty corporation for shitty ends ... sites that benefit me". It's run by the regional council (https://en.wikipedia.org/wiki/V%C3%A4stra_G%C3%B6taland_Regi...), which is the political organization responsible for the area's public healthcare system. The page lists available COVID vaccination appointments, which benefits me as I was looking to get a booster shot and wanted to know where to go.
I'm sorry, I should have made clear I was referring to the Adobe site :/ Which, to be fair, I didn't attempt to actually use, it's just that disabling uMatrix made the message that it doesn't work with Firefox go away, but I didn't attempt to use it further.
My bank is doing some security theater fingerprinting (instead of something actually secure, like 2FA, but that's a different story), which in the end means I can't login to my bank account using Firefox anymore these days.
WebUSB is actually a W3C open standard. For instance, the BBC micro:bit educational dev environment runs in a web browser and allows Python code to be pushed to the microcontroller straight from the browser.
> This specification was published by the Web Platform Incubator Community Group. It is not a W3C Standard nor is it on the W3C Standards Track.
It’s an experimental spec by Google (observe the affiliation of the three editors: all Google); Mozilla has adopted a negative position on it <https://mozilla.github.io/standards-positions/#webusb>; WebKit has not remarked upon it.
To my knowledge, no browser allows any usage of WebUSB without a prompt.
WebAuthN is different, since it does not provide sites low-level peripheral access – WebAuthN and CTAP have been designed specifically for this environment and go to great lengths to make fingerprinting hard.
As long as you don't actually use an authenticator on a site to store a credential, it won't be able to learn anything about it.
Not sure about this, but I think from JavaScript you can absolutely probe stuff without explicit user consent.
For instance, without accessing any USB device I can try:
  if (!navigator.usb) {
    console.log("learned that browser does not have USB capability");
  } else {
    console.log("learned that browser has USB capability");
    // getDevices() needs no prompt: it resolves with the devices this
    // origin was already granted, which is an empty list for a site the
    // user never gave permission to.
    navigator.usb.getDevices().then((devices) => {
      devices.forEach((device) => {
        console.log(device.productName);
        console.log(device.manufacturerName);
      });
    });
  }
Hahah, so you think. But now you have additional telemetry to show that this wasn't cURL forging a Chrome (or Firefox) user-agent header.
Fingerprinting sounds sophisticated, but it's just collecting bits and pieces into something that (mostly, probabilistically) identifies you. And then tracking you, surveilling you, until you're somewhere where they can identify you.
> this wasn't cURL forging a Chrome (or Firefox) user-agent header.
There must be a million different ways to establish that, though.
I get the general idea, but this particular data point seems highly correlated with just the family of browser, as GP suggests.
It's also very easy to fix – just make your non-WebUSB-supporting browser expose that object, but always behave as if the user had declined that particular prompt.
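Something like this rough sketch would do it (illustrative only; no browser actually ships this shim, and the NotFoundError mimics what Chrome throws when the user dismisses the device chooser):

  // Hypothetical shim: expose navigator.usb, but behave exactly like a
  // browser where the user declines every WebUSB prompt.
  if (!('usb' in navigator)) {
    Object.defineProperty(navigator, 'usb', {
      value: {
        // A user who never granted access has no devices to enumerate.
        getDevices: () => Promise.resolve([]),
        // Chrome rejects with NotFoundError when the chooser is dismissed.
        requestDevice: () =>
          Promise.reject(new DOMException('No device selected.', 'NotFoundError')),
        addEventListener: () => {},
        removeEventListener: () => {},
      },
    });
  }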
That still allows websites to distinguish between WebUSB-aware clients and older browsers. The point being that it would be great if extensions like WebUSB were developed such that nothing about capabilities could be learned without the user's awareness and explicit consent.
Unfortunately, instead, new capabilities are constantly added to browsers, and the interfaces are commonly made available silently as part of a regular software upgrade. Sure, thought is given to security, and the user is prompted just before something horrible is about to happen (camera or mic access).
But don't underestimate the shitload of "niceties" in the grabbag of APIs that in aggregate reveal more or less a supercookie of your browser instance.
Yes, enumerating available capabilities helps fingerprinting. That is not good, and APIs should be designed better.
But there are easier avenues that are harder to mitigate: hashing an image that relies on the browser's rendering of (default) fonts. Highly instance-specific, lots of entropy.
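For illustration, a bare-bones version of that technique looks something like this (simplified; real fingerprinting scripts mix in many more inputs):

  // Render text with the default font stack, then hash the result.
  // Differences in installed fonts, hinting, and antialiasing make the
  // digest highly instance-specific.
  async function canvasFingerprint() {
    const canvas = document.createElement('canvas');
    canvas.width = 300;
    canvas.height = 60;
    const ctx = canvas.getContext('2d');
    ctx.font = '16px sans-serif'; // resolves to whatever the default is
    ctx.fillText('fingerprint me 😀', 10, 30);
    const bytes = new TextEncoder().encode(canvas.toDataURL());
    const digest = await crypto.subtle.digest('SHA-256', bytes);
    return [...new Uint8Array(digest)]
      .map((b) => b.toString(16).padStart(2, '0'))
      .join('');
  }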
> more or less a supercookie of your browser instance.
That's really not what it is though, is it?
These capabilities will be rolled out for all users of a given browser, or even for a given rendering engine, and I'd assume that your browser family is already easily fingerprintable. In other words, they are all highly correlated.
Things like installed fonts, window sizes, your clock drift etc. are a different story. These lower-correlation measurable properties are the real supercookie problem.
It lets you enumerate all the USB crap on the bus.
My desktop has 12 things on the bus. 8 are soldered on to the motherboard, and 4 are plugged in. There are at least 32 choices for each of those things, so that's 5 bits of entropy per device, or about 20 bits just from the four plugged-in devices, ignoring the motherboard.
I mean, if you grant USB access to an untrustworthy web site, it's game over – you can probably just read the serial number of at least one of these devices over USB.
Why? Isn't the web basically the perfect fully virtualized and sandboxed environment with a highly standardized and open API and a sophisticated, accessible UI toolkit, with elaborate development tools built right in, like we always dreamed of? Isn't the web basically the perfect OS?
I don't think so at all. Web-based applications tend to suck, and it seems to me that much of the reason is because the browser is very imperfect as an OS.
It is on the web, and it is interactive, i.e. not just a static blog. The average social media site is clearly a web app, right? What is HN missing? Not looking like it's from the early web 2.0 era?
> The average social media site is clearly a web app, right?
I never considered them as such, no. Those (including HN) are just ordinary websites with some amount of interactivity.
To me, a "web app" is a thing that replicates a normal application in web form. Things like GMail, Office, etc. In other words, they aren't things that uniquely leverage the web, they're things that are using the web as a shortcut to platform independence.
But perhaps the definition has changed, and I need to be much more explicit and specific instead of using the term "web app". I could buy that, but it also means that I don't actually know what a "web app" is anymore.
This seems like an artificially restrictive definition that necessarily excludes anything that you might actually enjoy using. Doesn't it just naturally make sense that the software you enjoy using most is the software that's designed idiomatically around the platform it runs on? If "web app" is defined to mean "software not meant for the web, but shoehorned onto it" then of course all web apps will suck.
HN mostly doesn't have JavaScript. I think vote buttons and the collapsing comments are the only things that are JavaScript. Everything else is HTML and links.
I don't think it makes a functional difference, except for "more" links instead of universal scrolling. Just as you can't tell a static blog from a dynamic blog, you can't tell static HTML from dynamic JavaScript. I would say a "web app" is where you download JavaScript, and the JavaScript builds the page.
>And yet you are posting this on HN, which one could argue is a Web application.
Which has 0 to do with the virtues of the Web as OS and much more to do with catharting the pain and frustration induced by sharing the digital world with people with shockingly bad points of view through acting in kind. A game nobody wins; alas...
Sure, we all complain about Chrome and its outsized influence, but at the end of the day the standards are more open than not and Safari and Firefox mostly work most of the time on most of the pages. That's a stark contrast to, say, .NET vs Cocoa or Android vs Apple app stores.
"Comply or we will break your shit" works better in closed ecosystems. I'll take a little mess over a 30% tax and heavy-handed tempramental moderation any day of the week.
Not fully disagreeing, but the web feels heavier on RAM and other resources than native software. Also, the only programming language being JavaScript, which is just starting to sort of change with WebAssembly, is far from ideal. Some other stuff, like storage, is also comparatively recent, AFAIK.
Not to mention the sandboxing. I'm glad a lot of the "apps" I use are just "webapps", so that I can trust them less. A user process on a desktop OS is given an insane amount of permissions by default, though this is being fixed, slowly
That's a fashionable observation; I think it's a kind of illness. The idea that you can take over anyone's computer, and make it do things the user doesn't want done, and doesn't know are being done, makes some web developers' heads swim; they can turn the whole internet into a sort of distributed supercomputer for their own private use. WHATWG bears a lot of responsibility for this.
A real operating system doesn't download and execute code from unverified remote locations. Nearly every website nowadays tries to load and execute in the browser code from any number of remote locations, without the user's approval or even knowledge. By default, I only allow 1st-party JS, which I consider to be an extremely liberal policy.
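(For what it's worth, a site that wanted to hold itself to that same discipline could ship a Content-Security-Policy header restricting script execution to its own origin:

  Content-Security-Policy: script-src 'self'

The problem, of course, is that the sites loading third-party code have no incentive to do so.)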
> Sorry, but that is pretty much the standard way to install apps on windows.
Maybe now, but when I was on XP and, later, Windows 7, you only had a handful of programs you would use (I have all of them on a CD, and later on an HDD): things like VLC, Notepad++, Code::Blocks, Office, and others. It required trust, but these programs did not, AFAIK, phone home every second. That's what we lost: trust in our computers and the software running on them. And now it is a hostile relationship between customers and software developers. I wasn't concerned about VLC tracking the files I opened with it, or Office scanning my documents.
> That the browsers execute untrusted code all the time and still are secure
But they aren't secure. Most of that untrusted code is doing stuff that's of no value to the user, and is positively against the express interests of many users.
> The browser is essentially the operating system for most computing today so access to peripherals is reasonable
I suppose. Not for me, though, as I don't (and won't) use web apps or complex websites. I sorely wish there was a browser that simply didn't have that capability.
I guess I don’t know how you got from A to B there. I love the idea of kids being able to experiment with serial ports (though I’m not sure what you mean in that context, WebUSB?) in a safe, locked down programming environment.
Ideally it wouldn’t mean random web sites request motion data from you but I really don’t see this as ruining the internet.
WebSerial lets Home Assistant users flash their ESPHome devices without downloading or compiling any software. WebUSB let Google update my Stadia Controller to a normal controller after they shut down their cloud services. It also offers firmware updates for some Pixel phones.
These are all quite useful tools. I've never used WebMIDI but it's older than the other Web* APIs. When you have a use case for them, the APIs are a lot better than figuring out a cross platform serial port protocol (or, more realistically, writing a Windows application and letting the Linux/macOS/Android users figure it out themselves).
WebSerial/USB/Bluetooth doesn't do anything unless you permit it to. If a website used this feature, it's because you clicked "okay" when mapquest.com asked to use your serial port.
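For the curious, the whole flow with the Web Serial API looks roughly like this (Chromium-only; the button id is made up, and requestPort() both requires a user gesture and pops the browser's permission chooser):

  // Nothing runs without a click, and the page only ever sees the one
  // port the user explicitly picked in the chooser.
  document.querySelector('#flash').addEventListener('click', async () => {
    const port = await navigator.serial.requestPort(); // permission prompt
    await port.open({ baudRate: 115200 });
    const writer = port.writable.getWriter();
    await writer.write(new TextEncoder().encode('hello device\r\n')); // example bytes
    writer.releaseLock();
    await port.close();
  });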
My students were able to program Arduino devices from their Chromebooks because of this tech. That would have been inaccessible to them if they had to use a "real" OS, which the school did not provide.
The existence of the Web Serial API is a godsend for working with many embedded devices. The ability to flash a device directly from the web instead of futzing around with a commandline tool feels like magic.
It is a shame, because the overlap between people who use Firefox as their main browser, and people who tinker with microcontrollers is likely pretty large.
Serial ports are everywhere and these APIs can provide quite a lot of fingerprinting capabilities.
I understand why Mozilla is hesitant. "Why does a browser need to give access to a serial port" is a good question. Certain web tools have definitely proven useful (especially when using an Android device to flash microcontrollers!) but if you asked the average internet user 20 years ago if their browser should provide websites with access to their serial ports, you'd get laughed at.
I hope Mozilla reconsiders their positions on this, because this is just one of those reasons I keep Chrome installed. I need it very rarely, but when I do, it's often because Mozilla made a choice I disagreed with (like their decision to remove anything resembling PWAs on desktop Firefox, which is why I have a bunch of Chrome shortcuts in my application launcher now).
Web browsers used to be about websites, not applications. That's my point. Even after Gmail popularized XMLHttpRequest, it took years for in-browser HTML applications to become a thing people would just use.
> What if you included "Only if you allow it"?
You'd probably hear something like "IE/Opera is bloated enough already; I just want my downloads to finish faster."
My thoughts too. A bug existed, the maintainer looked at a contribution and, as the maintainer, picked the best solution, which happened to be his own. That's how the kernel community works. The community picks what's best for the kernel, not what looks good on your resume.
Someone spent 5 days debugging, giving explicit details of the problem, and proposing a solution. They should be given some credit, even if the exact lines of code that fixed the problem weren't the ones proposed. The maintainer could've just given the feedback required for the contributor to submit the desired patch, or some metadata on the commit could've offered attribution to the contributor. From the article, this was 90% the work of the contributor and 10% (maybe?) the work of the maintainer. Considering that being mentioned as a contributor to the kernel is a Big Thing, completely neglecting to give any mention is in poor taste.
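For reference, the kernel's own submitting-patches documentation defines commit trailers for exactly this situation, so crediting the contributor would have cost a line or two in the commit message (the name and address here are placeholders):

  Reported-by: Some Contributor <contributor@example.com>
  Suggested-by: Some Contributor <contributor@example.com>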
Surely if the FaceID module provides a key to decrypt the encrypted contents of the phone, then a swapped-in module might be able to verify a face but not provide the correct key, and the phone remains locked? If you wiped the phone before removing the module, then of course no key is required.
Having a removable, replaceable module that just says Yes or No would seem to be a very poor design. Also, in that case Apple could presumably authorise a new module, meaning they would retain the capability to break into any phone (which I understood they did not want).
That wouldn’t prevent the case of “the module is swapped for one that unlocks no matter what, and upon noticing the phone isn’t unlocking, the owner resets and sets up Face ID again” right?
Upon noticing the phone isn't unlocking and reading the big warning message that the Face ID module was replaced, which doesn't seem like a big threat vector to me.
I think it is because it’s not “the module is swapped for one that unlocks no matter what, and upon noticing the phone isn’t unlocking, the OWNER resets and sets up Face ID again”, but “the module is swapped for one that unlocks no matter what, and upon noticing the phone isn’t unlocking, the THIEF resets and lets the owner set up Face ID again”.
There's also the case of “Steal two phones, swap a few parts, reset the phones, and sell them second-hand”. Both phones will have 100% genuine parts.
> “the module is swapped for one that unlocks no matter what, and upon noticing the phone isn’t unlocking, the THIEF resets and lets the owner set up Face ID again”
The thief wouldn't have been able to reset Face ID, would they? Also it would make sense to warn a second time when you go to set up Face ID again.
If they reset the entire phone, uh, they could have handed you a different phone entirely. I don't see how part swapping is the problem here.
> There's also the case of “Steal two phones, swap a few parts, reset the phones, and sell them second-hand”. Both phones will have 100% genuine parts.
What role does the part swap have in this scenario? What stops me from simplifying it to "Steal two phones, reset the phones, and sell them second-hand."? Because if that simplification is valid, then this scenario has nothing to do with repairability.
Apple could still use keys to validate that the module is genuine. Then you just need to trust Apple not to release compromised modules. They just need to stop pairing individual modules to the phone.
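To make the distinction concrete, here is a toy sketch (Node.js, purely illustrative; it does not reflect Apple's actual implementation) of model-level authenticity versus unit-level pairing:

  // Toy model: the vendor signs each genuine module's public key at the
  // factory, so genuineness can be checked without per-unit pairing.
  const crypto = require('crypto');

  const vendor = crypto.generateKeyPairSync('ed25519');
  const module1 = crypto.generateKeyPairSync('ed25519');
  const module1Pub = module1.publicKey.export({ type: 'spki', format: 'der' });
  const module1Cert = crypto.sign(null, module1Pub, vendor.privateKey);

  // Model-level check: "is this ANY genuine module?" Survives part swaps.
  function isGenuine(modulePub, cert) {
    return crypto.verify(null, modulePub, vendor.publicKey, cert);
  }

  // Unit-level pairing: "is this THE module this phone shipped with?"
  // Breaks on any swap, even between two genuine phones.
  const pairedFp = crypto.createHash('sha256').update(module1Pub).digest('hex');
  function isPairedUnit(modulePub) {
    return crypto.createHash('sha256').update(modulePub).digest('hex') === pairedFp;
  }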
I was under the impression that it was all stored in the iPhone's secure element, which is part of the main processor? But they're paired cryptographically - to ensure the data isn't faked. And I would think there is some calibration data. Maybe that's wrong though - are there any docs you can link?
>Can’t swap it else anyone can unlock your phone with a swapped FaceID module.
I think some very highly paid engineer at Apple could figure out this simple solution: "If the FaceID or fingerprint reader is compromised, you fall back to the password; there should always be a password/PIN for special cases."
Just in case those engineers could not come up with such ideas: Apple (and others), you can use my idea for free; I will donate it to you for the environment's sake.
There is: Take it to Apple and pay to have it fixed by them ;) I dunno if Touch ID is on the list of things they let you do yourself these days, but if so you "only" have to use their kit, which also lets you verify the parts IIUC
Apple tends to overcharge for repairs, and even if they didn't, it's better to have competition. The kit isn't really practical for most people or repair shops; it's probably mostly a PR stunt.
OK, some people downvoted this, so let me explain:
Overcharging: Apple tends to replace whole assemblies rather than individual parts, and doesn't do board-level repairs or anything like that. Apple staff are generally just following a procedure and aren't allowed to/aren't trained to solve problems in the best way. Here's an example of them charging for a whole motherboard replacement when the issue was a bent pin: https://www.youtube.com/watch?v=o2_SZ4tfLns
Some people might be OK with this, but not everyone; competition is important!
About the self/independent repair programs: The self-repair program lets you order one part at a time, and you have to have the device to do it. Realistically, almost no one will do it themselves; they'll use a repair shop. The program is impractical for repair shops, because they can't stock parts in advance. The other option is the independent repair program, which effectively turns you into a shipping centre for Apple; it bars you from doing anything but the most basic repairs without sending things off to Apple, and they will do random inspections of your store and fine you if you're actually offering good service, like board-level repair or using cheaper aftermarket parts. So it's likely that both are mostly just PR stunts to get ahead of regulation while not making a significant change to their business.
I was being sarcastic, dude. It is clear that Apple is anti independent repair. I should be able to sell my old broken phone for parts; those highly paid engineers would be able to figure it out if management gave them the task.
Not an extreme case. There are times when I can barely get above 100 kbps in spotty areas in some buildings. Loading the Gmail web page is painful. The "HTML" button appears immediately at the bottom while the website is struggling to load, and clicking it will load the emails. Very helpful, if you ask me.
Out of those million email clients, probably fewer than 50 are regularly maintained with good support. And I am probably exaggerating with 50; it could be single digits.