Apple is extremely dumb about power management and power supplies. That's because they made a show of innovating there way back at the start and want to pretend they still have the expertise.
But I have had 2 iMac power supplies die on me, a grounding problem on an MBP, and a major annoyance with power noise leaking from a Mac Mini (it makes for some nasty audio output, which is hilarious when you consider they supposedly target creatives, who clearly need good audio output).
You always find people raving about Apple's engineering prowess, but in my experience it's mostly a smoke show: they make things look good, miniaturise/oversimplify beyond what is reasonable, and you often end up with major hardware flaws that are just a pain to deal with.
They have always managed good performance and a premium-feeling package, but I don't think their engineering tradeoffs are actually very good most of the time.
As far as I can tell, the new Mac Mini design still has grounding issues, and you will get humming, which is beyond stupid for a product of that caliber. At this point I don't care about having the power supply inside the damn box; use a brick if you must to prevent that sort of problem. This is particularly infuriating since they made the iMac PSU external, which is absurd for an AiO.
But common sense left Apple a long time ago, and now they just chase benchmark numbers and fashionable UIs above everything else.
> Not my experience. I've used LLMs to write highly specific scientific/niche code and they did great, but obviously I had to feed them the right context (compiled from various websites and books converted to markdown, in my case) to understand the problem well enough. That adds additional work on my part, but the net productivity is still very much positive because it's a one-time setup cost.
I've been genuinely surprised how well GPT-5 does with Rust! I've done some hairy stuff with Tokio/Arena/SIMD that I thought I'd have to hand-hold it through, and it got it.
Yeah, it has been really good in my experience. I've done some niche WASM stuff with custom memory layouts and parallelism and it did great there too, probably better than I could've done without spending several hours reading up on stuff.
It's pretty good at Rust, but it doesn't understand locking, at least when I tried it. It just put a lock on everything and didn't take care to release the locks as soon as possible, which severely limited the scalability of the system it produced.
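For what it's worth, the usual fix is just scoping the guard so it drops before any slow work starts. A minimal sketch, with a made-up counters map and post-processing step:

    use std::collections::HashMap;
    use std::sync::Mutex;

    // Hypothetical shared counter map, just for illustration.
    fn bump(counters: &Mutex<HashMap<String, u64>>, key: &str) -> u64 {
        // Scope the guard so the lock is released before the slow work below.
        let count = {
            let mut map = counters.lock().unwrap();
            let entry = map.entry(key.to_string()).or_insert(0);
            *entry += 1;
            *entry
        }; // guard dropped here, lock released

        // Anything slow (I/O, logging, computation) happens outside the lock.
        expensive_post_processing(count);
        count
    }

    // Placeholder for work that must not run while holding the lock.
    fn expensive_post_processing(_count: u64) {}

    fn main() {
        let counters = Mutex::new(HashMap::new());
        println!("{}", bump(&counters, "hits"));
    }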
But I guess it passed the tests it wrote, so win? Though it didn't seem to understand why a test it wrote, where the client used TLS and the server didn't, wouldn't pass, and it required a lot of hand-holding along the way.
I've experienced similar things, but my conclusion has usually been that the model is not receiving enough context in such cases. I don't know your specific example, but in general it may not be incorrect to put an Arc/lock on many things at once (or to use Arc instead of Rc, etc.) if your future plans are to parallelize several parts of your codebase. The model just doesn't know what your future plans are, and it errs on the side of "overengineering" solutions for all kinds of future possibilities. I've found that this is a bias these models tend to have: much of their code is overengineered for features I will never need, and I have to tell them to simplify. But that's expected. How would the model know what I do and don't need in the future without me giving it all the right context?
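To give a concrete (hypothetical) sketch of the kind of simplification I end up asking for: if nothing actually crosses a thread boundary, Rc<RefCell<...>> does the job without the atomics and locking that Arc<Mutex<...>> drags in:

    use std::cell::RefCell;
    use std::rc::Rc;
    use std::sync::{Arc, Mutex};

    // What the model tends to write "just in case" it gets parallelized:
    fn overengineered() -> Arc<Mutex<Vec<i32>>> {
        Arc::new(Mutex::new(vec![1, 2, 3]))
    }

    // What a single-threaded design actually needs: no atomics, no locking.
    fn simpler() -> Rc<RefCell<Vec<i32>>> {
        Rc::new(RefCell::new(vec![1, 2, 3]))
    }

    fn main() {
        let _maybe_someday = overengineered();
        let shared = simpler();
        shared.borrow_mut().push(4); // runtime-checked borrow, no lock overhead
        println!("{:?}", shared.borrow());
    }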
The same thing is true for tests. I've found their tests to be massively overengineered, but that's easily fixed by telling them to adopt the testing style of the rest of the codebase.
Rust has been an outlier in my experience as well. I have a pet theory that it's because Rust code that gets pushed to GitHub generally compiles, and if it compiles, it generally works.
Yeah, I think Nvidia were hostile to Linux when they saw no value in it. Now it's where the machine learning is: it's the OS powering the whole AI hype train. And then there's the Steam Deck making Linux gaming not a complete write-off anymore.
> Excel single-handedly redeems Microsoft from being a pure drain on human existence
Debatable. Excel can't even open CSV files properly. You need to run the import wizard, but loads of people don't do this: they see a file on their desktop and double-click it. Why can't double-clicking a CSV file just open the import wizard!? (Because they want people to share xlsx files as a data format.)
I assume most Americans don't run into the CSV hell that other countries do. In my current country, whether a CSV opens as a comma-separated or semicolon-separated document depends on whether the OS is set to use a , or a . for decimal numbers. It's absolutely annoying.
Right, but the import wizard can fix things. They just don't make the double-click go through the import wizard, and people use 'open' or double-click their files. LibreOffice Calc opens the import wizard when you open a CSV, and it's fine.
For the life of me I cannot comprehend why they cannot let us choose the decimal separator independently from the locale. Or for fuck’s sake, just be smart about it. My desktop is for boring administrative tasks, of course I want it in my language. No, I don’t want to manually change settings in Word for every fucking document I create because ~none of them will be in English. But then why do I have to search-and-replace . with , or click 12 times through an inane bullshit wizard just to paste some data in Excel?
Respecting regional settings is so inconsistent among Office applications. The desktop ones usually get it, but online is a crapshoot. Whenever there's a date like 3/4/25 I get to play the fun guessing game of whether that's March or April.
For Project Online, the most reliable way I found to fix it was to manually edit the URL to replace en-US with en-AU, then bookmark that.
Whether your OS uses a , or a . for decimal numbers changes how Excel parses a CSV file. Americans use a . for decimal numbers, so Excel parses the file as comma-separated. Other countries use a , for decimal numbers, so Excel parses the file as semicolon-separated and everything ends up in a single column.
To make matters worse, employees will randomly have their OS set to US or GB locales, so if you distribute a CSV it will work for some employees but not for others.
No. Excel changes the SEPARATOR used when parsing depending on the locale settings. This means a CSV generated or saved under a locale with . as the decimal separator will not open properly under one with , and vice versa. This is an Excel issue, as it doesn't even try to detect, or ask, which separator to use. Hence why the comment above said you need to use the import wizard and not double-click.
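If you're generating or consuming these files programmatically, the safest route is to not rely on separator guessing at all and declare the delimiter explicitly. A rough sketch with Rust's csv crate (the file path is made up):

    use csv::ReaderBuilder;
    use std::error::Error;

    fn main() -> Result<(), Box<dyn Error>> {
        // Declare the delimiter explicitly instead of letting anything
        // guess it from locale settings. "data.csv" is a made-up path.
        let mut rdr = ReaderBuilder::new()
            .delimiter(b';') // this file really uses semicolons
            .from_path("data.csv")?;

        for result in rdr.records() {
            let record = result?;
            println!("{:?}", record);
        }
        Ok(())
    }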
The syntax that MS Office uses to read/write a CSV is defined by the Regional Settings of your PC.
Open the Control Panel's regional settings and click the "Additional settings..." button at the bottom, or launch the dialog directly:
control.exe intl.cpl
If you don't recognize any of these problems, then all the people and systems you work with use "." as the decimal separator and "," as the list separator, and you are spared the hell of MS Office being unable to overrule these OS settings when handling a CSV.
Honestly, as this was always an obvious issue, I usually just used ; and never got a complaint. Obviously both . and , are used way too often, and not only for numbers. I am surprised this is enough of a problem (in 2025) that people emotionally discuss it.
> Honestly, as this was always an obvious issue, I usually just used ; and never got a complaint.
Thing is, it's not about what you used; you can't prevent this from happening when your CSV needs to work for people in other countries. Whatever configuration you used that never got a complaint: if your recipients also used Excel to work with those documents, they probably had the same regional settings on Windows for the list/thousands/decimal separators.
If you use ";" as the separator, Excel in e.g. the UK, US, Japan, China, or Korea will not be able to open your CSV correctly.
But even better: if you created the CSV under a French or Swedish regional setting, the thousands separator will be a whitespace ("1 000" instead of "1,000" or "1.000"), so Excel in e.g. Italy will not detect those numbers properly.
> I am surprised this is enough of a problem (in 2025) that people emotionally discuss it.
It is an (intentional) weakness of MS Office for those who work in an international environment, because Excel associates itself with .csv files only to hinder the experience: it is neither able to detect them properly nor to guide its users through a process for handling them properly.
CSV already solved this problem with quoting. Maybe not the most convenient solution for some users, but that's no excuse for Excel's behavior of making up a different format depending on the locale.
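For reference, that's the RFC 4180 behavior: a field containing the delimiter just gets wrapped in quotes. A quick sketch with Rust's csv crate, which applies this quoting automatically:

    use csv::Writer;
    use std::error::Error;

    fn main() -> Result<(), Box<dyn Error>> {
        // Default quoting is "only when necessary" (RFC 4180 style).
        let mut wtr = Writer::from_writer(std::io::stdout());
        wtr.write_record(&["name", "value"])?;
        // The second field contains the delimiter, so it gets quoted:
        wtr.write_record(&["pi-ish", "3,14"])?;
        wtr.flush()?;
        // Output:
        //   name,value
        //   pi-ish,"3,14"
        Ok(())
    }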
Excel really doesn't care what users think. I mean, in biology we've already had to change the names of genes to accommodate Excel's auto-date-conversion routines. So why would it care about having globally consistent CSV formats?
OMG, we had a workflow where less-techy folks were supposed to edit a CSV, then check it in to GitHub, which would kick off a whole process automatically for them. I kid you not: anyone who edited the CSV in Excel would eff the whole file up every single time! They just needed a text editor, which we told them to use, and the changes were literally simple, either editing an existing entry or adding a new one. Nope, these college-educated "IT" workers could not handle it! We ended up having to scrap the entire automation workflow because the employees were simply too dumb to use a text editor and GitHub.
Maybe I’m just not understanding the nuances of what you were working on, but is it possible that there was something wrong with the solution if literally every person was screwing it up?
CSV is data only. Excel handles way more than that. XLSX is the preferred file format because it's compressed XML that can hold all kinds of things.
Also, CSVs seem to open just fine in my Excel. If a file isn't formatted with a standard delimiter or doesn't handle quoted strings the proper way, sure, maybe the data wizard is needed.
Excel is terrible in a lot of ways, but CSV is something it handles as well as anything else, in my experience.