
HDR when it works properly is nice, but nearly all HDR LCD monitors are so bad, they're basically a scam.

The high-end LCD monitors (with full-array local dimming) barely make any difference, while you'll get a lot of downsides from bad HDR software implementations that struggle to get the correct brightness/gamma and saturation.

IMHO HDR is only worth viewing on OLED screens, and requires a dimly lit environment. Otherwise either the hardware is not capable enough, or the content is mastered for wrong brightness levels, and the software trying to fix that makes it look even worse.


Most "HDR" monitors are junk that can't display HDR. The HDR formats/signals are designed for brightness levels and viewing conditions that nobody uses.

The end result is complete chaos. Every piece of the pipeline does something wrong, and then the software tries to compensate by emitting doubly wrong data, without even having reliable information about what it needs to compensate for.

https://docs.google.com/document/d/1A__vvTDKXt4qcuCcSN-vLzcQ...


What we really need are standards that everybody follows. The reason normal displays work so well is that everyone settled on sRGB, and as long as a display gets close to that, say 95% of sRGB, everyone except maybe a few graphic designers will have an equivalent experience.

But HDR is a minefield of different display qualities, color spaces, and standards. It's no wonder that nobody gets it right and everyone feels confused.

HDR on a display that has peak brightness of 2000 nits will look completely different than a display with 800 nits, and they both get to claim they are HDR.

We should have a standard equivalent to color spaces. Set, say, 2000 nits as 100% of HDR. Then a 2000-nit display gets to claim it's 100% HDR, an 800-nit display gets to claim 40% HDR, etc. A 2500-nit display could even use 125% HDR in its marketing.

It's still not perfect - some displays (OLED) can only show peak brightness over a portion of the screen. But it would be an improvement.


The DisplayHDR standard is supposed to be that, but they've ruined its reputation by allowing HDR400 to exist when HDR1000 should have been the minimum.

Besides, HDR quality is more complex than just max nits, because it depends on viewing conditions and black levels (and everyone cheats with their contrast metrics).

OLEDs can peak at 600 nits and look awesome — in a pitch black room. LCD monitors could boost to 2000 nits and display white on grey.

We have sRGB kinda working for color primaries and gamma, but it's not the real sRGB at 80 nits. It ended up being relative instead of absolute.

A lot of the mess is caused by the need to adapt content mastered for a pitch-black cinema at 2000 nits to 800-1000 nits in daylight. That takes very careful processing to preserve highlights and saturation, but software can't rely on the display doing it properly, and doing it in software sends a false signal and risks the display correcting it twice.
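To make "careful processing" concrete: the core of that adaptation is a highlight rolloff along these lines (a minimal sketch with made-up constants, saturation handling omitted; not what any particular standard or display actually does):

    /// Map luminance (in nits) of content mastered for a `src_peak`-nit display
    /// onto a display that only reaches `dst_peak` nits: leave midtones alone
    /// and compress only the top of the range, instead of hard-clipping highlights.
    fn roll_off_highlights(nits: f32, src_peak: f32, dst_peak: f32) -> f32 {
        // Start compressing at an arbitrary "knee" below the display's peak.
        let knee = dst_peak * 0.75;
        if nits <= knee {
            return nits; // midtones pass through unchanged
        }
        // Squeeze the remaining (src_peak - knee) of mastered range into the
        // (dst_peak - knee) the display actually has, easing out toward the peak.
        let x = ((nits - knee) / (src_peak - knee)).min(1.0);
        knee + (dst_peak - knee) * (1.0 - (1.0 - x) * (1.0 - x))
    }

    fn main() {
        // Content mastered for 2000 nits, shown on an 800-nit monitor:
        for nits in [100.0, 500.0, 1000.0, 2000.0] {
            println!("{nits} -> {:.0} nits", roll_off_highlights(nits, 2000.0, 800.0));
        }
    }

If both the software and the display apply a curve like this, you get exactly the double-correction problem: highlights end up compressed twice.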


It is! I think that's not an accident.

CPUs evolved to execute C-like code quickly. They couldn't dramatically change the way C interfaces with the CPU, so they had to change the hidden internals instead.

For example, CPUs didn't have an option to hide DRAM latency with a SIMT architecture, so they went for complex opaque branch prediction and speculative execution instead.

The way C is built and deployed in practice didn't leave room for recompiling code for a specific CPU, so explicit scheduling like VLIW failed. Instead there's implicit magic that works with existing binaries.

When there were enough transistors to have more ALUs, more registers, more of everything in parallel, C couldn't target that. So CPUs got increasingly complex OoO execution, hidden register banks, and magic handling of the stack as registers. Contrast this with current GPUs, which have register-like storage explicitly divided between threads (sort of like the 6502's zero page – something that C couldn't target well either!)


So that you learn that loaning is for giving temporary shared^exclusive access within a statically-known scope, and not for storing data.

Trying to construct permanent data structures using non-owning references is a very common novice mistake in Rust. It's similar to how users coming from GC languages may expect pointers to local variables to stay valid forever, even after leaving the scope/function.

Just like in C you need to know when malloc is necessary, in Rust you need to know when self-contained/owning types are necessary.
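A minimal sketch of the difference (type and field names are hypothetical, just for illustration):

    // A registry that tries to *store* borrowed strings: the struct is tied to
    // the lifetime 'a of whatever it borrows from, so it can't outlive the
    // data it was built from, and can't be stashed away "permanently".
    struct BorrowedRegistry<'a> {
        names: Vec<&'a str>,
    }

    // The self-contained version owns its data, so it can be stored anywhere
    // and live as long as needed (the Rust analogue of knowing when to malloc).
    struct OwnedRegistry {
        names: Vec<String>,
    }

    fn build_registry(input: &str) -> OwnedRegistry {
        OwnedRegistry {
            // `.to_owned()` copies each name out of `input` instead of borrowing it.
            names: input.split(',').map(|s| s.trim().to_owned()).collect(),
        }
    }

    fn main() {
        let registry = {
            // `line` is dropped at the end of this block...
            let line = String::from("alice, bob, carol");
            // ...so only the owning version can escape it.
            build_registry(&line)
        };
        assert_eq!(registry.names, ["alice", "bob", "carol"]);
    }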


The biggest thing I’ve run into where I really want self-referential types is for work that I want to perform once and then cache, while still needing access to the original data.

An example: parsing a cookie header to get cookie names and values.

In that case, I settled on storing indexes indicating the ranges of each key and value instead of string slices, but it’s obviously a bit more error prone and hard to read. Benchmarking showed this to be almost twice as fast as cloning the values out into owned strings, so it was worth it, given it is in a hot path.
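Roughly the shape of that workaround, with hypothetical names (real cookie parsing handles more edge cases):

    use std::ops::Range;

    // Instead of borrowing `&str` slices out of the header (which would make
    // the cached struct self-referential), store byte ranges into the header
    // string that the struct itself owns.
    struct ParsedCookies {
        header: String,
        // (name, value) ranges into `header`.
        entries: Vec<(Range<usize>, Range<usize>)>,
    }

    impl ParsedCookies {
        fn parse(header: String) -> Self {
            let mut entries = Vec::new();
            let mut offset = 0;
            for pair in header.split("; ") {
                if let Some(eq) = pair.find('=') {
                    entries.push((offset..offset + eq, offset + eq + 1..offset + pair.len()));
                }
                offset += pair.len() + 2; // skip the "; " separator
            }
            ParsedCookies { header, entries }
        }

        // Accessors turn the ranges back into borrowed slices on demand.
        fn get(&self, name: &str) -> Option<&str> {
            self.entries
                .iter()
                .find(|(n, _)| &self.header[n.clone()] == name)
                .map(|(_, v)| &self.header[v.clone()])
        }
    }

    fn main() {
        let cookies = ParsedCookies::parse("theme=dark; session=abc123".to_string());
        assert_eq!(cookies.get("session"), Some("abc123"));
    }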

I do wish it were easier though. I know there are ways around this with Pin, but it’s very confusing IMO, and still you have to work with pointers rather than just having a &str.


Note that some users of GC languages that support stack allocation are used to it being a compiler error to try to keep such a pointer/reference.

D example, https://godbolt.org/z/bbfbeb19a

> Error: returning `& my_value` escapes a reference to local variable `my_value`

C# example, https://godbolt.org/z/Y8MfYMMrT

> error CS8168: Cannot return local 'num' by reference because it is not a ref local
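Rust enforces the same thing through the borrow checker; a minimal sketch:

    // Returning a reference to a local is rejected at compile time,
    // just like the D and C# examples above.
    fn escape<'a>() -> &'a i32 {
        let my_value = 42;
        &my_value // error: cannot return reference to local variable `my_value`
    }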


The GitHub SSO is annoying. I can't even view public issues if I'm logged in to GitHub, but haven't recently re-authenticated with SSO.

GitHub also has a lot of features and authentication scopes tied to the whole org, which is pretty risky for an org as large as Mozilla.


The barriers may keep out low effort submissions*, but they also keep out contributors whose time is too valuable to waste on installing and configuring a bespoke setup based on some possibly outdated wiki.

* contributors need to start somewhere, so even broken PRs can lead to having a valuable contributor if you're able to guide them.


My non-techie relatives can't tell the difference between the local device password/passphrase and the iCloud/Apple ID password, so they'll enter them all until something works (I don't blame them, the UIs for these are unclear and inconsistent).

Apple used to make fun of Vista's UAC, but they've ended up with the same patchwork of sudden prompts, and even weaker UI.


Yeah, to be perfectly honest, I understand. I think TCC is meant to be the primary consent system, but there are others (such as the Authorization system, and the Service Management framework).


The UK doesn't protect the term "psychotherapy", but there's a distinction between the services of counsellors and those of (regulated) psychologists.

For counselling, people are encouraged to choose counsellors accredited by professional orgs like BACP.


"Psychologist" is not a protected title and anyone can use it. "Clinical psychologist" is a protected title, and one that requires an extremely high level of training and very strict professional standards. I imagine that the overwhelming majority of the population are completely oblivious to this distinction.

The BACP's standards really aren't very high, as you can qualify for membership after a one-year part-time course and a few weeks of work experience. Their disciplinary procedures are, in my opinion, almost entirely ineffectual. They undertake no meaningful monitoring of accredited members, relying solely on complaints from members of the public. Out of tens of thousands of registered members, only a single-digit number are subject to disciplinary action every year. The findings of the few disciplinary hearings they do actually conduct suggest to me that they are perfectly happy to allow lazy, feckless and incompetent practitioners to remain on their register, with only a perfunctory slap on the wrist.

BACP membership is of course entirely voluntary and in no way necessary in order to practice as a counsellor or psychotherapist.

https://www.hcpc-uk.org/news-and-events/blog/2023/understand...

https://www.bacp.co.uk/about-us/protecting-the-public/profes...


Rust generates absurd amounts of debug info, so the default debug builds are much much larger.

Zero-cost abstractions don't have zero-cost debug info. In fact, all of the optimized-away stuff is intentionally preserved with full fidelity in the debug info.
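For what it's worth, the amount of debug info is configurable per profile in Cargo.toml if the size hurts (a sketch; exact option support depends on the Cargo version):

    [profile.dev]
    debug = "line-tables-only"  # keep only file/line info for backtraces
    # or: debug = false         # drop debug info from debug builds entirely

    [profile.release]
    strip = "debuginfo"         # strip debug info from release binaries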


It's cool that it has/claims support for DC fast charging. All the custom conversions I've seen are AC only.


The child comments are a bit all over the place, but I can clarify.

At present, the Zombieverter supports:

- Chademo
- CCS by interfacing with the BMW i3 LIM
- CCS by interfacing with the open source FOCCCI CCS controller

The FOCCCI CCS controller is an associated project. See https://openinverter.org/wiki/Foccci

FOCCCI is the newest kid on the block, but it has been successfully integrated into several conversions now.


Very cool. Sometimes, DC fast charging is more accessible than AC.


It’s CHAdeMO - much simpler - and not CCS2


I'm admittedly not educated about the difference between CCS and CCS2, but it does say it's not just CHAdeMO (https://openinverter.org/wiki/ZombieVerter_VCU#:~:text=ccs%2...), and further down they cite a BMW i3, which I had, and it fast charged to my satisfaction.


Included are methods for CCS.

