Hacker News | notatoad's comments

given the vagueness of the available information, i'm guessing they haven't actually defined its capabilities yet.

They can accept that building a smartphone is doomed to fail, and they want to build some hardware, so they're experimenting with all the "not a smartphone" form factors they can think of to see what sticks.


being the tech industry's conduit to the US president pays well.

I understand that sometimes the HN titles get edited to be less descriptive and more generic in order to match the actual article title.

What’s the logic with changing the title here from the actual article title it was originally submitted with “AV1 — Now Powering 30% of Netflix Streaming” to the generic and not at all representative title it currently has “AV1: a modern open codec”? That is neither the article title nor representative of the article content.


OK guys, my screwup.

We generally try to remove numbers from titles, because numbers tend to make a title more baity than it would otherwise be, and quite often (e.g., when reporting benchmark test results) a number is cherry-picked or dialed up for maximum baitiness. In this case the number isn't exaggerated, but any number tends to grab the eye more than words, so it's just our convention to remove numbers from titles where we can.

The thing with this title is that the number isn't primarily what the article is about, and in fact it under-sells what the article really is, which is a quite-interesting narrative of Netflix's journey from H.264/AVC, to the initial adoption of AV1 on Android in 2020, to where it is now: 30% adoption across the board.

When we assess that an article's original title is baity or misleading, we try to find a subtitle or a verbatim sentence in the article that is sufficiently representative of the content.

The title I chose is a subtitle, but I didn't take enough care to ensure it was adequately representative. I've now chosen a different subtitle which I do think is the most accurate representation of what the whole article is about.


Though in the original title AV1 could be anything if you don't know it's a codec. How about:

"AV1 open video codec now powers 30% of Netflix viewing, adds HDR10+ and film grain synthesis"


AV1 is fine as-is. Plenty of technical titles on HN would need to be googled if you didn't know them. Even in yours, HDR10+ "could be anything if you don't know it". Play this game if you want, but it's unwinnable. The only people who care about AV1 already know what it is.

Well, I'm interested in AV1 as a videographer but hadn't heard of it before. Without 'codec' in the title I would have thought it was networking related.

Re: HDR - not the same thing. HDR has been around for decades and every TV in every electronics store blasts you with HDR10 demos. It's well known. AV1 is extremely niche and deserves 2 words to describe it.


AV1 has been around for a decade (well, it was released 7 years ago but the Alliance for Open Media was formed a decade ago).

It's fine that you haven't heard of it before (you're one of today's lucky 10,000!) but it really isn't that niche. YouTube and Netflix (from TFA) also started switching to AV1 several years ago, so I would expect it to have similar name recognition to VP9 or WebM at this point. My only interaction with video codecs is having to futz around with ffmpeg to get stuff to play on my TV, and I heard about AV1 a year or two before it was published.


I'm old (50) and have heard AV1 before. My modern TV didn't say HDR or HDR10 (it did say 4k). Agree that AV1 should include "codec".

One word, or acronym, just isn't enough to describe anything in this modern world.


this is the reason articles exist, and contain more information than the headline does.

you might not know what AV1 is, and that's fine, but the headline doesn't need to contain all the background information it is possible to know about a topic. if you need clarification, click the link.


> Though in the original title AV1 could be anything if you don't know it's a codec.

I'm not trying to be elitist, but this is "Hacker News", not CNN or BBC. It should be safe to assume some level of computer literacy.


Knowledge of all available codecs is certainly not the same tier as basic computer literacy. I agree it doesn't need to be dumbed down to the general user, but we also shouldn't assume everyone here knows every technical abbreviation.

The article barely mentioned “open”, and certainly gave no insight as to what “open” actually means wrt AV1.

For me that’s a FU moment that reminds me ‘TF am I doing here?’ I genuinely see this resource as a censoring-plus-advertising platform (both for YC, obviously), where there are generic things, but also things someone doesn’t want you to read or know. The titles are constantly being changed to gibberish like right here, and adequate comments or posts are being killed, yet the absolutely irrelevant or offensive things can stay untouched. Etc.

Amen. The mania for obscurity in titles here is infuriating. This one is actually replete with information compared to many you see on the front page.

If there really was a “mania for obscurity in titles” we’d see a lot more complaints than we do.

Our title policy is pretty simple and attuned for maximum respect to the post’s author/publisher and the HN audience.

We primarily just want to retain the title that was chosen by the author/publisher, because it’s their work and they are entitled to have such an important part of their work preserved.

The only caveat is that if the title is baity or misleading, we’ll edit it, but only enough that it’s no longer baity or misleading. That’s because clickbait and misleading titles are disrespectful to the audience.

Any time you see a title that doesn’t conform to these principles, you’re welcome to email us and ask us to review it. Several helpful HN users do this routinely.


No, because people who point out obscure titles are downvoted in most cases, and eventually shadow-banned. So those voices are silenced here.

"We primarily just want to retain the title that was chosen by the author/publisher, because it’s their work and they are entitled to have such an important part of their work preserved."

Nobody said the title had to be deleted. But when it doesn't convey WHAT the "thing" is, it needs augmentation. Currently on page 4 there's an example that not only conveys nothing, but DOESN'T respect the actual title you find on the linked page. The HN post is entitled merely "tunni.gg".

But if you click on that, you get to a page that says, "Expose localhost to the internet." But the poster couldn't be bothered to put that important and interesting information in the title. Instead, the title is worthless.

You see plenty of similarly and intentionally obscure titles on HN daily. Try calling them out and see what happens.


I don't know where anyone gets the idea that the moderators or the community are such fearsome tyrants about this!

> people who point out obscure titles are downvoted in most cases, and eventually shadow-banned

Nothing like this happens! Nobody gets banned for pointing out anything about titles. People only get banned ("shadow" or otherwise) for serial abuse or trolling (and only after multiple warnings), or for spamming. Comments only get downvoted if more people disagree than agree with the title suggestion or the way it's suggested. It's no big deal. It's how opinions are expressed and debated on HN.

> The HN post is entitled merely "tunni.gg".

That's Tunnl.gg [1], and it would have been fine for the page's heading to be added to the HN title (that routinely happens when software projects on Github are submitted). It's also not terrible for just the project name to be there, because the name of the project (a variant of the word "Tunnel") hints at what it is. But we're not dogmatic about it, and anybody could have emailed us (hn@ycombinator.com) to suggest a better title; we would have given it due consideration and replied appreciatively. We do that multiple times each day.

> You see plenty of similarly and intentionally obscure titles on HN daily. Try calling them out and see what happens.

“Intentionally obscure” isn't the right framing. Maybe we don't always want to clobber people over the head with obviousness. The joy of surprising discovery is an important part of the HN experience.

But the key principles – (a) respect the original work of the author/publisher and (b) don't mislead or disrespect the HN audience with clickbait or false information – have proven to be the most stable and defensible over time. There's still plenty of room for discernment in the way those principles are applied on a case-by-case basis.

[1] https://news.ycombinator.com/item?id=46145902


"I don't know where anyone gets the idea that the moderators or the community are such fearsome tyrants about this!"

Direct experience.


Can you link to examples so we can understand what you mean?

hacker news loves low-information clickbait titles. The shorter and more vague, the better.

It is usually Dang using his judgment.

I really like moderation on HN in general, but honestly this inconsistent policy of editorializing titles is bad. There were plenty of times where submitter-editorialized titles (e.g. GitHub code dumps of some project) were changed back to vague original titles that are useless without context.

And now HN administration tends to editorialize in its own way.


Also, it’s not the whole picture. AV1 is open because it didn’t have the good stuff (newly patented things) and as such I also wouldn’t say it’s the most modern.

AV1 has plenty of good stuff. AOM (the Alliance for Open Media, the consortium that developed AV1) has a patent pool https://www.stout.com/en/insights/article/sj17-the-alliance-... comprising video hardware/software patents from Netflix, Google, Nvidia, Arm, Intel, Microsoft, Amazon and a bunch of other companies. AV1 has a bunch of patents covering it, but also a guarantee that you're allowed to use those patents as you see fit (as long as you don't sue AOM members for violating media patents).

AV1 definitely is missing some techniques patented by h264 and h265, but AV2 is coming around now that all the h264 innovations are patent free (and now that there's been another decade of research into new cutting edge techniques for it).


Just because something is patented doesn't necessarily mean it's good. I think head-to-head comparisons matter more. (Admittedly I don't know how AV1 holds up.)

Yes, but in this case, it does.

AV1 is good enough that the cost of not licensing might outweigh the cost of higher bandwidth. And it sounds like Netflix agrees with that.


I don't like newer codecs like AV1. I find them blurrier. Perhaps the bitrate is too low, but they do seem blurrier compared to h264. Even VP09 has often seemed better.

h264 is a very good codec.


anybody care to speculate on how long this is likely to last? is this a blip that will resolve itself in six months, or is this demand sustainable and we are talking years to build up new manufacturing facilities to meet demand?

Pure speculation, nobody can say for sure, but my guess is 2-3 years.

If all goes "well", starting work on a new DDR5 fab now would result in having it ready to go when DDR6 hits the market:

https://www.techpowerup.com/339178/ddr6-memory-arrives-in-20...

So the supply side won't get better until about 2028.

I suppose you could hope for an AI crash bad enough to wipe out OpenAI, but unless it happens within the next few months, it may still be too late to profitably restore the DDR5 production lines now being converted to HBM, even if the broader economy doesn't tank:

https://www.reuters.com/markets/europe/if-ai-is-bubble-econo...

Perhaps not coincidentally, that Reuters article was published the same day OpenAI announced that it had cornered an estimated 40% of the world's DRAM production:

https://openai.com/index/samsung-and-sk-join-stargate/

https://www.tomshardware.com/pc-components/dram/openais-star...


> How it works: YesNotice works by periodically checking the status of the item you care about

okay, but how does it work? how does it check the status of things?


There are two general options:

1. Scrape a google search for the question, feed that into OpenAI with the additional prompt of "Given the above information, is the answer to <user prompt> yes or no". Or give the AI a "google" tool and just ask it directly.

2. Same thing, except instead of OpenAI feed it into underpaid people in the global south (i.e. amazon mechanical turk). These people then probably feed it into ChatGPT anyway.

Given there's a free tier, and when you use it it produces very ai-sounding text, I think it's pretty clearly 1.

Also, if you enter a clever enough question, you can get the system prompt, but this is left as an exercise to the reader (this one's somewhat tricky, you have to make an injection that goes through two layers).
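If guess 1 is right, the core of the product is just a retrieval step plus a yes/no classification prompt. A minimal sketch of that loop, with all names here (`check_yes_no`, `ask_llm`) invented for illustration and the actual LLM call stubbed out:

```python
def check_yes_no(question, search_snippets, ask_llm):
    """Ask an LLM whether the retrieved snippets answer `question`
    with yes or no. `ask_llm` is any callable taking a prompt string
    and returning text (e.g. a thin wrapper around a chat API)."""
    prompt = (
        "\n".join(search_snippets)
        + "\n\nGiven the above information, is the answer to "
        + repr(question)
        + " yes or no? Reply with exactly 'yes' or 'no'."
    )
    # Normalize the reply so "Yes.", "yes" and "YES" all count as yes.
    return ask_llm(prompt).strip().lower().startswith("yes")

# Stubbed model for illustration; a real deployment would hit an API here.
print(check_yes_no("Is the new iPhone available for pre-order?",
                   ["Apple newsroom: pre-orders begin next Friday."],
                   lambda prompt: "no"))  # False
```

The injection surface is obvious in this shape: the scraped snippets are concatenated straight into the prompt.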


My favorite part about the spread of AI/LLM stuff is that it opens up a new kind of reverse engineering. Trying to fetch the system prompt that was used. Trying to deduce the model that was used (there's lots of ways to do this: glitch tokens, slop words, "vibes", etc.)

Their “About” site is (just slightly) more insightful:

> Using AI-powered web search, we continuously monitor your questions and send you an email notification when the status flips to what you're waiting for.

via https://yesnotice.com/about/

Without knowing whether they actually do it that way, if you give ChatGPT the following prompt, it returns `No.`:

> Please answer the following question with just “yes” or “no”: Is the new iPhone 18 available for pre-order?


I built something very similar a few months back and I just asked an LLM. You could optionally specify a CSS selector for HTML or JMESPath for JSON to narrow things down, but it would default to feeding the entire textual content to the LLM and just asking it the question with a yes or no response.

…and how frequently?

the frequency is specified later in the doc[1]:

> We check free accounts daily and premium accounts up to every 15 minutes.

[1]: https://yesnotice.com/about/


From a database!

You just curl the site or use its API, if it has one? Then you store the result in a database and see if its value has flipped. I don't get the question; this is trivial.
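A sketch of that curl-store-compare loop, with the HTTP fetch left as an injectable callable so the flip detection is the only moving part (the `poll` helper and its one-table schema are made up for illustration):

```python
import sqlite3

def poll(key, fetch, conn):
    """Record the latest status for `key` and report whether it just
    flipped. `fetch` is any callable returning the current status string
    (in real life: an HTTP GET of the page or API in question)."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS status (key TEXT PRIMARY KEY, value TEXT)")
    new = fetch()
    row = conn.execute(
        "SELECT value FROM status WHERE key = ?", (key,)).fetchone()
    conn.execute(
        "INSERT OR REPLACE INTO status (key, value) VALUES (?, ?)", (key, new))
    conn.commit()
    # A change is only reported once we have a previous value to compare to.
    return row is not None and row[0] != new

conn = sqlite3.connect(":memory:")
print(poll("shop", lambda: "no", conn))   # False: first observation
print(poll("shop", lambda: "yes", conn))  # True: no -> yes, send the email
```

The non-trivial part, as the sibling comments point out, is what `fetch` should actually do for an arbitrary natural-language question.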

> Additionally, YesNotice will provide an estimated availability timeline for the question, so you can have some information about when to expect the change.

How is that trivial in the general case?


Let's see how accurate those predictions are before worrying about the how.



Here is a trivial estimator: predict a random date in the future based on characteristics of the query. Use an LLM to classify the query, yielding the random variable's scale parameter, if you want.

How much better is it than my trivial estimator?
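To make that baseline concrete, here is one possible toy version; the `classify` parameter stands in for whatever class an LLM might assign to the query, and the exponential draw is an arbitrary choice:

```python
import datetime
import random

def trivial_estimate(query, classify=len):
    """Toy availability-date estimator: a random draw from an exponential
    distribution whose scale comes from some classification of the query
    (here, crudely, its character length)."""
    mean_days = max(classify(query), 1)
    days = random.expovariate(1.0 / mean_days)
    return datetime.date.today() + datetime.timedelta(days=days)

print(trivial_estimate("Is the new coffee shop in town open yet?"))
```

Any production estimator worth paying for should at least beat this on held-out flip dates.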


And how exactly does it call any arbitrary API or know which site to curl for any arbitrary question a user might ask? Your answer doesn’t contemplate the how this actually works.

According to https://yesnotice.com/about/ it uses "AI-powered web search" so the heavy lifting is likely outsourced.

> YesNotice works by periodically checking the status of the item you care about (e.g., product stock, website availability, domain status) and comparing it to the previous status. When it detects a change from no to yes, it sends you a notification via email.

How does it generalize arbitrary indications of status into yes/no?

How does it know how to use arbitrary APIs to obtain arbitrary indications of status?


what site does it check? what api does it call?

one of the examples is to see if a new coffee shop is opened in town. what's the API to call for that?



which is, of course, what i mean by "how does it work"

That's if the website you're querying is a static html file but the web is much more dynamic and varied. Some of the questions I have: does yesnotice execute js, does it handle an answer appearing on a different page, does it handle ambiguous launch language. In essence: how does it work?

if anything, siri has gotten worse over the years, which is wild.

it used to be able to set a timer or alarm 100% of the time, now sometimes it decides it needs to ask chatGPT for help.


yeah, i get why people were doing the DIY air purifier thing during covid when air purifiers were hard to come by, and the box fan version at least has cost-effectiveness going for it.

but 5x arctic cooling PC fans is ~$100. the commercial versions are easily available, more effective, no more expensive, and don't look like a box of furnace filters taped together.


Huh? 5x Arctic P12 is $24 on Amazon right now, no sale going on. And the whole Corsi-Rosenthal trend started specifically because consumer air purifiers were tested to be both noisier and less effective at their job.

But I'd honestly pay a premium for a commercial air purifier that just has a bunch of 120mm/140mm fan mounts instead of their "maybe tolerable at Very Low" integrated box-fan equivalent.

In general, all I've learned from online reviews of "quiet" appliances is that different people have very different definitions / criteria for "quiet".


Coway filter posted above; you can't really hear it from 10ft when it runs on low speed (it has a sensor and auto-adjusts).

I also have an IQAir which, according to reviews, is quiet at low speed. In my experience this "quiet" sounds like an airplane (I got it replaced once; apparently that's just the way it is).


I think you're missing his point entirely. The problem with these retail purifiers is that you either get quiet or effective. You don't get both.

Sure, you can't hear it from 10ft away - but how much air is it moving at that setting?

I have various configurations of these PC fan setups, and the Arctic P14 Pro fans (5 for $32 on Amazon) are honestly wildly effective and designed for applications with some static pressure (radiators and such).

So we're back to: effective or quiet. You're only going to get both with the PC fans, for now.


I have the one I linked and it is both. Set to auto, it is quiet most of the time and only ramps up when I’m cooking or the air quality is bad.

There is also the added benefit of not taking up half the room; I think the Coway's footprint is 1/6th of The Cube's, if not less.

Rabbit Air (I think they're closely related to Coway) makes one you can hang on the wall as an art piece.


HTTP semantics aren’t hard enforced but that only means something if you always control the client, server, and all the middle layers like proxies or CDNs that your traffic flows over.

Your GET request can modify state. But if your request exceeds a browser’s timeout threshold, the browser will retry it. And then you get to spend a few days debugging why a certain notification is always getting sent three times (ask me how I know this)

Similarly, you can put a body on your GET request in curl. But a browser can’t. And if you need to move your server behind cloudflare one day, that body is gonna get dropped.
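The client-side half of that is easy to demonstrate with just the standard library; the throwaway local server below exists only to show that a GET body does travel end to end when nothing in the middle drops it:

```python
import http.client
import http.server
import threading

received = {}

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # Read however many body bytes the client declared, even on GET.
        length = int(self.headers.get("Content-Length", 0))
        received["body"] = self.rfile.read(length)
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):  # silence per-request logging
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_port)
# Like `curl -X GET -d '...'`, http.client happily attaches a body to GET.
conn.request("GET", "/", body="hello")
resp = conn.getresponse()
print(resp.status, received["body"])  # 200 b'hello'
server.shutdown()
```

Whether that body survives a proxy or CDN hop is exactly the part the spec leaves up for grabs, which is the parent's point.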


i have zero experience with linux system programming so i'm probably missing something, but what's the point of an application restricting itself at runtime? if the application were compromised in some way, wouldn't it simply un-restrict itself?

LWN's article on unveil() is a good explanation - the restrictions are permanently applied to the process and its children until termination: https://lwn.net/Articles/767137/

The kernel enforces that once the policy gets added it can't be removed.

So the restrictions are permanent for the life of the program. Even root can't undo them.


Since it can’t re-enable privileges at runtime, a compromise would have to modify its own code and restart; if you don’t allow the running process to access its own code, it can’t make any changes that would persist across a restart.

As the article states, you cannot grant extra permissions, only restrict further.

Reading this as a web developer, it reminds me of Deno's permission system.

Deno is a JS runtime that often runs, at my behest, code that I did not myself write and haven't vetted. At run time, I can invoke Deno with --allow-read=$PWD and know that Deno will prevent all that untrusted JS from reading any files outside the current directory.

If Deno itself is compromised then yeah, that won't work. But that's a smaller attack surface than all my NPM packages.

Just one example of how something like this helps in practice.


> if the application were compromised in some way, wouldn't it simply un-restrict itself?

The API doesn't allow un-restriction, only restriction. Since one typically applies restrictions at program start, they will be applied before an attacker gains remote-execution, and the attacker is then limited in what they can do...


The kernel guarantees that once restricted, that process will stay restricted. The only way for it to un-restrict itself would be to also compromise the Linux kernel. So you have 2 things you have to compromise to own the machine, instead of just 1.
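The one-way nature of these interfaces is visible even without Landlock itself: Linux's no_new_privs flag (which unprivileged seccomp and Landlock both build on) can be set by a process but never cleared again. A Linux-only sketch via ctypes, with the constants taken from linux/prctl.h:

```python
import ctypes

libc = ctypes.CDLL(None, use_errno=True)
PR_SET_NO_NEW_PRIVS = 38  # from linux/prctl.h
PR_GET_NO_NEW_PRIVS = 39

# Turning the flag on always succeeds, even for an unprivileged process...
assert libc.prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) == 0
# ...but no prctl exists to turn it off again: the kernel only lets the
# flag move 0 -> 1, for this process and everything it forks or execs.
print(libc.prctl(PR_GET_NO_NEW_PRIVS, 0, 0, 0, 0))  # 1
```

Landlock and pledge/unveil follow the same design: the syscalls that add restrictions have no inverse, so un-restricting would require a kernel bug, not just an application bug.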

For sandboxes where the underlying software is assumed to be non-hostile (e.g. browser sandboxes), these kind of restrictions can be applied very early in a program's execution. If the program doesn't accept any untrusted input until after the restrictions are applied, it can still provide a strong defense against escalation in the event of a vulnerability.

codex-cli is a neat example of an open source Rust program that uses Landlock to run commands that an LLM comes up with when writing code (see [1]). The model is that a user trusts the agent program (codex-cli), but has much more limited trust of the commands the remote LLM asks codex-cli to run.

[1] https://developers.openai.com/codex/security/


The point is it can’t.

yeah clippy absolutely would have sold your data if he'd been clever enough to do that.

