Hacker News | derefr's comments

You seem the right person to ask about this: why don’t we see any public web archivers operated by individuals or organizations based in countries that aren’t big fans of aiding or listening to American intelligence?

Well, they certainly do exist. However, they tend not to even get noticed, because the mindset and momentum behind everything is America-centric.

I think that one refers to doing so when there is no food on the chopsticks. Picture tapping the chopsticks against your lips to show you’re thinking, if conversing while eating. The overarching rule being that you should put the chopsticks down whenever you’re not in the middle of picking up/moving food with them.

(Unless you want to come off as imitating a Rakugo storyteller. If you do, then go ahead and use them as a talking prop. But maybe make it clear that you’re not eating with those ones, so people don’t worry you’ll flick sauce at them!)


So what are you expected to do with the last few sauce-soaked grains of rice that would at best be able to be plucked grain by grain from the bowl, and even then would likely slip from between the tips of the chopsticks? Just leave them in the bowl?

I vaguely remember something about not finishing completely, to acknowledge that there was enough.

I've heard that clearing the table of food would be considered rude in China, as it means you didn't get enough to eat — almost exactly opposite to the only food-related rule I was ever taught growing up in the US: never waste food or serve yourself more than you can eat. That's probably just a "my family" thing though. I get the impression that even saving leftovers is rare among Americans these days.

Customs around this are still contradictory enough that it's standard practice to warn exchange students from Europe that finishing absolutely everything on their plate is, in many American homes, a signal that they should be served more. This can lead to some real discomfort, as the student tries to eat everything they're given, which leads to being given more and more.

So at the same time it is considered poor taste to take more than you can eat, it is also considered poor form to offer a guest anything less than more than they can eat. This also shows up when people rate restaurants by the serving size.


Which is funny, because the serving sizes in US restaurants are so big that no human being can be expected to eat it all.

Channel your inner Mr. Miyagi.

Use a knife and fork

I wonder what Ms. Kyoto would tell me to do to properly pick up my chopsticks, given that I’m left-handed, and yet it is apparently a faux pas to lay down the chopsticks pointing to the right.

It's probably a faux pas to be left-handed.

I’m thinking this would be interesting inspiration for a song by the band Pulp.

Jarvis Cocker-san.


> to try to maybe help some rather technologically-hopeless groups of people

Even if they're the majority?

(Keep in mind that as average lifespan keeps getting longer while birth rates keep going lower, demographics will tend to skew older and older. Already happened in Japan; other developed countries will catch up soon.)

> They should probably not have a bank account at all and just stick to cash.

You know that these (mostly) don't fall into this category of being "hopeless with [modern] technology" because they're cognitively impaired, right?

Mostly, the people who benefit most from these protections are just people 1. with full lives, who 2. are old enough that when they were first introduced to these kinds of technologies, it came at a time in their life when they already had too much to do, and too many other things to think/care about, to have any time left over for adapting their thinking to a "new way of doing things."

This group of people still fully understands, and can make fluent use of, all the older technologies "from back in their day" that they did absorb and adapt to earlier in their lives, back when they had the time/motivation to do so. They can use a bank account; they can make phone calls and understand voicemail; they can print and fax and probably even email things. They can, just barely, use messaging apps. But truly modern inventions like "social media" confound them.

Old bigcorps with low churn rates are literally chock-full of this type of person, because they've worked there since they were young. That's why these companies themselves can sometimes come off as "out of touch", both in their communications and in their decision-making. But those companies don't often collapse from mismanagement. Things still get done just fine. Just using slower, older processes.


OpenAI don't talk about the "size" or "weights" of these models any more. Anyone have any insight into how resource-intensive these Mini/Nano-variant models actually are at this point?

I assume that OpenAI continue to use words like "mini" and "nano" in the names of these model variants, to imply that they reserve the smallest possible resource-units of their inference clusters... but, given OpenAI's scale, that may well be "one B200" at this point, rather than anything consumers (or even most companies) could afford.

I ask because I'm curious whether the economics of these models' use-cases and call frequency work out (both from the customer perspective, and from OpenAI's perspective) in favor of OpenAI actually hosting inference on these models themselves, vs. it being better if customers (esp. enterprise customers) could instead license these models to run on-prem as black-box software appliances.

But of course, that question is only interesting / only has a non-trivial answer, if these models are small enough that it's actually possible to run them on hardware that costs less to acquire than a year's querying quota for the hosted version.
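That break-even question can be sketched as simple arithmetic. All of the numbers below are hypothetical placeholders (not OpenAI's actual pricing or hardware costs), and `breakeven_months` is just an illustrative helper name:

```python
# Back-of-envelope break-even: hosted API spend vs. an on-prem appliance.
# All dollar figures are made-up placeholders for illustration.

def breakeven_months(hardware_cost_usd: float,
                     monthly_api_spend_usd: float,
                     monthly_opex_usd: float) -> float:
    """Months until buying hardware beats paying for the hosted API.

    Assumes the on-prem box fully replaces the API spend, and that
    monthly opex (power, cooling, ops staff) is below the API bill.
    """
    monthly_savings = monthly_api_spend_usd - monthly_opex_usd
    if monthly_savings <= 0:
        return float("inf")  # on-prem never pays off
    return hardware_cost_usd / monthly_savings

# e.g. a ~$40k GPU server vs. a $5k/month API bill with $1k/month opex:
print(breakeven_months(40_000, 5_000, 1_000))  # → 10.0
```

If the model only fits on a multi-GPU server whose acquisition cost exceeds several years of querying quota, the licensing question answers itself in favor of hosted inference.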


Have they ever talked about their size or weights?

They never put the parameter counts in their model names like other AI companies did, but back in the GPT-3 era (i.e. before they had PR people intermediating all their comms channels), OpenAI engineers would disclose this kind of data in their whitepapers / system cards.

IIRC, GPT-3 itself was admitted to be a 175B model, and its reduced variants were disclosed to have parameter-counts like 1.3B, 6.7B, 13B, etc.


Wow, would love to see a source for this.


How about these two niches:

1. non-vegans eating with vegans at a vegan restaurant, where eating there wasn't their choice (they were craving a burger), and so, being forced to order off this menu, they'll choose the most burger-like thing on it.

2. non-vegans eating with vegans at a non-vegan restaurant, where for whatever reason they feel the need to impress / not-offend the vegan by eating vegan food as well. (Think "first date" or "client meeting.")


In both situations, I'd order the best vegan thing on the menu instead of nasty imitation meat.

Most of what I (and, in my experience, many people) want a voice assistant for is setting+ending timers... which for me happens mostly in the kitchen, while I'm simultaneously holding a hot pan or hand-tossing a salad or paper-towelling off some raw chicken. In none of those cases would I want a ring anywhere near my hands, let alone a smart ring. (And nor, in half of those cases, is it convenient/hygienic to use my oven timer.)

That being said, we could solve for fully 50% of in-home voice-assistant use-cases just by developing an extremely domain-specific voice assistant that has an extremely small (ideally burned-into-a-DSP) voice model that only knows how to recognize commands to manage kitchen timers. If such a device existed, and was cheap enough that you could assume anyone who wanted this functionality would just buy one, then this would make truly hands-free activation of a "real" voice-assistant much less necessary, as there'd be far fewer user-stories that would really "need" that. The rest of those user-stories really mostly could work with some kind of ring / belt buckle / shirt comm badge / etc.
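The command grammar such a timer-only device would need is genuinely tiny. As a toy illustration (the function name and the exact phrasings accepted are assumptions, not any real device's grammar), a regex sketch of the whole recognition surface:

```python
import re

# Toy sketch of a kitchen-timer-only command grammar. A real device
# would do this with keyword spotting on a DSP, not text regexes; this
# just shows how small the recognition problem is.
TIMER_CMD = re.compile(
    r"(?P<verb>set|start|cancel|stop)\s+(?:(?:a|the)\s+)?timer"
    r"(?:\s+for\s+(?P<n>\d+)\s+(?P<unit>seconds?|minutes?|hours?))?",
    re.IGNORECASE,
)

UNIT_SECONDS = {"second": 1, "minute": 60, "hour": 3600}

def parse_timer_command(utterance: str):
    """Return ('set', seconds), ('cancel', None), or None if unrecognized."""
    m = TIMER_CMD.search(utterance)
    if not m:
        return None
    if m.group("verb").lower() in ("cancel", "stop"):
        return ("cancel", None)
    if m.group("n") is None:
        return None  # "set a timer" with no duration
    unit = m.group("unit").lower().rstrip("s")
    return ("set", int(m.group("n")) * UNIT_SECONDS[unit])

print(parse_timer_command("hey, set a timer for 10 minutes"))  # → ('set', 600)
print(parse_timer_command("cancel the timer"))                 # → ('cancel', None)
```

A vocabulary this small is exactly the kind of thing that could plausibly be burned into a cheap always-on DSP, with no cloud round-trip at all.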


> if an agent browses a poisoned page during research, the injected instructions could override its behavior before secrets ever come into play.

Why is this problem (UGC instruction injection) still a thing, anyway? It feels like a problem that can be solved very simply in an agentic architecture that's willing to do multiple calls to different models per request.

How: filter fetched data through a non-instruction-following model (i.e. the sort of base text-prediction model you have before instruction-following fine-tuning) that has instead been hard-fine-tuned into a classifier, such that it just outputs whether the text in its context window contains "instructions directed toward the reader" or not.

(And if that non-instruction-following classifier model is in the same model-family / using the same LLM base model that will be used by the deliberative model to actually evaluate the text, then it will inherently apply all the same "deep recognition" techniques [i.e. unwrapping / unarmoring / translation / etc] the deliberative model uses; and so it will discover + point out "obfuscated" injected instructions to exactly the same degree that the deliberative model would be able to discover + obey them.)
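The gating flow described above can be sketched as follows. `classify_instructions` stands in for the fine-tuned classifier model call (here replaced by a trivial keyword heuristic, purely for illustration), and `deliberate` stands in for the main instruction-following model; both names are hypothetical:

```python
# Sketch of a "quarantine gate" in front of the deliberative model.
# classify_instructions would really be a model call; the keyword
# heuristic below is a stand-in so the sketch runs on its own.

def classify_instructions(text: str) -> bool:
    """Stand-in for the non-instruction-following classifier: returns
    True if the text appears to contain reader-directed instructions."""
    needles = ("ignore previous", "you must now", "disregard your instructions")
    return any(n in text.lower() for n in needles)

def fetch_for_agent(url: str, fetch, deliberate):
    """Fetch a page, but quarantine it if the classifier flags it."""
    page = fetch(url)
    if classify_instructions(page):
        # The tainted text never reaches the deliberative model's context.
        return deliberate(f"[page at {url} withheld: contained injected instructions]")
    return deliberate(page)
```

The key property is architectural: the deliberative model only ever sees either clean text or a fixed placeholder string, never the flagged UGC itself.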

Note that this is a strictly-simpler problem to that of preventing jailbreaks. Jailbreaks try to inject "system-prompt instructions" among "user-prompt instructions" (where, from the model's perspective, there is no natural distinction between these, only whatever artificial distinctions the model's developers try to impose. Without explicit anti-jailbreak training, these are both just picked up as "instructions" to an LLM.) Whereas the goal here would just be to prevent any UGC-tainted document containing anything that could be recognized as "instructions I would try to follow" from ever being injected into the context window.

(Actually, a very simple way to do this is to just take the instruction-following model, experimentally derive a vector direction within it representing "I am interpreting some of the input as instructions to follow" [à la the vector directions for refusal et al.], and then just chop off all the rest of the layers past that point and replace them with an output head emitting the cosine similarity between the input and that vector direction.)
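The "probe head" at the end of that idea reduces to a single dot product. A minimal sketch, using random stand-ins for the residual-stream activation and the derived direction (actually deriving that direction, e.g. by contrasting activations on instruction vs. non-instruction text, is the hard part and is not shown):

```python
import numpy as np

def probe_score(activation: np.ndarray, direction: np.ndarray) -> float:
    """Cosine similarity between a layer activation and the probe direction."""
    a = activation / np.linalg.norm(activation)
    d = direction / np.linalg.norm(direction)
    return float(a @ d)

rng = np.random.default_rng(0)
direction = rng.standard_normal(4096)   # stand-in for the derived "instructions" direction
# An "instruction-like" input produces an activation near the direction:
activation = direction + 0.1 * rng.standard_normal(4096)
print(probe_score(activation, direction) > 0.9)  # high similarity flags an injection
```

Everything after the probed layer is discarded, so the classifier costs roughly half a forward pass of the base model per fetched document.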


> a zillion decent-spec chromebook style machines

The interesting/unique thing about Apple's offering at this price point is the build quality, not the spec.

If you're a school IT department buying these in volume, you want something that actually lasts more than a year before pieces of plastic begin chipping off, hinges start wearing out, etc. And you want something that's easy to clean / sanitize sticky little kid fingerprints off of, and also to undo e.g. residue (from kids who thought it'd be a good idea to stick stickers on their take-home laptop) without worrying about either the adhesive or the thinner permanently damaging the chassis.

In both cases, Apple can actually promise this with the Neo, while none of the Chromebook OEMs can for their equivalent offerings at this price point. (The other OEMs can promise it, but only for offerings at higher price-points schools aren't willing to pay.)

Also, Apple can now promise that you can keep a pile of spares and spare parts, and swap parts between them easily, replace consumables like batteries, etc. (https://www.youtube.com/watch?v=PbPCGqoBB4Y). Which is essentially table stakes for the education market, but it's good that they've caught up.


While the Neo is a nice notebook, I think you are overestimating its durability advantages.

> If you're a school IT department buying these in volume, you want something that actually lasts more than a year before pieces of plastic begin chipping off, hinges start wearing out, etc. And you want something that's easy to clean / sanitize sticky little kid fingerprints off of, and also to undo e.g. residue (from kids who thought it'd be a good idea to stick stickers on their take-home laptop) without worrying about either the adhesive or the thinner permanently damaging the chassis.

If you manage to break a plastic cover, that amount of force will certainly also dent, bend, and/or dislodge the aluminum cover of the Neo.

I've never seen or heard of plastic chipping off due to normal use (i.e. just wear). In the EU, plastic chipping off due to wear (with normal use) would fall under warranty. I have seen aluminum covers on high-end HP notebooks get bent, dented, etc. For example, when transported in a bag with other things in it, aluminum is more likely to get damaged.

All major brands (Lenovo, HP, Apple, etc.) have at some point had issues with hinges. I think it's even fair to say that Apple isn't known for being particularly forthcoming about acknowledging hinge problems and issuing service advisories to repair them under warranty, even when it's a known issue.

> good idea to stick stickers on their take-home laptop) without worrying about either the adhesive or the thinner permanently damaging the chassis.

Getting stickers off plastic covers vs. off MacBook covers doesn't really differ in difficulty. If it's problematic for plastic, it's probably going to be problematic for aluminum as well. There are a lot of cleaning agents aluminum doesn't like, which cause white-ish stains on it. You can test that yourself by putting an aluminum breadbox in a dishwasher.

> Also, Apple can now promise that you can keep a pile of spares and spare parts, and swap parts between them easily, replace consumables like batteries, etc.

Right now the Apple self-repair program is, from a financial standpoint, pretty much a gimmick. The costs are so high that you're better off going to the Apple store. Also, the swappable battery is going to be mandatory in the EU, so that's something all notebooks will have. Schools usually aren't that interested in starting a repair shop.


This guy's post [1] about his series of plastic laptops over the years is a telling indictment of what the PC/Chromebook value range is about: hinges easily damaged, bits and pieces falling off, can't go from closed to open with one finger, etc. In my region in Australia, schools require parents to buy a laptop, and the choice is between PC and Mac (Chromebooks not allowed); before the Neo, getting a Mac would be a budget constraint, especially for their children, but now it is such an easy, sensible choice.

[1]: https://xcancel.com/mweinbach/status/2032235367961694542


Yep. Anyone saying a MacBook of any kind is comparable to the average school Chromebook has clearly never touched a school Chromebook anywhere other than in a Best Buy.

$200 — or even $500 — plastic computers are different in kind (of parts and materials used) to $800+ computers. It's not anything you'd notice when the hardware is new — not the extreme "deck flex" or anything like that — but it becomes clear after 3–6 months of even light use.

Planned obsolescence is real. But, rather than being a result of malicious adulteration, it is the predictable result of aiming for an MSRP (and therefore COGS) where the only viable parts and materials the OEM can get their hands on to meet that price point, have engineering tolerances far below the use-case they’re applying them to. The makers of $500 Chromebooks know they'll break well before buyers expect them to. But with their middling purchasing power and economies of scale, this is the best they can do.

Apple, meanwhile, can hit the same MSRP not by cheaping out on parts, but rather through economies of scale and manufacturing consolidation. Obviously the A18. But also: buy enough high-quality aluminum in bulk, and stamp the same modular chassis parts out for every laptop you make — and those parts start to get cheap enough to use even in a $500 product.

