Hacker News | roblh's comments

I feel like the stars are probably pretty easy to mask out since they’re very bright relative to the rest of the image. Once you have the mask, each one is small enough that you could probably fill it with the values from adjacent pixels. Kinda like sensor mapping to hide dead pixels. That’s just a guess though, I’m sure there’s more to it than that.
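The guess above (threshold the bright pixels, then fill from neighbours, like dead-pixel mapping) can be sketched roughly like this. This is a hypothetical illustration, not what the author actually did; the threshold and the 3x3 neighbourhood are arbitrary choices:

```python
import numpy as np

def fill_stars(img, threshold):
    # Flag pixels far above the background, then replace each masked
    # pixel with the median of its unmasked 3x3 neighbours -- similar
    # in spirit to mapping out dead pixels on a sensor.
    mask = img > threshold
    out = img.copy()
    ys, xs = np.nonzero(mask)
    for y, x in zip(ys, xs):
        y0, y1 = max(y - 1, 0), min(y + 2, img.shape[0])
        x0, x1 = max(x - 1, 0), min(x + 2, img.shape[1])
        patch = img[y0:y1, x0:x1]
        good = patch[~mask[y0:y1, x0:x1]]
        if good.size:
            out[y, x] = np.median(good)
    return out
```

As the comment says, real star removal is more involved (stars aren't single pixels, they have diffraction spikes and halos), but this is the basic idea.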

Bright stars are so bright that they literally wash out areas of the sky. You'll probably need deconvolution algorithms (CLEAN was the standard some time ago; I don't know whether some AI/deep-inverse approach works better nowadays...) to remove them.
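For reference, the CLEAN algorithm mentioned above (Högbom's variant) is simple enough to sketch in 1D: repeatedly find the brightest residual pixel, subtract a gain-scaled shifted copy of the point-spread function, and accumulate the removed flux as point-source components. A minimal sketch, with arbitrary gain/threshold values:

```python
import numpy as np

def clean_1d(dirty, psf, gain=0.1, n_iter=100, threshold=1e-3):
    # Minimal 1D Hogbom CLEAN: peel off point sources one small
    # fraction (gain) at a time until the residual drops below
    # the threshold or we run out of iterations.
    residual = dirty.astype(float).copy()
    components = np.zeros_like(residual)
    center = len(psf) // 2
    for _ in range(n_iter):
        peak = int(np.argmax(np.abs(residual)))
        if abs(residual[peak]) < threshold:
            break
        flux = gain * residual[peak]
        components[peak] += flux
        # Subtract the PSF centred on the peak (clipped at the edges).
        for j, p in enumerate(psf):
            k = peak + (j - center)
            if 0 <= k < len(residual):
                residual[k] -= flux * p
    return components, residual
```

Real radio-astronomy CLEAN works in 2D with a measured dirty beam, but the loop structure is the same.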

There are several “AI” deconvolution tools to remove stars which work exceptionally well: two of the most popular ones being StarNet and RC-Astro’s StarXTerminator. I’m willing to bet that the author used the latter for star removal as it’s become something of a standard in the astrophotography world.

I haven't used all of them, but I have used both the 9 point and the 45 point types, and the difference is massive. The 45 point was far, far more tactile and responsive. I don't mean speed of autofocus, but the actual way that the points sit over top of the viewfinder and light up, it's hard to explain. I'm sure part of that is software, but owning an older model and then trying out a newer one in the camera store in like 2013 really was eye opening, it blew my mind. The 9 point feels like a toy.


On the other hand, in actual usage I don't think they're really that different. It's useful for sports/wildlife, for focusing on the closest moving target (and the speed of AF on these cameras wasn't that quick, so you were hunting focus anyway). Otherwise, selecting an offset autofocus point is pretty niche. With more static subjects, the majority of photographers would use the single middle point, focus, and then recompose.

It's only with the advent of smart focusing on mirrorless cameras, with people/face recognition, that there's a big difference.


I started to write a stupid comment about how insanely fast the focus on the EOS R5 C is, only to remember it's a non-reflex!

The difference in focus time between my 5D Mk II, even with the best lenses, and the R5 C with a Cine-Servo or VCM lens is insane. The R5 feels instant and only ever hunts if it's been locked to an inappropriate point.


I kinda love this. That sounds like an incredibly entertaining place to work for between 1 and 2 years in your late 20s and not a second longer.


If you enjoyed this, you'd probably enjoy thedailywtf.com, which is full of stories like that.


Yeah, not even offering an upgrade to 16 GB or more makes this dead on arrival for anyone doing real work. Bummer, since otherwise it looks great. I guess it'd be the same price as a MacBook Air after that upgrade anyway, so it doesn't really matter.


> dead on arrival for anyone doing real work

Honestly, we’re not the target market for this. I’m pretty sure that at this price point, though, it will sell like hotcakes. Once people get slightly into the ecosystem, it’s usually a big win for Apple, since their stickiness (from my experience of the people around me) is undeniable once you get one product.


It's perfectly adequate for most office work: documents, spreadsheets, presentations, web browsing / research. The vast majority of users are not doing software development and never will.


Why would anyone doing “real work” want this?

If you’re doing “real work” then 16 GB won’t be sufficient either. My “real work” machine has 96 GB and I sometimes wish it had more.


This is not for "serious work". It's for users who spend most of their time in a browser and/or using lightweight apps.


People doing real work have money to spend and Apple wants them buying Airs/Pros.

If only we could get fun colors for those…


“real work” != “development”.


Kinda funny that the top image is Capture One, when Apple literally owns Photomator and gives you the option of bundling it when you buy.


It's not a binary thing; it's a spectrum. There are many elements of uncertainty in every action imaginable. I'm inclined to agree with the other commenter though: the LLM slot machine is absolutely closer on that spectrum to gambling than your example is.

Anthropic's optimization target is getting you to spend tokens, not produce the right answer. It's to produce an answer plausible enough but incomplete enough that you'll continue to spend as many tokens as possible for as long as possible. That's about as close to a slot machine as I can imagine. Slot rewards are designed to keep you interested as long as possible, on the premise that you _might_ get what you want, the jackpot, if you play long enough.

Anthropic's game isn't limited to a single spin either. The small wins (small prompts with well-defined answers) are support for the big losses (trying to one-shot a whole production-grade program).


> Anthropic's optimization target is getting you to spend tokens, not produce the right answer.

The majority of us are using their subscription plans with flat rate fees.

Their incentive is the precise opposite of what you say. The less we use the product, the more they benefit. It's like a gym membership.

I think all of the gambling-addiction analogies in this thread are so strained that I can't take them seriously. The basic facts aren't even consistent with the real situation.


That's a bit naive. Anthropic makes way more money if they get you to use past your plan's limit and wonder if you should get the next tier or switch to tokens.


The price jump between subscription tiers is so high that relatively few people will upgrade instead of waiting a few more hours, and even if somebody does upgrade to the next subscription level, Anthropic still has an incentive to provide satisfactory answers as quickly as possible, to minimize tokens used per subscription, and because there is plenty of competition so any frustrated users are potential lost customers.

I swear this whole conversation is so much motivated reasoning from AI holdouts who desperately want to believe everybody else is getting scammed by a gambling scheme that they don't stop and think about the situation rationally. Insofar as Claude is dominant, it's only because Claude works the best. There is meaningful competition in this market; as soon as Anthropic drops the ball, they'll be replaced.


And we're still in the expansion phase, so LLM life is actually good... for now.


It's not going to get worse than it is now, though. Open models like GLM-5 are very good. Even if companies decide to crank up prices, the current open models will still be available. They will likely get cheaper to run over time as well (better hardware).


>Open models like GLM 5 are very good. Even if companies decide to crank up the costs, the current open models will still be available.

https://apxml.com/models/glm-5

To run GLM-5 you need access to many, many consumer grade GPUs, or multiple data center level GPUs.

>They will likely get cheaper to run over time as well (better hardware).

Unless they magically solve the problem of chip scarcity, I don't see this happening. VRAM is king, and to get more of it you have to pay a lot more. Let's use the RTX 3090 as an example. This card is ~6 years old now, yet it still runs you around $1.3k. If you wanted to run GLM-5 at I4 quantization (the lowest listed in the link above) with a 32k context window, you would need *32 RTX 3090s*. That's $42k you'd be spending on obsolete silicon. If you wanted to run this on newer hardware, you could reasonably expect to multiply that number by 2.
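The back-of-envelope math behind these GPU counts looks roughly like this. Note the parameter count, overhead factor, and quantization level below are hypothetical placeholders (the thread doesn't state GLM-5's actual size), not the numbers behind the figure above:

```python
import math

def vram_needed_gb(params_billions, bits_per_weight, overhead=1.25):
    # Weight memory only: parameters * bytes per weight, times a fudge
    # factor for KV cache, activations, and framework overhead.
    return params_billions * (bits_per_weight / 8) * overhead

def gpus_required(vram_gb, per_gpu_gb=24):
    # Round up to whole cards (24 GB = one RTX 3090).
    return math.ceil(vram_gb / per_gpu_gb)
```

For example, a hypothetical 700B-parameter model at 4-bit quantization would need roughly 437.5 GB of VRAM under these assumptions, i.e. 19 RTX 3090s for the weights alone.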


I mean, it would make sense to see this as a hardware investment in a virtual employee that you actually control (or rent from someone who makes this possible for you), not as a private assistant. Ballparking your numbers, we would need at least an order-of-magnitude price-performance improvement for that, I think.

Also, how much bang for the buck do those 3090s actually give you compared to enterprise-grade products?


That's good to hear. I'm not really up-to-date on the open models, but they will become essential, I'm sure.


I'm on a subscription though.

They want me to not spend tokens; that way my subscription makes money for them rather than costing them electricity and degrading their GPUs.


Wouldn't that apply only to a truly unlimited subscription? Last I looked all of their subs have a usage limit.

If you're on anything but their highest tier, it's not altogether unreasonable for them to optimize for the greatest number of plan upgrades (people who decide they need more tokens) while minimizing cancellations (people frustrated by the number of tokens they need). On the highest tier, this sort of falls apart but it's a problem easily solved by just adding more tiers :)

Of course, I don't think this is actually what's going on, but it's not irrational.


For subscription users, Anthropic makes more money if you hit your usage limit and wonder if the next plan, or switching to tokens, would be better. Especially given the FOMO you probably have from all these posts talking about people's productivity.


> I'm on a subscription though.

Understood.

> they want me to not spend tokens.

No, they want you to expand your subscription. Maybe buy 2x subscriptions.


He's not going to do that if all Claude can do is waste tokens for hours.


> you'll continue to spend as many tokens as possible for as long as possible.

I mean, this only works if Anthropic is the only game in town. In your analogy, if anyone else builds a casino with a higher payout, then they lose the game. With the rate of LLM improvement over the years, this doesn't seem like a stable business model.


I don't know if this applies to AI usage, but actual gambling addicts most certainly do not shop around for the best possible rewards: they stick more or less to the place where they first got addicted. Not to mention, there are plenty of people addicted to "casinos" that give zero monetary rewards, such as Candy Crush or Farmville back in the day and Genshin Impact or other gacha games today.

So, if there's a way to get people addicted to AI conversations, that's an excellent way to make money even if you're way behind your competitors, as addicted buyers are much more loyal than other clients.


You're taking the gambling analogy too seriously. People do in fact compare different LLMs and shop around. How gamblers choose casinos is literally irrelevant because this whole analogy is nothing more than a retarded excuse for AI holdouts to feel smug.


My experience with Elixir, as a scrub who spends every day at work writing JavaScript, is pretty in line with that. The language forces you to work that way, and you spend half your time just architecting your supervision tree. But the language itself is so easy to write business logic in that it takes half as long as it would in another language. So it works out to the same total time investment, but the return is so much higher because your program is better and more predictable and has scaling for free.


I unironically use this website every time I forget a status code at work. The name is instantly memorable, it loads immediately, and I can ctrl-F it. It's basically muscle memory at this point.



Yes, but opening that and searching for "411" is much slower than just typing "http.cat/411" into the URL bar


Also relieves stress a bit with a funny cat photo. There's also http.dog to the same effect.


...which is when you set up a browser bookmark with a keyword, so you can just type "http 411" and it will redirect you! :-)

E.g.: "https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/..." would then go to: "https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/..."


Same - most of the time I just directly open the specific status (e.g. https://http.cat/504).


I always post a cat from it whenever I need to specify a response code in an issue ;)


Same. I know and see several of the codes all the time. But occasionally I encounter a weird one and I always go to http.cat to find out what it is.


I still don't understand 409 errors. Saw one for the first time a few weeks ago.


Key violation in your database? Can't insert the record because the key already exists? Thus, conflict.
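That duplicate-key case can be sketched as a toy handler. The names here (`users`, `create_user`) are made up for illustration; the point is that 409 means the request was valid but conflicts with existing state, so retrying the same request won't help:

```python
# Toy in-memory store standing in for a database with a unique key.
users = {}

def create_user(username, data):
    if username in users:
        # 409 Conflict: the resource already exists; the client has to
        # change the request (e.g. pick another username) to succeed.
        return 409, {"error": f"user '{username}' already exists"}
    users[username] = data
    return 201, {"created": username}
```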


Ahaha same.


Seriously, that’s a completely nonsensical line.


Is this some kind of OEM Apple display? Or did they just put all that effort into machining those spheres in the back of it so it looks like one?

