ijk's comments | Hacker News

Also, the rules and norms of the subreddit have changed over time, which has led to spin-off subreddits that serve those purposes.

That's not necessarily a downside for traffic safety, though. I imagine someone must have studied the effects of various wavelengths on drivers...

Advertisers definitely did - there's (some) money in billboards, but only as long as you don't kill your prospective customers.

Which is GOG's selling point, versus Steam.

Everything is DRM free and they provide offline installers. They are also proactive in making sure the games they sell run on modern systems.

I've been reaching for BAML when I really need prompt iteration at speed.
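
For anyone unfamiliar: BAML keeps the prompt and output schema in its own .baml files and generates a typed client, so iterating on the prompt never touches application code. A minimal sketch of the Python calling side, assuming a BAML function has already been defined and the client generated; the function name ExtractSummary is hypothetical:

    # Assumes baml_src/ defines something like
    #   function ExtractSummary(text: string) -> Summary
    # and that `baml-cli generate` has produced the baml_client package.
    from baml_client import b  # generated, typed sync client

    # Hypothetical function name. The fast iteration loop is editing the
    # .baml prompt and regenerating; this call site never changes.
    summary = b.ExtractSummary(text="Long document goes here...")
    print(summary)  # a typed object, not raw JSON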

This matches my experience with DSPy. I ended up removing it from our production codebase because, at the time, it didn't work quite as effectively as just using Pydantic and so forth.
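
For reference, the "just use Pydantic" pattern amounts to asking the model for JSON and validating it against a schema yourself. A minimal sketch, assuming the OpenAI Python SDK; the model name, schema, and prompts are placeholders:

    from openai import OpenAI
    from pydantic import BaseModel, ValidationError

    class Verdict(BaseModel):
        label: str          # e.g. "positive" / "negative"
        confidence: float   # 0.0 to 1.0

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        response_format={"type": "json_object"},
        messages=[
            {"role": "system",
             "content": 'Reply with JSON: {"label": str, "confidence": float}'},
            {"role": "user", "content": "This library saved me a week of work."},
        ],
    )

    try:
        verdict = Verdict.model_validate_json(resp.choices[0].message.content)
    except ValidationError:
        ...  # retry or repair here; this is the control layer DSPy abstracts away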

The real killer feature is the prompt compilation; it's also the hardest to get working effectively, and I frequently found myself needing more control over the context than it allowed. This was a while ago, so things may have improved. But good evals are hard, and the really fancy algorithms will burn a lot of tokens to optimize your prompts.
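
To make the token cost concrete: DSPy's optimizers compile a prompt by repeatedly running your program over a trainset and scoring it with a metric, so every candidate prompt or demo set costs real LLM calls. A rough sketch from memory of the DSPy API; the model name and examples are placeholders, and the API may have drifted since:

    import dspy
    from dspy.teleprompt import BootstrapFewShot

    dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))  # placeholder model

    class QA(dspy.Signature):
        """Answer the question concisely."""
        question = dspy.InputField()
        answer = dspy.OutputField()

    program = dspy.Predict(QA)

    trainset = [
        dspy.Example(question="2 + 2?", answer="4").with_inputs("question"),
        # ...more examples; writing a good metric/eval set is the hard part
    ]

    def exact_match(example, pred, trace=None):
        return example.answer.strip() == pred.answer.strip()

    # compile() bootstraps few-shot demos by actually running the program,
    # so cost scales with trainset size and optimizer settings.
    optimizer = BootstrapFewShot(metric=exact_match)
    compiled = optimizer.compile(program, trainset=trainset)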


Yes! I have also felt this. I highly recommend taking a look at Maxime's template adapter: https://github.com/dspy-community/dspy-template-adapter

I think it solves some of this friction!


Yeah, there's often a heavy instruction and recency bias that just squeezes all of the nuance and subtlety out of it.

Yes, though I would take it in a different direction and say that LLMs are better at putting actual ideas into code. They've never gotten real feedback on how their literary metaphors feel, but they have gotten very direct feedback on whether code runs at all, and slightly more indirect feedback on whether it runs as part of the larger system.

"Adequate coffee" almost works as an image.

In the hands of Douglas Adams or Kurt Vonnegut it could be spun into a whole recurring motif.

In this case it's merely...adequate. Almost captures the density of ideas packed into something like "The ships hung in the sky in much the same way that bricks don't" but doesn't quite manage the same effect.


No, I clocked the AI images before I noticed the text. I think the "obviously" is earned.

You are correct that a previous era would have included a bunch of Fiverr images in roughly that style, but the style isn't the problem. None of the images say more than the text they're illustrating. It's subtle, but once you notice the lack of information density it becomes starkly apparent.


In Peter Watts's Blindsight, the aliens interpret language as spam, a hostile attempt to waste their time, and respond by opening fire.

Reading LLM slop without warning makes me see their point of view.

I think there are useful ways to engage with LLM writing, but they are often very different from the ways you'd engage with human writing.

A human writer, a good one, often has ideas that are denser than the words on the page, and close reading pays off by helping you unpack the many implications.

With AI writing, there are usually fewer ideas than words, so it requires a different kind of engagement. Either the human prompter behind it didn't supply enough ideas, or they were noncommittal enough that their very indecision got baked in.

LLMs are very prone to hedging and circling around a point while not saying much of anything. Maybe hedging is the easiest way to satisfy RLHF incentives and corporate-speak training data. Or maybe they're intrinsically stuck, unable to find the right next token, so they endlessly spiral around via all of the wrong ones. Either way, there's often a whole lot of cotton-candy text that dissolves when you try to look at it more closely.

