Minutes, really. Despite what the article says, you can get 90% of the way there by telling Claude how you want the project documentation structured and just letting it do it. It's up to you whether you want to tune the last 10% manually; I don't. I've been using basically the same system, and when I tell Claude to update the docs it doesn't revert to one big Claude.md; it maintains them in a structure like this.
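(Something roughly along these lines; the file names here are just an illustration, not a prescription:)

    Claude.md            - short index that points at the docs below
    docs/architecture.md - high-level design and module boundaries
    docs/conventions.md  - code style, lint and test expectations
    docs/testing.md      - how to run and write tests
    docs/decisions/      - one short note per significant decision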
I've taken the position that if something is too expensive without ads, it's too expensive for me. My life is blissfully, nearly entirely, ad free. The only downside is I'm an alien on my own planet, blind to the continuous swamp of advertising everyone around me lives in.
I feel you on that. I almost never see ads and don't know how people subjugate themselves to it (by not running an ad blocker of any kind, nor paying to remove ads).
The last point is very interesting. I would say I have minimal, nearly nonexistent ability to visualize mentally, unless I am in a lucid dreaming state, in which case it does feel like a magic power.
I never thought about it until you brought it up, but my ability to manipulate images in my head is nonetheless top-notch: I can solve, with hardly any effort, visual puzzles that seem difficult for most people. And my ability to draw/paint is not something people would pay me for, but it's still well above average, and that also requires “holding” an image in your mind. So either these skills are unrelated to being able to “see” your imagination, or we are really failing to communicate about it.
I think you've got to handle them on their own merits. Ultimately humans can write like AI and AI can write like humans, but a good answer is a good answer and a bad one is bad. Karma and moderation already have plenty of ways to handle inappropriate answers.
I think the value of the interaction matters. Whoever prompted the LLM to reply, what did they want: to learn? To be thoughtful? To argue? And will this interaction be valuable for anyone replying to it, or reading it? I think yelling at the void and hearing the coherent echo of a million people is not the same as having a conversation.
Also, put heavy lint rules in place, plus commit hooks to make sure everything compiles, lints, passes tests, etc. You've got to be super, super defensive. But Claude Code will see all those barriers and respond to them automatically, which saves you the trouble of being vigilant over so many little things. You just need to watch the big picture: make sure tests exist to reproduce bugs, new features are tested, and so on.
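For what it's worth, a minimal pre-commit hook sketch (the check commands are placeholders; swap in whatever your project's build, lint, and test commands actually are):

    #!/usr/bin/env python3
    # Hypothetical .git/hooks/pre-commit: abort the commit if any check fails.
    import subprocess
    import sys

    CHECKS = [
        ["make", "build"],   # placeholder: compile step
        ["make", "lint"],    # placeholder: linters, warnings as errors
        ["make", "test"],    # placeholder: test suite
    ]

    for cmd in CHECKS:
        if subprocess.run(cmd).returncode != 0:
            print(f"pre-commit: {' '.join(cmd)} failed, aborting commit", file=sys.stderr)
            sys.exit(1)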
Exactly. There's a lot of strongly worded stuff in here about how easy locks are to defeat, but that's only against someone who's practiced the art, which is a very small percentage of the population. And in my experience they're mostly honest people interested in the technical challenge, rather than criminal exploitation. A typical modern lock is going to massively slow down or outright stop nearly everyone who comes up against it.
I stopped watching a couple of years ago, but I assume they're still doing this: dealing out every game to a different network. You needed like 4 sports subscriptions just to be able to watch the season, sometimes even just to watch the championship. For me that was a bridge too far.
All that, mainly because some streaming services are willing to pay a more competitive amount for a single (albeit national) weekly game broadcast to sweeten their offering and pull in more subscribers.
Of course the NFL isn't gonna turn down $1B per year from Amazon for TNF. They get ~$2B from CBS and Fox each for the combined 10 Sunday games, then another couple billion from NBC for one Sunday night game, and another couple billion from ESPN for Monday night.
Those deals already add up to roughly $9-10B combined, and I think it's unlikely a single broadcaster would spend $12+B/year for exclusive rights to all games.
Wasn't me, but I think the principle is straightforward. When you get an answer that wasn't what you wanted, and you'd normally reply "no, I want the answer to be shorter and in German", instead start a new chat, copy-paste the original prompt, and add "Please respond in German and limit the answer to half a page." (Or just edit the prompt, if your UI allows it.)
Depending on how much you know about LLMs, this might seem wasteful, but it's actually more efficient and will save you money if you pay by the token: a follow-up turn resends the entire conversation (your original prompt, the unwanted answer, and the correction) as input, while a fresh or edited prompt sends only the prompt itself.
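In API terms, a rough sketch (the message contents are made up, just to show what gets billed):

    # Replying in the same chat: the whole history goes back in as input tokens.
    original_prompt = "Summarize this article for me: ..."
    unwanted_answer = "Here is a two-page English summary: ..."

    followup_request = [
        {"role": "user", "content": original_prompt},
        {"role": "assistant", "content": unwanted_answer},  # billed again as input
        {"role": "user", "content": "No, shorter and in German please."},
    ]

    # Editing / restarting: only the revised prompt is sent.
    edited_request = [
        {"role": "user", "content": original_prompt + " Respond in German, half a page max."},
    ]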
In most tools there's no need to cut-and-paste: just click the small edit icon next to the prompt, edit, and resubmit. Boom, the old answer is discarded and a new one is generated.
That is odd; are you using small models with the temperature cranked up? I mean, I'm not getting word-for-word the same answer, but material differences are rare. All these rising benchmark scores come from increasingly consistent and correct answers.
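If you're hitting an API directly, temperature is the first knob to check. A sketch, assuming the OpenAI Python SDK (the model name is just a placeholder):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": "Explain the CAP theorem in one paragraph."}],
        temperature=0,  # near-greedy sampling; answers vary much less run to run
    )
    print(resp.choices[0].message.content)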
Perhaps you are stuck on the stochastic parrot fallacy.
You can nitpick the idea that this or that model does or does not return the same thing _every_ time, but "don't anthropomorphize the statistical model" is just correct.
People forget just how much the human brain likes to find patterns even when no patterns exist, and that's how you end up with long threads of people sharing shamanistic chants dressed up as technology lol.
To be clear, re my original comment: I've noticed that LLMs behave this way. I've also independently read that humans behave this way. But I don't necessarily believe that this one similarity means LLMs think like humans. I didn't mean to anthropomorphize the LLM, as one parent comment claims.
I just thought it was an interesting point that both LLMs and humans have this problem, which makes it hard to avoid.