So, in my professional work environment, which is a complex C++ app with long build times and lots of proprietary domain-specific knowledge, LLMs were worse than useless. Not totally surprising, but I originally had hopes for something like Cursor simplifying the process of large mechanical refactors, which it decidedly didn't. It could suggest interesting and complex uses of C++ templates, but its applications were very slop-adjacent, and even for less complex refactors it would make an attempt and then fall apart in the "check" part of the "guess and check" loop, probably because of the whole "long build times" thing. So we're still paying for Visual Assist and just wishing it had more (and more specific!) kinds of mechanical refactors available.
for personal projects (more polyglot, but rust, js, python, and random shell scripts are bigger and more important here) it's been more mixed-to-positive; and this is (i think?) in part because i have the luxury of writing off things I'm _not actually_ interested in doing. maintaining cmake files sucks, and the free tier of Cursor does a good enough job of it. I have a few small plugins/extensions for things like blender, and again, I don't know enough to do a good job there, and making something extremely specific to what i need without actually knowing what's going on under the hood works fine: I can just verify the results, and that's good enough. but then, conversely, it's made it _wayyyy_ harder to pick and verify third party libraries for the things i do care about. I'll look something up and it'll either be 100% AI vibe coded and not worth touching, or the code will be fine but the documentation is 100% AI generated; either way, I would rather just have the version of the library from before AI ever existed.
more and more I'm convinced LLM agents are only fit for purpose on things that don't need to be good or consistent; but there genuinely is a viable niche of such things for them to slot into. That's still not worth $20/month to me, though. and it's absolutely ruining the online commons in a way that makes it hard to feel good about.
(my understanding of claude code is that it's a non-interactive agent, which is worse for what i have in mind. iteration and _changing my mind_ are a big part of my process, so even if I let the computer do its own thing for an hour and work on something else, that's less productive than spending even 10 minutes of focused time on the same thing.)
>(my understanding of claude code is that it's a non-interactive agent, which is worse for what i have in mind. iteration and _changing my mind_ are a big part of my process, so even if I let the computer do its own thing for an hour and work on something else, that's less productive than spending even 10 minutes of focused time on the same thing.)
Just use 'plan mode'; it will ask clarifying questions.
so i managed to convince my company to try solid on a new project, pretty much on the basis of "this looks like react but solves many of our existing problems with react". since the JSX and project structure are basically the same, we could take our (pretty tiny at the time) demo project, do a 1:1 diff, and show the differences inline. and it was pretty compelling! the code was simpler and faster, and we still got to keep lots of the unique patterns/other stuff we were used to when building react apps
Probably the most equivalent existing open source thing out there is ENet, which doesn't do encryption or detailed stats. If you've signed the steam NDA (which is free of charge iirc) you can check out the existing docs for the closed source version, which has extra features on top of this.
The concern here is revenge spam: someone takes your email address and automatically submits it to every MailChimp default form they can find, and then your inbox is flooded with ostensibly legitimate email that you have to manually unsubscribe from, list by list.
I understand the concern on an individual level as expressed in the article, but doubt "revenge spam" even moves the needle in mail providers' decisions on whether MailChimp mail gets through or not, which is the primary concern of people worrying about its deliverability.
notably: it's not the GOP that initially funded the oppo research, but a private news org with conservative leanings. and the FBI did not fund the dossier; it was provided to them during its creation. (side note: who cares)
When it comes to corroboration: I would think the special investigation is good enough evidence that the claims of Russian cooperation are being taken seriously, no? "Significant" is a weasel word, but there is actual smoke here.
>I would think the special investigation is good enough evidence that the claims of Russian cooperation are being taken seriously, no?
How would you distinguish the claims being taken seriously from the investigators trying to use the legal system to get dirt on Donald Trump?
At this point, the investigation has been ongoing for around 10 months, and so far the only person they've found anything on is Paul Manafort, who was Trump's campaign manager for a few months.
The charges against Manafort are essentially failing to disclose lobbying for foreign agents, tax evasion, and money laundering [0]. The lobbying was done while Manafort was working for the Podesta Group, which was founded by Hillary Clinton's campaign manager [1].
On a side note, I find it interesting how Wikipedia doesn't have any information on Manafort's involvement with the Podesta Group.
I would say it's mostly a matter of use: In C you deal with void* or typecasts all the time, whereas in higher level languages it's much less common, either because the type system is smarter, or the constraints that it does have are more strictly enforced. For example: you can happily compare a char* and an int in C, but other languages like python might error at the thought.
C is incredibly permissive with regard to its types, which are themselves very anemic. With the exception of the numeric primitives, C really only has a single type, the pointer; everything else is just syntactic sugar for various forms of pointer arithmetic. For instance, arrays in C are just a shortcut for some pointer plus an offset multiplied by a constant determined at compile time, based on what you've claimed is the underlying struct or primitive of the array. Importantly, C is perfectly happy to take any random pointer into arbitrary memory and allow you to map any set of offsets onto it.

It's worth looking at, for instance, Rust, which at least in theory allows the same thing to be done, but only by explicitly opting out of static checks via unsafe blocks. In normal safe code, Rust will statically verify that a given reference (a pointer, more or less) is in fact referring to the type your code expects it to, rather than taking the C approach of simply assuming the program is correct.

Looked at another way, as far as the C compiler is concerned nearly everything is a pointer, and one kind of pointer is entirely exchangeable with another (with at most a cast being required, and probably not even that in the entirely too common case of a void pointer). This is in contrast to nearly every other statically typed language, which will verify, at either compile time or runtime, that any given reference is the appropriate type before dereferencing it. C++ nominally has a more powerful type system, but since it was designed (in theory at least) as a superset of C, C's permissiveness blows a giant gaping hole in it.
/t/tmp.1q8r9dZAtX > cat test.c
int main() {
char *test = "test";
int i = 10;
return test == i;
}
/t/tmp.1q8r9dZAtX > cc test.c
test.c: In function ‘main’:
test.c:4:14: warning: comparison between pointer and integer
return test == i;
^~
If your intent is to improve the world, and not just act superior, I don't see what this accomplishes. Hear dissent without action long enough and it becomes noise.
it can still be a memory issue even if it isn't as blatant as "out of memory".
The easy example is a data structure growing over time and getting slower to manipulate and query. This is a problem where being cavalier with how much memory you allocate and use will hurt you, even if you've got plenty of memory to spare.