I disagree that attackers aren't sophisticated enough to use modified tools. There are entire office campuses dedicated to committing fraud, and there's state-sponsored subterfuge on top of that. There's no reason to think bad actors are intrinsically unsophisticated.
How could they publish the terabytes of training data? A million RAR files?
Honestly, would that part even be useful? Like, I want to know how they did the training so I can repro it with my own set of training data, right?
I mean, isn't that the future? Somebody figures out how to do P2P distributed training, and groups can crawl the web to train their own open source models?
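For what it's worth, here's a toy sketch of the simplest version of that idea, just to make it concrete: each peer trains on its own crawled shard, then peers periodically average parameters, federated-averaging style. Everything here (`local_step`, the least-squares "model") is made up for illustration, not any real project's training loop.

```python
# Hypothetical sketch: peers train locally on their own data shards,
# then gossip and average parameters. A toy least-squares model stands
# in for a real network; nothing here is a real P2P training system.
import numpy as np

rng = np.random.default_rng(0)

def local_step(params, shard, lr=0.01):
    """One toy SGD step on a peer's own data shard (least-squares loss)."""
    X, y = shard
    grad = X.T @ (X @ params - y) / len(y)
    return params - lr * grad

# Three peers, each with a different "crawl" of the same underlying problem.
true_w = np.array([2.0, -1.0])
shards = []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    shards.append((X, y))

peers = [np.zeros(2) for _ in shards]

for _round in range(50):
    # Each peer trains locally for a few steps...
    for i, shard in enumerate(shards):
        for _ in range(5):
            peers[i] = local_step(peers[i], shard)
    # ...then they exchange and average parameters (the P2P part).
    avg = np.mean(peers, axis=0)
    peers = [avg.copy() for _ in peers]

print(avg)  # converges toward true_w ~= [2, -1]
```

The hard parts in practice are bandwidth, stragglers, and trust between peers, which this obviously glosses over.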
IDK. I think sometimes people lean into the idiosyncrasies of their own ADHD. I have ADHD, but I get sucked into writing code because it lets me focus in a way that dishes and laundry do not.
I don't know about that. Even with a company like Google, 12 years is a long time. What if the responsible attorney retired? I've never worked in a legal setting, but I could imagine that causing their work to be reevaluated for cost-effectiveness.
It builds integrations to APIs itself, versus no-code/low-code providers doing it, it seems. The logical next step is to make running LLM workflows robust and maintainable.
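To sketch what I mean by "robust": at minimum, retries with backoff plus validation around each step, so a flaky or malformed model response doesn't silently corrupt the rest of the workflow. `call_llm` and `parse_invoice` below are hypothetical stand-ins, not any particular vendor's API.

```python
# Minimal sketch of a "robust" workflow step: retry with backoff until the
# model output parses and validates. All names here are illustrative.
import json
import time

def call_llm(prompt: str) -> str:
    """Stand-in for whatever model/provider the workflow actually uses."""
    raise NotImplementedError

def run_step(prompt: str, validate, max_retries: int = 3, backoff: float = 2.0):
    """Run one workflow step, retrying until the output validates."""
    last_error = None
    for attempt in range(max_retries):
        try:
            raw = call_llm(prompt)
            result = json.loads(raw)   # structured output expected
            validate(result)           # fail fast on bad shapes
            return result
        except Exception as exc:       # log-and-retry on any failure
            last_error = exc
            time.sleep(backoff ** attempt)
    raise RuntimeError(f"step failed after {max_retries} attempts") from last_error

def parse_invoice(result: dict) -> None:
    """Example validator: the fields downstream steps depend on must exist."""
    assert "vendor" in result and "total" in result
```

The maintainability part is mostly about making these steps declarative and observable, which is a product problem more than a code one.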