Verification has always been hard and always ignored, in software more than other industries. This is not specific to AI generated code.
I currently work in a software field that has a large numerical component, and verifying that the system is implemented correctly and is stable takes much longer than actually implementing it. It should have been like that when I used to work in a more software-y role, but people were much more cavalier then and it bit that company in the butt often. This isn't new, but it is being amplified.
I just don't understand how this is true unless you're doing something extremely basic. So much context is missing in this post.
Having a CS degree doesn't mean much, but I don't see how a lit major is going to learn how to be productive in an embedded environment for example. There is just too much domain specific knowledge that isn't based purely on intelligence and can't be inferred from first principles.
> I just don't understand how this is true unless you're doing something extremely basic.
The same way it is true for people with no college degree at all. People can learn on the side. Some of them might have had a minor in CS, or worked on hobby software projects in the meantime. Those hires might become some of the best, but finding them is difficult.
Out of the two such SWEs I worked with at Microsoft years ago, one of them had no college degree at all, and another one had an entirely unrelated degree (with his previous full-time job being an air traffic controller at a nearby airport). None of the SWE work they did was trivial or basic even in the slightest.
I taught myself how to program as a teenager by… programming. While I didn’t have an academic background, I was perfectly capable of contributing to OSS and working. Rarely ever did I think “I wish I had a degree to do this.” The little bit of academics I did need I also self taught, like time complexity. The only case really where the degree may be helpful is leetcode type interview questions where you need to know the algorithm.
So you basically have a CS degree. I learned C in 7th grade and was completely self taught. I then got a CS degree because I just wanted to learn more about it and be around people who were also enthusiastic about CS.
There is something disingenuous about the parent post. Highly motivated people will always be good at what they want to do. I'm good at guitar, but never went to music school. Highly motivated individuals though are the exception, not the rule. If you take two random individuals, one with a lit degree and one with a CS degree, the CS degree person will know more in the domain of CS and be more likely to write useful software.
The parent post is conflating being highly selective about personality type and attributing it to the degree.
A lot of our industry was built by people without CS degrees. Actually, I doubt that there are too many newly minted CS graduates able to code anything using an assembler.
It's a valid concern, but with a doctor giving bad advice there is accountability and there are legal consequences for malpractice. These LLM companies want to be able to act authoritatively without any of the responsibility. They can't have it both ways.
I don't mean just doctors giving bad advice. It comes from the top, too.
For example, I remember when eggs were bad for you. Now they're good for you. The amount of alcohol you can safely drink changes constantly. Not too long ago a glass of wine a day was good for you. I poisoned myself with margarine believing the government saying it was healthier than butter. Coffee cycles between being bad and good. Masks work, masks don't work. MJ is addictive, then not addictive, then addictive again. Prozac is safe, then not safe. Xanax, too.
And on and on.
BTW, everyone always knew that smoking was bad for you. My dad went to high school in the 1930s, and said the kids called cigarettes "coffin nails". It's hard to miss the coughing fits, and the black lungs in an autopsy. I remember in the 1960s seeing a smoker's lung in formaldehyde. It was completely black, with white cancerous blobs. I avoided cigarettes ever since.
The notion that people didn't know that cigs were bad until the 1960s is nonsense.
I'm not sure what to make of these technologies. I read about people doing all these things with them and it sounds impressive. Then when I use it, it feels like the tool produces junior level code unless I babysit it, then it really can produce what I want.
If I have to do all this babysitting, is it really saving me anything other than typing the code? It hasn't felt like it yet and if anything it's scary because I need to always read the code to make sure it's valid, and reading code is harder than writing it.
This is the thing that gets me the most. Code review is _hard_. So hard that I'm convinced my colleagues don't do it and just slap "LGTM" on everything.
We are trading "one writer, one reader" for "two readers", and it seems like a bad deal.
Yep, and I'll add: the first reader is the first maintainer. When that is turned over to an LLM agent the organization's leadership had better be prepared to entertain rewrites (reprompts?) of significant portions of LLM-generated code on a regular basis. The call of the rewrite isn't new of course, but it'll be far more alluring since LLMs are at their most "productive" and least destructive when working from a clean slate.
I'm always puzzled by these claims. I usually know exactly what I want my code to look like. Writing a prompt instead and waiting for the result to return takes me right out of the flow. Sure, I can try to prompt and ask for larger chunks, but then I have to review and understand the generated output first. If this makes people 10x faster, they must have worked really slowly before.
That's what I've been saying. On top of that, I have to read way more code, sometimes multiple times as it just doesn't get it, and add the extra cognitive load of "correcting it" rather than just do it myself. I find the act of reading code way more taxing than just mechanically writing the solution, so I don't know where all the AI zealots are coming from.
Also add the huge security gap of letting a probabilistic tool with blurry boundaries execute shell commands. Add the fact that AI is currently not profitable, and that all major players most likely train on your code (Anthropic does).
- you should document your best practices in a file and point the LLM at it (the de facto standards are the @claude or @agent markdown files)
- you should manage context (the larger it gets the weaker the output)
- you should use good and clear prompts
- you should generally make it generate a plan with the requirements (focused on business-logic changes) and then follow and review the implementation plan (I generally produce both in two different markdown files).
- only then you let it code
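As a sketch, a conventions file along these lines might look like the following (the file name, headings, and rules here are illustrative, not a fixed standard):

```markdown
# CLAUDE.md — project conventions (illustrative example)

## Style
- Prefer small, pure functions; avoid global mutable state.
- Every public function gets a doc comment.

## Testing
- Every bug fix lands with a regression test.
- Run the full suite before proposing a diff.

## Workflow
1. Read the relevant module before proposing changes.
2. Write an implementation plan to a markdown file and wait for review.
3. Only implement after the plan is approved.
```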
The last phase isn't even the most important, to be honest; you can do it manually. But I have found that forcing myself through the first two, and having AI find information in the codebase, find edge cases in the business logic, propose different solutions, and evaluate the impact of the changes, is a huge productivity multiplier.
Very often I'm not worn out by the coding part (again, I can do that on my own); the hard part is finding information and connecting the dots. In that, it excels, and I would struggle (mentally) to go back to jumping from file to file while keeping track of my findings in notes to figure out the wheres, whats and whys.
Open source has been good, but I think the expanded use of highly permissive licences has left the door wide open for one-sided transactions.
All the FAANGs have the ability to build all the open source tools they consume internally. Why give it to them for free and not have the expectation that they'll contribute something back?
Even the GPL allows companies to simply use code without contributing back, as long as it's unmodified or only used across a network boundary. The AGPL closes the network-boundary loophole but still has the former issue.
Something I haven't seen mentioned is that python is very commonly taught at universities. I learned it in the 2010s at my school, whereas I never got exposed to Perl. The languages people learn in school definitely stick with you and I wonder if that plays a non-zero factor in this.
I'm not a fan of rust, but I don't think that is the only takeaway. All systems have assumptions about their input and if the assumption is violated, it has to be caught somewhere. It seems like it was caught too deep in the system.
Maybe the validation code should've handled the larger size, but also the db query produced something invalid. That shouldn't have ever happened in the first place.
> It seems like it was caught too deep in the system.
Agreed, that's also my takeaway.
I don't see the problem being "lazy programmers shouldn't have called .unwrap()". That's reductive. This is a complex system and complex system failures aren't monocausal.
The function in question could have returned a smarter error rather than panicking, but what then? An invariant was violated, and maybe this system, at this layer, isn't equipped to take any reasonable action in response to that invariant violation and dying _is_ the correct thing to do.
But maybe it could take smarter action. Maybe it could be restarted into a known good state. Maybe this service could be supervised by another system that would have propagated its failure back to the source of the problem, alerting operators that a file was being generated in such a way that violated consumer invariants. Basically, I'm describing a more Erlang model of failure.
Regardless, a system like this should be able to tolerate (or at least correctly propagate) a panic in response to an invariant violation.
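As a sketch of that Erlang-ish model in Rust: the inner function reports the invariant violation as a typed error instead of panicking, and a small "supervisor" layer decides what failure means (here, falling back to a known-good default). All names and the size check are hypothetical, for illustration only, not from the incident under discussion.

```rust
use std::fmt;

// Hypothetical invariant-violation error; names are illustrative.
#[derive(Debug)]
struct InvariantViolation(String);

impl fmt::Display for InvariantViolation {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "invariant violated: {}", self.0)
    }
}

// Instead of `.unwrap()`, propagate the violation to a layer that can act on it.
fn parse_size(raw: &str, max: usize) -> Result<usize, InvariantViolation> {
    let n: usize = raw
        .parse()
        .map_err(|_| InvariantViolation(format!("not a number: {raw}")))?;
    if n > max {
        return Err(InvariantViolation(format!("size {n} exceeds limit {max}")));
    }
    Ok(n)
}

// A minimal "supervisor": the caller, not the inner function, decides
// whether a violation means dying, restarting, or degrading gracefully.
fn supervise(raw: &str) -> usize {
    match parse_size(raw, 1024) {
        Ok(n) => n,
        Err(e) => {
            eprintln!("falling back to known-good state after: {e}");
            0 // safe default instead of a process-wide panic
        }
    }
}

fn main() {
    assert_eq!(supervise("512"), 512);
    assert_eq!(supervise("4096"), 0); // violation handled, not a panic
    println!("ok");
}
```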
The takeaway here isn’t about Rust itself, but that the claims from the Rust marketing crew that we constantly read on HN and elsewhere, that the Result type magically saves you from making mistakes, are not a good message to send.
They would also tell you that .unwrap() has no place in production code, and should receive as much scrutiny as an `unsafe` block in code review :)
The point of Option is that the crash path is more verbose and explicit than the crash-free path. It takes more code to check for NULL in C or nil in Go; it takes more code in Rust to not check for an Err.
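The inversion is visible in a few lines of Rust (the toy `lookup` function here is mine, purely for illustration):

```rust
// Toy function for illustration: absence is encoded in the type.
fn lookup(key: &str) -> Option<u32> {
    if key == "present" { Some(7) } else { None }
}

fn main() {
    // The crash-free path: the compiler forces both cases to be named,
    // so handling the absent case is the default shape of the code.
    let safe = match lookup("missing") {
        Some(v) => v,
        None => 0,
    };
    assert_eq!(safe, 0);

    // The crash path is what takes extra, visible ceremony: `.unwrap()`
    // stands out in review the way an explicit NULL check does in C or Go.
    let loud = lookup("present").unwrap();
    assert_eq!(loud, 7);
    println!("ok");
}
```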
1. They don’t. There is presumably some hypothetical world where they would tell you if you start asking questions, but nobody buying into the sales pitch ever asks questions.
2. You’re getting confused by technology again. This isn’t about technology.
Sometimes it's hard: for many kinds of projects, I don't think anyone would use them if they were not open source (or at least source-available). Just like I wouldn't use a proprietary password manager, and I wouldn't use WhatsApp if I had a choice. Rather I use Signal because it's open source.
How to get people to use your app if it's not open source, and therefore not free?
For some projects, it feels better to have some people use it even if you did it for free than to just not do it at all (or do it and keep it in a drawer), right?
I am wondering, I haven't found a solution. Until now I've been open sourcing stuff, and overall I think it has maybe brought more frustration, but on the other hand maybe it has some value as my "portfolio" (though that's not clear).
But it can be profited from by not-so-big corps, so I'm still working for free.
Also I have never received requests from TooBigTech, but I've received a lot of requests from small companies/startups. Sometimes it went as far as asking for a permissive licence, because they did not want my copyleft licence. Never offered to pay for anything though.
Corporations extract a ton of value from projects like ffmpeg. They can either pay an employee to fix the issues or set up some sort of contract with members of the community to fix bugs or make feature enhancements.
Nearly everyone here probably knows someone who has done free labor and "worked for exposure", and most people acknowledge that this is a scam, and we don't have a huge issue condemning the people running the scam. I've known people who have done free art commissions because of this stuff, and this "exposure" never translated to money.
Are the people who got scammed into "working for exposure" required to work for those people?
No, of course not, no one held a gun to their head, but it's still kind of crappy. The influencers that are "paying in exposure" are taking advantage of power dynamics and giving vague false promises of success in order to avoid paying for shit that they really should be paying for.
I've grown a bit disillusioned with contributing to Github.
I've said this on here before, but a few months ago I wrote a simple patch for LMAX Disruptor, which was merged in. I like Disruptor, it's a very neat library, and at first I thought it was super cool to have my code merged.
But after a few minutes, I started thinking: I just donated my time to help a for-profit company make more money. LMAX isn't a charity, they're a trading company, and I donated my time to improve their software. They wouldn't have merged my code in if they didn't think it had some amount of value, and if they think it has value then they should pay me.
I'm not very upset over this particular example since my change was extremely simple and didn't take much time at all to implement (just adding annotations to interfaces), so I didn't donate a lot of labor in the end, but it still made me think that maybe I shouldn't be contributing to every open source project I use.
I understand the feeling. There is a huge asymmetry between individual contributors and huge profitable companies.
But I think a frame shift that might help is that you're not actually donating your time to LMAX (or whoever). You're instead contributing to make software that you've already benefited from become better. Any open source library represents multiple developer-years that you've benefited from and are using for free. When you contribute back, you're participating in an exchange that started when you first used their library, not making a one-way donation.
> They wouldn't have merged my code in if they didn't think it had some amount of value, and if they think it has value then they should pay me.
This can easily be flipped: you wouldn't have contributed if their software didn't add value to your life first and so you should pay them to use Disruptor.
Neither framing quite captures what's happening. You're not in an exchange with LMAX but maintaining a commons you're already part of. You wouldn't feel taken advantage of when you reshelve a book properly at a public library so why feel bad about this?
Now count how many libraries you use in your day to day paid work that are opensource and you didn't have to pay anything for them. If you want to think selfishly about how awful it is to contribute to that body of work, maybe also purge them all from your codebase and contact companies that sell them?
The main issue is the collaboration aspect of LibreOffice. I imagine though with funding LibreOffice can be upgraded to do this. If countries are already trying to migrate away from US tech, they could invest in this.