I was going to say the same thing. For some real-world estimation tasks where I don't need 100% accuracy (for example, analysing the working capital of a business from its balance sheet, or analysing images to estimate inventory), the job GPT-4o does is better than that of fresh MBA graduates from tier 2/tier 3 cities in my part of the world.
Job seekers currently in college have no idea what is about to hit them in 3-5 years.
I agree. The bias that HN and the wider tech bubble aren't noticing is that they're full of engineers judging GPT-4 on software engineering tasks. In programming, the margin of error is incredibly slim: a compiler either accepts entirely correct code (syntactically, at least) or rejects it. There is no in-between, and verifying that software is correct is hard.
In any other industry where you just need a margin of error close to the average human's, and where verifying an output is much easier than generating it, the market will change drastically.
On the other hand, programming and software engineering data is almost certainly over-represented on the internet compared to information from most professional disciplines. It also seems to be getting dramatically more focus than other disciplines from model developers. For those models that disclose their training data, I've been seeing decent-sized double-digit percentages of the training corpus being just code. Finally, tools like Copilot seem ideally positioned to gather real-world data about model performance.