I would say code generation and documentation are another one. The UX for this is a bit clunky, but I've been pretty blown away by ChatGPT's ability to generate, translate, and explain existing code.
I actually started trying to learn Rust using ChatGPT. I can just ask it to explain the bits I don't understand ... and it does.
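For example, this is the kind of snippet I'd paste in and ask "what exactly is happening here?" (the code is just an illustration I made up, not from any particular tutorial; the iterator/closure/borrowing combo is a classic beginner stumbling block):

```rust
// A typical bit of Rust a newcomer might ask ChatGPT to explain:
// iterators, closures, and borrowing all in one line.
fn main() {
    let words = vec!["rust", "is", "strict"];

    // `iter()` borrows each element, `map` applies a closure to each,
    // and `collect` builds a new Vec from the results.
    let upper: Vec<String> = words.iter().map(|w| w.to_uppercase()).collect();
    assert_eq!(upper, vec!["RUST", "IS", "STRICT"]);

    // `words` is still usable here, because `iter()` only borrowed it.
    println!("{:?}", words);
}
```

Being able to ask "why can I still use `words` after that?" and get a coherent answer about borrowing is exactly the learning loop I mean.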
There's also a lot of content generation in the legal and medical spheres where ChatGPT could probably be helpful. Dangerous when it gets things wrong, of course. But I could still see it becoming a useful tool for researchers in all sorts of fields to dig through a lot of information quickly. ChatGPT is, after all, trained on more material than a single human could ever read.
I think this whole space may be bottlenecked on imagination. We have a lot of AI experts without much expertise in anything else, not seeing the forest for the trees. OpenAI employs a few geniuses, but their product is basically a chat box and an API.
As I was remarking to a friend the other day: they have ChatGPT, pretty decent speech-to-text, and amazing text-to-speech ... so why can't I talk to ChatGPT and listen to the answer? It's such an obvious thing to do that surely somebody inside OpenAI has thought of it. I know there's a multitude of OpenAI-powered proofs of concept from third parties, but so far nobody seems to have the ambition to support a finished product.
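The plumbing for that is conceptually trivial, which is what makes its absence so odd. A sketch of the loop in Rust, where every trait and type is a hypothetical stand-in for whatever STT/chat/TTS services you'd actually wire in (none of this corresponds to a real API):

```rust
// Hypothetical sketch of a voice loop: speech-to-text -> chat model -> text-to-speech.
// The traits are placeholders showing the shape, not real service bindings.
trait SpeechToText {
    fn transcribe(&self, audio: &[u8]) -> String;
}
trait ChatModel {
    fn reply(&self, prompt: &str) -> String;
}
trait TextToSpeech {
    fn synthesize(&self, text: &str) -> Vec<u8>;
}

// The whole "product" is gluing the three stages together.
fn voice_turn(
    stt: &dyn SpeechToText,
    chat: &dyn ChatModel,
    tts: &dyn TextToSpeech,
    audio_in: &[u8],
) -> Vec<u8> {
    let question = stt.transcribe(audio_in);
    let answer = chat.reply(&question);
    tts.synthesize(&answer)
}

// Dummy implementations so the sketch actually runs end to end.
struct EchoStt;
impl SpeechToText for EchoStt {
    fn transcribe(&self, audio: &[u8]) -> String {
        String::from_utf8_lossy(audio).into_owned()
    }
}
struct CannedChat;
impl ChatModel for CannedChat {
    fn reply(&self, prompt: &str) -> String {
        format!("You asked: {prompt}")
    }
}
struct BytesTts;
impl TextToSpeech for BytesTts {
    fn synthesize(&self, text: &str) -> Vec<u8> {
        text.as_bytes().to_vec()
    }
}

fn main() {
    let out = voice_turn(&EchoStt, &CannedChat, &BytesTts, b"why is the sky blue");
    assert_eq!(out, b"You asked: why is the sky blue".to_vec());
    println!("{}", String::from_utf8(out).unwrap());
}
```

Three function calls in a trench coat. The hard parts (latency, barge-in, streaming audio) are real, but the basic product is sitting right there.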