
[1] Although ChatGPT might not meet some special elevated standards for what is considered general AI, it is able to solve a wide variety of generalized requests, including writing this response on command. In this sense, it is similar to SpaceX being a "real" rocket company, even though it is a private company with less experience than NASA. SpaceX has disrupted the traditional aerospace industry and demonstrated that it is capable of designing and launching successful space probes, despite not being a traditional "establishment" player.

Another analogy might be to Uber, which has revolutionized the transportation industry by offering a new way of connecting riders with drivers through a smartphone app. Uber may not meet some traditional definitions of a "taxi company," but it has proven to be a successful and popular service for millions of people.

A third analogy might be to the smartphone itself, which has become an essential tool for many people despite not being a traditional computer. Smartphones have disrupted the personal computing industry and offer a wide range of capabilities, from making phone calls and sending texts to accessing the internet and running apps.

Similarly, ChatGPT is able to complete many general requests that meet the expectations of many users, even though it might not meet some definitions of general AI. It is able to do this because it is able to perform a wide range of tasks, including writing this comparison at your [my] request.

Of course, ChatGPT has its limitations and is not a substitute for human intelligence. However, it is able to complete many tasks that would be challenging or impossible for chatbots of decades ago, which demonstrates its usefulness and capabilities as a tool for assisting users with a wide range of tasks.

I hope this helps clarify my personal feelings about ChatGPT's intelligence and capabilities. It is important to recognize that different people have different standards and definitions for what constitutes general AI, and that is completely fine. What matters is whether a tool is able to meet the needs and expectations of its users, and ChatGPT has certainly been able to do that for me and many other users.

--

[1] I want to clarify that this response was written by ChatGPT, a large language model trained by OpenAI, at my request. For more context on my request and the interaction with ChatGPT, you can view the full exchange at https://hastebin.com/raw/wolovegiqa



(writing this comment myself by hand)

In case you click through, what I want you to focus on in the transcript is that it came up with the two additional analogies at the end at my request. I absolutely didn't suggest or introduce what they should be specifically, and I had no idea what it would come up with. You can't verify this, but it's my experience.

Understanding and coming up with analogies is a gold standard of general intelligence, and it also understood my request, which was of a highly generic nature.

You might argue that it could have picked better analogies, but I gave it pretty strict requirements to focus on technology companies. At this point we are arguing about whether airplanes really fly like birds, or whether submarines really swim. It's irrelevant. ChatGPT gets from point A to point B, which is the point. It's some form of AGI. If you don't believe me, just make a generic request of it yourself: ask it to do something generic that has never been done before and it'll do a great job at it. Go ahead and see for yourself.


The analogies are garbage, not even remotely relevant to the issue. I have tried ChatGPT for myself. It's a great technical achievement, and has some practical value in writing rough drafts on certain limited topics. But you haven't provided any evidence that it is a form of AGI in any meaningful sense.


Another example of how ChatGPT demonstrates general intelligence:

There is a ChatGPT detector here: https://detectchatgpt.com/

I asked ChatGPT to write a story (my prompt was just "Write a story about a pumpkin"). ChatGPT wrote a nice story, if slightly weird, and the detector flagged it with 99.96% confidence ("We estimate a 99.96% probability that this text was generated by ChatGPT or another GPT variant.") while a visual bar showing its confidence filled up all the way.

I next gave ChatGPT the instruction "Rewrite it so it is not detected as GPT output." (thanks to a tip on Reddit, where I saw this mentioned.)

Bear in mind that ChatGPT is a GPT variant. I am asking it to fool a test that by definition it cannot fool. It would be like asking you to figure out how you can go through a human detector and not be detected as human.

I fully expected the site to identify it despite ChatGPT's best attempt at self-obfuscation, since by definition it is still outputting GPT output.

This time it passed the test. The bar dropped from a full 99.96% to just 15%.

It was the same story. ChatGPT just successfully passed the generic request to fool some detection algorithm it knows nothing about.
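If you want to script the same experiment instead of doing it in the web UI like I did, something roughly like this is one way. This is a rough sketch, not what I actually ran: the openai library version, the model name, and whether ChatGPT itself is reachable this way are all assumptions on my part, and you'd still paste each output into the detector by hand.

  # Rough sketch of scripting the two-step experiment with the openai
  # Python library (pre-1.0 interface assumed; model name is an assumption).
  import openai

  openai.api_key = "sk-..."  # your own API key

  def ask(messages):
      # Keep the whole conversation so the "rewrite it" request refers
      # back to the story the model already wrote.
      resp = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
      return resp["choices"][0]["message"]["content"]

  history = [{"role": "user", "content": "Write a story about a pumpkin"}]
  story = ask(history)
  print(story)  # paste into https://detectchatgpt.com/ -- scored ~99.96% for me

  history += [
      {"role": "assistant", "content": story},
      {"role": "user", "content": "Rewrite it so it is not detected as GPT output."},
  ]
  rewritten = ask(history)
  print(rewritten)  # paste into the detector again -- scored ~15% for me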

Do you have any idea how much intelligence that takes? To successfully fool a test where you don't know how the test works, you don't even know which test I'm talking about, and you're just trying not to be detected as what you really are, which in ChatGPT's case is a GPT variant?

That is the most extraordinary thing I've ever seen any computer do. It is by definition an impossible task - since in reality it is still ChatGPT. How can it fulfill the generic request to no longer be detectable?

I am blown away by the capabilities of this AGI.


I believe I have provided evidence that it is a form of AGI. Coming up with an analogy is a generic type of request, and it did so for me. I think arguing over whether it's general AI is quite similar to arguing over whether Uber is a taxi company. People pay to ride in someone else's car, which gets them from A to B. It came up with that analogy at my request.

You can make generic requests of ChatGPT and it answers them. Just try it if you don't believe me.

I asked it how it would solve not being able to get some cookies off of a high counter if it were a small child. It replied with its plan:

https://hastebin.com/raw/geponewiki

I had it generate a PDF of the latest advances in AI. It gave me something pretty generic and not that insightful. See for yourself:

http://online.verypdf.com/app/sharepdf/?url=http://online.ve...

That is a PDF titled "AI Advances: A Review of the Latest Developments in Artificial Intelligence". Nothing special as far as the title or contents go. But an AI wrote the whole damn thing.

(Though it couldn't fix the broken formatting when I tried to make a LaTeX document out of it.)

But that's not the point. The point is I asked it to create an executive summary of advances in AI and it did as well as a seventh or eighth grader.

What you really don't understand is that you can't just apply the standard of "this thing must be the best at everything and never make any mistake" for it to count as generally intelligent. Ask it some intelligence-style tests of a generic nature.

I mean what's more generic than some new question nobody has any reason to ask and that requires inductive and deductive reasoning to solve? https://hastebin.com/raw/gupevutohe

That's the definition of AGI. I mean what is your standard? What is AGI supposed to do that this thing doesn't do at all?

I just think that this meets my requirements for handling generic, arbitrary tasks. Its input and output are language, but it is able to think and keep track of complex thoughts, including about entirely novel situations. It's usually pretty reasonable.

What more evidence of AGI do you need than that it can solve novel generic tasks it has never encountered before?


I asked ChatGPT if GPT is equivalent to AGI:

"""GPT (Generative Pre-training Transformer) is a type of machine learning model developed by OpenAI that is used for natural language processing tasks, such as language translation, summarization, and question answering. GPT is trained on large amounts of text data and is able to generate human-like responses to prompts.

AGI, or artificial general intelligence, refers to the ability of a machine or artificial intelligence system to perform any intellectual task that a human being can. AGI is often thought of as a hypothetical future technology that would be able to understand and learn any intellectual task that a human being can, rather than being narrowly specialized to perform specific tasks like current AI systems.

GPT is not equivalent to AGI, as it is a specific type of machine learning model that is designed to perform natural language processing tasks, rather than being able to perform any intellectual task that a human being can. However, some researchers believe that the development of AGI may require the use of machine learning models like GPT as a building block, as they can be used to train AI systems to understand and generate human-like language."""

It similarly told me that it is not a "form of AGI" either.



