Hacker News

Note this is pre-GPT-3. In fact I expect GPT-4 will be where interesting things start happening in NLP.


I honestly don't get what the big deal is with NLP. So far the most useful application has been customer support chatbots, and those still don't rise to the level of having an actual human that can understand the intricacies of your special request.


I work on such a bot @Intercom.

A lot of support requests aren't actually intricate and special: there's almost always loads of simple requests that come in again and again.

When you make a request like that and get the answer instantly, that really is ML delivering value.

You mightn't think it, but a lot of people work in customer support, and spend a lot of time answering rote questions again and again.

People talk about how much hype there is, or the chance of another AI winter. Yes, there's a ton of hype, but I think they aren't considering the real value already being delivered this time around.

Everyone is excited about GPT-3, but there's already been amazing progress in practical NLP over the last few years.


This past week, I used this new edit distance library to identify quasi-duplicates in a large dataset:

https://github.com/Bergvca/string_grouper

Saved me hours of work because all the Levenshtein implementations are pretty slow and I’m going to need to rerun the analysis as the dataset grows.
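For context on why pairwise comparison gets slow: the classic Levenshtein distance is a dynamic program that costs O(len(a) × len(b)) per pair, so an all-pairs scan over a growing dataset blows up quickly. A minimal sketch (not string_grouper's actual code):

```python
def levenshtein(a: str, b: str) -> int:
    # Classic DP edit distance using two rolling rows.
    # O(len(a) * len(b)) per pair -- the cost that makes
    # all-pairs duplicate detection slow on large datasets.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(
                prev[j] + 1,                 # deletion
                curr[j - 1] + 1,             # insertion
                prev[j - 1] + (ca != cb),    # substitution
            ))
        prev = curr
    return prev[-1]

levenshtein("kitten", "sitting")  # 3 edits
```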

I don’t know about consumer-facing tools, but NLP stuff has helped me solve all kinds of tedious data problems at work.
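As a rough stand-in for the quasi-duplicate search described above, here's a sketch using only the standard library's difflib; the names and threshold are made up for illustration. It does the naive O(n²) pairwise scan, which (as I understand it) string_grouper avoids by using TF-IDF n-gram cosine similarity instead:

```python
from difflib import SequenceMatcher
from itertools import combinations

def find_quasi_duplicates(strings, threshold=0.85):
    # Naive all-pairs scan: flag any pair whose similarity
    # ratio meets the threshold. Fine for small data; the
    # quadratic pair count is what dedicated libraries avoid.
    pairs = []
    for a, b in combinations(strings, 2):
        if SequenceMatcher(None, a, b).ratio() >= threshold:
            pairs.append((a, b))
    return pairs

names = ["Apple Inc.", "Apple Inc", "Microsoft Corp",
         "Micro Soft Corp", "Alphabet"]
find_quasi_duplicates(names)
# pairs up the two "Apple" and the two "Microsoft" variants
```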


Current NLP is bad. Still useful (Google search increasingly feels like it is doing NLP to change what I asked for into what it thinks I meant) but bad. A hypothetical future “perfect” NLP can demonstrate any skill that a human could learn by reading, and computers can read so much more than any given human.


Is reading enough to understand the real world without direct experience of the real world? Is there any research that tries to answer this question?


Is there any research that tries to answer this question?

That's the whole point of the experiment called GPT-3.


As of about ten years ago, when I received my degree in linguistics, I understood there were two schools on this issue:

A: Of course not, let's do something else.

B: What? You put text in the maths and stuff comes out.


That's just not even close to the only or most useful application.

I use NLP and associated seq2seq techniques every day. I struggle to see how so many people miss the obvious benefits deep inference is bringing to things all around them.


Could be a more convincing statement with examples...


Speech recognition on any device? Translation?


those still don't rise to the level of having an actual human

What if they did? Do you see what the big deal is now?


If a chatbot was as good as a human then would you notice it was a chatbot?



