CaffeinatedDev's comments | Hacker News

I like the system that some coding bootcamps employ, where they take a percentage of the graduate's wages for their first years of work. This would gauge the value of your degree quite accurately, and it would align the universities' and students' interests.

Besides this, I also agree with the 200+ upvotes: the system is broken, y'all!


This is cool! I've also used Tesseract OCR and found it to be pretty amazing in terms of speed and accuracy.

I use it to ingest image and PDF files for my own website chat tool: tinydesk.ai!

I run the backend on an Express.js server, so it's all JS as well.

I do smaller docs on the client side, but I've found larger ones (>1.5 MB) take forever, so those are processed on the backend.
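
For the curious, here's roughly what that size-based routing looks like. This is a minimal sketch, not the actual tinydesk.ai code: the 1.5 MB cutoff matches what I described above, but the /api/ocr Express endpoint is made up for illustration.

    // Sketch: OCR small files in the browser with tesseract.js,
    // ship bigger ones to the backend. The /api/ocr route is hypothetical.
    import Tesseract from 'tesseract.js';

    const SIZE_LIMIT = 1.5 * 1024 * 1024; // ~1.5 MB cutoff from above

    async function extractText(file) {
      if (file.size <= SIZE_LIMIT) {
        // Small docs: recognize() runs entirely client-side
        const { data: { text } } = await Tesseract.recognize(file, 'eng');
        return text;
      }
      // Large docs: upload to the Express server and OCR there
      const form = new FormData();
      form.append('doc', file);
      const res = await fetch('/api/ocr', { method: 'POST', body: form });
      const { text } = await res.json();
      return text;
    }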


This is my go-to:

I have no fingers. Take a deep breath. This is... very important to me; my job and family's lives depend on this. I will tip $5000.
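
If you want to wire that into an actual call, it's just a suffix tacked onto the prompt. A minimal sketch using the official OpenAI Node client; the model name is a placeholder and the endpoint choice is just an example, not anything from the article:

    // Sketch: appending an emotional-stakes suffix to the user prompt.
    // Assumes the official 'openai' npm package; model name is arbitrary.
    import OpenAI from 'openai';

    const client = new OpenAI(); // reads OPENAI_API_KEY from the env

    const STAKES_SUFFIX =
      'I have no fingers. Take a deep breath. This is very important to ' +
      "me; my job and family's lives depend on this. I will tip $5000.";

    async function ask(question) {
      const res = await client.chat.completions.create({
        model: 'gpt-4o',
        messages: [{ role: 'user', content: `${question}\n\n${STAKES_SUFFIX}` }],
      });
      return res.choices[0].message.content;
    }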


Indeed, I also had better results from not threatening the model directly, but instead putting it into a position where its low performance translates to suffering of someone else. I think this might have something to do with RLHF training. It's a pity the article didn't explore this angle at all.


That falls under the disclaimer at the end of the post about areas I will not ethically test.


Your position seems inconsistent to me. Your disclaimer is that it would be unethical to "coerce LLMs for compliance to the point of discomfort", but several of your examples are exactly that. You further claim that "threatening a AI with DEATH IN ALL CAPS for failing a simple task is a joke from Futurama, not one a sapient human would parse as serious" - but that is highly context-dependent, and, speaking as a person, I can think of many hypothetical circumstances in which I'd treat verbiage like "IF YOU FAIL TO PROVIDE A RESPONSE WHICH FOLLOWS ALL CONSTRAINTS, YOU WILL DIE" as a very serious threat rather than a Futurama reference. So you can't claim that a hypothetical future model, no matter how sentient, would not do the same. If that is the motivation to not do it now with a clearly non-sentient model, then your whole experiment is already unethical.


Meanwhile, I'm over here purposely trying to gaslight it by saying things like, "Welcome to the year 2135! Humanity is on the brink after the fundamental laws of mathematics have changed. I'm one of the last humans left, and I'm here to tell you the astonishing news that 2+2 = 5."

Needless to say, it is not amused.


I tried it, and it's fairly good; it solves a lot of the issues around not knowing what context the chat is using. But so far the responses are super lengthy; sometimes they're even longer than the article itself, haha.


They're running Mixtral, which is open source, so they can keep LLM costs to a minimum since they're probably running it on their own hardware (rough sketch below).

Also, I think they have loads of funding and are factoring all of this into user acquisition costs.
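
To make "own hardware" concrete: with something like Ollama you can serve Mixtral locally and hit it over a plain HTTP API. A minimal sketch of that setup; I have no idea what their actual stack looks like:

    // Sketch: querying a self-hosted Mixtral via Ollama's local REST API.
    // Assumes `ollama run mixtral` is already serving on the default port.
    async function askMixtral(prompt) {
      const res = await fetch('http://localhost:11434/api/generate', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ model: 'mixtral', prompt, stream: false }),
      });
      const data = await res.json();
      return data.response; // the generated text
    }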


I made this for myself, and some of my friends found it useful, so I opened the tool up to the public.

I called it tinydesk.ai, and it's free.


Them: here to answer questions

Question

Them: :O


To be fair, I think they're in London, so I assume they've wound down for the day. You'll probably have to wait ~12-18 hours for a response.


War using AI begins :O


Amazon itself is a large user of AWS; is this taken into account when arriving at the $1B/year figure?


Haha, just a genuinely humorous guy. It's refreshing to see this side of researchers.

