Hacker News

You really never ssh from one remote server to another?


Not GP, but:

I do; however, when I do this I make sure the certificate is signed with permit-agent-forwarding and require people to forward the SSH agent from their laptops.

This also discourages people from leaving their SSH private key on a server just for SSHing into other servers from cron, instead of using a proper machine key.
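For example, a scheduled job that needs to reach another host can use a dedicated machine key generated on that server, rather than a copy of someone's personal key (paths and hostnames below are illustrative):

```shell
# Generate a dedicated, passphrase-less machine key once:
#   ssh-keygen -t ed25519 -f /etc/backup/machine_key -N ""
# Authorize only that key's public half on the target, then reference
# it explicitly in the crontab:
0 3 * * * rsync -a -e "ssh -i /etc/backup/machine_key" /data/ backup@backup.example.com:/data/
```

The machine key can then be restricted on the target side (e.g. a forced command in authorized_keys), which isn't possible when a personal key is copied around.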


Agent forwarding has its own security issues: you're exposing all your credentials to the remote host.

It's better to configure jump hosts in your local ssh config.
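A jump-host setup in ~/.ssh/config might look like this (hostnames here are placeholders):

```
Host jump
    HostName jump.example.com
    User alice

Host internal-*
    # Tunnel through the jump host. Authentication for both hops
    # happens locally, so no keys or agent ever live on the jump host.
    ProxyJump jump
```

With that in place, `ssh internal-db` connects through the jump host without any agent forwarding.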


There's SSH agent restriction now.

[1] https://www.openssh.com/agent-restrict.html
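With OpenSSH 8.9 or later, keys loaded into the agent can be constrained to specific hops, so a forwarded agent can't be abused to authenticate elsewhere (hostnames below are placeholders):

```shell
# Allow this key to be used only from the local machine to the jump host,
# and from the jump host on to the internal target:
ssh-add -h jump.example.com \
        -h "jump.example.com>internal.example.com" \
        ~/.ssh/id_ed25519
```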


In general for systems like this, you can open the browser link from a different host.

For example, if I've SSHed from my laptop to Host A to Host B to Host C then need to authenticate a CLI program I'm running on Host C, the program can show a link in the terminal which I can open on my laptop.


Having to interact with the browser every time I need to ssh to a machine would be extremely painful.

If key forwarding works, that might be workable.

I'm extremely wary of non-standard ssh login processes as they tend to break basic scripting and tooling.


These tools usually cache your identity, so you might only need to go through a browser once a day.


I suppose this could be solved by using the first server as an SSH jump host -- see SSH(1) for the -J flag. Useful e.g. when the target server requires public key authentication and you don't want to copy the key to the jump host. Not sure it would work in this scenario though.
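Concretely, that looks like (hostnames illustrative):

```shell
# Connect to the target through the jump host; public-key auth for the
# target is performed from the local machine, so the key never needs
# to be copied to the jump host.
ssh -J user@jumphost.example.com user@target.internal

# Multiple hops can be chained:
ssh -J user@hop1.example.com,user@hop2.example.com user@target.internal
```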


SSHing from one remote server to another won’t be possible in a lot of environments due to network segmentation. For example, it shouldn’t be possible to hop from one host to another via SSH in a prod network supporting a SaaS service. Network access controls in that type of environment should limit network access to only what’s needed for the services to run.


I've seen the exact opposite: configurations where it's impossible to avoid SSHing from one remote server to another because of network segmentation. At the network level you can't reach any production system directly via SSH, only through a jump host, which obviously doesn't have a browser installed.


You don't need the jump host to do the auth for the target host. With -J, the authentication happens locally and is proxied through.


I can count on 1 hand the number of reasons I might need to do that and on each occasion there’s usually a better approach.

To be clear, I’m not suggesting the GP's approach is “optimal”. But if you’ve gone to the trouble of setting that up, then you should have already solved the problems of data sharing (mitigating the need for rsync), network segregation, and secure access (negating the need for jump boxes), etc.

SSH is a fantastic tool but mature enterprise systems should have more robust solutions in place (and with more detailed audit logs than an rsync connection would produce) by the time you’re looking at using AD as your server auth.


The CA CLI tool we use supports a few auth methods, including a passphrase-like one. It likely could be set up with TOTP or a hardware token also. We only use OAuth because it's convenient and secure-enough for our use case.


Never. I’ve been at this company for 8 years and owned literally thousands of hosts and we have a policy of no agent forwarding. I’ve always wondered when I would be limited by it but it simply hasn’t come up. It’s a huge security problem, so I’m quite happy with this.



