File format: as many of the following blocks as you like
Host $ALIAS <-- whatever you want here
Hostname www.example.com
User someuser
Port 1234
You can now ssh to that server as that user by doing "ssh $ALIAS" on the command line, without needing to specify the port or user with the usual command line arguments, or necessarily spell out the entire host name.

Host $ALIAS
User $USER
HostName $INTERNAL
ProxyCommand ssh $USER2@$PUBLIC -W %h:%p
Now
    laptop> ssh jim@public.example.com
    public> ssh dev@myworkstation
becomes
    laptop> ssh work
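With the placeholders filled in from the example above ($USER2 = jim, $PUBLIC = public.example.com, $USER = dev, $INTERNAL = myworkstation, alias "work"), the block reads:

```
Host work
    User dev
    HostName myworkstation
    ProxyCommand ssh jim@public.example.com -W %h:%p
```

ssh expands %h and %p to the target's HostName and port, so the connection to myworkstation is tunnelled through the public host transparently.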
(I just realized that this slightly confused
article seems to accomplish the same by using a convoluted
setup of port-forwardings and netcat.)

ServerAliveInterval 240
ServerAliveCountMax 5

Yeah, the article sets separately first
Host bar
...
and then Host behind.bar
...
But it can also be done by just one step: host behindbar
User <user-behindbar>
Hostname behindbar.domain
ProxyCommand ssh <user-bar>@bar.domain nc %h %p 2> /dev/null

Host bos-??
HostName %h.mydomain.com
IdentityFile ~/.ssh/my-boston-key
Host nyc-??
HostName %h.mydomain2.com
IdentityFile ~/.ssh/my-nyc-key
and log in with e.g. "ssh bos-14".

Host *.amazonaws.com
User ec2-user
IdentityFile ...
And then it is just ssh ec2-X-X-X-X.compute-1.amazonaws.com

Thanks though, I didn't know about the ?? syntax.
scp $ALIAS:/var/log/mylogs/logfile ~/backups/logs/

Nautilus / Connect to Server / Server Address: work/home/user
(here 'work' is the alias for the work computer)
Host 192.168.* *.foo.*.com *.bar.net

alias compile-ssh-config='echo -n > ~/.ssh/config && cat ~/.ssh/*.config > ~/.ssh/config'
alias ssh='compile-ssh-config && ssh'
Compiles all your ~/.ssh/*.config files into a single ssh config file. It's simple and stupid and seems to do the trick.

Maybe it is just me, but I prefer to dump all my custom commands and aliases into .zshrc so they are easy to backup/track/find.
One reason is that invocations of ssh outside the context of a user typing ssh at the command line will also have these customizations applied. This is especially important for ssh, which has emerged as a main security interoperability tool for Unix systems.
For example, if you use rsync, the tunneling and host alias conventions you set up in .ssh/ will carry over transparently to the ssh tunnel used by rsync.
Another example would be invocations of ssh in scripts (sh/bash scripts, even) that will not or might not read your .zshrc.
Just specify the alias host name as configured in ~/.ssh/config, and the user, identity file, and anything else you put there will be used as set.
The excellent (though unfortunately named) keychain[0] utility provides a ready and powerful abstraction for both ssh-agent and gpg-agent.
http://www.ibm.com/developerworks/library/l-keyc.html
http://www.ibm.com/developerworks/library/l-keyc2/
http://www.ibm.com/developerworks/library/l-keyc3/
> No more password prompts
Is that - you ask - because he's using ssh-agent? No, it's because he doesn't tell you you should be using a password-protected key. Some kung fu.

In this case you would limit this ssh-key to only be able to execute the nagios monitoring scripts. Nothing else.
You do this in ~/.ssh/authorized_keys on the remote machine, by prefixing the key with a command="..." restriction.
For continuous deployment you can't easily work around using ssh, but at least the access can be limited to specific commands only.
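A sketch of how such command limiting looks in ~/.ssh/authorized_keys on the remote machine (the script path and truncated key are hypothetical):

```
# one line in ~/.ssh/authorized_keys on the remote host
command="/usr/local/bin/deploy.sh",no-port-forwarding,no-agent-forwarding,no-X11-forwarding,no-pty ssh-ed25519 AAAA... deploy@ci
```

Whatever command the client asks to run, sshd executes only the forced one; the client's original request is exposed to the script in $SSH_ORIGINAL_COMMAND, so the script can inspect it and allow a small whitelist if needed.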
Let's suppose I have an account tests@host which runs the tests (scripts) that need to login to an array of machines.
In order for keychain to be helpful here, you need two prerequisites.
1) You need to be able to interactively login to tests@host once after bootup; after that you don't need to touch the machine again.
2) Then, the test scripts need to say
. $HOME/.keychain/$HOSTNAME-sh
once before executing any ssh command (the line above simply imports the ssh-agent session variables into the current environment).

edit: I removed the Nagios references as other posters rightly point out that there are more endemic ways to collect information with Nagios.
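A minimal sketch of such a test-script preamble, assuming keychain was started once at login and wrote its usual per-host file:

```shell
#!/bin/sh
# Sketch: import keychain's saved agent variables, if present,
# before running any ssh commands in a batch script.
keyfile="$HOME/.keychain/$(hostname)-sh"
if [ -r "$keyfile" ]; then
    . "$keyfile"                  # sets SSH_AUTH_SOCK / SSH_AGENT_PID
    echo "agent loaded: $SSH_AUTH_SOCK"
else
    echo "no keychain file at $keyfile (log in once after boot first)"
fi
# ...subsequent ssh/scp/rsync invocations here use the cached key...
```

After the dot-sourcing, non-interactive ssh invocations authenticate via the long-running agent instead of prompting.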
This treatment of ssh does not mention ssh-agent and, perhaps more importantly, implies that there is a certain virtue in having private keys unprotected by sturdy passphrases lying around.
There is not; most emphatically not.
ssh-keygen -f id_rsa -p

As a bonus, Ed25519 keys unconditionally use bcrypt for protecting the private key.
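Concretely (both commands prompt interactively for the passphrase):

```
# add or change the passphrase on an existing key in place
ssh-keygen -p -f ~/.ssh/id_rsa

# generate a new Ed25519 key; its private key is always stored in the
# new OpenSSH format, which uses the bcrypt KDF (-a sets the KDF rounds)
ssh-keygen -t ed25519 -a 100 -f ~/.ssh/id_ed25519
```

A higher -a value makes brute-forcing a stolen key file proportionally slower, at the cost of a slightly longer unlock on each use.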
- X11 Forwarding
- Reverse forwarding (bind listening sockets on the remote machine,
redirecting to a local service)
- SSH-Based VPNs

Disconnected from your host but not timed out yet? Press Enter, ~, . and the client will quit.
I've tried this before, and what effectively always happened (to me) is that as soon as I started copying a file, I couldn't continue working in Vim anymore until the file was done transmitting because the copying would eat all the bandwidth. There may be a flag or setting around this, but I've never found it. When I open two connections, it is usually fine.
ControlPath /tmp/ssh_mux_%h_%p_%r
This sets the path of the control file used to share the connection. If it ever hangs, I can just delete the file. But in practice I found this doesn't happen often, and I appreciate the speed boost I get from connection sharing.

As for the rest of the article, really nice stuff. Nice tricks for ssh newbies. I wish he had also talked about setting up a nonce system with ssh, moving sshd to a non-default port to prevent attackers spamming port 22, or even removing password authentication altogether.
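For reference, that ControlPath line belongs in a full connection-sharing block like this one (ControlPersist is optional; it keeps the master connection alive for a while after the last session closes):

```
Host *
    ControlMaster auto
    ControlPath /tmp/ssh_mux_%h_%p_%r
    ControlPersist 10m
```

With ControlMaster auto, the first ssh to a host becomes the master and subsequent sessions to the same host/port/user reuse its TCP connection, skipping the handshake entirely.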
In other words, the inconvenience this brings is not justified by the infinitesimal increase in security.
Failed password for root2 from 82.192.86.44 port 44990 ssh2
Failed password for admin from 82.192.86.44 port 44990 ssh2
Failed password for sysdb from 82.192.86.44 port 44990 ssh2
Failed password for scott from 82.192.86.44 port 44990 ssh2
(Yes, that IP has scanned my machine before)

You are wrong. Please refrain from giving security advice.
Changing or filtering the SSH port prevents your host from being compromised by automated netrange sweeps in the event of a pre-auth ssh vulnerability. For this reason changing the SSH port is considered best practice.
PermitRootLogin without-password
instead of 'yes' in /etc/ssh/sshd_config if you absolutely must have ssh root access.

PermitRootLogin forced-commands-only
If it's necessary to run something as root, declare it beforehand. If you encounter a situation where you need to run something unusual on an automated basis, log in as administrator (or edit your Puppet/Chef/Ansible/similar rules if you're on the smart system management side) and update ~root/authorized_keys.

If one needs to SFTP as root, they could enlist the `internal-sftp` target, too (although I haven't tested this; I don't SFTP as root, and if I must update some files I setfacl on them)
Getting by /without/ direct SSH root access is often impractical (think about scp), and without-password is a secure way to have it.
Also, the more people know about "without-password", the less people will set PermitRootLogin to "yes".
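The relevant sshd_config lines, for reference (the comments are my gloss):

```
# /etc/ssh/sshd_config
PermitRootLogin without-password       # key-based root logins only
                                       # (spelled prohibit-password in newer OpenSSH)
#PermitRootLogin forced-commands-only  # stricter: root keys may only run
                                       # the command= forced in authorized_keys
```

Remember to reload sshd after editing, and keep an existing session open until you have verified you can still log in.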
Allowing root login can be a user-management headache in multi-user environments, but strong SSH security can exist for root just the same as for any other user.
Passwordless root SSH is perfectly fine, which is why it is enabled by default. By people who have thought a little longer and harder about all this than you. (Sorry for the tone. Maladvice like yours on public forums is demonstrably harmful.)
My own fix was to use 3G to do the SSH work via a tethered phone and to use the wifi adapter to run the bulk of any other web traffic. It'd be great to have a workaround for DPI, though, if anyone has any experience there.
There's no reason they should be refusing your traffic, and they are probably only causing a problem because some overzealous consultant cranked up the setting too high.
In my city, the compliance requirement that must be met is to have a policy addressing content that is "obscene, indecent, violent, or otherwise inappropriate for viewing in the library environment". Blocking SSH access is not required to meet that compliance requirement.
In our case, our library actually doesn't filter, it's left to the discretion of the librarians. And there is a time limit for access.
Read up about SSH VPNs. Probably some kid set up a proxy accessed over SSH port forwarding, to access some pr0n site, got caught, and next thing you know, no SSH allowed anymore. If they were really smart they'd allow it but rate limit it to 2400 bit/s, which is fairly fast for console work but not so great for downloading animated pr0n gifs.
What's weird is librarians typically are pretty hard core against censorship. The same place that's willing to go to court to keep "to kill a mockingbird" or "huckleberry finn" on the shelves will simultaneously spare no expense to block adults from accessing a breast cancer awareness site. A strange bunch, librarians.
In any case, you can try proxying SSH over SSL using stunnel: http://askubuntu.com/questions/423727/ssh-tunneling-over-ssl
Or you could try setting up OpenVPN, it's easy enough.
http://blog.chmd.fr/ssh-over-ssl-a-quick-and-minimal-config....
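For the stunnel route, a minimal client-side sketch (host name hypothetical; the server side runs stunnel without client mode, accepting on 443 and connecting to its local sshd on 22):

```
; /etc/stunnel/stunnel.conf on the client
[ssh-over-ssl]
client = yes
accept = 127.0.0.1:2222
connect = server.example.com:443
```

Then "ssh -p 2222 localhost" leaves the network wrapped in TLS on port 443, which DPI boxes generally pass as ordinary HTTPS.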
[1]: http://www.daemonology.net/blog/2012-08-30-protecting-sshd-u...
http://www.amazon.com/SSH-Mastery-OpenSSH-PuTTY-Tunnels-eboo...
But if someone expects to gain deep, expert-level knowledge of how OpenSSH works, then he'll be disappointed. What expert-level books can you recommend?
Note that, regarding the tip about ~/.ssh/known_hosts providing ssh auto completion, adding an SSH server config to ~/.ssh/config will also enable auto completion.
It does have a few quirks though. One that I've noticed is (IIRC) that closing a shared session isn't sufficient to pick up new groups membership when you reconnect. You actually need to kill the master connection as well[0].
The syntax for shutting down a master connection is a bit clunky as well:
ssh -O stop -S ~/.ssh/mux/socketname hostname
I've been meaning to make a little script or 2 that finds the current mux sockets and tests them with -O check and gives you a list of simple IDs you can 'ssh-mux-kill $id' or something. In fact, it'd probably be a nice use for percol[1]

[0] There might be other ways of refreshing group memberships, but I don't know of any.
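Such a helper might look like the following sketch, assuming the sockets live under a single directory (matching a ControlPath of ~/.ssh/mux/%h_%p_%r):

```shell
#!/bin/sh
# List ControlMaster sockets and report which masters still answer.
# Dead sockets can be deleted; live ones can be shut down with:
#   ssh -O stop -S <socket> placeholder-hostname
MUXDIR="${MUXDIR:-$HOME/.ssh/mux}"
for sock in "$MUXDIR"/*; do
    [ -S "$sock" ] || continue
    # -O check asks the master behind the socket whether it is alive;
    # the hostname argument is required by ssh but otherwise unused here
    if ssh -O check -S "$sock" placeholder-hostname 2>/dev/null; then
        echo "alive: ${sock##*/}"
    else
        echo "dead:  ${sock##*/}"
    fi
done
```

Piping that list into percol (or fzf) to pick sockets interactively is then a one-liner.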
It shouldn't be hard to find a program which runs fine with tun2socks but breaks completely or subtly with tsocks.
(disclaimer: tooting my own horn here, but it is a mighty useful trick)
laptop - user (userid: me)
F - firewall (userid: me)
A - machine 1 in colo (userid: colo)
B - machine 2 in colo (userid: colo, machine I want to access)
C - machine 3 in colo (userid: colo)
.
.
100s of machines.
Trust (ssh passwordless login) is set up between me@laptop and me@F, and me@laptop and colo@A, and between all colo machines (A,B,C..). So colo@A can ssh colo@B w/o password.

I am able to log into colo@A via F w/o password as I copied the ssh key there manually. (path me@laptop -> colo@F -> colo@A)
QUESTION: Is it possible to ssh to other machines (B,C..) via A while assuming full identity of colo@A? (Path would be me@laptop -> colo@F -> colo@A -> colo@B/C/..) With my current config when I try to ssh to B it knows request is originating from 'laptop' and still asks me for password.
ProxyCommand ssh ...
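One way to spell that out (host names hypothetical). Note the connection to B is still authenticated from the laptop: either forward your agent with -A, or authorize your laptop key on B, since ProxyCommand only relays the TCP stream and does not borrow colo@A's keys:

```
# ~/.ssh/config on the laptop
Host colo-a
    User colo
    HostName A.colo.example
    ProxyCommand ssh -W %h:%p me@F.example

Host colo-b
    User colo
    HostName B.colo.example
    ProxyCommand ssh -W %h:%p colo-a
```

Then "ssh colo-b" hops laptop -> F -> A -> B in one command.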
However, apenwarr's sshuttle https://github.com/apenwarr/sshuttle is a brilliant semi-proxy-semi-vpn solution that, in return for local root and remote python (but not remote root), gives you transparent VPN-style forwarding of TCP connections (and DNS requests if you want). It works ridiculously well. Try it, if you haven't yet.
I have a shell script that helps with setting up trusted keys: trusted keys help if you need to run automated tests that involve several machines, or simply if you would like to skip typing in a password on each connection.
http://mosermichael.github.io/cstuff/all/projects/2011/07/14...
# sshfs -o direct_io,nonempty,allow_other,cache=no,compression=no,workaround=rename,workaround=nodelaysrv user@remote:/place/ /mnt/somewhere
For even more performance:
* On server, start socat:
# socat TCP4-LISTEN:7001 EXEC:/usr/lib/sftp-server
* On client, do:
# sshfs -o directport=7001,direct_io,nonempty,allow_other,cache=no,compression=no,workaround=rename,workaround=nodelaysrv user@remote:/place/ /mnt/somewhere
The above creates a dynamic tunnel (for use as a SOCKS proxy) through a jumphost, to reach http hosts available only to the some_host_inside.lan machine.
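The command that comment refers to did not survive in the thread; a plausible reconstruction, with hypothetical user and jumphost names, is:

```
# SOCKS proxy on local port 1080, tunnelled through the jumphost to a
# machine that can reach the internal http servers
ssh -N -D 1080 -o ProxyCommand="ssh -W %h:%p user@jumphost.example" user@some_host_inside.lan
```

Pointing a browser's SOCKS5 proxy at localhost:1080 then routes its traffic out of some_host_inside.lan.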