They are saying they want to directly SSH into a VM/container based on the web hostname it serves. But that's not how the HTTP traffic flows either. With only one routable IP for the host, all traffic on a port shared by VMs has to go to a server on the host first (unless you route based on port or source IP with iptables, but that isn't hostname-based).
The HTTP traffic goes to a server (a reverse proxy, say nginx) on the host, which then reads it and proxies it to the correct VM. The client can't ever send TCP packets directly to the VM, HTTP or otherwise. That doesn't just magically happen because HTTP has a Host header; it only works because nginx is on the host.
What they want is a reverse proxy for SSH, and doesn't SSH already have that via jump/bastion hosts? I feel like this could be implemented with a shell alias, so that:
ssh [email protected] becomes: ssh -J [email protected] user@vm1
And just make jumpusr have no permissions on the host and set its shell to only allow ssh.
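A rough sketch of what that could look like, assuming jumpusr and the hostnames are placeholders (the client side uses ProxyJump, the config equivalent of -J, and the shared host restricts jumpusr to forwarding only):

    # client-side ~/.ssh/config (sketch)
    Host vm1
        HostName vm1.internal
        User user
        ProxyJump jumpusr@host.example.com

    # sshd_config on the shared host (sketch): jumpusr can only forward, no shell/TTY
    Match User jumpusr
        AllowTcpForwarding yes
        PermitTTY no
        X11Forwarding no
        ForceCommand /usr/sbin/nologin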
SSH is an incredibly versatile and useful tool, but many things about the protocol are poorly designed, including its essentially made-up-as-you-go-along wire formats for authentication negotiation, key exchange, etc.
In 2024-2025, I did a survey of millions of public keys on the Internet, gathered from SSH servers and users in addition to TLS hosts, and discovered—among other problems—that it's incredibly easy to misuse SSH keys in large part because they're stored "bare" rather than encapsulated into a certificate format that can provide some guidance as to how they should be used and for what purposes they should be trusted:
https://cryptographycaffe.sandboxaq.com/posts/survey-public-....
I'm building something that has to share a pool of phone numbers for SMS between many businesses with many clients, and the architecture I had planned out looks a lot like this: a client gets assigned a phone number from the pool for all its interactions with a certain business.
Good write-up of a tricky problem, and I'm glad to see real-world validation of the solution I was considering.
I would love it if more systems just understood SRV records, e.g. hostname.xyz = 10.1.1.1:2222.
So far it feels like only LDAP really makes use of them, at least with the tech I interact with.
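For what it's worth, that mapping as a pair of DNS records would look roughly like this (name, port, and target are made up; the catch, as the comment implies, is that ssh clients don't consult SRV records by default):

    ; zone file sketch: advertise SSH for hostname.xyz on port 2222
    _ssh._tcp.hostname.xyz.  3600 IN SRV 0 0 2222 host1.hostname.xyz.
    host1.hostname.xyz.      3600 IN A   10.1.1.1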
Yeah, I ran into this problem too. I tried a few different hacky solutions and then settled on using port knocking to sort inbound ssh connections into their intended destinations. Works great.
I have an architecture with a single IP hosting multiple LXC containers. I wanted users to be able to ssh into their containers as you would for any other environment. There's an option in sshd that allows you to run a script during a connection request so you can almost juggle connections according to the username -- if I remember right, it's been several years since I tried that -- but it's terribly fragile and tends to not pass TTYs properly and basically everything hates it.
But set up knockd, generate a random knock sequence for each individual user, and automatically update your knockd config with it; each knock sequence then (temporarily) adds a NAT rule that connects the user to their destination container.
When adding ssh users, I also provide them with a client config file that includes the ProxyCommand incantation that makes it work on their end.
Been using this for a few years and no problems so far.
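A minimal sketch of that kind of setup, with a made-up knock sequence, port, and container IP (the exact knockd and iptables details will vary per deployment):

    # knockd.conf entry (sketch), generated per user
    [user-alice]
        sequence      = 7000,8001,9002
        seq_timeout   = 10
        tcpflags      = syn
        # temporarily DNAT this client's SSH traffic to alice's container
        start_command = iptables -t nat -A PREROUTING -s %IP% -p tcp --dport 2222 -j DNAT --to-destination 10.0.3.11:22
        cmd_timeout   = 30
        stop_command  = iptables -t nat -D PREROUTING -s %IP% -p tcp --dport 2222 -j DNAT --to-destination 10.0.3.11:22

    # client-side ~/.ssh/config (sketch): knock first, then connect
    Host alice-container
        HostName shared.example.com
        Port 2222
        ProxyCommand sh -c 'knock %h 7000 8001 9002; sleep 1; exec nc %h %p'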
SSH waits for the server key before it presents the client keys, right? Does this mean that different VMs from different users have the same key? (Or rather, all VMs have the same key? A quick look shows s00{1,2,3}.exe.xyz all having the same key.) So this is full MitM?
This is a problem I've come up against a few times. Enforcing a different key per server would also help solve it in their case, but really I just want a haproxy plugin that allows selecting a backend based on the public key
This is a clever trick, but I can’t help but wonder where it breaks. There seems to be an invariant that the number of backends a public key is mapped to cannot exceed the number of proxy IPs available. The scheme probably works fine if most people are only using a small number of instances, though. I assume this is in fact the case.
Another thing that just crossed my mind is that the proxy IP cannot be reassigned without the client popping up a warning. That may alarm security-conscious users and impact usability.
Hosting DNS on the same machine as your application opens up all sorts of nice hacks. For example, you can add domain names to nf_conntrack by noticing the client resolve example.com to 10.0.0.1 and then make a connection to 10.0.0.1 on tcp/443. This was how I made my own Little Snitch-like tool.
In kinda the same situation, I was using the username for host routing, and the real user was determined by the principal in the SSH certificate, so the proxy didn't even need to know users' concrete certificates; it was even easier than keeping track of user SSH keys.
Certificate signing was done by a separate SSH service, which you connected to with SSH agent forwarding enabled, passed a 2FA challenge, and got a signed cert injected into your agent.
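For reference, minting a short-lived user certificate with a principal looks something like this (CA path, identity, principal, and validity are made up):

    # sign alice's key with the CA, embedding "vm1" as the principal
    ssh-keygen -s ca_key -I alice -n vm1 -V +8h alice_key.pub
    # inspect the resulting certificate, including its principals
    ssh-keygen -L -f alice_key-cert.pub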
Once hooked into PAM to have a central „ssh box“ mount remote boxes filesystems on user connect. Just need to have a lookup table: which username belongs to wich customer(s server). Ezpz.
While not transparent to users, I'd just use SSH ProxyCommand like I did in https://github.com/ThomasHabets/huproxy
Not exactly what I built it for, but it'll do the job here too, and it's able to connect to private addresses on the server side.
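For context, huproxy tunnels the SSH connection over a WebSocket, wired up on the client via ProxyCommand, roughly like this (hostname and install path are illustrative; check the repo's README for the exact invocation):

    # ~/.ssh/config sketch
    Host vm1
        ProxyCommand /usr/local/bin/huproxyclient wss://proxy.example.com/proxy/%h/%p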
Wouldn't a much simpler approach be to have everyone log in to a common server which sits on a VPN with all the VMs? It introduces an extra hop, but this is a pretty minor inconvenience and can be scripted away.
This would be a great use case for SSH over HTTP/3[0]. Sadly it doesn't seem to have gained traction.
[0]: https://www.ietf.org/archive/id/draft-michel-ssh3-00.html
I wonder if it's something like https://github.com/cea-hpc/sshproxy that sits in the middle (with decryption and everything) or if they could do this without setting up a session directly with the client.
Well, we're implicitly trusting the host when running a VM anyway (most of the time), but it's something I'd want to check before buying into the service.
EDIT: Ah, it's probably https://github.com/boldsoftware/sshpiper
will try to remember to look later.
I am not sure I understand what this achieves compared to just assigning an IP + port per VM?
I mean it works... but it's really ghetto. You have to handle username collisions (or enforce unique usernames). IPv4 should be non-free, and that'd cover the costs...
It's hard to think of a clearer example for the concept of Developer Experience.
One similar example of SSH-related UX design is GitHub. We mostly take git clone git@github.com:author/repo for granted, as if it were a standard git thing that existed before. But if you ever go broke and have to implement GitHub from scratch, you'll notice the beauty in its design.
The solution is IPv6.
The solution to this is TLS SNI routing.
You can front a TLS server on port 443 and then, based on the SNI name, redirect the connection to your final destination host without decrypting it.
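As a concrete illustration, nginx's stream module with ssl_preread can do this; hostnames and backend addresses below are made up:

    stream {
        # route by SNI without terminating TLS
        map $ssl_preread_server_name $backend {
            vm1.example.com 10.0.0.11:443;
            vm2.example.com 10.0.0.12:443;
            default         10.0.0.10:443;
        }
        server {
            listen 443;
            ssl_preread on;
            proxy_pass $backend;
        }
    }

This works because TLS exposes the server name in the ClientHello; plain SSH has no equivalent field, which is what makes the problem in the article tricky in the first place.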
You don't need SSH. Installing an SSH server on such a VM is a holdover from how UNIX servers worked. It puts you in the mindset of treating your server as a pet and doing things for a single VM instead of having proper server management in place. I would reconsider whether offering ssh is an actual requirement here or whether users would be better served by a proper control panel to manage and monitor the VMs.
> We cannot issue an IPv4 address to each machine without blowing out the cost of the subscription. We cannot use IPv6-only as that means some of the internet cannot reach the VM over the web. That means we have to share IPv4 addresses between VMs.
Give users the option to use IPv6 only, and if a user needs legacy IP, add it as an additional cost and move on.
Trying to keep v4 at the same cost level as v6 is not a problem we can solve. If it were, we wouldn't need v6.