Docker docs:
Docker routes container traffic through the nat table, which means packets are diverted before they reach the INPUT and OUTPUT chains that ufw uses. Packets are routed before the firewall rules can be applied, effectively bypassing your firewall configuration.
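You can see this on a host running Docker with a rough sketch like the following (chain names are Docker's defaults; requires root, and the exact rules will vary with your setup):

```shell
# List the NAT rules Docker manages for published ports.
sudo iptables -t nat -L DOCKER -n -v

# A port published with `-p 8080:80` shows up as a DNAT rule, e.g.:
#   DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:8080 to:172.17.0.2:80
# The rewrite happens in PREROUTING, so UFW's INPUT-chain rules
# never get a chance to drop the packet.
```

If you need host firewall rules that Docker respects, the DOCKER-USER chain is the documented place to put them.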
NAT is not security.
Keep that in mind.
It’s just a crutch ipv4 has to use because it’s not as powerful as the almighty ipv6
This was a large part of the reason I switched to rootless podman for everything
Explicitly binding certain ports to the container has a similar effect, no?
I still need to allow the ports in my firewall when using podman, even when I bind to 0.0.0.0.
Also when using a rootful Podman socket?
When running as root, I did not need to add the firewall rule.
Thanks for checking
I haven’t tried rootful since I haven’t had issues with rootless. I’ll have to check on that and get back to you.
It’s better than nothing but I hate the additional logs that came from it constantly fighting firewalld.
My problem with podman is the incompatibility with portainer :(
Any recommendations?
cockpit has a podman/container extension you might like.
It’s okay for simple things, but too simple for anything beyond that, IMO. One important issue is that unlike with Portainer you can’t edit the container in any way without deleting it and configuring it again, which is quite annoying if you just want to change 1 environment variable (GH Issue). Perhaps they will add a quadlet config tool to cockpit sometime in the future.
I mean, you can just redeploy the container with the updated variable. That's kinda how they work.
CLI and Quadlet? /s but seriously, that’s what I use lol
Quadlets are so nice.
I assume portainer communicates via the docker socket? If so, couldn’t you just point portainer to the podman socket?
Portainer Docs | Install Portainer CE with Podman on Linux The official docs also mention doing that.
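A rough sketch of the rootful setup those docs describe (unit and socket paths are Podman's defaults; the flags here are simplified, so check the linked guide for the full command):

```shell
# Expose Podman's Docker-compatible API socket.
sudo systemctl enable --now podman.socket

# Run Portainer against that socket in place of the Docker socket.
sudo podman run -d --name portainer \
  -p 9443:9443 \
  -v /run/podman/podman.sock:/var/run/docker.sock \
  portainer/portainer-ce:latest
```

Portainer then manages Podman containers through the same API it would use with Docker.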
This is the way.
My impression from a recent crash course on Docker is that it got popular because it allows script kiddies to spin up services very fast without knowing how they work.
OWASP was like “you can follow these thirty steps to make Docker secure, or just run Podman instead.” https://cheatsheetseries.owasp.org/cheatsheets/Docker_Security_Cheat_Sheet.html
My impression from a recent crash course on Docker is that it got popular because it allows script kiddies to spin up services very fast without knowing how they work.
That’s only a side effect. It mainly got popular because it is very easy for developers to ship a single image that just works instead of packaging for various different operating systems with users reporting issues that cannot be reproduced.
I don't really understand the problem with that?
Everyone is a script kiddy outside of their specific domain.
I may know loads about python but nothing about database management or proxies or Linux. If docker can abstract a lot of the complexities away and present a unified way you configure and manage them, where’s the bad?
That is definitely one of the crowds, but there are also people like me who are just sick and tired of dealing with python, node, and ruby dependencies. The install process for services has only become increasingly convoluted over the years. And then you show me an option where I can literally just slap down a compose.yml and hit `docker compose up -d` and be done? Fuck yeah I’m using that
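For anyone who hasn't seen it, a minimal compose.yml of the kind being described might look like this (the image, service name, and port are illustrative; the loopback bind ties into the firewall discussion above):

```shell
# Write a minimal Compose file.
cat > compose.yml <<'EOF'
services:
  web:
    image: nginx:latest
    ports:
      - "127.0.0.1:8080:80"   # bind to loopback so UFW stays in charge
EOF
# Then bring it up with: docker compose up -d
```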
No it’s popular because it allows people/companies to run things without needing to deal with updates and dependencies manually
Another take: Why should I care about dependency hell if I can just spin up the same service on the same machine without needing an additional VM and with minimal configuration changes.
This post inspired me to try podman. After it pulled all the images it needed, my Proxmox VM died, and the VM won’t boot because the disk is now full. It’s currently 10pm, tonight’s going to suck.
eh, booting into single user mode should work fine; uninstall podman and init 5
Okay so I’ve done some digging and got my VM to boot up! This is not Podman’s fault; I got lazy setting up Proxmox and never really learned LVM volume storage. While internally the VM shows 90GB used of 325GB, Proxmox is claiming 377GB is used on the LVM-Thin partition.
I’m backing up my files as we speak, thinking of purging it all and starting over.
Edit: before I do the sacrificial purge, this seems promising.
thinking of purging it all and starting over.
Don’t do that. You’ll learn nothing.
So I happened to follow the advice from that Proxmox post, enabled the “Discard” option for the disk and ran `sudo fstrim /` within the VM; now the Proxmox LVM-Thin partition is sitting at a comfortable 135GB out of 377GB. Think I’m going to use this `fstrim` command on my main desktop to free up space.
I think Linux does fstrim out of the box.
edit: I meant to say Linux distros are set up to do that automatically.
It’s been about a day since this issue and now I’ve been keeping a close eye on my local-lvm. It fills fast, like, ridiculously fast, and I’ve been having to run `sudo fstrim /` inside the VM just to keep it maintained. I’m finding it weird I’m now just noticing this as this server has been running for months! For now I edited my `/etc/bash.bashrc` so whenever I ssh in it’ll automatically run `sudo fstrim /`; there is something I’m likely missing, but this works as a temporary solution.
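On systemd distros there's usually a cleaner fix than a bashrc hook: util-linux ships a periodic trim timer (the unit name below is the standard one on Debian/Ubuntu/Fedora, but check your distro):

```shell
# Enable the weekly TRIM timer instead of trimming on every login.
sudo systemctl enable --now fstrim.timer

# Check when it last ran and when it fires next.
systemctl list-timers fstrim.timer
```

This is likely what the "Linux does fstrim out of the box" comment above is referring to; some distros enable the timer by default and some don't.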
It’s my understanding that docker uses a lot of fuckery and hackery to do what they do. And IME they don’t seem to care if it breaks things.
To be fair, the largest problem here is that it presents itself as the kind of isolation that would respect firewall rules, not that they don’t respect them.
People wouldn’t make the same mistake in NixOS, despite it doing exactly the same.
I don’t know how much hackery and fuckery there is with docker specifically. The majority of what docker does was already present in the Linux kernel namespaces, cgroups etc. Docker just made it easier to build and ship the isolated environments between systems.
This is why I hate Docker.
I DIDN’T KNOW THAT! WOW, this takes “don’t use network_mode: host” to another level.
`network: host` gives the container basically full access to any port it wants. But even with other network modes you need to be careful, as any `-p <external port>:<container port>` creates the appropriate firewall rule automatically. I just use caddy and don’t use any port rules on my containers. But maybe that’s also problematic.
Actually I believe host networking would be the one case where this isn’t an issue. Docker isn’t adding iptables rules to do NAT masquerading because there is no IP forwarding being done.
When you tell docker to expose a port you can tell it to bind to loopback and this isn’t an issue.
If I had a nickel for every database I’ve lost because I let docker broadcast its port on 0.0.0.0 I’d have about 35¢
How though? A database in Docker generally doesn’t need any exposed ports, which means no ports open in UFW either.
I exposed them because I used the container for local development too. I just kept reseeding every time it got hacked before I figured I should actually look into security.
Where are you working that your local machine is regularly exposed to malicious traffic?
My use case was run a mongodb container on my local, while I run my FE+BE with fast live-reloading outside of a container. Then package it all up in services for docker compose on the remote.
Ok… but that doesn’t answer my question. Where are you physically when you’re working on this that people are attacking exposed ports? I’m either at home or in the office, and in either case there’s an external firewall between me and any assholes who want to exploit exposed ports. Are your roommates or coworkers those kinds of assholes? Or are you sitting in a coffee shop or something?
This was on a VPS (remote) where I didn’t realise Docker was even capable of punching through UFW. I assumed (incorrectly) that if a port wasn’t reverse proxied in my nginx config, then it would remain on localhost only.
Just run `docker run -p 27017:27017 mongo:latest` on a VPS and check the default collections after a few hours and you’ll likely find they’re replaced with a ransom message.
For local access you can use `127.0.0.1:80:80` and it won’t put a hole in your firewall. Or if your database is accessed by another docker container, just put them on the same docker network and access it via container name, and you don’t need any port mapping at all.
Yeah, I know that now lol, but good idea to spell it out. So what Docker does, which is so confusing when you first discover the behaviour, is bind your ports automatically to `0.0.0.0` if all you specify is `27017:27017` as your port (without an IP address prefix). AKA what the meme is about.
For all the raving about podman, it’s dumb too. I’ve seen multiple container networks stupidly route traffic across each other when they shouldn’t. Yay services kept running, but it defeats the purpose. Networking should be so hard that it doesn’t work unless it is configured correctly.
Or maybe it should be easy to configure correctly?
instructions unclear, now it’s hard to use and to configure
rootless podman and sockets ❤️
This only happens if you essentially tell docker “I want this app to listen on 0.0.0.0:80”
If you don’t do that, then it doesn’t punch a hole through UFW either.
We use Firewalld integration with Docker instead due to issues with UFW. Didn’t face any major issues with it.
I also ended up using firewalld and it mostly worked, although I first had to change some zone configs.
You’re forgetting the part where they had an option to disable this fuckery, and then proceeded to move it twice - exposing containers to everyone by default.
I had to clean up compromised services twice because of it.
This is why I install on bare metal, baby!
On windows (coughing)