Nothing here about crypto, and the risks of running crypto for two different applications on the same kernel (and with the same CSPRNG state) simultaneously.
<snark> But I'm sure if nobody's talking about it on Reddit, that must mean it's not a big deal. </snark> ;)
Basically, even if the container code were perfect (and it's not), there are a bunch of security design issues due to sharing the same kernel memory space.
Yeah, but they don't get the same sort of commercially backed advertising, which won't mention the issues and will make you believe it's the next big thing.
Also, jails arguably have a better implementation.
containers, jails, etc. are useful for the right scenarios of course.
But I question their usefulness as "VM replacements", for the reasons tptacek presented in his long list of similar possibilities. It's complex, and thus hard to secure.
I am terrified of containers. In 2012 I used lxc-destroy on a container, and it managed to destroy my entire filesystem. It seems beyond belief to me that something like that could happen, but it did.
Definitely unfair to bring it up at this point (it was a while back), but until everyone universally says they're solid, I'm not touching them.
Which filesystem? Host or container's? Either way, it's more likely that the fs was already corrupted in that case - did you also stop using that filesystem since then?
EDIT: Anyway, it's probably an issue with the tooling around containers, not the kernel's container implementation itself
(well, in theory it might have been a kernel bug that corrupted the rootfs, and what you reported here as destroying your host's system was the result).
I've also experienced problems with lack of host/container separation with lxc on Arch Linux, e.g. shutting down a container shut down the host. I suspected the problem was an improperly mounted/unmounted /dev or /sys in the guest.
I've had a much smoother experience with lxc on Ubuntu than arch. The core lxc developers work for canonical, and Ubuntu lxc bootstrap scripts are much more refined. Lxc support for arch linux is provided by the community, and at least when I tried last year, there were minor problems here and there.
On arch linux, I've found that systemd-nspawn support is much better than lxc's. The commands mkarchroot and arch-nspawn (in the devtools package) make running arch in arch straightforward.
Up until Linux 3.4 the reboot(2) syscall was not container-aware, and hence shutting down from inside a container would shut down the host.
It has since been fixed and tied to the PID namespace: reboot(2) called from a non-initial PID namespace no longer reboots the host. Instead that namespace's init is terminated, and the processes in the namespace are signaled much as they would be on a host shutdown (init gets a specific signal, and userspace processes get signaled as well).
Definitely not doubting you, but I'm surprised. My first experience was in 2013, and I was doing some "crazy" stuff to get familiar, like bind mounts into my host fs, experimenting with btrfs, and the like. It definitely seems safe now.
I didn't know enough at the time to be able to diagnose what had happened, and I had just started playing with containers, so doubtless I fucked up somewhere. But it seems so utterly weird that (a) it could happen and (b) a newbie could easily, accidentally stumble on something that'd cause everything to go to shit.
I just ran lxc-destroy on what I thought was a container, then started running into lots of issues with Chrome running on the host, so I closed Chrome to try to restart it, at which point its binary couldn't be found. I blindly rebooted hoping that would fix it. And... there was nothing.
I had nasty side effects between host and guest on my first 'play' with lxc. Fortunately it was just processes being killed and locking me out, but I'd never expect shutting down a virtual instance to interact with the host like this.
The presentation makes a good point that containers aren't universally "insecure".
For certain use cases they are absolutely fine, because the trust boundary between containers, or between a container and the kernel, isn't critical:
* Deployment (immutable servers)
* Development environment (develop against same configuration as production)
* Test environment (try different distros)
But running multiple containers from untrusted parties on one host is risky. Let's face it, kernel exploits do come out periodically and when that happens, container boundaries can be breached.
At the end of the day, security isn't absolute. You need to consider how valuable your data is and make your own decision.
It's worth noting that libvirt puts an (LXC) container around each regular KVM virtual machine it runs, and will also secure it using SELinux (see: sVirt).
Can someone comment on the current state of Namespaces in Linux and how that impacts LXC security? I found the following from 2012: http://lwn.net/Articles/528078/
The slides are horribly outdated. Using Linux user namespaces, you can have a secure Linux container with root inside it.
And everyone who thinks that KVM is totally secure needs to go back to school.
This is pretty much spot on, based on my experience writing container implementations. I have been putting together information documenting containers and just added some notes about security earlier.
At the moment I am taking my notes on how to secure containers and attempting to put them in a more digestible form. Unfortunately, depending on what you are trying to do with containers, the security model and how you defend those containers change dramatically.
If anyone is interested in more info, hit up http://doger.io and feel free to ask questions or request that specific information be posted.