Moblin was started because MS balked at making Windows for an Intel chip that didn't offer PCI enumeration.
For network-intensive workloads, there is a choice between the efficiency of SR-IOV and the control & manageability of a virtual NIC like virtio-net. To get efficiency, you need SR-IOV, which (the last time I checked) still made lots of admins nervous when running untrusted guests. Sure, the guest can be isolated from internal resources via a VLAN, but it can still launch malicious traffic onto the internet, and it may be difficult to track its traffic for billing purposes, especially if you want to differentiate between external & internal traffic. SR-IOV NICs also have a limited number of queues and VFs, so it is hard to over-commit servers. So to maintain control of guests, you end up doubling the kernel overhead by using a virtual NIC (e.g., virtio-net) in the VM and a physical NIC in the hypervisor. Now you have twice the overhead, twice the packet pushing, more memory copies, VM exits, etc.
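For context, the SR-IOV side is usually set up through standard sysfs attributes. A minimal sketch, assuming a VF-capable NIC enumerated as eth0 (the device name and counts are illustrative, and this needs root):

```shell
# Ask how many virtual functions the NIC supports.
cat /sys/class/net/eth0/device/sriov_totalvfs

# Carve out 4 VFs; each shows up as its own PCI device that can be
# handed to a guest via PCI passthrough, bypassing the host stack.
echo 4 > /sys/class/net/eth0/device/sriov_numvfs

# The new virtual functions appear as extra PCI devices on the host.
lspci | grep -i "virtual function"
```

Once a VF is passed through, the host kernel no longer sees that guest's packets, which is exactly why the accounting and firewalling described above get hard.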
The nice thing about containers is that there is no need to choose. You get the efficiency of running just a single kernel, along with all the accounting and firewalling rules to maintain control & be able to bill the guest.
There are higher-performance virtual network setups; e.g., see http://www.virtualopensystems.com/en/solutions/guides/snabbs...
Container networking has overheads too: the virtual ethernet pairs and the NATting are not costless at all, and most people with network-intensive applications are allocating physical interfaces to containers anyway.
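To make the overhead concrete, here is a sketch of the typical veth-plus-NAT plumbing (the netns and interface names are made up; this needs root). Every outbound packet traverses the veth pair and a conntrack/NAT lookup:

```shell
# Create a namespace standing in for the container, and a veth pair
# with one end inside it and one end on the host.
ip netns add demo
ip link add veth-host type veth peer name veth-ctr
ip link set veth-ctr netns demo

# Address both ends and bring them up.
ip addr add 10.0.0.1/24 dev veth-host
ip link set veth-host up
ip netns exec demo ip addr add 10.0.0.2/24 dev veth-ctr
ip netns exec demo ip link set veth-ctr up
ip netns exec demo ip route add default via 10.0.0.1

# Masquerade container traffic -- this is the per-packet NAT cost.
iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -j MASQUERADE
```

Handing the container a physical interface (`ip link set eth1 netns demo`) skips all of this, which is why network-heavy deployments tend to do exactly that.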
"Where is it appropriate to post a subscriber link? Almost anywhere. Private mail, messages to project mailing lists, and blog entries are all appropriate. As long as people do not use subscriber links as a way to defeat our attempts to gain subscribers, we are happy to see them shared."
I'm interested in seeing it tried though. The learning is in the doing.
I think PBI does de-duplication at the package manager level by manipulating hard-links to common files, rather than installing multiple copies.
Which is, itself, a bad reinvention of Plan 9's Venti storage system. Having one, or two, or a million files on disk containing the same data should take up as much space as having just one. "Hard links" are a policy-level way to express shared mutability; deduplication of backing storage, meanwhile, should be a mechanism-level implementation detail.
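The hard-link approach being described can be sketched in a few lines of bash (this is an illustration of the idea, not PBI's actual implementation): hash every regular file in a tree and hard-link byte-identical copies to a single inode.

```shell
#!/usr/bin/env bash
# Sketch of package-level dedup via hard links (bash; illustrative only).
dedup_tree() {
  local root=$1 f h
  declare -A seen   # content hash -> first file with that content
  while IFS= read -r -d '' f; do
    h=$(sha256sum "$f" | cut -d' ' -f1)
    if [ -n "${seen[$h]:-}" ]; then
      ln -f "${seen[$h]}" "$f"   # duplicate: re-point it at the first copy
    else
      seen[$h]=$f                # first occurrence of this content
    fi
  done < <(find "$root" -type f -print0)
}
```

Note this demonstrates the policy/mechanism objection above: after deduplication the files share mutability, so one package scribbling on "its" library would change the other's too. Filesystem-level dedup avoids that by breaking the sharing on write.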
How does PBI handle minor library version differences then? If one package provides and uses mylib-1.3.1 and another provides and uses mylib-1.3.5, how is that distinguished at the shared-library level (the plain .so file)? My understanding of what Clear Linux is attempting allows this level of granularity, to ensure a package (really an amalgamation of individual packages in the sense of most current unixy distros) is functional and updated as a whole.
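For reference, the conventional ELF answer is that minor versions coexist as fully versioned files, with symlinks selecting the default; the dynamic linker only records the soname (e.g. libmylib.so.1). A runnable sketch with a hypothetical library name:

```shell
# Hypothetical side-by-side layout for two minor versions of "mylib".
mkdir -p demo/lib
touch demo/lib/libmylib.so.1.3.1 demo/lib/libmylib.so.1.3.5

# Runtime soname link: the dynamic linker resolves libmylib.so.1
# to whichever real file this points at.
ln -sf libmylib.so.1.3.5 demo/lib/libmylib.so.1

# Build-time dev link used by the compile-time linker (-lmylib).
ln -sf libmylib.so.1     demo/lib/libmylib.so

ls -l demo/lib
```

The catch, and presumably what Clear Linux's bundle granularity addresses, is that the soname link is global: both packages get 1.3.5 unless each pins its own full path or ships its own copy.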
That's what made it attractive to me: I've painted myself into a corner several times when trying to install Ubuntu PPAs that want conflicting versions of shared libs.
I mean, it is possible to completely isolate them all.
It may end up being very heavy, though. But (and I could be wrong on this) with the constant growth of storage capacity, network bandwidth, and RAM, and the progress made in lightening "containers", I don't think this "heavy" downside I see in immutable infrastructures will be a real issue in the future.
I guess it just leads to a turning point where end-users won't have to worry about security updates for library x or y, but rather about updating the application they're using. In the case where you use containers/micro-VMs, if there is a security update to apply, the container "maintainer" would be in charge of pushing it; then you just need to update your container.
I'm not sure which one is more constraining: dealing with conflicts, or being careful to rely only on well-maintained "containers".
I guess, for production environments, the second option looks like a wise choice.
It's a VM really, but packaged like a container. On my laptop, it starts about as fast as a Docker container, i.e., in less than a second.
This is quite impressive.
No, they are not. A VM is a completely different system, while a container is a packaged application.
VMs provide an awful lot more than just resource separation... security and isolation being at the top of the list.
The problem we see here is an awful lot of people think a container is a drop-in replacement for a VM, when it is usually not.
* what tangible benefits would I get from using Clear Linux over my own heavily customized/handrolled linux server?
* how does the update system handle breakage/conflicts?
* are any of Intel's changes likely to make it into other existing distros or kernels?
[ 0.000000] KERNEL supported cpus:
[ 0.000000] Intel GenuineIntel
[ 0.000000] e820: BIOS-provided physical RAM map:
...
[ 1.245851] calling fuse_init+0x0/0x1b6 [fuse] @ 1
[ 1.245853] fuse init (API version 7.23)
[ 1.246299] initcall fuse_init+0x0/0x1b6 [fuse] returned 0 after 431 usecs
quote: "With kvmtool, we no longer need a BIOS or UEFI; instead we can jump directly into the Linux kernel. Kvmtool is not cost-free, of course; starting kvmtool and creating the CPU contexts takes approximately 30 milliseconds."
And then recompile again whenever a bundle gets updated?
Those modifications are exciting for me as one of the developers of rkt. We built rkt around this concept of "stages"[1]; here the rkt stage1 is swapped out from the default, which uses "Linux containers", for one that executes lkvm instead. In this case the Clear Containers team was able to swap out the stage1 with some fairly minimal code changes to rkt, which are going upstream. Cool stuff!
[1] https://github.com/coreos/rkt/blob/master/Documentation/deve...