@robinmonjo
Last active August 29, 2015 14:13
Revisions

  1. robinmonjo revised this gist Jan 19, 2015. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion cargo_how_to.md
`````diff
@@ -49,7 +49,7 @@ $>go get github.com/docker/libcontainer
 $>cd $GOPATH/src/github.com/docker/libcontainer/nsinit/
 
 #check out a version that I know will work with this tutorial
-$>git checkout v1.4.0
+$>git checkout 73ba097bf596249068513559225d6e18c1767b47
 $>GOPATH=`pwd`/../vendor:$GOPATH go build
 
 #moving the binary into our path
`````
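The build line in this hunk relies on `go build` resolving imports against GOPATH entries from left to right, so prepending libcontainer's vendored dependencies makes them shadow anything already installed in the user's GOPATH. A minimal sketch of that ordering (the vendor path here is a made-up placeholder, not the real checkout location):

```shell
# go build searches GOPATH entries left to right, so a vendored
# dependency tree listed first wins over the user's own GOPATH copies.
vendor_dir="/tmp/libcontainer/vendor"   # placeholder for `pwd`/../vendor
user_gopath="${GOPATH:-$HOME/go}"

combined="$vendor_dir:$user_gopath"
echo "${combined%%:*}"   # the entry consulted first: the vendor tree
```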
  2. robinmonjo revised this gist Jan 18, 2015. 1 changed file with 3 additions and 0 deletions.
    3 changes: 3 additions & 0 deletions cargo_how_to.md
`````diff
@@ -47,6 +47,9 @@ $>go get github.com/docker/libcontainer
 
 #heading to the nsinit main package and building the nsinit binary
 $>cd $GOPATH/src/github.com/docker/libcontainer/nsinit/
+
+#check out a version that I know will work with this tutorial
+$>git checkout v1.4.0
 $>GOPATH=`pwd`/../vendor:$GOPATH go build
 
 #moving the binary into our path
`````
  3. robinmonjo revised this gist Jan 18, 2015. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion cargo_how_to.md
`````diff
@@ -15,7 +15,7 @@ This was old time ! Today docker is "everywhere": on [aws](https://aws.amazon.co
 [google cloud](https://cloud.google.com/container-engine/),
 [azure](http://azure.microsoft.com/blog/2014/10/15/new-windows-server-containers-and-azure-support-for-docker/)
 and in the future, even on [Windows](http://blog.docker.com/2014/10/docker-microsoft-partner-distributed-applications/).
-However, docker is not perfect for everyone and recently faced some [criticism from core-os](https://gigaom.com/2014/12/02/why-coreos-just-fired-a-rocket-at-docker/). I'm not here to argue if these critics are justified or not, but something is sure, docker is not the only container engine out there. However they are one big steps ahead on root file systems and how they make them available.
+However, docker is not perfect for everyone and recently faced some [criticism from core-os](https://gigaom.com/2014/12/02/why-coreos-just-fired-a-rocket-at-docker/). I'm not here to argue if these critics are justified or not, but something is sure, docker is not the only container engine out there. However they are one big step ahead on root file systems and how they make them available.
 
 ###Docker images
`````

  4. robinmonjo revised this gist Jan 18, 2015. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion cargo_how_to.md
`````diff
@@ -5,7 +5,7 @@
 ###Background
 
 I have been using linux containers for almost 2 years now. It started during an internship at [Applidget](http://www.applidget.com/pages/home), a startup in Paris. My job was to integrate linux containers technology in their private PaaS.
-At this time, there where no doubt, the technology to use was [LXC](https://linuxcontainers.org/lxc/introduction/). One week or so after I started digging into LXC, one of my co-worker talked to me about this new thing called docker. He told me "*isn't it what you are supposed to do during your internship ?*". And it was kind of it.
+At this time, there where no doubt, the technology to use was [LXC](https://linuxcontainers.org/lxc/introduction/). One week or so after I started digging into LXC, [one of my co-worker](https://twitter.com/ssaunier) talked to me about this new thing called docker. He told me "*isn't it what you are supposed to do during your internship ?*". And it was kind of it.
 
 At this time I already highlighted a big challenge of containers in PaaS: creation time. Waiting for `bundle install` to complete is already long enough to have to wait for another 40 secondes to create a container :). Docker landed just in time !
`````

  5. robinmonjo revised this gist Jan 18, 2015. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion cargo_how_to.md
`````diff
@@ -213,7 +213,7 @@ Protocols: dict file ftp ftps gopher http https imap imaps ldap pop3 pop3s rtmp
 Features: Debug GSS-Negotiate IDN IPv6 Largefile NTLM NTLM_WB SSL libz TLS-SRP
 ````
 
-Yes curl is installed ! You will notice that the filesystem wasn't downloaded with the -g option, so it's not a git repository (downside, you can't push it)
+Yes curl is installed ! You will notice that the filesystem wasn't downloaded with the `-g` flag, so it's not a git repository (downside, you can't push it)
 
 We just used *cargo* and the docker hub to pull, commit and push a debian jessie file system. I choosed to use nsinit here, but you can do it with whatever container engine you like (with of course some specific configuration).
`````

  6. robinmonjo revised this gist Jan 18, 2015. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion cargo_how_to.md
`````diff
@@ -19,7 +19,7 @@ However, docker is not perfect for everyone and recently faced some [criticism f
 
 ###Docker images
 
-Docker images are remotly stored on a registry. Once in a registry, they can be pulled and pushed. Docker Inc. maintains one registry, the docker hub. Go have a look, it's full of pre-configured, official and well maintained images. The thing is, these images can only be pulled and pushed by docker users. *cargo* is meant to bring the docker hub to other linux container users.
+Docker images are remotly stored on a registry. Once in a registry, they can be pulled and pushed. Docker Inc. maintains one registry, the docker hub. Go [have a look](https://hub.docker.com/), it's full of pre-configured, official and well maintained images. The thing is, these images can only be pulled and pushed by docker users. *cargo* is meant to bring the docker hub to other linux container users.
 
 Docker images are not "just" linux root file system. Specification just got merged recently, you can go [read it here](https://github.com/docker/docker/blob/master/image/spec/v1.md). To summarize (roughly), images are made of ordered layers. You can read more about [it here too](https://docs.docker.com/terms/layer/), but you have the main idea. This layering approach make it very efficient to share and store images (pushing and pulling just the layer you need).
`````

  7. robinmonjo renamed this gist Jan 18, 2015. 1 changed file with 0 additions and 0 deletions.
  8. robinmonjo revised this gist Jan 18, 2015. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion docker images with any container engine.md
`````diff
@@ -221,6 +221,6 @@ So far, *cargo* only supports the docker hub, but during the development process
 
 ###Conclusion
 
-I think *cargo* is a cool tool, and hopefully it will be useful fore some people out there. Go check the repo [here](https://github.com/robinmonjo/cargo). If you have any issue using it, open an issue on Github, if you feel like you want to contribute, open a pull request, and if you have any comments, leave a comment.
+I think *cargo* is a cool tool, and hopefully it will be useful fore some people out there. Go check the repo [here](https://github.com/robinmonjo/cargo). If you have any issue using it, open an issue on Github, if you feel like you want to contribute, open a pull request, and if you have any comments, leave a comment below.
 
 Robin.
`````
  9. robinmonjo revised this gist Jan 18, 2015. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion docker images with any container engine.md
`````diff
@@ -1,4 +1,4 @@
-#[cargo](https://github.com/robinmonjo/cargo), docker hub without docker
+#[cargo](https://github.com/robinmonjo/cargo), docker hub without docker, how to
 
 18 Jan 2015
`````

  10. robinmonjo revised this gist Jan 18, 2015. 1 changed file with 25 additions and 1 deletion.
    26 changes: 25 additions & 1 deletion docker images with any container engine.md
`````diff
@@ -199,4 +199,28 @@ Downloading layers:
 Done. Rootfs of robinmonjo/debian_curl:latest in rootfs
 ````
 
-So we pulled
+So we pulled 4 layers, that makes sense. Do we have curl installed in this image ?
+
+````bash
+$>cd rootfs #root file system is in rootfs, we didn't set the -r flag
+
+#we can use nsinit directly, our previous configuration container.json was pushed earlier
+$>sudo nsinit exec bash
+
+root@cargo-demo:/# curl --version
+curl 7.26.0 (x86_64-pc-linux-gnu) libcurl/7.26.0 OpenSSL/1.0.1e zlib/1.2.7 libidn/1.25 libssh2/1.4.2 librtmp/2.3
+Protocols: dict file ftp ftps gopher http https imap imaps ldap pop3 pop3s rtmp rtsp scp sftp smtp smtps telnet tftp
+Features: Debug GSS-Negotiate IDN IPv6 Largefile NTLM NTLM_WB SSL libz TLS-SRP
+````
+
+Yes curl is installed ! You will notice that the filesystem wasn't downloaded with the -g option, so it's not a git repository (downside, you can't push it)
+
+We just used *cargo* and the docker hub to pull, commit and push a debian jessie file system. I choosed to use nsinit here, but you can do it with whatever container engine you like (with of course some specific configuration).
+
+So far, *cargo* only supports the docker hub, but during the development process I easily got it working on a private registry I deployed for debugging purpose. This could be added in a near future.
+
+###Conclusion
+
+I think *cargo* is a cool tool, and hopefully it will be useful fore some people out there. Go check the repo [here](https://github.com/robinmonjo/cargo). If you have any issue using it, open an issue on Github, if you feel like you want to contribute, open a pull request, and if you have any comments, leave a comment.
+
+Robin.
`````
  11. robinmonjo revised this gist Jan 18, 2015. 1 changed file with 1 addition and 0 deletions.
    1 change: 1 addition & 0 deletions docker images with any container engine.md
`````diff
@@ -177,6 +177,7 @@ We can now go ahead, and delete our debian image, it's safely stored in the clou
 $>cd .. && sudo rm -rf debian
 #download it again, just to make sure everything worked
+#Note: you don't need to specify your credentials unless you made your image private
 $>sudo cargo pull <username>/debian_curl -u <username>:<password>
 
 Pulling image robinmonjo/debian_curl:latest ...
`````
  12. robinmonjo revised this gist Jan 18, 2015. 1 changed file with 2 additions and 2 deletions.
    4 changes: 2 additions & 2 deletions docker images with any container engine.md
`````diff
@@ -1,6 +1,6 @@
 #[cargo](https://github.com/robinmonjo/cargo), docker hub without docker
 
-21 Jan 2015
+18 Jan 2015
 
 ###Background
 
@@ -25,7 +25,7 @@ Docker images are not "just" linux root file system. Specification just got merg
 
 ###Container engine
 
-Linux containers are available in every linux distribution with a recent enough kernel. However, it's hard to setup a container manually. What I call container engines are tools that automates the setup of container. There are several of them (you probably heard about), the ones who come to mind are LXC, systemd-nspawn, docker (obviously), core-os rocket, and google lmctfy. But there is also a really minimal one, used to test [libcontainer](https://github.com/docker/libcontainer): nsinit. I mentioned libcontainer already. It's a Go package that replaced LXC as the default container backend in docker since version 0.9. I really like libcontainer, as it's dependency free and it gets a lot of support from company such as Google and Redhat.
+Linux containers are available in every linux distribution with a recent enough kernel. However, it's hard to setup a container manually. What I call *container engines* are tools that automates the setup of container. There are several of them (you probably heard about), the ones who come to mind are LXC, systemd-nspawn, docker (obviously), core-os rocket, and google lmctfy. But there is also a really minimal one, used to test [libcontainer](https://github.com/docker/libcontainer): nsinit. I mentioned libcontainer already. It's a Go package that replaced LXC as the default container backend in docker since version 0.9. I really like libcontainer, as it's dependency free and it gets a lot of support from company such as Google and Redhat.
 
 ###Cargo
`````
  13. robinmonjo revised this gist Jan 18, 2015. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion docker images with any container engine.md
`````diff
@@ -21,7 +21,7 @@ However, docker is not perfect for everyone and recently faced some [criticism f
 
 Docker images are remotly stored on a registry. Once in a registry, they can be pulled and pushed. Docker Inc. maintains one registry, the docker hub. Go have a look, it's full of pre-configured, official and well maintained images. The thing is, these images can only be pulled and pushed by docker users. *cargo* is meant to bring the docker hub to other linux container users.
 
-Docker images are not "just" linux root file system. Specification just got merged recently, you can go [read it here](https://github.com/docker/docker/blob/master/image/spec/v1.md). To summarize (badly), images are made of ordered layers. You can read more about [it here too](https://docs.docker.com/terms/layer/), but you have the main idea. This layering approach make it very efficient to share and store images (pushing and pulling just the layer you need).
+Docker images are not "just" linux root file system. Specification just got merged recently, you can go [read it here](https://github.com/docker/docker/blob/master/image/spec/v1.md). To summarize (roughly), images are made of ordered layers. You can read more about [it here too](https://docs.docker.com/terms/layer/), but you have the main idea. This layering approach make it very efficient to share and store images (pushing and pulling just the layer you need).
 
 ###Container engine
`````
  14. robinmonjo revised this gist Jan 18, 2015. 1 changed file with 2 additions and 26 deletions.
    28 changes: 2 additions & 26 deletions docker images with any container engine.md
`````diff
@@ -21,7 +21,7 @@ However, docker is not perfect for everyone and recently faced some [criticism f
 
 Docker images are remotly stored on a registry. Once in a registry, they can be pulled and pushed. Docker Inc. maintains one registry, the docker hub. Go have a look, it's full of pre-configured, official and well maintained images. The thing is, these images can only be pulled and pushed by docker users. *cargo* is meant to bring the docker hub to other linux container users.
 
-Docker images are not "just" linux root file system. Specification just got merged recently, you can go [read it here](https://github.com/docker/docker/blob/master/image/spec/v1.md). To summarize in one sentence, images are made of ordered layers. You can read more about [it here too](https://docs.docker.com/terms/layer/), but you have the main idea. This layering approach make it very efficient to share and store images (pushing and pulling just the layer you need).
+Docker images are not "just" linux root file system. Specification just got merged recently, you can go [read it here](https://github.com/docker/docker/blob/master/image/spec/v1.md). To summarize (badly), images are made of ordered layers. You can read more about [it here too](https://docs.docker.com/terms/layer/), but you have the main idea. This layering approach make it very efficient to share and store images (pushing and pulling just the layer you need).
 
 ###Container engine
 
@@ -198,28 +198,4 @@ Downloading layers:
 Done. Rootfs of robinmonjo/debian_curl:latest in rootfs
 ````
-So we pulled 4 layers, that makes sense. Do we have curl installed in this image ?
-````bash
-$>cd rootfs #root file system is in rootfs, we didn't set the -r flag
-
-#we can use nsinit directly, our previous configuration container.json was pushed earlier
-$>sudo nsinit exec bash
-
-root@cargo-demo:/# curl --version
-curl 7.26.0 (x86_64-pc-linux-gnu) libcurl/7.26.0 OpenSSL/1.0.1e zlib/1.2.7 libidn/1.25 libssh2/1.4.2 librtmp/2.3
-Protocols: dict file ftp ftps gopher http https imap imaps ldap pop3 pop3s rtmp rtsp scp sftp smtp smtps telnet tftp
-Features: Debug GSS-Negotiate IDN IPv6 Largefile NTLM NTLM_WB SSL libz TLS-SRP
-````
-
-Yes curl is installed ! You will notice that the filesystem wasn't downloaded with the -g option, so it's not a git repository (downside, you can't push it)
-
-We just used *cargo* and the docker hub to pull, commit and push a debian jessie file system. I choosed to use nsinit here, but you can do it with whatever container engine you like (with of course some specific configuration).
-
-So far, *cargo* only supports the docker hub, but during the development process I easily got it working on a private registry I deployed for debugging purpose. This could be added in a near future.
-
-###Conclusion
-
-I think *cargo* is a cool tool, and hopefully it will be useful fore some people out there. Go check the repo [here](https://github.com/robinmonjo/cargo). If you have any issue using it, open an issue on Github, if you feel like you want to contribute, open a pull request, and if you have any comments, leave a comment.
-
-Robin.
+So we pulled
`````
  15. robinmonjo revised this gist Jan 18, 2015. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion docker images with any container engine.md
`````diff
@@ -19,7 +19,7 @@ However, docker is not perfect for everyone and recently faced some [criticism f
 
 ###Docker images
 
-Docker images are remotly stored on a registry. Once in a registry, they can be pulled and pushed. Docker Inc. maintains one registry, the docker hub. Go have a look, it's full of pre-configured, official and well maintained images. The thing is, these images can only be pulled and pushed by docker users. *cargo* is meant to give access to these images to other linux container users.
+Docker images are remotly stored on a registry. Once in a registry, they can be pulled and pushed. Docker Inc. maintains one registry, the docker hub. Go have a look, it's full of pre-configured, official and well maintained images. The thing is, these images can only be pulled and pushed by docker users. *cargo* is meant to bring the docker hub to other linux container users.
 
 Docker images are not "just" linux root file system. Specification just got merged recently, you can go [read it here](https://github.com/docker/docker/blob/master/image/spec/v1.md). To summarize in one sentence, images are made of ordered layers. You can read more about [it here too](https://docs.docker.com/terms/layer/), but you have the main idea. This layering approach make it very efficient to share and store images (pushing and pulling just the layer you need).
`````
  16. robinmonjo revised this gist Jan 18, 2015. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion docker images with any container engine.md
`````diff
@@ -14,7 +14,7 @@ Docker is open source, great ! It's written in Go, ok, I heard about it. So I st
 This was old time ! Today docker is "everywhere": on [aws](https://aws.amazon.com/blogs/aws/cloud-container-management/),
 [google cloud](https://cloud.google.com/container-engine/),
 [azure](http://azure.microsoft.com/blog/2014/10/15/new-windows-server-containers-and-azure-support-for-docker/)
-and in a near future, even on [Windows](http://blog.docker.com/2014/10/docker-microsoft-partner-distributed-applications/).
+and in the future, even on [Windows](http://blog.docker.com/2014/10/docker-microsoft-partner-distributed-applications/).
 However, docker is not perfect for everyone and recently faced some [criticism from core-os](https://gigaom.com/2014/12/02/why-coreos-just-fired-a-rocket-at-docker/). I'm not here to argue if these critics are justified or not, but something is sure, docker is not the only container engine out there. However they are one big steps ahead on root file systems and how they make them available.
 
 ###Docker images
`````
  17. robinmonjo revised this gist Jan 18, 2015. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion docker images with any container engine.md
`````diff
@@ -7,7 +7,7 @@
 I have been using linux containers for almost 2 years now. It started during an internship at [Applidget](http://www.applidget.com/pages/home), a startup in Paris. My job was to integrate linux containers technology in their private PaaS.
 At this time, there where no doubt, the technology to use was [LXC](https://linuxcontainers.org/lxc/introduction/). One week or so after I started digging into LXC, one of my co-worker talked to me about this new thing called docker. He told me "*isn't it what you are supposed to do during your internship ?*". And it was kind of it.
 
-At this time I already spotted a big challenge of containers in PaaS: creation time. Waiting for `bundle install` to complete is already long enough to have to wait for another 40 secondes to create a container :). Docker landed just in time !
+At this time I already highlighted a big challenge of containers in PaaS: creation time. Waiting for `bundle install` to complete is already long enough to have to wait for another 40 secondes to create a container :). Docker landed just in time !
 
 Docker is open source, great ! It's written in Go, ok, I heard about it. So I started reading the source code. At this time, docker was using LXC as default container backend (today they use their own libcontainer, I will talk about it later). I quickly identified the use of AuFS, a union file system, and then understood how docker was able to spawn containers that fast. Obviously using docker wasn't an option for me, early versions wasn't production ready. I ended up writting some ruby and bash scripts to efficiently use LXC with AuFS, and Applidget PaaS is using it everyday since then.
`````
  18. robinmonjo revised this gist Jan 18, 2015. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion docker images with any container engine.md
`````diff
@@ -5,7 +5,7 @@
 ###Background
 
 I have been using linux containers for almost 2 years now. It started during an internship at [Applidget](http://www.applidget.com/pages/home), a startup in Paris. My job was to integrate linux containers technology in their private PaaS.
-At this time, there where no doubt, the technology to use was [LXC](https://linuxcontainers.org/lxc/introduction/). One week or so after I started digging into LXC, one of my co-worker talked to me about this totally new thing called docker. He told me "*isn't it what you are supposed to do during your internship ?*". And it was kind of it.
+At this time, there where no doubt, the technology to use was [LXC](https://linuxcontainers.org/lxc/introduction/). One week or so after I started digging into LXC, one of my co-worker talked to me about this new thing called docker. He told me "*isn't it what you are supposed to do during your internship ?*". And it was kind of it.
 
 At this time I already spotted a big challenge of containers in PaaS: creation time. Waiting for `bundle install` to complete is already long enough to have to wait for another 40 secondes to create a container :). Docker landed just in time !
`````
  19. robinmonjo revised this gist Jan 18, 2015. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion docker images with any container engine.md
`````diff
@@ -7,7 +7,7 @@
 I have been using linux containers for almost 2 years now. It started during an internship at [Applidget](http://www.applidget.com/pages/home), a startup in Paris. My job was to integrate linux containers technology in their private PaaS.
 At this time, there where no doubt, the technology to use was [LXC](https://linuxcontainers.org/lxc/introduction/). One week or so after I started digging into LXC, one of my co-worker talked to me about this totally new thing called docker. He told me "*isn't it what you are supposed to do during your internship ?*". And it was kind of it.
 
-At this time I already spotted a big challenge of containers in PaaS: creation time. Waiting for `bundle install` to complete is already long enough to have to wait for another 40 secondes to create a container :). Docker landed just on time !
+At this time I already spotted a big challenge of containers in PaaS: creation time. Waiting for `bundle install` to complete is already long enough to have to wait for another 40 secondes to create a container :). Docker landed just in time !
 
 Docker is open source, great ! It's written in Go, ok, I heard about it. So I started reading the source code. At this time, docker was using LXC as default container backend (today they use their own libcontainer, I will talk about it later). I quickly identified the use of AuFS, a union file system, and then understood how docker was able to spawn containers that fast. Obviously using docker wasn't an option for me, early versions wasn't production ready. I ended up writting some ruby and bash scripts to efficiently use LXC with AuFS, and Applidget PaaS is using it everyday since then.
`````
  20. robinmonjo revised this gist Jan 18, 2015. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion docker images with any container engine.md
`````diff
@@ -5,7 +5,7 @@
 ###Background
 
 I have been using linux containers for almost 2 years now. It started during an internship at [Applidget](http://www.applidget.com/pages/home), a startup in Paris. My job was to integrate linux containers technology in their private PaaS.
-At this time, there where no doubt, the technology to use was [LXC](https://linuxcontainers.org/lxc/introduction/). One week or so after I started digging into LXC, one of my co-worker talked to me about this totally new thing called docker. He told me "isn't it what you are supposed to do during your internship ?". And it was kind of it.
+At this time, there where no doubt, the technology to use was [LXC](https://linuxcontainers.org/lxc/introduction/). One week or so after I started digging into LXC, one of my co-worker talked to me about this totally new thing called docker. He told me "*isn't it what you are supposed to do during your internship ?*". And it was kind of it.
 
 At this time I already spotted a big challenge of containers in PaaS: creation time. Waiting for `bundle install` to complete is already long enough to have to wait for another 40 secondes to create a container :). Docker landed just on time !
`````
  21. robinmonjo revised this gist Jan 18, 2015. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion docker images with any container engine.md
`````diff
@@ -216,7 +216,7 @@ Yes curl is installed ! You will notice that the filesystem wasn't downloaded wi
 
 We just used *cargo* and the docker hub to pull, commit and push a debian jessie file system. I choosed to use nsinit here, but you can do it with whatever container engine you like (with of course some specific configuration).
 
->So far, *cargo* only supports the docker hub, but during the development process I easily got it working on a private registry I deployed for debugging purpose. This could be added in a near future.
+So far, *cargo* only supports the docker hub, but during the development process I easily got it working on a private registry I deployed for debugging purpose. This could be added in a near future.
 
 ###Conclusion
`````
  22. robinmonjo revised this gist Jan 18, 2015. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion docker images with any container engine.md
`````diff
@@ -216,7 +216,7 @@ Yes curl is installed ! You will notice that the filesystem wasn't downloaded wi
 
 We just used *cargo* and the docker hub to pull, commit and push a debian jessie file system. I choosed to use nsinit here, but you can do it with whatever container engine you like (with of course some specific configuration).
 
-So far, *cargo* only supports the docker hub, but during the development process I easily got it working on a private registry I deployed for debugging purpose. This could be added in a near future.
+>So far, *cargo* only supports the docker hub, but during the development process I easily got it working on a private registry I deployed for debugging purpose. This could be added in a near future.
 ###Conclusion
`````
  23. robinmonjo revised this gist Jan 18, 2015. 1 changed file with 3 additions and 3 deletions.
    6 changes: 3 additions & 3 deletions docker images with any container engine.md
    Original file line number Diff line number Diff line change
    @@ -29,7 +29,7 @@ Linux containers are available in every linux distribution with a recent enough

    ###Cargo

    [*cargo*](https://github.com/robinmonjo/cargo) is meant to provide docker hub capabilities to every container engines. In this **how to**, I will use it with `nsinit` on a ubuntu 14.04 machine (you will have to install golang since we have to build `nsinit`).
    [*cargo*](https://github.com/robinmonjo/cargo) is meant to provide docker hub capabilities to every container engines. In this **how to**, I will use it with nsinit on a ubuntu 14.04 machine (you will have to install golang since we have to build nsinit).

    ####1 - Setup

    @@ -59,7 +59,7 @@ Now install *cargo* (check the readme for the latest version available):
    $>curl -sL https://github.com/robinmonjo/cargo/releases/download/v1.4.1/cargo-v1.4.1_x86_64.tgz | sudo tar -C /usr/local/bin -zxf -
    ````

    At this point you should have *nsinit* and *cargo* properly installed.
    At this point you should have nsinit and *cargo* properly installed.

    ####2 - Pull an image

    @@ -101,7 +101,7 @@ So first, we can see that we have our debian file system. But why do we have a g

    ####3 - Run a container

    -Now that we have our file system, we want to run a container in it. `nsinit` need to find a `container.json` file at the root of the filesystem, containing the configuration of the container. There is a simple file that we can use in [*cargo* repository](https://github.com/robinmonjo/cargo/blob/master/sample_configs/container.json):
    +Now that we have our file system, we want to run a container in it. nsinit need to find a container.json file at the root of the filesystem, containing the configuration of the container. There is a simple file that we can use in [*cargo* repository](https://github.com/robinmonjo/cargo/blob/master/sample_configs/container.json):

    ````bash
    $>sudo su
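The hunk above ends inside the article's "Run a container" snippet, where nsinit is started from the pulled rootfs. nsinit looks for a container.json at the root of the filesystem, so a tiny guard makes the failure mode obvious. This wrapper is purely illustrative: the `run_ns` name and the echoed command are my own, not part of nsinit or cargo.

```shell
# run_ns: refuse to call nsinit unless the current directory holds the
# container.json that nsinit looks for (hypothetical wrapper, for clarity)
run_ns() {
  if [ ! -f container.json ]; then
    echo "error: no container.json in $(pwd); cd into the rootfs first" >&2
    return 1
  fi
  echo "would run: nsinit exec $*"  # replace the echo with: nsinit exec "$@"
}
```

In practice you would source this in your shell, `cd` into the rootfs, and call `run_ns bash`.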
  24. robinmonjo revised this gist Jan 18, 2015. 1 changed file with 10 additions and 10 deletions.
    20 changes: 10 additions & 10 deletions docker images with any container engine.md
    @@ -19,7 +19,7 @@ However, docker is not perfect for everyone and recently faced some [criticism f

    ###Docker images

    -Docker images are remotly stored on a registry. Once in a registry, they can be pulled and pushed. Docker Inc. maintains one registry, the docker hub. Go have a look, it's full of pre-configured, official and well maintained images. The thing is, these images can only be pulled and pushed by docker users. `cargo` is meant to give access to these images to other linux container users.
    +Docker images are remotly stored on a registry. Once in a registry, they can be pulled and pushed. Docker Inc. maintains one registry, the docker hub. Go have a look, it's full of pre-configured, official and well maintained images. The thing is, these images can only be pulled and pushed by docker users. *cargo* is meant to give access to these images to other linux container users.

    Docker images are not "just" linux root file system. Specification just got merged recently, you can go [read it here](https://github.com/docker/docker/blob/master/image/spec/v1.md). To summarize in one sentence, images are made of ordered layers. You can read more about [it here too](https://docs.docker.com/terms/layer/), but you have the main idea. This layering approach make it very efficient to share and store images (pushing and pulling just the layer you need).

    @@ -29,7 +29,7 @@ Linux containers are available in every linux distribution with a recent enough

    ###Cargo

    -[`cargo`](https://github.com/robinmonjo/cargo) is meant to provide docker hub capabilities to every container engines. In this **how to**, I will use it with `nsinit` on a ubuntu 14.04 machine (you will have to install golang since we have to build `nsinit`).
    +[*cargo*](https://github.com/robinmonjo/cargo) is meant to provide docker hub capabilities to every container engines. In this **how to**, I will use it with `nsinit` on a ubuntu 14.04 machine (you will have to install golang since we have to build `nsinit`).

    ####1 - Setup

    @@ -53,13 +53,13 @@ $>GOPATH=`pwd`/../vendor:$GOPATH go build
    $>sudo cp nsinit /usr/local/bin/
    `````

    -Now install `cargo` (check the readme for the latest version available):
    +Now install *cargo* (check the readme for the latest version available):

    ````bash
    $>curl -sL https://github.com/robinmonjo/cargo/releases/download/v1.4.1/cargo-v1.4.1_x86_64.tgz | sudo tar -C /usr/local/bin -zxf -
    ````

    -At this point you should have `nsinit` and `cargo` properly installed.
    +At this point you should have *nsinit* and *cargo* properly installed.

    ####2 - Pull an image

    @@ -97,11 +97,11 @@ $>git branch
    * layer_2_58052b122b60f9e695b9a1b0b8272bfb40e7249b9ba2d50ac22d12f3a3c9b4dd
    `````

    -So first, we can see that we have our debian file system. But why do we have a git repository ? Because we used `cargo` `-g` flag. Each branch is a layer of the image (remember, docker images are made of layers). `layer_2_*` contains the entire image since each layer is downloaded on a branch created from the previous one.
    +So first, we can see that we have our debian file system. But why do we have a git repository ? Because we used *cargo* `-g` flag. Each branch is a layer of the image (remember, docker images are made of layers). `layer_2_*` contains the entire image since each layer is downloaded on a branch created from the previous one.

    ####3 - Run a container

    -Now that we have our file system, we want to run a container in it. `nsinit` need to find a `container.json` file at the root of the filesystem, containing the configuration of the container. There is a simple file that we can use in [cargo repository](https://github.com/robinmonjo/cargo/blob/master/sample_configs/container.json):
    +Now that we have our file system, we want to run a container in it. `nsinit` need to find a `container.json` file at the root of the filesystem, containing the configuration of the container. There is a simple file that we can use in [*cargo* repository](https://github.com/robinmonjo/cargo/blob/master/sample_configs/container.json):

    ````bash
    $>sudo su
    @@ -144,7 +144,7 @@ Checksum: tarsum.dev+sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495
    Layer size: 27188224
    Done
    ````
    -Ok so what happended ? `cargo` took all the changes, and commited them into a new properly named branch. Remember, layers in docker images and how `cargo` save each one of them in a new branch ? That's what happened, we just created a new layer. cargo also wrote some metadata (image id, layer parent, layer checksum and size) that are needed to push and rebuild the image. Let's check the state of our git repository:
    +Ok so what happended ? *cargo* took all the changes, and commited them into a new properly named branch. Remember, layers in docker images and how *cargo* save each one of them in a new branch ? That's what happened, we just created a new layer. *cargo* also wrote some metadata (image id, layer parent, layer checksum and size) that are needed to push and rebuild the image. Let's check the state of our git repository:

    ````bash
    $>git branch
    @@ -214,12 +214,12 @@ Features: Debug GSS-Negotiate IDN IPv6 Largefile NTLM NTLM_WB SSL libz TLS-SRP

    Yes curl is installed ! You will notice that the filesystem wasn't downloaded with the -g option, so it's not a git repository (downside, you can't push it)

    -We just used `cargo` and the docker hub to pull, commit and push a debian jessie file system. I choosed to use nsinit here, but you can do it with whatever container engine you like (with of course some specific configuration).
    +We just used *cargo* and the docker hub to pull, commit and push a debian jessie file system. I choosed to use nsinit here, but you can do it with whatever container engine you like (with of course some specific configuration).

    -So far, `cargo` only supports the docker hub, but during the development process I easily got it working on a private registry I deployed for debugging purpose. This could be added in a near future.
    +So far, *cargo* only supports the docker hub, but during the development process I easily got it working on a private registry I deployed for debugging purpose. This could be added in a near future.

    ###Conclusion

    -I think `cargo` is a cool tool, and hopefully it will be useful fore some people out there. Go check the repo [here](https://github.com/robinmonjo/cargo). If you have any issue using it, open an issue on Github, if you feel like you want to contribute, open a pull request, and if you have any comments, leave a comment.
    +I think *cargo* is a cool tool, and hopefully it will be useful fore some people out there. Go check the repo [here](https://github.com/robinmonjo/cargo). If you have any issue using it, open an issue on Github, if you feel like you want to contribute, open a pull request, and if you have any comments, leave a comment.

    Robin.
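The `git branch` listings quoted in this revision name one branch per image layer, in the form `layer_<index>_<id>`. Assuming that naming (taken verbatim from the output above), the most complete layer can be picked by sorting on the numeric index. This is a convenience sketch of mine, not a cargo feature:

```shell
# pick_top_layer: read `git branch` output on stdin and print the branch
# with the highest layer index (i.e. the most complete layer)
pick_top_layer() {
  tr -d ' *' | sort -t_ -k2 -n | tail -n 1
}
# usage, inside a rootfs pulled with cargo's -g flag:
#   git branch --list 'layer_*' | pick_top_layer | xargs git checkout
```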
  25. robinmonjo revised this gist Jan 18, 2015. 1 changed file with 2 additions and 0 deletions.
    2 changes: 2 additions & 0 deletions docker images with any container engine.md
    @@ -216,6 +216,8 @@ Yes curl is installed ! You will notice that the filesystem wasn't downloaded wi

    We just used `cargo` and the docker hub to pull, commit and push a debian jessie file system. I choosed to use nsinit here, but you can do it with whatever container engine you like (with of course some specific configuration).

    +So far, `cargo` only supports the docker hub, but during the development process I easily got it working on a private registry I deployed for debugging purpose. This could be added in a near future.
    +
    ###Conclusion

    I think `cargo` is a cool tool, and hopefully it will be useful fore some people out there. Go check the repo [here](https://github.com/robinmonjo/cargo). If you have any issue using it, open an issue on Github, if you feel like you want to contribute, open a pull request, and if you have any comments, leave a comment.
  26. robinmonjo revised this gist Jan 18, 2015. 1 changed file with 14 additions and 15 deletions.
    29 changes: 14 additions & 15 deletions docker images with any container engine.md
    @@ -9,7 +9,7 @@ At this time, there where no doubt, the technology to use was [LXC](https://linu

    At this time I already spotted a big challenge of containers in PaaS: creation time. Waiting for `bundle install` to complete is already long enough to have to wait for another 40 secondes to create a container :). Docker landed just on time !

    -Docker is open source, great ! It's written in Go, ok, I heard about it. So I started reading the source code. At this time, docker was using LXC as container backend (today they use their own libcontainer, I will talk about it later). I quickly identified the use of AuFS, a union file system, and then understood how docker was able to spawn containers that fast. Obviously using docker wasn't an option for me, early versions wasn't production ready. I ended up writting some ruby and bash scripts to efficiently use LXC with AuFS, and Applidget PaaS is using it everyday since then.
    +Docker is open source, great ! It's written in Go, ok, I heard about it. So I started reading the source code. At this time, docker was using LXC as default container backend (today they use their own libcontainer, I will talk about it later). I quickly identified the use of AuFS, a union file system, and then understood how docker was able to spawn containers that fast. Obviously using docker wasn't an option for me, early versions wasn't production ready. I ended up writting some ruby and bash scripts to efficiently use LXC with AuFS, and Applidget PaaS is using it everyday since then.

    This was old time ! Today docker is "everywhere": on [aws](https://aws.amazon.com/blogs/aws/cloud-container-management/),
    [google cloud](https://cloud.google.com/container-engine/),
    @@ -19,13 +19,13 @@ However, docker is not perfect for everyone and recently faced some [criticism f

    ###Docker images

    -Docker images are remotly stored on a registry. Once in a registry, they can be pulled and pushed. Docker Inc. maintains one registry, the docker hub. Go have a look, it's full of pre-configured, official and well maintained images. The thing is, these images can only be downloaded by docker users. `cargo` is meant to give access to these images to other linux container users.
    +Docker images are remotly stored on a registry. Once in a registry, they can be pulled and pushed. Docker Inc. maintains one registry, the docker hub. Go have a look, it's full of pre-configured, official and well maintained images. The thing is, these images can only be pulled and pushed by docker users. `cargo` is meant to give access to these images to other linux container users.

    -Docker images are not just linux root file system. Specification just got merged recently, you can go [read it here](https://github.com/docker/docker/blob/master/image/spec/v1.md). To summarize in one sentence, images are made of ordered layers. You can read more about [it here too](https://docs.docker.com/terms/layer/), but you have the main idea. This layering approach make it very efficient to share and store images (pushing and pulling just the layer you need).
    +Docker images are not "just" linux root file system. Specification just got merged recently, you can go [read it here](https://github.com/docker/docker/blob/master/image/spec/v1.md). To summarize in one sentence, images are made of ordered layers. You can read more about [it here too](https://docs.docker.com/terms/layer/), but you have the main idea. This layering approach make it very efficient to share and store images (pushing and pulling just the layer you need).

    ###Container engine

    -Linux containers are available in every linux distribution with a recent enough kernel. However, it's hard to setup a container manually. What I call container engines are tools that automates the setup of container. There are several of them (you probably heard about), the ones who come to mind are LXC, systemd-nspawn, docker (obviously), core-os rocket, and google lmctfy. But there is also a really minimal one, used to test [libcontainer](https://github.com/docker/libcontainer), nsinit. I briefly talked about libcontainer already. It's a Go package that replaced LXC as the default container backend in docker since version 0.9. Understand, it's the core container tool in docker. I really like libcontainer, as it's dependency free and it gets a lot of support from company such as Google and Redhat.
    +Linux containers are available in every linux distribution with a recent enough kernel. However, it's hard to setup a container manually. What I call container engines are tools that automates the setup of container. There are several of them (you probably heard about), the ones who come to mind are LXC, systemd-nspawn, docker (obviously), core-os rocket, and google lmctfy. But there is also a really minimal one, used to test [libcontainer](https://github.com/docker/libcontainer): nsinit. I mentioned libcontainer already. It's a Go package that replaced LXC as the default container backend in docker since version 0.9. I really like libcontainer, as it's dependency free and it gets a lot of support from company such as Google and Redhat.

    ###Cargo

    @@ -53,7 +53,7 @@ $>GOPATH=`pwd`/../vendor:$GOPATH go build
    $>sudo cp nsinit /usr/local/bin/
    `````

    -Now lets install `cargo` (check the readme for the latest version available):
    +Now install `cargo` (check the readme for the latest version available):

    ````bash
    $>curl -sL https://github.com/robinmonjo/cargo/releases/download/v1.4.1/cargo-v1.4.1_x86_64.tgz | sudo tar -C /usr/local/bin -zxf -
    @@ -112,8 +112,9 @@ We are now ready to spawn a new container running bash:

    ````bash
    $> nsinit exec bash
    +root@cargo-demo:/#

    -#great we are in our container, curl is not installed, lets install it:
    +#great we are in our container, curl is not installed, let's install it:
    root@cargo-demo:/# apt-get update -qq
    root@cargo-demo:/# apt-get install curl -y

    @@ -134,7 +135,6 @@ We are back in our git repository. We can use git commands to check what changed
    Now we want to push this image on the docker hub. First, we need to commit it:

    ````bash
    -$>cd .. #get out of our git repository
    $>cargo commit -r debian -m "install curl"

    Changes commited in layer_3_36a88d89412c6a7b67e87e1c16be1e21ef64548ce51ee6dab359e8d4026c2c0b
    @@ -144,10 +144,9 @@ Checksum: tarsum.dev+sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495
    Layer size: 27188224
    Done
    ````
    -Ok so what happended ? `cargo` took all the changes, and commited them into a new properly formatted branch. Remember, layers in docker image and how `cargo` save each one of them in a new branch ? That's what happened, we just created a new layer. cargo also wrote some metadata (image id, layer parent, layer checksum and size) that are needed to push and rebuild the image. Lets check the state of our git repository:
    +Ok so what happended ? `cargo` took all the changes, and commited them into a new properly named branch. Remember, layers in docker images and how `cargo` save each one of them in a new branch ? That's what happened, we just created a new layer. cargo also wrote some metadata (image id, layer parent, layer checksum and size) that are needed to push and rebuild the image. Let's check the state of our git repository:

    ````bash
    -$>cd debian
    $>git branch
    layer_0_511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158
    layer_1_bce696e097dc5286a7b9556a5f7420ff90ca4854b51e28256651d0071f56efac
    @@ -157,10 +156,8 @@ $>git branch
    Before pushing our image we need a [docker hub account](https://hub.docker.com/account/signup/).

    ````bash
    -$>cd .. #get out of the rootfs
    -
    #push the image. Replace username and password with your own
    -$>cargo push robinmonjo/debian_curl -r debian -u username:password #obviously I changed my credentials :)
    +$>cargo push <username>/debian_curl -r debian_curl -u <username>:<password>

    Pushing image robinmonjo/debian_curl:latest ...
    Pushing 4 layers:
    @@ -177,10 +174,10 @@ We can now go ahead, and delete our debian image, it's safely stored in the clou
    ````bash
    #remove the image we just pushed
    -$>sudo rm -rf debian
    +$>cd .. && sudo rm -rf debian
    #download it again, just to make sure everything worked
    -$>sudo cargo pull robinmonjo/debian_curl -u username:password #replace what's needed here
    +$>sudo cargo pull <username>/debian_curl -u <username>:<password>
    Pulling image robinmonjo/debian_curl:latest ...
    Image ID: c4f9b437f40b18c9603a38c866adcad7ea422aeb38b673ecfb669e9e5e82cbcc
    @@ -217,8 +214,10 @@ Features: Debug GSS-Negotiate IDN IPv6 Largefile NTLM NTLM_WB SSL libz TLS-SRP

    Yes curl is installed ! You will notice that the filesystem wasn't downloaded with the -g option, so it's not a git repository (downside, you can't push it)

    +We just used `cargo` and the docker hub to pull, commit and push a debian jessie file system. I choosed to use nsinit here, but you can do it with whatever container engine you like (with of course some specific configuration).
    +
    ###Conclusion

    -I think `cargo` is a cool tool, and I hope it will be useful fore some people out there. Go check the repo [here](https://github.com/robinmonjo/cargo). If you have any issue using it, open an issue on Github, if you feel like you want to contribute, open a pull request, and if you have any comments, leave a comment.
    +I think `cargo` is a cool tool, and hopefully it will be useful fore some people out there. Go check the repo [here](https://github.com/robinmonjo/cargo). If you have any issue using it, open an issue on Github, if you feel like you want to contribute, open a pull request, and if you have any comments, leave a comment.

    Robin.
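This revision swaps hard-coded credentials for `<username>:<password>` placeholders in the `cargo push` and `cargo pull` examples. To keep real credentials out of shell history, the `user:password` string that cargo's `-u` flag expects can be assembled from a runtime prompt instead. The helper below is only a sketch of mine (the `read_creds` name is hypothetical; the `-u` flag comes from the diffs above):

```shell
# read_creds: prompt for docker hub credentials and print "user:password"
# on stdout, so the secret never appears on the command line or in history
read_creds() {
  printf 'Docker hub user: ' >&2; read -r user
  printf 'Password: ' >&2
  stty -echo 2>/dev/null || true; read -r pass; stty echo 2>/dev/null || true
  echo >&2
  printf '%s:%s\n' "$user" "$pass"
}
# usage:
#   cargo push <username>/debian_curl -r debian_curl -u "$(read_creds)"
```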
  27. robinmonjo revised this gist Jan 18, 2015. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion docker images with any container engine.md
    @@ -9,7 +9,7 @@ At this time, there where no doubt, the technology to use was [LXC](https://linu

    At this time I already spotted a big challenge of containers in PaaS: creation time. Waiting for `bundle install` to complete is already long enough to have to wait for another 40 secondes to create a container :). Docker landed just on time !

    -Docker is open source, great ! It's written in Go, ok, I heard about it. So I started reading the source code. At this time, docker was using LXC as container backend (today they use their own libcontainer, I will talk about it later). I quickly spotted the use of AuFS, and then understood how they were able to spawn containers that fast. Obviously using docker wasn't an option for me, early versions wasn't production ready. I ended up writting some ruby and bash scripts to efficiently use LXC with AuFS, and Applidget PaaS is using it everyday since then.
    +Docker is open source, great ! It's written in Go, ok, I heard about it. So I started reading the source code. At this time, docker was using LXC as container backend (today they use their own libcontainer, I will talk about it later). I quickly identified the use of AuFS, a union file system, and then understood how docker was able to spawn containers that fast. Obviously using docker wasn't an option for me, early versions wasn't production ready. I ended up writting some ruby and bash scripts to efficiently use LXC with AuFS, and Applidget PaaS is using it everyday since then.

    This was old time ! Today docker is "everywhere": on [aws](https://aws.amazon.com/blogs/aws/cloud-container-management/),
    [google cloud](https://cloud.google.com/container-engine/),
  28. robinmonjo revised this gist Jan 18, 2015. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion docker images with any container engine.md
    @@ -9,7 +9,7 @@ At this time, there where no doubt, the technology to use was [LXC](https://linu

    At this time I already spotted a big challenge of containers in PaaS: creation time. Waiting for `bundle install` to complete is already long enough to have to wait for another 40 secondes to create a container :). Docker landed just on time !

    -Docker is open source, great ! It's written in Go, ok, I heard about it. So I started inspecting the source code. At this time, docker was using LXC as container backend (today they use their own libcontainer, I will talk about it later). I quickly spotted the use of AuFS, and then understood how they were able to spawn containers that fast. Obviously using docker wasn't an option for me, early versions wasn't production ready. I ended up writting some ruby and bash scripts to efficiently use LXC with AuFS, and Applidget PaaS is using it everyday since then.
    +Docker is open source, great ! It's written in Go, ok, I heard about it. So I started reading the source code. At this time, docker was using LXC as container backend (today they use their own libcontainer, I will talk about it later). I quickly spotted the use of AuFS, and then understood how they were able to spawn containers that fast. Obviously using docker wasn't an option for me, early versions wasn't production ready. I ended up writting some ruby and bash scripts to efficiently use LXC with AuFS, and Applidget PaaS is using it everyday since then.

    This was old time ! Today docker is "everywhere": on [aws](https://aws.amazon.com/blogs/aws/cloud-container-management/),
    [google cloud](https://cloud.google.com/container-engine/),
  29. robinmonjo revised this gist Jan 18, 2015. 1 changed file with 1 addition and 1 deletion.
    2 changes: 1 addition & 1 deletion docker images with any container engine.md
    @@ -4,7 +4,7 @@

    ###Background

    -I have been using linux containers for almost 2 years now. It started during an internship at [Applidget](http://www.applidget.com/pages/home), a startup in Paris. My mission was to integrate linux containers technology in their private PaaS.
    +I have been using linux containers for almost 2 years now. It started during an internship at [Applidget](http://www.applidget.com/pages/home), a startup in Paris. My job was to integrate linux containers technology in their private PaaS.
    At this time, there where no doubt, the technology to use was [LXC](https://linuxcontainers.org/lxc/introduction/). One week or so after I started digging into LXC, one of my co-worker talked to me about this totally new thing called docker. He told me "isn't it what you are supposed to do during your internship ?". And it was kind of it.

    At this time I already spotted a big challenge of containers in PaaS: creation time. Waiting for `bundle install` to complete is already long enough to have to wait for another 40 secondes to create a container :). Docker landed just on time !
  30. robinmonjo revised this gist Jan 18, 2015. 1 changed file with 12 additions and 10 deletions.
    22 changes: 12 additions & 10 deletions docker images with any container engine.md
    @@ -1,11 +1,13 @@
    -#[cargo](https://github.com/robinmonjo/cargo), docker hub without docker - 21 Jan 2015
    +#[cargo](https://github.com/robinmonjo/cargo), docker hub without docker

    +21 Jan 2015
    +
    ###Background

    -I have been using linux containers for almost 2 years now. It started during an internship at [Applidget](http://www.applidget.com/pages/home), a startup in Paris. My mission was to integrate linux containers technology in their private home made PaaS.
    -At this time, there where no doubt, the technology to use was [LXC](https://linuxcontainers.org/lxc/introduction/). One week or so after I started digging into LXC, one of my co-worker talked to me about this totally new thing called docker. He said to me "isn't it what you are supposed to do during your internship ?". And it was kind of it.
    +I have been using linux containers for almost 2 years now. It started during an internship at [Applidget](http://www.applidget.com/pages/home), a startup in Paris. My mission was to integrate linux containers technology in their private PaaS.
    +At this time, there where no doubt, the technology to use was [LXC](https://linuxcontainers.org/lxc/introduction/). One week or so after I started digging into LXC, one of my co-worker talked to me about this totally new thing called docker. He told me "isn't it what you are supposed to do during your internship ?". And it was kind of it.

    -At this time I already spotted a big challenge of container in PaaS: creation time. Waiting for `bundle install` to complete is already long enough to have to wait for another 40 secondes to create a container :). Docker landed just on time !
    +At this time I already spotted a big challenge of containers in PaaS: creation time. Waiting for `bundle install` to complete is already long enough to have to wait for another 40 secondes to create a container :). Docker landed just on time !

    Docker is open source, great ! It's written in Go, ok, I heard about it. So I started inspecting the source code. At this time, docker was using LXC as container backend (today they use their own libcontainer, I will talk about it later). I quickly spotted the use of AuFS, and then understood how they were able to spawn containers that fast. Obviously using docker wasn't an option for me, early versions wasn't production ready. I ended up writting some ruby and bash scripts to efficiently use LXC with AuFS, and Applidget PaaS is using it everyday since then.

    @@ -17,13 +19,13 @@ However, docker is not perfect for everyone and recently faced some [criticism f

    ###Docker images

    -Docker images are remotly stored on a registry. Once in a registry, they can be pulled and pushed. Docker Inc. maintains one registry, the docker hub. Go have a look it's full of pre-configured, official and well maintained images. The thing is, these images can only be downloaded by docker users. `cargo` is meant to give access to the docker hub to other linux container users.
    +Docker images are remotly stored on a registry. Once in a registry, they can be pulled and pushed. Docker Inc. maintains one registry, the docker hub. Go have a look, it's full of pre-configured, official and well maintained images. The thing is, these images can only be downloaded by docker users. `cargo` is meant to give access to these images to other linux container users.

    -Docker images are not just linux root file system. Specification just got merged recently, you can go [read it here](https://github.com/docker/docker/blob/master/image/spec/v1.md). To summarize in one sentence, images are made of ordered layers. You can read more about [it here too](https://docs.docker.com/terms/layer/) but you have the main idea. This layering approach make it very efficient to share and store images (pushing and pulling just the layer you need).
    +Docker images are not just linux root file system. Specification just got merged recently, you can go [read it here](https://github.com/docker/docker/blob/master/image/spec/v1.md). To summarize in one sentence, images are made of ordered layers. You can read more about [it here too](https://docs.docker.com/terms/layer/), but you have the main idea. This layering approach make it very efficient to share and store images (pushing and pulling just the layer you need).

    ###Container engine

    -Linux containers are available in every linux distribution with a recent enough kernel. However, it's hard to setup a container manually. What I call container engines are tools that automates the setup of container. There are several of them you probably heard about, the ones who come to mind are LXC, systemd-nspawn, docker (obviously), core-os rocket, and google lmctfy. But there is also a really minimal one, used to test libcontainer, nsinit. I briefly talked about libcontainer already. It's a Go package that replaced LXC as the default container backend in docker since version 0.9. Understand, it's the core container tool in docker. I really like libcontainer, as it's dependency free and it gets a lot of support from company such as Google and Redhat.
    +Linux containers are available in every linux distribution with a recent enough kernel. However, it's hard to setup a container manually. What I call container engines are tools that automates the setup of container. There are several of them (you probably heard about), the ones who come to mind are LXC, systemd-nspawn, docker (obviously), core-os rocket, and google lmctfy. But there is also a really minimal one, used to test [libcontainer](https://github.com/docker/libcontainer), nsinit. I briefly talked about libcontainer already. It's a Go package that replaced LXC as the default container backend in docker since version 0.9. Understand, it's the core container tool in docker. I really like libcontainer, as it's dependency free and it gets a lot of support from company such as Google and Redhat.

    ###Cargo

    @@ -142,7 +144,7 @@ Checksum: tarsum.dev+sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495
    Layer size: 27188224
    Done
    ````
    -Ok so what happended ? `cargo` took all the changes, and commited them into a new properly formatted branch. Remember, layers in docker image and how `cargo` save each one of them in a new branch ? That's what happended, we just created a new layer. cargo also wrote some metadata (image id, layer parent, layer checksum and size) that are needed to push and rebuild the image. Lets check the state of our git repository:
    +Ok so what happended ? `cargo` took all the changes, and commited them into a new properly formatted branch. Remember, layers in docker image and how `cargo` save each one of them in a new branch ? That's what happened, we just created a new layer. cargo also wrote some metadata (image id, layer parent, layer checksum and size) that are needed to push and rebuild the image. Lets check the state of our git repository:

    ````bash
    $>cd debian
    @@ -169,9 +171,9 @@ Pushing 4 layers:
    Done: https://registry.hub.docker.com/u/robinmonjo/debian_curl
    `````

    Ok so our image have been pushed. The first 3 layers already existed on the docker hub (we pulled them from the debian jessie image earlier), so their data have not been re-uploaded (that what make the image layering approach of docker really nice).
    Ok so our image has been pushed. The first 3 layers already existed on the docker hub (we pulled them from the debian jessie image earlier), so their data have not been re-uploaded (that what makes the image layering approach of docker really nice).

    We can now go ahead, and delete our debian image, it's safely stored in the cloud. But just for this tutorial we will download it:
    -We can now go ahead, and delete our debian image, it's safely stored in the cloud. But just for this tutorial we will download it:
    +We can now go ahead, and delete our debian image, it's safely stored in the cloud. But just for this tutorial we will download it again:
    #remove the image we just pushed