Docker for Embedded Use

 
Ron McLeod
Marshal
Is Docker suitable for use with embedded platforms where both memory and persistent storage are limited?  My applications are fairly small, but I am wondering if the supporting infrastructure for Docker might make it a poor fit.

Also, after the infrastructure is installed on the target, what would the demands be like when updating application images?  I have an IoT application where connectivity uses a slow (4 kbps) and expensive satellite backhaul.  Would the entire updated image need to be sent each time a change is made, or is there some mechanism to patch the image and send only the differences?

 
Stephen Kuenzli
Author
The image update question has a straightforward answer that's explained in detail in Chapter 8 and Chapter 9.  

Docker and the Open Container Image format support highly efficient and customizable distribution models.  Only the layers that have changed within an image, or that are not currently present on the host, will be retrieved from the registry.  Each layer is stored once on the host.  Each image, and each layer within it, is identified cryptographically, so the client can be sure it has what it is supposed to have.
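That content-addressed caching can be sketched in a few lines of Python: each layer is identified by the SHA-256 digest of its content, and a client only fetches digests it doesn't already have.  The `layers_to_pull` and `local` names here are illustrative, not Docker's actual API.

```python
import hashlib

def digest(layer_bytes: bytes) -> str:
    # Docker/OCI identify a layer by the SHA-256 digest of its content.
    return "sha256:" + hashlib.sha256(layer_bytes).hexdigest()

def layers_to_pull(manifest_digests, local_store):
    # Only digests missing from the local store need to cross the network.
    return [d for d in manifest_digests if d not in local_store]

# Two image versions sharing a base layer: the shared layer travels once.
base = b"alpine base filesystem"
app_v1 = b"app binary v1"
app_v2 = b"app binary v2"

local = {digest(base), digest(app_v1)}        # host already has image v1
manifest_v2 = [digest(base), digest(app_v2)]  # updated image's manifest
print(layers_to_pull(manifest_v2, local))     # only the new app layer
```

So over that satellite link, an update to one layer costs roughly the size of that layer, not the whole image.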

This distribution mechanism is probably about as efficient as you'll find in an out-of-the-box software distribution solution that isn't specialized for IoT.  I expect you know better than I do on this point.

Now... it may make sense to use containerd/runc/podman tooling to launch and manage containers rather than having dockerd do it.  In that case you will need to integrate that with your device's existing process management model, which I would expect to be viewed as a feature.
 
Tim Holloway
Saloon Keeper
I do embedded and I don't think of Docker as an embedded environment because normally the embedded system completely owns the hardware. Thus putting in a separate OS to run Docker - and Docker itself - seems like somewhat the opposite of embedding. Although I suppose if you have a really massive multi-function embedded system you might think differently.

Docker does run on the Raspberry Pi, I believe. Of course a Pi 3 is the equivalent of a circa 2000 desktop computer to begin with. It's not exactly an Arduino.

You can definitely update Docker images without a complete upload, however. Docker is built on copy-on-write overlays, each of which is identified by a UUID. When you do a Docker pull, the local inventory is checked and if that UUID is already represented, it gets used. I find it not uncommon for a Docker image to consist of a dozen or so overlays, so there's a lot of potential for sharing there, especially if you build your images off of some standard base such as Alpine.

Just to illustrate: when you do a Docker build, the various execution steps in the Dockerfile each tend to create a separate overlay, which is why Docker images often have so many layers. You can, of course, collapse overlays, and if you've had a lot of maintenance you might even want to, but when you do, you lose some of the sharability.
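A minimal Dockerfile sketch of that layering (the image name, package, and paths are made up for illustration) — each instruction below produces its own layer, and ordering them from stable to volatile maximizes how much an update can reuse:

```dockerfile
FROM alpine:3.19                       # base layer, shared across images
RUN apk add --no-cache openjdk17-jre   # rarely changes -> stays cached
COPY lib/ /opt/app/lib/                # third-party jars, change sometimes
COPY app.jar /opt/app/                 # changes every release -> last layer
CMD ["java", "-jar", "/opt/app/app.jar"]
```

With this ordering, a new `app.jar` invalidates only the final layer; everything above it is served from the host's existing store.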
 
Ron McLeod
Marshal

Tim Holloway wrote:I do embedded and I don't think of Docker as an embedded environment because normally the embedded system completely owns the hardware. Thus putting in a separate OS to run Docker - and Docker itself - seems like somewhat the opposite of embedding. Although I suppose if you have a really massive multi-function embedded system you might think differently.


The embedded platform that I am thinking of is a single-board computer with an Intel N-series Atom processor, 2 GB of RAM, and 16 GB of SSD.  I am wondering if the overhead to support Docker will consume a significant chunk of my RAM, processing, and storage resources.

Regarding updating, my question was whether Docker supports sending only deltas rather than complete images/layers/etc.  It seems like the answer is no.
 
Tim Holloway
Saloon Keeper
Difference of definitions. For me, embedded is something typically done without basing everything on a general-purpose computing platform. By my book, a Raspberry Pi barely qualifies as embedded.

In any event, I'm reasonably sure that a 2 GB system could happily function as a Docker host. That is, after all, bigger than some servers I've worked with, and as I said, I believe that a 1 GB RPi can host Docker.

However, I don't see why you say Docker cannot do incremental updates. That's exactly what the overlays are. If you're bothered by the inability to delete superseded files, yeah: each layer is immutable except for the top one, so you can't recover space. But unless you're really tight on storage, the cost is more than compensated by the benefits.
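The immutability point can be seen in a contrived Dockerfile sketch: deleting a file in a later step only hides it with a "whiteout" entry, while the bytes stay in the earlier layer, so the image neither shrinks on disk nor transfers any smaller.

```dockerfile
FROM alpine:3.19
# This layer permanently contains the 100 MB file.
RUN dd if=/dev/zero of=/big.bin bs=1M count=100
# This layer only records that /big.bin is now hidden;
# the 100 MB in the layer below is still stored and shipped.
RUN rm /big.bin
```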
 
Ron McLeod
Marshal
My question is about the ability to patch a resource rather than replace it.  For example, if an overlay is 100 kB in size and I have made an update that changed only a few bytes, my understanding is that I would need to transfer the full 100 kB to the target, not a smaller package of maybe 100 B that describes the delta between the previous version and the updated version.

In my application this is important because of the low speed, high latency, low reliability, and high financial cost of the backhaul network.
 
Tim Holloway
Saloon Keeper
That's not how overlays work.

If you have a 2.4 kB difference between the current Docker image and the patched image, then a 2.4 kB overlay will be generated.

Now if you rebuild the entire Docker image, you'll probably get a whole new set of overlays - excluding the base overlay, which, containing the container's OS, will probably be much larger anyway. But if you modify an existing image, the existing composite substructures remain undisturbed, so only the differences apply. Or, I suppose, you could build a chain where your previous Docker image becomes the base of a new Docker image, but I've never seen that done.

In any event, no, you don't have to do a massive transfer unless you really, really want to.
 
Stephen Kuenzli
Author
You could design your image layers so that files that change frequently sit in specific layers with the same change lifecycle.  Then, when those layers' files change, only the updated layers would be transferred.

I believe the finest granularity you're going to get with standard image management tools is a layer containing a single file.  The most popular form of this is Docker images built from 'scratch' that contain a single executable.
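A 'scratch' image of that form looks roughly like this multi-stage sketch (the binary name and source file are made up):

```dockerfile
# Build a static binary, then ship it as essentially the only layer.
FROM golang:1.22 AS build
COPY main.go .
RUN CGO_ENABLED=0 go build -o /sensor-agent main.go

FROM scratch
COPY --from=build /sensor-agent /sensor-agent
ENTRYPOINT ["/sensor-agent"]
```

Here an update still transfers the whole binary's layer, but nothing else.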

If you're looking to patch, e.g., a binary where only 100 bytes have changed, you'll probably need another distribution mechanism.
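A minimal sketch of such a delta mechanism, assuming you control both ends of the link: encode the patch as (offset, replacement-bytes) pairs and apply it on the device.  Real tools like bsdiff or xdelta are far more sophisticated; this only illustrates why a few changed bytes can travel as a patch of roughly that size.

```python
def make_patch(old: bytes, new: bytes):
    # Naive same-length delta: record only the byte ranges that differ.
    assert len(old) == len(new), "sketch assumes in-place patching"
    patch, i = [], 0
    while i < len(old):
        if old[i] != new[i]:
            j = i
            while j < len(new) and old[j] != new[j]:
                j += 1
            patch.append((i, new[i:j]))  # (offset, replacement bytes)
            i = j
        else:
            i += 1
    return patch

def apply_patch(old: bytes, patch) -> bytes:
    buf = bytearray(old)
    for offset, chunk in patch:
        buf[offset:offset + len(chunk)] = chunk
    return bytes(buf)

old = bytes(100_000)                 # stand-in for a 100 kB binary
new = bytearray(old)
new[500:504] = b"\x01\x02\x03\x04"   # a 4-byte change
patch = make_patch(old, bytes(new))
assert apply_patch(old, patch) == bytes(new)
print(sum(len(c) for _, c in patch))  # payload bytes to transfer: 4
```

Over a 4 kbps satellite link, the difference between shipping 4 bytes of delta plus bookkeeping and re-shipping a 100 kB layer is exactly the cost Ron is worried about.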
 
Tim Holloway
Saloon Keeper

Stephen Kuenzli wrote:
I believe the finest granularity you're going to get with standard image management tools is a layer with a single file.



Yep. Because layers overlay the filesystem, not the disk image.

Stephen Kuenzli wrote: The most popular form of are Docker images built from 'scratch' that contain a single executable.



Well... yes and no. Some executables require a bit of help. For example, quite a few Docker images have a copy of Apache embedded in them, but Apache (and sometimes other services) is managed by s6(?). So there's the application executable and support executables, and typically several RUN steps are executed in the Dockerfile when the image is built, each of which creates another layer.

Speaking of layers, I used to have a very nice extension that would map out layers within (and layer-sharing between) images. I can't find it at the moment, and it probably broke several Docker updates back. But docker history can tell you about the layer construction of an image. This is what I get when I filter out the parts that didn't create new layers on the docker.io/wordpress image:

I'm thinking that this is a compacted image and the "missing" layers were merged into the main image. By way of comparison, here's an image that I created and didn't compact. This is a Bacula control system, and it does have multiple programs in it - the Bacula director, which controls backups, a Bacula storage daemon used to read from and write to the archives, and one or two other related programs:


Note that layer 2b48917d1801 is only 8 bytes! Although you cannot see the full command in this truncated listing, the only thing in that layer is an added softlink. The layer below it is much the same, but the extra 3 bytes reflect the deletion of the original /etc/bacula directory where it was linked to an external mount.
 
Tim Holloway
Saloon Keeper
And just a little more info: the "size" of a layer is one thing, but when calculating the disk resources required, you need to know the real size. So for my "8 byte" layer, I checked. The actual layer file in the layer database is 5 kB, and its metadata is 79 bytes.
 