
Karl Matthias

Author
since Jun 12, 2023

Recent posts by Karl Matthias

Think of Docker images as portably packaged applications that are easy to run somewhere else. Building a system to do what you ask is possible, and the portability of the container image would make it easier than with a traditional application, but you'd still have to do all the hard parts yourself. I personally don't believe multi-cloud is a really useful setup (YMMV). In general, most of your investment in a cloud is in building your own tooling to run on it, even if that is just Terraform configs and auth, and in using the best offerings on that provider. If you use more than one, you have more than one set of tools and configs to maintain, and you have to target the lowest common denominator across both clouds. That means you lose out on the biggest benefit of running on the cloud: all the hosted services that are "built-in" and native to the toolset.

If you need failover, I suggest looking at a single provider and finding a multi-region option you like. That could be a containerized option, and in that case, you get that benefit as well.

Don Horrell wrote:Does WSL2 make that easier?
WSL2 is a true Linux VM running under Windows.



It depends on what you want to run. The OP mentioned Windows containers. I believe that they were referring to https://dockerlabs.collabnix.com/intermediate/docker-desktop-for-windows/lab02-switching-to-windows-container.html

That's not the same thing as Linux containers on Windows under WSL2.
Caveat: I have not tried this. My guess is your easiest shot is X over TCP, with the X server running under Windows, and then targeting that from the GUI app inside the container. The tricky bit would be getting all the networking open. X over TCP won't be fast, but in theory it will work. Someone else who has done it may have better advice!
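A minimal sketch of what that might look like (untested; it assumes an X server such as VcXsrv is listening on TCP port 6000 on the Windows host with access control relaxed, that the firewall allows that port, and that host.docker.internal resolves to the host from inside the container; the image name is a placeholder):

    # Point the containerized GUI app at the X server running on the Windows host.
    docker run --rm \
      -e DISPLAY=host.docker.internal:0 \
      my-gui-app-image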
This is a big list of things, actually. Probably easier to concentrate on how to do a good job at it rather than all the things to avoid. We do cover some of that in the book.
I generally recommend that people only do that when necessary. Most of the cloud vendors have their own reliable database services, and that is what I prefer to use. Especially if you really need cross-region DB failover, I suggest getting a provider to do it for you. In other circumstances, running databases in containers on top of a scheduler, in a cloud environment with portable disks, works fairly well.
Hi Stephen, thanks. In general it's best to run as a non-root user; this is discussed in the book. If you build and maintain your own application container images, this is not a heavy lift. It can be more challenging if you are trying to consume upstream containers that were built and designed to run as root.
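As a rough illustration (the UID/GID and image name are placeholders; it assumes the image's files and writable paths are accessible to that user):

    # Run the container as an unprivileged user instead of root.
    docker run --rm --user 1000:1000 my-app-image

If you control the Dockerfile, baking a dedicated user into the image with the USER instruction is the more durable fix.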
Hi Don, it's a very broad topic. In general, the container scheduler offerings from AWS or your cloud provider would be the lowest-effort way to make that happen. Amazon ECS, EKS, or Fargate would be options.
There is discussion about some good practices for building containers and what to think about when containerizing an application: both on the application side and on the container image side. We don't walk through every command available in Dockerfiles because the Docker docs are pretty good on this topic.
Anything can run in a container, and databases are no exception. But containers are in general more ephemeral than a database typically is. So if you intend to run production workloads, you are probably talking about running the database on top of a scheduler like K8s, using permanently allocated storage that will move with the containers/pods as needed. On cloud providers this is pretty reasonable to do, using their own disk storage implementations mapped into K8s. Running databases on K8s in EKS or Google Cloud, for instance, is quite a common practice.
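The single-host Docker analogue of that pattern, as a rough sketch (not a production setup; the volume and container names are placeholders, and on Kubernetes you would use a PersistentVolumeClaim instead):

    # Keep the database files on a named volume so they survive container replacement.
    docker volume create pgdata
    docker run -d --name mydb \
      -e POSTGRES_PASSWORD=change-me \
      -v pgdata:/var/lib/postgresql/data \
      postgres:16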
Windows-native containers will not run on Linux- or Mac-based Docker runtimes. However, Linux-based containers can run on all platforms that support Docker. It is best to match CPU architecture, but even that is not required, depending on how your runtime is configured (e.g. Docker Desktop on macOS can run AMD64-based containers on ARM).
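For example (the image name is a placeholder; Docker Desktop will emulate the foreign architecture, at a performance cost):

    # Explicitly request an amd64 image on an ARM host.
    docker run --rm --platform linux/amd64 my-amd64-only-image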
Using an upstream Docker container image carries risks because you are effectively running something packaged by someone else. However, the same is true of any packages you would install on your system with dpkg or rpm, etc. So you should exercise caution by using images from known providers that are in active use. If your risk profile requires stronger controls, running automated scans on the images, or building and maintaining your own version, are further options. You should also, in general, not run containers as root. It is still possible to break out of a container using exploits, but running as non-root makes it more complicated to do so.
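As one illustration of the scanning option (Trivy is just one example of an open source scanner; the image reference is a placeholder):

    # Scan an image for known CVEs before promoting it.
    trivy image my-registry/my-app:latest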
It does; the subtitle of the book is "Shipping Reliable Containers in Production". Because production stacks commonly run less Docker than they used to, the focus is on how to use Docker to build and set up container images and containerized apps that are ready for production, and how to think about a production environment and its interaction with your containers.
This is a good question, Ashish, and one that many people ask. The critical difference between containers and virtual machines is that containers on the same runtime all share the same Linux kernel. This has both advantages and disadvantages. The easy integration with the host operating system, and the interaction and overlap of containers on the same host, can be an advantage both for performance and for oversubscribing resources. The flip side is that it is not as secure, because the only barrier between containers is the kernel, rather than actual CPU-level enforcement like you get from a virtual machine. Containers also run a higher risk of noisy-neighbor issues than virtual machines do.

Some container runtimes get the best of both worlds, using micro-VMs to run container images rather than sharing the host kernel. We talk about some of this in the book as well.

In general, things running in containers on the same host need to be assumed to be cooperating with each other, from both a resource and a security enforcement standpoint. There are resource utilization and security barriers, but they are just not solid enough to be relied on for running, e.g., workloads that end users configure publicly.
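As a concrete example of those resource barriers (standard docker CLI flags; the image name is a placeholder), you can cap a single container's CPU and memory, but this is still a much softer boundary than a VM:

    # Constrain the container to 1.5 CPUs and 512 MB of memory.
    docker run --rm --cpus 1.5 --memory 512m my-app-image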

Containers are more than just the runtime, though, as we explain in the book. They make portability and containment of all dependencies for your application easy. The ergonomics are very good, and resource utilization can also be excellent. Those and other workflow/tooling improvements are many of the reasons for their success.
There is coverage of some of the cloud provider offerings and of Kubernetes itself. Since the book focuses on the Docker side of things it does not venture into the very depths of the K8s ecosystem, including Istio, Helm, etc. It covers how you would interact with your Docker containers from Kubernetes and from AWS cloud offerings.
The third edition modernizes the book to cover changes over the last few years in the tooling, Kubernetes, the cloud providers, the Docker command line, etc. Sean focused on updates that bring the recommendations in line with more modern container stacks.