Cloud Native DevOps with Kubernetes: Is DevOps in the cloud possible and easy without Kubernetes?

Camilo Cruz
Greenhorn
Posts: 20
The scripting in Docker and other container solutions already enables fast configuration and automation.

Is Kubernetes really needed? What is the advantage of using it? Can I see examples of using Kubernetes in your book?
 
John Arundel
Greenhorn
Posts: 7

Camilo Cruz wrote: Is Kubernetes really needed? What is the advantage of using it?



If, like me, you come from a background of running software on VMs, you could think of Docker as being like the package manager. The Dockerfile specifies what's in the package: language runtime, source code, dependencies, config, and so on.
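To make the "package" analogy concrete, here's a minimal sketch of a Dockerfile; the base image, file names, and run command are all placeholder examples, not anything from the book:

```dockerfile
# The "package contents": language runtime, dependencies, source, default command
FROM python:3.11-slim          # language runtime
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt   # dependencies
COPY . .                       # source code
CMD ["python", "app.py"]       # how to run the package
```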

Kubernetes is the next layer up: it's like the equivalent of Ansible or Puppet. It specifies what things should be running, where they store their data, how users access the service, and so on. So Kubernetes is like your robot sysadmin: it downloads packages and installs them for you, starts services, restarts things if they fail, reports logs and metrics on things that are running, and so on.
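That declarative "what should be running" layer looks roughly like this as a Kubernetes Deployment manifest; the names and image are illustrative placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  replicas: 3               # the "robot sysadmin" keeps three copies running
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: demo
        image: example/demo:1.0   # the "package" to download and install
        ports:
        - containerPort: 8080
```

If a container crashes or a node dies, Kubernetes notices the actual state no longer matches this declared state and starts replacements automatically.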

DevOps legend Kelsey Hightower puts it very well, and we quote him in the book:

“Kubernetes does the things that the very best system administrator would do: automation, failover, centralized logging, monitoring. It takes what we’ve learned in the DevOps community and makes it the default, out of the box.”



But it's not just a bonus for sysadmins; the devs will like it too. This is what we say in the introductory chapter of the book:


Kubernetes greatly reduces the time and effort it takes to deploy. Zero-downtime deployments are common, because Kubernetes does rolling updates by default (starting containers with the new version, waiting until they become healthy, and then shutting down the old ones).
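The rolling-update behavior described there is tunable in the Deployment spec; a sketch with illustrative values:

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most one extra new-version pod during the rollout
      maxUnavailable: 0    # never drop below the desired replica count
```

With `maxUnavailable: 0`, old pods are only shut down after their new-version replacements pass health checks, which is what makes zero-downtime deploys the default behavior.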

Kubernetes also provides facilities to help you implement continuous deployment practices such as canary deployments: gradually rolling out updates one server at a time to catch problems early. Another common practice is blue-green deployments: spinning up a new version of the system in parallel, and switching traffic over to it once it’s fully up and running.

Demand spikes will no longer take down your service, because Kubernetes supports autoscaling. For example, if CPU utilization by a container reaches a certain level, Kubernetes can keep adding new replicas of the container until the utilization falls below the threshold. When demand falls, Kubernetes will scale down the replicas again, freeing up cluster capacity to run other workloads.
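The CPU-driven autoscaling described above corresponds to a HorizontalPodAutoscaler resource; the names and thresholds here are illustrative:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: demo
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: demo
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80   # add replicas while average CPU stays above 80%
```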

Because Kubernetes has redundancy and failover built in, your application will be more reliable and resilient. Some managed services can even scale the Kubernetes cluster itself up and down in response to demand, so that you’re never paying for a larger cluster than you need at any given moment.

The business will love Kubernetes too, because it cuts infrastructure costs and makes much better use of a given set of resources. Traditional servers, even cloud servers, are mostly idle most of the time. The excess capacity that you need to handle demand spikes is essentially wasted under normal conditions. Kubernetes takes that wasted capacity and uses it to run workloads, so you can achieve much higher utilization of your machines—and you get scaling, load balancing, and failover for free too.

While some of these features, such as autoscaling, were available before Kubernetes, they were always tied to a particular cloud provider or service. Kubernetes is provider-agnostic: once you’ve defined the resources you use, you can run them on any Kubernetes cluster, regardless of the underlying cloud provider.

That doesn’t mean that Kubernetes limits you to the lowest common denominator. Kubernetes maps your resources to the appropriate vendor-specific features: for example, a load-balanced Kubernetes service on Google Cloud will create a Google Cloud load balancer, on Amazon it will create an AWS load balancer. Kubernetes abstracts away the cloud-specific details, letting you focus on defining the behavior of your application.
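The provider-agnostic load balancer is a good example of this mapping: the same Service manifest produces a Google Cloud load balancer on GKE and an AWS load balancer on EKS. A sketch, with placeholder names:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo
spec:
  type: LoadBalancer    # the cloud provider provisions its native load balancer
  selector:
    app: demo           # routes traffic to pods with this label
  ports:
  - port: 80
    targetPort: 8080
```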

Just as containers are a portable way of defining software, Kubernetes resources provide a portable definition of how that software should run.

 
Camilo Cruz
Greenhorn
Posts: 20
Thank you!!   very clear explanation!!!
 
Tim Holloway
Saloon Keeper
Posts: 28654

John Arundel wrote:
If, like me, you come from a background of running software on VMs, you could think of Docker as being like the package manager. The Dockerfile specifies what's in the package: language runtime, source code, dependencies, config, and so on.



Er, not really. A Dockerfile is a set of directives used to build and configure a Docker container image. It's not quite like anything else I can think of offhand: it borrows features that exist in other systems, but it isn't a complete implementation of any of them.

Roughly speaking, a Dockerfile contains ownership and version information useful to identify the product being created.

Also, it (usually) references a base image, since Docker images are built from multiple filesystem overlays. That's very space-efficient, since you don't have to replicate the entire core of an OS for every Docker image. As in Git, content hashes let the Docker system uniquely identify each overlay layer.

Then you have the directives that actually build the image on the base. Here, you're effectively running the base image OS and using package build and/or install tools to create and deploy the application(s) within the Docker image.

And then you have the definitions that publish resources such as network ports and mounted external volumes, and that set defaults such as environment variables.

Last, but not least, you define the default execution command that is used to run the image (start its apps) assuming nothing is overridden on the Docker run command.

And I may have forgotten an item or 2, but those are about the most common.
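Putting those directives together, a Dockerfile along these lines covers the categories listed above; every name here is illustrative:

```dockerfile
FROM debian:bookworm-slim            # base image: the bottom filesystem overlay
LABEL maintainer="ops@example.com" \
      version="1.0"                  # ownership/version metadata
RUN apt-get update && \
    apt-get install -y --no-install-recommends myapp && \
    rm -rf /var/lib/apt/lists/*     # build/install step on top of the base
EXPOSE 8080                          # published network port
VOLUME /data                         # mounted external volume
ENV LOG_LEVEL=info                   # a default setting
CMD ["myapp", "--serve"]             # default command, overridable at docker run
```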

The Dockerfile is used to build the container image, which may then be published to a repository. The Dockerfile itself is not published as part of the image, although you can more or less reverse-engineer it from a Docker image. What you have done, then, in effect, is launched a virtual instance of the base OS image, provisioned it, defined some environmental information that Docker can use, then taken a snapshot of the resulting modified OS image.

I use Ansible to provision Docker images for small-scale stuff. Ansible has the virtue that it's pretty much able to handle any modern Linux OS release as a target straight out of the box, and installing an Ansible server is trivial. I've also used Puppet to manage Docker, and it has some advantages, but then you have to have a Puppet client on the target VM; while I use Puppet a LOT, I find Ansible to be better here.

Kubernetes, like Docker Compose, has the ability to go elastic in a very dynamic way, and of the two, Kubernetes has the better control plane. You can use tools like Ansible that way, but Kubernetes is easier to deal with when you're scaling nodes up and down in a large cluster. The only reason I'm using Ansible instead of Kubernetes is that, as I said, Ansible can be set up and used with almost no effort, whereas scaling Kubernetes up beyond Minikube has been nothing but grief. There's no cookbook how-to or pre-built package/image that I've come across so far that made it worth the frustration.
 
Tim Holloway
Saloon Keeper
Posts: 28654
And incidentally, while a Dockerfile will install and possibly build applications for the image, a top-notch Dockerfile will also contain directives to subsequently remove all build tools not needed to run the app, sample files, and other lint. When you're deploying many images, keeping their size small is a virtue. Plus, the fewer extraneous items in the image, the fewer attack points for the Bad Guys.
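The usual way to get that slimming effect is a multi-stage build, so the build toolchain never lands in the final image at all; a sketch with illustrative names:

```dockerfile
# Stage 1: build with the full toolchain
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN go build -o /out/app .

# Stage 2: ship only the compiled artifact on a slim base
FROM debian:bookworm-slim
COPY --from=build /out/app /usr/local/bin/app
CMD ["app"]
```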