Cloud Native DevOps with Kubernetes: is Kubernetes overly complex?

 
Ranch Hand
Posts: 185
My head spins every time I try to grasp all the concepts that Kubernetes introduces. Is it too complex for its own good?
 
Greenhorn
Posts: 7

Salil Wadnerkar wrote:My head spins every time I try to grasp all the concepts that Kubernetes introduces. Is it too complex for its own good?



Great question! I'm happy to admit here and now that I feel exactly the same, and that's one of the things that prompted us to write the book. We wanted something that explains what problem Kubernetes is trying to solve, and how you would actually use it in your job to solve that problem.

Accordingly, we spend a chapter or two exploring the problems of managing hardware and software infrastructure, deployment, and so on, and look at the various ways that have traditionally been used to tackle them. We outline the shortcomings of those solutions, and describe some of the ways in which we think a Kubernetes-based solution is better.

It's hard to understand a complex tool unless you have a really firm grounding in the basic concepts it uses. For example, containers. What even is a container? How do you make one? What do you do with it? So we devote a couple of chapters to getting you up and running with containers, building one on your laptop, running it, pushing an image to Docker Hub, and generally getting comfortable with the ecosystem. We then introduce Kubernetes, show you how to install it on your machine, start up a cluster, and deploy your container to it in a variety of ways.
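To make that last step concrete, here is a rough sketch (not taken from the book; the names and image are placeholders) of the kind of minimal Deployment manifest you would apply with `kubectl apply -f deploy.yaml` once your image is on Docker Hub:

```yaml
# Hypothetical example: hello-demo and yourname/hello:1.0 are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-demo
  template:
    metadata:
      labels:
        app: hello-demo
    spec:
      containers:
        - name: hello
          image: yourname/hello:1.0   # the image you pushed to Docker Hub
          ports:
            - containerPort: 8080
```

Kubernetes then keeps two replicas of that container running for you, which is the core idea the early chapters build on.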

The rest of the book then develops this further, helping you figure out how to choose a hosted Kubernetes service, or what to do if that choice isn't available. We look at how to size and scale a cluster for your particular situation, and how to do the everyday stuff like maintenance, backups, care and feeding. We go deep into security, configuration, operators, stateful sets, metrics, observability, and all of that great stuff, but it all relies on this foundation of understanding what's actually going on inside Kubernetes and what problem it's solving.

To answer your specific question, is Kubernetes too complex for its own good: yes, probably. The central idea of Kubernetes is actually super simple, and we explore it in the book in some detail to make sure everybody's completely happy with it. But we aim to go beyond the introductory tutorial and tackle the hard stuff too. Kubernetes is doing a complex job in a complex environment, so some of that complexity is irreducible. We find that a great way to make it more understandable is to actually use it, so the book has hands-on exercises all the way through to make sure that you get some solid experience of everything you'll need in order to run Kubernetes in production.

End of commercial
 
Tim Holloway
Saloon Keeper
Posts: 28408
John Arundel wrote:
To answer your specific question, is Kubernetes too complex for its own good: yes, probably. The central idea of Kubernetes is actually super simple



That's the way I feel. The easiest way to run Kubernetes seems to be to find a cloud provider that has Kubernetes baked in and use them. Which doesn't please me, since I run an R&D farm locally and I don't want to fight the rather tedious processes it seems to take to get Kubernetes running in anything more complex than single-instance Minikube, which is a self-contained VM instance.

Kubernetes - as I perceive it - consists of two primary parts: the control plane, with its associated user interfaces, and the management part (the kubelet), which allows Kubernetes to manage containers on a given VM instance. In an ideal world, I should be able to realize one or both of these functions as drop-in general-purpose container instances. It's true that the management container would have to have extraordinary privileges, since it has to be able to drive the actual Docker (or other container runtime) API for the VM itself, extending outside of its box. But that's doable.

So in theory, I could just drop in Docker instances with some relatively minor configuration and go, but the few attempts I've seen so far have left me screaming. And I really don't want to have to build Kubernetes from source on a production machine; having build tools on a production server got me hit with one of my worst system attacks ever a few years back. I'd even settle for an OS installer package, but as far as I know, CentOS 7 doesn't have one.
 
Greenhorn
Posts: 10
I think one problem with k8s adoption right now is the proliferation of tooling. It will take some time for standards and opinions to emerge in the community before that gets better. For example, I can think of at least four separate tools for tailing logs across pods, and at least as many for running a local cluster (minikube, kind, microk8s, k3s, probably others). I don't think there should be fewer options, or that people should be discouraged from making new tooling, but oftentimes folks forget that it generates a lot of noise in the system, and it's not always obvious what problems the new tooling solves or why it is different from the tools already out there.
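To give one concrete example of those local-cluster tools (a sketch only; the node layout here is arbitrary, not a recommendation): kind will build a multi-node local cluster from a small config file, started with `kind create cluster --config kind.yaml`:

```yaml
# Sketch of a kind cluster config: one control-plane node, two workers.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
```

minikube, microk8s, and k3s each have their own equivalent knobs, which is exactly the tooling proliferation I mean.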

Tim is correct that the easiest way to get a full-blown cluster with all the bells and whistles like HA, RBAC, storage, overlay networking, ingress, etc. is probably running it on a cloud provider. There are plenty of tools for running a tiny local k8s cluster, as I mention above, but depending on your use case that may not be good enough. Tools like kubespray, kops, and kubeadm can make it easier to spin up a more production-ready k8s cluster yourself, but that's another area where there are lots of tools out there, probably more to come, and it's by no means an easy on-ramp.
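For what it's worth, the kubeadm path is roughly this shape (a heavily simplified sketch; the CIDR, token, and hash are placeholders, and real installs also need a container runtime and a network plugin set up first):

```
# On the control-plane node:
kubeadm init --pod-network-cidr=10.244.0.0/16

# Then, on each worker node, using the token and hash printed by init:
kubeadm join <control-plane-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```

Even this "easy" path leaves you owning upgrades, backups, and HA yourself, which is why the cloud-provider route keeps winning.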

I heard Joe Beda talk once about the four main areas of work around k8s. I won't explain it as well as he did at the time, but it was something like: there are Cluster-Ops, Cluster-Devs, App-Ops, and App-Devs. Cluster-Ops' primary focus is keeping clusters running. Cluster-Devs are concerned with developing apps and tooling for managing or using the cluster (like building an Operator that posts to Slack, or making a CLI tool for tailing logs from pods). App-Ops are folks who work to help move code from the App-Devs to a cluster, so gluing together the CI/CD pipelines, writing Helm charts, etc. App-Devs are the software developers building applications for the business/organization.

One thing k8s does is help define and clarify those boundaries and hand-offs between those roles. So ideally, someone in the App-Dev realm shouldn't need to be too concerned with how a k8s cluster gets provisioned or updated. They should be able to push a branch to source control and have a CI/CD pipeline handle testing their new code and automatically deploying it to some pre-production environment. There is a lot of work involved to get all of that set up and frictionless, but in a large, distributed organization I think it's necessary in order to scale. In a smaller team there will be fewer people and more overlap in those roles, and something like a full-blown HA k8s CI/CD pipeline will likely feel overly complex and clunky.
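As a sketch of what that App-Dev-facing hand-off can look like (GitHub Actions syntax used purely for illustration; the registry, image name, and deployment are placeholders, and it assumes cluster credentials are already configured on the runner):

```yaml
# Hypothetical push-to-preprod pipeline; all names are placeholders.
name: deploy-preprod
on:
  push:
    branches: [main]
jobs:
  build-test-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make test                 # the App-Dev owns the tests
      - run: docker build -t registry.example.com/app:${{ github.sha }} .
      - run: docker push registry.example.com/app:${{ github.sha }}
      # Roll the pre-production Deployment forward to the new image:
      - run: kubectl set image deployment/app app=registry.example.com/app:${{ github.sha }}
```

Everything below the `kubectl` line is App-Ops/Cluster-Ops territory; the App-Dev just pushes a branch.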
 
Tim Holloway
Saloon Keeper
I'm having flashbacks to my mainframe days.

Certain widely-used mainframe products were originally created when memory was precious and coding was done in assembly language. As a result, they weren't very friendly. Error messages were terse (IEC141I - EROPT=ABE OR AN INVALID CODE - translation: either an attempt was made to read a record with the wrong length or the tape drive was on fire). Utilities were primitive (try dumping a DB2 database to SQL using only native utilities). And yet they continued to pile on bells and whistles while the basement was propped up only by 2x4s. Like a multi-tier wedding cake with a clay bottom.

It's all very well to have fancy reporting and instrumentation and other tools, but if it takes more work to get the core services running than it would to prep and provision an entire OS, that really curbs one's enthusiasm.

It doesn't have to be that way. The OpenStack cloud is an immensely complex system, comprising many sophisticated products that all have to be installed and configured across multiple systems. But they have tools that make the process of setting up a private cloud hardly more painful than installing Microsoft Windows™. I wish that were true of Kubernetes.
 
Don't get me started about those stupid light bulbs.