
Microservices architectures

 
Bartender
Posts: 1146
38
IBM DB2 Netbeans IDE Spring Java
Hi there,

I hear more and more about microservices architectures, but the whole concept seems a bit cloudy to me. I can't form a precise idea of what they are and, beyond understanding that they rely heavily upon REST services, I can't figure out how a microservices architecture may be built in practice.
Can you give me some hints and/or a starting point?
 
Author
Posts: 42
1
Eclipse IDE Spring Java
Hi Claude,

This presentation by Arun Gupta seems very practical to me: Refactor your Java EE application using Microservices and Containers

I think it can be a good starting point.
 
Sheriff
Posts: 13510
223
Mac Android IntelliJ IDE Eclipse IDE Spring Debian Java Ubuntu Linux
Speaking of refactoring, you might want to see what the man (Martin Fowler) has to say about building microservices from scratch: http://martinfowler.com/bliki/MonolithFirst.html

Here's more of what Fowler has written about microservices: http://martinfowler.com/articles/microservices.html
 
Claude Moore
Bartender
Posts: 1146
38
IBM DB2 Netbeans IDE Spring Java
Junilu, Esteban,

thanks for your replies. I've seen Arun Gupta's talk on YouTube, and read Martin Fowler's paper on the topic. The idea behind microservices looks interesting and promising, but I still have a lot of doubts about it.

If I understood correctly, microservices are basically meant to build self-contained applications - or, better, services - following the Single Responsibility Principle: i.e., you develop a service which does only one thing, but does it in a modular fashion. Each microservice includes the whole execution environment and the libraries it needs to work and to interact with other services: so, for example, it may embed Jersey for handling HTTP requests, or some JDBC driver, or any third-party library it needs. Moreover, you need some facility, like Docker, to easily manage the deployment of such services.
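As an illustration of "the service carries its own HTTP stack", here is a minimal sketch using the JDK's built-in com.sun.net.httpserver in place of an embedded Jersey; the "inventory" service, its endpoint, and the hard-coded JSON payload are all invented for the example:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Hypothetical "inventory" microservice: one responsibility, its own
// embedded HTTP stack, no appserver required.
public class InventoryService {

    public static HttpServer start(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/stock", exchange -> {
            // A fixed JSON payload stands in for a real data-access layer.
            byte[] body = "{\"item\":\"widget\",\"count\":42}"
                    .getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }

    public static void main(String[] args) throws Exception {
        start(8080);
        System.out.println("inventory service listening on 8080");
    }
}
```

The whole service is one process with one endpoint; deployment is then just a matter of packaging this JVM process (e.g. into a Docker image).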

However, if I build an application as a composition of microservices, I think I will quickly need some medium to orchestrate and manage them and let them work together. Isn't this what an appserver already does? I'm a bit afraid that when a project based upon microservices grows beyond a certain size, the complexity needed to keep things coherent would be greater than the complexity introduced by an appserver (or a cluster of appservers, which would also guarantee, for example, the right scalability).

Another aspect that seems potentially problematic to me is the way microservices will operate. A key factor in communication seems to be the adoption of JSON as a lingua franca, basically over REST calls. That's OK, but what about network overhead and latency? For years we have been told to avoid remoting as much as possible (speaking about EJBs, for example), and now we're going to build an architecture which relies heavily on socket-based interoperability. That's a bit weird.

And finally, a lot of microservices principles seem very similar to the OSGi way of creating applications as a composition of bundles.

What do you think?
 
Bartender
Posts: 20836
125
Android Eclipse IDE Tomcat Server Redhat Java Linux
I hadn't gotten into the latest docs on the topic, but in the abstract, I figured it was just the old Unix "Do One Thing and Do It Well" concept. The idea being that instead of writing and maintaining lots of one-off apps for each task, you could string together simple functions, and the customization was mostly in the glue.

It has worked fairly well over the years, although it's ironic that, in the Linux world, there's a whole lot of screaming going on because several "simple systems" (service dependencies, logging, etc.) have recently been conflated into a single monolithic subsystem (systemd), which is basically the antithesis of that idea. To say nothing of the fact that to some of us, who've been burned on other platforms, the whole idea of binary logs is a Bad Idea. The separation of concerns over a tree of text files, instead of a conglomerated "Windows Registry", and the fact that those files are plain text, and hence amenable to the rich set of Unix/Linux text utilities, has been a major asset over the years.

Counter-architectures aside, it's one thing to be able to string disparate stuff together, another to provide a framework (or frameworks) to contain and manage this stuff. While a single framework might not be adequate, you would like to keep the total number of disparate platforms low, to reduce the learning curve/knowledge base required to keep all the services happy - be it Docker containers, OSGi, or whatever. Keeping protocols standard helps, too. Formats like JSON and YAML are simple enough that you can push through hand-generated requests when necessary (another strike against binary data formats).
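To make the "hand-generated requests" point concrete, here is a sketch in plain Java that assembles a JSON payload by hand and pushes it through an ordinary HTTP connection - no generated stubs, no schema tooling (the endpoint name and payload are made up for the example):

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// JSON is simple enough to assemble as a plain string and POST directly,
// which is exactly why text protocols are easy to debug and script against.
public class HandRolledRequest {

    public static int post(String endpoint, String json) throws Exception {
        HttpURLConnection conn =
                (HttpURLConnection) new URL(endpoint).openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream os = conn.getOutputStream()) {
            os.write(json.getBytes(StandardCharsets.UTF_8));
        }
        return conn.getResponseCode(); // caller decides what to do with the status
    }

    public static void main(String[] args) throws Exception {
        // Illustrative call; the service at this URL is hypothetical.
        int code = post("http://localhost:8080/orders",
                        "{\"item\":\"widget\",\"qty\":3}");
        System.out.println("HTTP " + code);
    }
}
```

Doing the same with a binary format would require the serialization library (and usually the schema) on both ends just to form the request.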

CORBA, EJB and SOAP all attempted to allow apps to be distributed like this, but they all ran into the same shortcoming: overhead. One of the things we learned from this is that remote procedure calls can really gum up performance. Thus, more recent attempts have put more emphasis on non-blocking services such as JMS/MQ and REST.

There's another popular paradigm based on the one-thing-well concept: the Inversion of Control pattern used by things such as the Spring Framework and JavaServer Faces. This has been very successful. The main difference here is that all of the service modules operate in a single framework in a single JVM.
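A hand-rolled sketch of that pattern (all names invented for illustration): the module receives its collaborators from outside rather than constructing or looking them up itself, which is the core of what Spring automates with its container:

```java
// Minimal hand-rolled Inversion of Control: the caller wires the
// dependencies; the service module never constructs or looks them up itself.
interface GreetingSource {
    String greeting();
}

class FixedGreeting implements GreetingSource {
    public String greeting() { return "hello"; }
}

class Greeter {
    private final GreetingSource source; // injected, not instantiated here

    Greeter(GreetingSource source) {
        this.source = source;
    }

    String greet(String name) {
        return source.greeting() + ", " + name;
    }
}

public class WiringDemo {
    public static void main(String[] args) {
        // The "container" is just this wiring code; Spring would do it
        // from configuration instead.
        Greeter greeter = new Greeter(new FixedGreeting());
        System.out.println(greeter.greet("world")); // prints "hello, world"
    }
}
```

Because `Greeter` depends only on an interface, swapping the collaborator (for a test double, say) needs no change to the module itself - the same decoupling microservices pursue, but inside one JVM.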


 
Claude Moore
Bartender
Posts: 1146
38
IBM DB2 Netbeans IDE Spring Java

Tim Holloway wrote:
Thus, more recent attempts have put more emphasis on non-blocking services such as JMS/MQ and REST.



This idea isn't so new. In Enterprise Integration Patterns, for example, practically all the described patterns rely upon a message-based framework, and nowadays an ESB is more or less a sophisticated way to exchange messages among the various parts of the whole design. I think you described very well the idea behind microservices - do only one thing and do it well. What I'm afraid of are the drawbacks of distributed architectures - and microservices are extremely distributed.
 
Tim Holloway
Bartender
Posts: 20836
125
Android Eclipse IDE Tomcat Server Redhat Java Linux
Well, there are two basic ways to message: 1) send/reply and 2) post-forward.

The send/reply way is easier to gum up due to latencies, since it makes the workflow synchronous: you have to wait for the reply before things can proceed. Post-forward has the advantage that you dump off the work and run away, so you don't have to wait.

In many cases, of course, you do want to at least ensure that the posted request has, in fact, made it to someplace that guarantees (eventual) delivery, which is what the MQ systems are for. So that can be a potential latency, but hopefully a much smaller one.

Then again, there are cases where you simply don't care at all. I did a system back around last December which was SNMP-based. If an exceptional condition occurred, it fired off a trap, which is a UDP datagram. If the recipient missed the trap packet, there was no lasting harm, since it was just an early-warning signal and the regular status polling would detect the condition.
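That trap-style notification is pure fire-and-forget, and it's only a few lines in plain Java (the host, port, and message below are hypothetical): the sender pushes a UDP datagram and never waits for, or even expects, a reply:

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

// Fire-and-forget notification over UDP, in the spirit of an SNMP trap:
// no acknowledgement is read, and occasional loss is tolerated by design.
public class TrapSender {

    public static void send(String host, int port, String message) throws Exception {
        byte[] payload = message.getBytes(StandardCharsets.UTF_8);
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.send(new DatagramPacket(payload, payload.length,
                    InetAddress.getByName(host), port));
        } // socket closed immediately; the sender's workflow never blocks
    }

    public static void main(String[] args) throws Exception {
        send("localhost", 9162, "disk almost full"); // illustrative trap
    }
}
```

Contrast this with send/reply, where the same code would have to sit in a `receive()` waiting for the answer before continuing.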
 
Author
Posts: 25
5
I think Martin's post does the best job currently of defining what microservices are, and there is a good degree of consensus in the core definition from those of us who work with them and talk about them.

My own take on this, which avoids talking about technical implementation detail too much (because that will vary from platform to platform) is:

"Independently deployable autonomous services that work together".

IMHO, the single most important characteristic is that a single microservice can be changed, and deployed into production, independently of any other service. If you can do that reliably, it follows that you are probably doing many of the other things I recommend, either in my book or talks (sidebar - my Principles Of Microservices talk might be a good place for more background on this stuff).

Going over the points/questions raised in this thread:

Claude Moore wrote:
> However, if I build an application as a composition of microservices, I think I will quickly need some medium to orchestrate, manage, let them work together. Isn't this what an appserver already does ?



The job of the appserver, in a Java context, is manifold, but does little to help microservice architectures IMHO. It allows you to run multiple 'separate' services in one JVM, allowing for optimisation of resources. It may provide tools to help deployment and lifecycle management of a single service, and may also provide clustering support. When you start looking at the larger problems of service discovery, configuration, monitoring etc., an appserver only solves these problems if you buy into the appserver wholesale - which means you need it across your whole ecosystem. That means locking yourself into a tech choice that is going to be very hard to change. I'd also say that the tooling provided in the Java space has been lacking - JNDI, questionable cluster management technology etc., and given that the state of the art in this space is changing a lot, I'd want some flexibility here personally.

One other thing to consider: appservers are great for managing multiple services within a single JVM. They try to provide isolation between services, but can't always do this effectively. I've had numerous problems with services causing deadlocks in the appserver itself, single services using up all the resources and taking down everything else on the same machine, etc. In practice, most of the microservice shops I see deploy single services as separate processes (so in Java-land, probably using an embedded container). Containers then let you put these separate services onto their own isolated OSes more cost-effectively than normal virtualisation, thereby providing an additional level of isolation to make systems easier to manage, more robust, etc. If they need distributed configuration management or service discovery, they use a dedicated tool for this (Eureka, Ribbon, etcd, Consul); likewise for connection management (Hystrix, Polly), load balancing (mod_proxy), etc.

Claude Moore wrote:
A key factor to communication seems to be adoption of JSON as lingua franca, with basically REST calls. That's ok, but what about network overhead and latency ? For years we have been told to avoid as much as possible remoting (speaking about EJBs for example), and now we're going to build an architecture which relies heavily on socket based interoperability. That's a bit weird.



Although JSON over HTTP is commonly used, it isn't the only mechanism. The use of binary protocols such as Thrift and Protocol Buffers is widespread - these provide some of the benefits of client/server stub generation along with lean payloads, and are much better at handling changes than Java RMI, for example. Also, many services communicate via messaging protocols, which can be binary or textual.

I cover this a lot in chapter 4 of the book - there I focus firstly on the importance of finding the right collaboration style for you (event-based, request/response) then picking a tech that fits that. Binary protocols can be more lean, but can increase coupling. Text-based protocols are great for interoperability and HTTP can scale very well, but won’t be great for low latency.

What is key is that whatever protocol/collaboration style you pick, then pick the tech that best delivers on that given the constraints you have.

Remoting causes problems in a number of ways. Increased latency is one. But you also have to consider many other factors - the CAP theorem comes into play much more, meaning you may have to let go of things like transactions, and remoting increases your surface area of failure too, as you have to plan for each and every network call failing. The fallacies of distributed computing apply in the context of microservices just as they do with any other distributed system!
 
Claude Moore
Bartender
Posts: 1146
38
IBM DB2 Netbeans IDE Spring Java
Thanks Sam for your very detailed answer... if your book is as clear as your answer, well... I think it's really worth reading!
 
Sam Newman
Author
Posts: 25
5
Glad it was useful!
 