Trist Post

Greenhorn
since Jan 09, 2007

Recent posts by Trist Post

I have two questions:

Are there any ORM tools that, where it makes sense, make use of stored procedures?

Is the SQL produced by the best ORM solutions today generally as efficient as (or more efficient than) the SQL produced by the average developer?
Can class loaders be used as sandboxes to separate different parts of an application from each other from a security perspective, or are there better ways to do this?
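To illustrate what I mean by class-loader isolation (a minimal sketch using only the JDK's URLClassLoader; the class name SandboxDemo is just my own example): a loader created with a null parent delegates only to the bootstrap loader, so application classes are invisible to it, while core classes such as java.lang.String still resolve.

```java
import java.net.URL;
import java.net.URLClassLoader;

public class SandboxDemo {
    public static void main(String[] args) throws Exception {
        // A class loader with no URLs and a null parent: it delegates only
        // to the bootstrap loader, not to the application class path.
        try (URLClassLoader sandbox = new URLClassLoader(new URL[0], null)) {
            // Bootstrap classes are still visible...
            Class<?> string = sandbox.loadClass("java.lang.String");
            System.out.println("bootstrap visible: " + string.getName());

            // ...but application classes (like this demo class itself) are not.
            try {
                sandbox.loadClass("SandboxDemo");
                System.out.println("SandboxDemo visible");
            } catch (ClassNotFoundException e) {
                System.out.println("SandboxDemo hidden from sandbox");
            }
        }
    }
}
```

Of course this only hides classes; for real security sandboxing you would also need to restrict what loaded code is permitted to do, not just what it can see.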
9 years ago
I have long been a supporter of using metrics, but to make them useful one needs good collection and visualization tools. Can you comment on which tools you use/propose?
9 years ago

Stephan van Hulst wrote:In general I would say that breaking a system up in smaller chunks makes it more understandable, not less. However, any design methodology can be taken to the extreme. Making classes or services that do "too little work" can definitely impair the reader's understanding of the system. It's key to identify parts of the system that naturally lend themselves for modularization. Check out this cool article by Martin Fowler: http://martinfowler.com/bliki/MonolithFirst.html

In the article, Martin Fowler explains that it's costly to set up microservices from scratch. This is the primary disadvantage of this architecture style, I think.



That is a very good article and I agree that "divide and conquer" generally is a good idea when building systems (and can also make maintenance easier).

With some common sense the right granularity (i.e. building micro rather than, say, nano services) should be achievable, but what about the potential availability problems? Building very granular distributed systems already poses many new challenges and requires each part to have higher availability than a single monolithic system would need. Building a system from possibly hundreds (or more) of services takes this to the next level...
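To put a number on this (a back-of-the-envelope sketch with made-up figures, assuming the simplest case where every service sits on the request path and fails independently): the overall availability is the product of the parts, so it drops quickly as the number of services grows.

```java
public class AvailabilityMath {
    public static void main(String[] args) {
        int services = 100;          // hypothetical number of services in the call path
        double perService = 0.999;   // 99.9% availability for each service

        // If all services must be up and they fail independently,
        // overall availability is the product of the individual ones.
        double overall = Math.pow(perService, services);
        System.out.printf("overall availability: %.4f%n", overall);

        // Conversely: the per-service availability needed to keep
        // the whole 100-service system at 99.9%.
        double needed = Math.pow(0.999, 1.0 / services);
        System.out.printf("needed per service:   %.5f%n", needed);
    }
}
```

So 100 services at 99.9% each give only about 90.5% overall, and conversely each part would need roughly 99.999% ("five nines") to keep the whole at 99.9%. Real systems are of course better than this worst case, since not every request touches every service and redundancy masks individual failures.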

Deploying in the cloud makes this problem somewhat easier to solve, since there is very good support (at least in AWS) for configuring at the infrastructure level that several (or even an elastic number of) instances should be created and that failed instances should be automatically restarted, making it easier to build services with very high availability. But when running in your own data centre you have to write scripts or use frameworks such as ZooKeeper (which you must configure and manage) to do these things, and many IT organisations lack the needed knowledge/experience of this today...

A similar concern applies in a way to scalability - if the services are so small that many different ones are deployed together in the same container, it becomes more complex to scale. Let's say that microservice A (which is deployed together with B, C, etc.) starts to get a lot of traffic - if scaling at the infrastructure level (as mentioned, easy to do in for instance AWS) one would then create more identical containers (server instances) and in effect also create more copies of services B, C, etc. (which are co-deployed with A) even though this is not strictly necessary. This is in contrast to how it works if services are kept large enough to be deployed on their own server instances, where scaling becomes more straightforward.

Once again, cloud providers may (perhaps already do?) offer advanced containers that can dynamically create additional containers containing only the "hot" microservices as needed, but in your own data centre this will again require "advanced plumbing" that is not trivial to set up and manage.

This all makes me feel that a good cloud service is, if not a prerequisite, at least a big advantage when using microservices optimally!

This thesis is supported by the fact that Microsoft's new microservice-based architecture is heavily promoted towards Azure...
9 years ago
I see many advantages with microservice architectures but also some possible pitfalls...

When object-oriented design became popular it was not uncommon to see people creating "too small" classes and methods, making it very hard to understand (and debug!) the systems (when each piece of code becomes too small you need to keep too many classes and methods in your head at the same time to understand what is going on).

Back in the days when CPUs were not as powerful as today, the large number of "virtual" method invocations that followed from this kind of "over-engineering" also created problems in performance-critical code.

What are the risks of the same things happening with microservices - i.e. can readability or performance become problems?

Can availability also become problematic when building large systems from huge numbers of small services - i.e. since the total availability of the system is no better than the aggregated availability of all the services together?

If you agree that these problems may occur, does the book propose how to avoid them (by, for instance, showing how to find the right granularity and how to use redundancy etc. to build microservices that are sufficiently highly available)?

9 years ago