posted 8 years ago
Well, I think there are two things in your question. The first is the overhead of spinning up a new service. This has got better with new technology. We have the ability to provision machines automatically, then configure them automatically, and deploy our software automatically. Tooling that has come out of companies handling systems at scale is now readily available to us for distributed monitoring and configuration. And things like Docker are making programmatic 'virtualisation' faster and more cost-effective. All these things help reduce the cost of creating new services, and are why microservices are a plausible option for many organisations now when they weren't in the past.
The other side of this is picking the right sort of comms between services. There are many options, and it doesn't have to be a text-based protocol like JSON over HTTP/REST. You can use binary protocols like Protocol Buffers, or low-latency async messaging, etc., when you have different performance characteristics you want to achieve.
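To make that concrete, here's an illustrative sketch (mine, not part of the original point) of why a binary encoding can be attractive: the same record serialised as JSON versus as a fixed binary layout, using only Python's standard library. The record's fields are hypothetical, and `struct` stands in for a real binary protocol like Protocol Buffers.

```python
import json
import struct

# A hypothetical payload one service might send to another:
# a user id, a sensor reading, and a Unix timestamp.
record = {"user_id": 42, "temp": 21.5, "ts": 1700000000}

# Text-based encoding: JSON is human-readable but verbose.
as_json = json.dumps(record).encode("utf-8")

# Binary encoding: a fixed little-endian layout of
# unsigned int (4 bytes) + double (8) + unsigned long long (8).
as_binary = struct.pack("<IdQ", record["user_id"], record["temp"], record["ts"])

print(len(as_json), len(as_binary))  # the binary form is noticeably smaller
```

Real binary protocols add schemas, versioning, and cross-language support on top of this basic idea, which is why you'd reach for something like Protocol Buffers rather than hand-rolling `struct` layouts between services.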
Hope that helps!