Run faster, develop faster, and spend less on hardware.
Of all of the possible things to write about, I find this the hardest. The industry is changing at a rapid clip. There is a lot of convergence and a new dawn for software development. The number of devices that developers have to support is tripling: smartphones, glasses, virtual servers. What I want to describe is a way to drastically speed up development time, reduce complexity, and reduce hardware costs. But first, let's talk a little about the trends in the industry.
The idea of an application server is becoming a thing of the past. Today most server-side developers develop services, not applications. This is the trend. The new web is no longer just a servlet engine, a database, and some JSP/HTML/CSS. Today applications range from mobile apps to rich HTML 5 clients, and the presentation logic is expected to live in the client. Users expect, and perhaps demand, a rich user experience. HTML 5 promises and delivers a very rich environment for writing applications. Companies that embrace this will deliver user-centric GUIs and be more successful than companies that do not.
THE RISE OF NoSQL
The rise of NoSQL is often framed as a trade-off between the data safety of a relational database and scale. The emphasis is on horizontally scaling to potentially millions of clients' data and not forcing application data into a relational model. NoSQL, although originally built to support horizontal scaling, has found a home in the hearts of developers who just want to rapidly develop and iterate on their applications without the hassle of constant schema migration. Schema migration is a difficult process to manage and has historically slowed development to a crawl. While NoSQL's claim to fame might be horizontal scaling, a larger selling point has been the more dynamic schema. This has driven NoSQL from massively scalable systems down to department-level applications that will never use the horizontal scaling features. It just works. It is easier than dragging a schema along, and it means fewer DBAs, less Ops, and less trouble.
THE RISE OF REST SERVICES
In days gone by, SOA was a way to break up an application into reusable services. Against this backdrop, we see the rise of REST development with Java. More people are writing services and using service-oriented development, and fewer people are talking about it. SOAP and XML are used less, while REST and JSON are used more. The days of SOA and belly-button lint inspection are gone; the days of writing services have just begun. Service-oriented development is a foregone conclusion. It has become synonymous with software development.
In the era of HTML 5 and mobile applications, the weight of the presentation logic has shifted back to the client, and the service-oriented approach has been reborn and repurposed. HTML 5 apps and mobile apps call REST services. REST, along with JSON, has become the conduit of communication for mobile applications. REST with JSON is the lingua franca of the web. If you are doing REST, you are five times more likely to write that REST service in Java than in any other language.
WEBSOCKETS, THE NEW COMMUNICATION BACKBONE
WebSockets are showing up in more places as well. WebSocket is the next-generation way to develop services for mobile and HTML 5 applications. WebSocket came out of the HTML 5 effort and provides fast bi-directional communication without the per-message latency of the HTTP request/response cycle that REST relies on. HTML 5 development has become synonymous with WebSocket and IndexedDB; WebSocket is just baked in. Like REST, Java will dominate this space as well.
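To give a flavor of what this looks like in Java, here is a minimal sketch of an echo service using the standard Java WebSocket annotations (JSR 356). The /echo path and class name are made up for illustration; once the socket is open, messages flow both ways over a single connection, with no fresh HTTP request/response round trip per message.

    import javax.websocket.OnMessage;
    import javax.websocket.server.ServerEndpoint;

    // Hypothetical endpoint: the container manages the socket lifecycle;
    // we only declare what to do when a text message arrives.
    @ServerEndpoint("/echo")
    public class EchoEndpoint {

        @OnMessage
        public String onMessage(String message) {
            return message; // the return value is sent back to the client
        }
    }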
IN-MEMORY DATA, THE GOLDEN GOOSE
To handle load and develop more interactive applications, there has been a trend towards non-blocking systems that use the principles of mechanical sympathy: writing code that takes effective advantage of the hardware's multiple cores. Instead of spending millions on hardware and software that scale to tens of thousands of transactions per second, teams have developed software that scales to millions of transactions per second on commodity hardware that costs thousands.
From the LMAX Disruptor to Workday, companies are finding that in-memory data is the fastest way to develop, deliver, and scale modern applications. The basic idea is that service requests go through a journal and are replicated before the service is called. The data in memory is the actual operational data. Storage and replication become background tasks that run in parallel with the service as much as possible. Storage is simply crash recovery; the in-memory data is the real data. Unlike the NoSQL model, your objects are your data and there is no database per se. Combining logic and data has another name: object-oriented development.
In this model, disk is like tape backup, only a million times faster than tape backup, and memory replaces the disk I/O and network I/O to the database, again a million times faster. As you might imagine, systems built this way are very fast.
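A minimal sketch of the journaling idea in plain Java may help. Everything here (the class name, the CSV command format, a single-node file journal standing in for real replication) is made up for illustration; the point is that the write path appends to a journal first, the operational data stays in memory, and the journal is only read back for crash recovery.

    import java.io.BufferedWriter;
    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;
    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical in-memory account service whose only use of disk is an
    // append-only journal used for crash recovery.
    public class InMemoryAccountService {

        private final Map<String, Long> balances = new HashMap<>(); // operational data lives in memory
        private final BufferedWriter journal;                       // append-only command log

        public InMemoryAccountService(Path journalFile) throws IOException {
            // Crash recovery: replay the journal to rebuild the in-memory state.
            if (Files.exists(journalFile)) {
                for (String line : Files.readAllLines(journalFile, StandardCharsets.UTF_8)) {
                    apply(line);
                }
            }
            journal = Files.newBufferedWriter(journalFile, StandardCharsets.UTF_8,
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        }

        public void deposit(String account, long amount) throws IOException {
            String command = "deposit," + account + "," + amount;
            journal.write(command);   // 1. journal the request (replication would also happen here)
            journal.newLine();
            journal.flush();
            apply(command);           // 2. then mutate the in-memory operational data
        }

        public long balance(String account) {
            return balances.getOrDefault(account, 0L); // reads never touch the disk
        }

        private void apply(String command) {
            String[] parts = command.split(",");
            if ("deposit".equals(parts[0])) {
                balances.merge(parts[1], Long.parseLong(parts[2]), Long::sum);
            }
        }
    }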
This model lets developers focus on writing code and worry far less about persistence and mapping. It is the next logical step in the NoSQL trend: beyond NoSQL to no database, or rather no databases in the operational path. Services own their operational data. Think: no more mapping, no more cache coherency issues, no more schema migrations.
This is not to say you don't have databases; you just don't need databases to ensure that your operational data is safe. You can use databases for what they were meant for: reporting and offline analytics. The database no longer needs to be in the operational path, and you no longer have to abide by the anti-pattern of using your database for synchronization, which turns the database into a performance choke point.
This approach allows faster development because no database mapping or schema migration is required. You get the same data safety as you would from NoSQL or an RDBMS, perhaps even more, since the cost of data safety is lower. Traditional architectures usually require a lot of caching, which means dealing with cache coherency issues; this approach avoids that by letting the services own the operational data. It allows companies to rapidly iterate, arrive at their minimum viable application, and focus on providing an awesome user experience rather than spending millions on infrastructure and slowing development to a crawl with schema migrations, cache coherency issues, and the like. It lets companies adopt the lean startup philosophy through simpler, more rapid iterations. As far as scalability goes, the same hardware can handle 10x to 100x the number of requests, so there is less scaling out to manage. Do more with less.
A SERVICE ENGINE READY FOR THE MASSES!
Well, what about the programming model? Is this within reach of the everyday developer? How can I use this approach?
Enter, stage left: Makai (which is our code name). Makai has its DNA in JAX-RS, EJB, Spring, and the like. It is designed around the way Java developers write services and provides the benefits of this new model in a programming model that is familiar and friendly. Instead of learning a new programming model or language, you program in Java.
More to come...
Bill Digman is a Java EE / Servlet enthusiast and Open Source enthusiast who loves working with Caucho's Resin Servlet Container, a Java EE Web Profile Servlet Container.
Resin has supported caching, session replication (another form of caching), and HTTP proxy caching in cluster environments for over ten years. When you use Resin caching, you are using a platform with the speed and scalability of custom services written in C, like Nginx, combined with the usability of Java and the industry-standard Java EE platform. JCache (JSR 107) is a distributed cache with an interface similar to the HashMap you know and love; to be more specific, the Cache object in JCache looks like a java.util.ConcurrentHashMap. In addition, JCache defines integration with CDI (as well as Spring and Guice): you can decorate services with interceptors that apply caching just by adding annotations.
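For instance, the annotation style might look something like the following sketch. It uses the standard javax.cache.annotation API from JSR 107; the QuoteService class and the quoteCache name are made up, and the exact annotations supported by a given Resin version may differ from the final spec.

    import javax.cache.annotation.CacheResult;

    // Hypothetical CDI bean: the caching interceptor stores the return value
    // keyed by the method arguments, so repeat calls with the same symbol
    // skip the expensive lookup.
    public class QuoteService {

        @CacheResult(cacheName = "quoteCache")
        public String lookupQuote(String symbol) {
            return expensiveRemoteLookup(symbol);
        }

        private String expensiveRemoteLookup(String symbol) {
            // Stand-in for a slow back-end call.
            return "Quote for " + symbol;
        }
    }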
Resin 4 has support for JCache, and JCache is slated to be part of Java EE 7.
Let's look at a small example to see how easy it is to get started with JCache.
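Here is a sketch of what that first example might look like, written against the standard JSR 107 API (Resin 4 tracked a draft of the spec, so its exact method names may differ; the helloCache name, the HelloService class, and the message format are made up for illustration). Note how the Cache is used much like a ConcurrentHashMap.

    import javax.cache.Cache;
    import javax.cache.CacheManager;
    import javax.cache.Caching;
    import javax.cache.configuration.MutableConfiguration;
    import java.util.Date;

    public class HelloService {

        // Looks up (or creates) the cache; expiry settings will go here later.
        private Cache<String, String> cache() {
            CacheManager manager = Caching.getCachingProvider().getCacheManager();
            Cache<String, String> cache = manager.getCache("helloCache", String.class, String.class);
            if (cache == null) {
                MutableConfiguration<String, String> config =
                        new MutableConfiguration<String, String>().setTypes(String.class, String.class);
                cache = manager.createCache("helloCache", config);
            }
            return cache;
        }

        public String helloMessage() {
            Cache<String, String> cache = cache();
            String message = cache.get("hello");
            if (message == null) {
                // Only regenerated when the entry is missing (or has expired).
                message = "Hello World! generated at " + new Date();
                cache.put("hello", message);
            }
            return message;
        }
    }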
The above works out fairly well, but what if we want to periodically change the helloMessage? Let's say we get 2,000 requests a second, but every 10 seconds or so we would like to regenerate the helloMessage.
The message might be:
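With the hypothetical helloMessage() sketch above, it would look something like this (the timestamp is illustrative):

    Hello World! generated at Tue May 28 10:15:02 PDT 2013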
Later we would want it to change.
If we wanted it to change every 10 seconds after it was last accessed, we would do this:
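In standard JSR 107 terms, that is an AccessedExpiryPolicy: the entry expires 10 seconds after it was last read. The fragment below is a sketch that would replace the configuration in the hypothetical cache() method above (again, Resin's draft-based API may spell this differently).

    import javax.cache.configuration.MutableConfiguration;
    import javax.cache.expiry.AccessedExpiryPolicy;
    import javax.cache.expiry.Duration;
    import java.util.concurrent.TimeUnit;

    MutableConfiguration<String, String> config = new MutableConfiguration<String, String>()
            .setTypes(String.class, String.class)
            // expire each entry 10 seconds after it was last read
            .setExpiryPolicyFactory(AccessedExpiryPolicy.factoryOf(new Duration(TimeUnit.SECONDS, 10)));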
For this example, we want to change it every 10 seconds after it was last modified. We would set up the timeout when the cache is created, as follows:
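In JSR 107 terms that is a ModifiedExpiryPolicy, which restarts the clock whenever the entry is created or updated rather than whenever it is read. As before, this is a sketch against the final spec and slots into the same configuration:

    import javax.cache.configuration.MutableConfiguration;
    import javax.cache.expiry.Duration;
    import javax.cache.expiry.ModifiedExpiryPolicy;
    import java.util.concurrent.TimeUnit;

    MutableConfiguration<String, String> config = new MutableConfiguration<String, String>()
            .setTypes(String.class, String.class)
            // expire each entry 10 seconds after it was created or last updated
            .setExpiryPolicyFactory(ModifiedExpiryPolicy.factoryOf(new Duration(TimeUnit.SECONDS, 10)));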
This would go right in the cache method we defined earlier.
Resin's JCache implementation is built on top of Resin's distributed cache architecture, so you get replication and data redundancy built in.