Please don't get mad - I'll paste my pitch here; it's easier to code than to advertise.
Took me a long time to put everything together - and here we go:
A year and something ago I posted a topic to discuss the Datalator - a fast RIA tool and hosting environment. The technology was poorly described, the future was hazy, and feasibility was the main question. I'll try to reintroduce the thing.
There is an idea to browse content (an internet browser) and there is an idea to manipulate dynamic data (a data-manipu-lator). Google has addressed the feasibility question with Google Data. A multiuser spreadsheet is easy to employ - for simple data models. A database is good for pretty much any data and business logic - but it is not easy to implement. Here comes the idea to marry the simplicity of a spreadsheet to the power of a database. We tie them together with VLSI software components that contain data. The front end of such a component (view + controller) resides on the client side, and the component's back end is contained on the server - in the database. There is a full-duplex communication protocol between the front-end and server-side parts of a component. A component is also aware of its neighbor components - they talk to each other and adjust their behavior, having some kind of social life :-). A component is flexible enough to represent an SQL table in a complex schema with full CRUD functionality and to provide an API for defining custom business logic. Now let's make it cheap and easy! Here is how:
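To make the component idea concrete, here is a minimal sketch in Java. All the names (DataComponent, TableComponent, onNeighborChanged) are mine, invented for illustration - this is not the actual Datalator API, just one possible shape of a client-side component that fronts an SQL table with CRUD and reacts to its neighbors:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: a client-side component facade over a server-side table.
interface DataComponent {
    Map<String, Object> read(int rowId);             // CRUD: read one row
    int create(Map<String, Object> values);          // CRUD: insert, returns row id
    void update(int rowId, Map<String, Object> v);   // CRUD: update
    void delete(int rowId);                          // CRUD: delete
    void onNeighborChanged(DataComponent neighbor);  // "social life": react to peers
}

// Minimal in-memory stand-in for the server-side table, for illustration only.
class TableComponent implements DataComponent {
    private final Map<Integer, Map<String, Object>> rows = new HashMap<>();
    private int nextId = 1;
    final List<String> events = new ArrayList<>();   // records neighbor notifications

    public Map<String, Object> read(int rowId) { return rows.get(rowId); }

    public int create(Map<String, Object> values) {
        int id = nextId++;
        rows.put(id, new HashMap<>(values));         // copy, so callers can't mutate us
        return id;
    }

    public void update(int rowId, Map<String, Object> v) { rows.get(rowId).putAll(v); }

    public void delete(int rowId) { rows.remove(rowId); }

    public void onNeighborChanged(DataComponent neighbor) {
        events.add("neighbor changed");              // a real component adjusts behavior here
    }
}
```

In the real thing the table lives in the database and the protocol carries the CRUD calls to the server; the point of the sketch is only the shape of the contract.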
There is a tool to visually build such a component - complete with a model and a view. There is a simple gesture (mouse/keyboard) language to define relations between components, to navigate between them, and to build complex queries. There is a SOAP-like protocol to communicate the development and usage of components to the server. The server translates the user's gesture-requests into sets of SQL statements and serves them to the underlying database. And there is no way to introduce a bug into the code - there is no code so far; we'll get the chance later, when trying to go beyond simple CRUD. There is nothing to learn so far - all the complexities are encapsulated into that kind of Visual Easiek.
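The gesture-to-SQL step can be pictured like this - a toy translator, entirely my own invention, assuming a filter gesture carries a table and a column name (a real server would validate identifiers against the known schema before building any SQL):

```java
import java.util.Collections;
import java.util.List;

// Hypothetical sketch of the server-side translation step: a component
// request (names are illustrative) becomes a parameterized SQL statement.
class GestureTranslator {
    // A "filter this column" gesture becomes a parameterized SELECT;
    // the placeholder keeps user-typed values out of the SQL text.
    static String toFilterSql(String table, String column) {
        return "SELECT * FROM " + table + " WHERE " + column + " = ?";
    }

    // A "new row" gesture becomes a parameterized INSERT over the known columns.
    static String toInsertSql(String table, List<String> columns) {
        String cols = String.join(", ", columns);
        String params = String.join(", ", Collections.nCopies(columns.size(), "?"));
        return "INSERT INTO " + table + " (" + cols + ") VALUES (" + params + ")";
    }
}
```

In JDBC terms the output would then be handed to a `PreparedStatement`, with the gesture's values bound to the placeholders.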
A Datalator Application is a structure of visually defined components with data contained in an SQL database. The application data are as ACID as the employed database. The client is responsible for supporting a self-healing feedback channel (long poll - no server sockets on the client side!) to receive server notifications on design and content changes. The implementation of our components defines their lifecycle: the definition is loaded from the server first, then the component is visualized, then the content is loaded from the server, and finally the component waits for UI input. It sorts and searches the content locally. It changes the content both locally and globally; the changes are propagated to all relevant clients in milliseconds.
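The lifecycle above can be sketched as a small state machine - the state names simply mirror my prose, they are not taken from the implementation:

```java
// Hypothetical sketch of the component lifecycle described above.
class ComponentLifecycle {
    enum State { NEW, DEFINITION_LOADED, VISUALIZED, CONTENT_LOADED, WAITING_FOR_INPUT }

    private State state = State.NEW;

    State state() { return state; }

    void loadDefinition() { advance(State.NEW, State.DEFINITION_LOADED); }
    void visualize()      { advance(State.DEFINITION_LOADED, State.VISUALIZED); }
    void loadContent()    { advance(State.VISUALIZED, State.CONTENT_LOADED); }
    void awaitInput()     { advance(State.CONTENT_LOADED, State.WAITING_FOR_INPUT); }

    // Lifecycle steps happen strictly in order; anything else is a bug.
    private void advance(State from, State to) {
        if (state != from) throw new IllegalStateException(state + " -> " + to);
        state = to;
    }
}
```

Once in `WAITING_FOR_INPUT`, the component handles UI events locally and listens on the long-poll channel for design and content changes pushed from the server.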
Here comes an interesting part - the Java API provides access only to the data loaded on the client. A programmer may change the content locally, and the system will take care of synchronizing the data globally. Server-client programming is done exclusively on the client side. The system will take care of keeping the data consistent and transactional. Oh! And the content definition today is: SQL tables, texts, HTMLs, images, and binaries - which suggests we could handle any type of content.
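The programming model - edit locally, let the system propagate - looks roughly like this sketch. Again, the names are invented, and a plain listener stands in for the real server-mediated synchronization:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.BiConsumer;

// Hypothetical sketch: the programmer touches only the locally loaded content;
// a sync hook (standing in for the real propagation machinery) carries the
// change out to other clients.
class LocalContent {
    private final Map<String, String> data = new HashMap<>();
    private final List<BiConsumer<String, String>> syncListeners = new ArrayList<>();

    void onSync(BiConsumer<String, String> listener) { syncListeners.add(listener); }

    String get(String key) { return data.get(key); }

    // A local change; propagation happens automatically, not in user code.
    void put(String key, String value) {
        data.put(key, value);
        for (BiConsumer<String, String> l : syncListeners) l.accept(key, value);
    }
}
```

The point is that user code never opens a connection or sends a message; it only mutates its local view, and the system's channel (here, the listener) does the rest.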
The architecture does not look obvious. It might not even look good. The implementation is definitely not ideal. But it is more than a proof of concept today. There are a few real-life applications up and running on cheap hardware for months without attention - shoot-and-forget style. Where is the fun and joy of programming, you ask? The Datalator dissolves a lot of the routine developer tasks of containing, keeping consistent, presenting, and synchronizing data. You can build much more complex applications in much shorter time, you can present formal business data in game-like interfaces, and you can build multiplayer games in a matter of days, if you want.
There are a few known drawbacks of the architecture and implementation. A pure Java front end denies search engines access to your content. A badly designed schema may force constant reloading of long unstructured tables. From a security standpoint, the application is not ready for the open Internet yet - it is more suitable for LAN/VPN environments. The content encoding algorithm supports only English and binaries(?) so far. And there are no field data on how well it might scale (a few dozen simultaneous clients should not be a problem, though).