Can you tell me if this is an appropriate situation in which to apply Hadoop:
We are a collection of toll road operators. Each of us operates one or more toll roads, and each has its own set of customers. Every customer's "arrangements to pay" information is stored with the toll road operator that owns the customer account. However, we want any customer to be able to use any toll road seamlessly, so that all charges incurred on any toll road end up on the customer's home operator account - a true interoperability scenario. To make this possible, we currently exchange large flat files (several GB each) of "arrangements to pay" data every day.
This "arrangements to pay" data is held in specific database tables within our own tolling systems. Some of these systems are custom built, some are based on SAP, and some on Oracle applications; some use SQL Server and some use Oracle Database.
Is it practical to think that we could build a Hadoop cluster of this "arrangements to pay" data so that any toll road operator could, at any time, query the cluster for the status of a particular customer's "arrangements to pay" - for example, to find out whether the account is active and holds a sufficient balance of funds?
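To make the question concrete, here is a toy Python sketch of the per-charge check we would want the shared cluster to answer. The pipe-delimited record layout and the field names (customer_id, operator, status, balance) are invented purely for illustration; our real interchange files differ.

```python
def load_arrangements(lines):
    """Parse flat-file 'arrangements to pay' records into a lookup table.

    Each record is assumed (hypothetically) to look like:
        customer_id|home_operator|status|balance
    """
    table = {}
    for line in lines:
        customer_id, operator, status, balance = line.strip().split("|")
        table[customer_id] = {
            "operator": operator,
            "status": status,
            "balance": float(balance),
        }
    return table


def can_charge(table, customer_id, amount):
    """True if the customer's account is active and funded for `amount`."""
    record = table.get(customer_id)
    return (
        record is not None
        and record["status"] == "ACTIVE"
        and record["balance"] >= amount
    )


# Example records, as they might appear in a daily interchange file.
records = [
    "CUST001|OperatorA|ACTIVE|42.50",
    "CUST002|OperatorB|SUSPENDED|100.00",
]
table = load_arrangements(records)
print(can_charge(table, "CUST001", 5.00))  # True
print(can_charge(table, "CUST002", 5.00))  # False: account suspended
```

The idea would be for the cluster to serve this kind of lookup directly, so each operator no longer has to reconstruct it locally from the previous day's files.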
I would be very interested to hear your views on how this might be possible.
Regards, Rupert