Hadoop is fundamentally a batch-processing platform. Hadoop jobs (MapReduce, Hive, or Pig) take minutes to hours to complete, depending on the data size.
Next-generation tools such as Spark can be used for fast, in-memory processing.
You can run your batch jobs and then expose the data they write to HDFS as a service, using the HDFS FileSystem Java API to read it back. From a caller's point of view, this would look just like any other service.
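Here is a minimal sketch of that read path. The NameNode URL (`hdfs://namenode:8020`) and the output path in `main` are placeholders you'd replace with your own values:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsReader {

    /** Reads a text file from HDFS and returns its contents as a String. */
    public static String readFile(String pathStr) throws Exception {
        Configuration conf = new Configuration();
        // fs.defaultFS must point at your NameNode (placeholder host/port below)
        conf.set("fs.defaultFS", "hdfs://namenode:8020");

        StringBuilder sb = new StringBuilder();
        try (FileSystem fs = FileSystem.get(conf);
             BufferedReader reader = new BufferedReader(
                     new InputStreamReader(fs.open(new Path(pathStr)),
                                           StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                sb.append(line).append('\n');
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        // Example: read a reducer output file produced by a batch job
        System.out.println(readFile("/output/part-r-00000"));
    }
}
```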
You can wrap that in a REST or SOAP service and deploy it to a servlet container such as Tomcat.
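For example, a JAX-RS resource could delegate to the `HdfsReader` sketch above. This assumes a JAX-RS implementation (e.g. Jersey) is bundled in the WAR you deploy to Tomcat; the `/results` path and `file` query parameter are just illustrative names:

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.MediaType;

// Note: @Path here is javax.ws.rs.Path, not Hadoop's org.apache.hadoop.fs.Path.
@Path("/results")
public class ResultsResource {

    /** GET /results?file=/output/part-r-00000 returns the HDFS file as plain text. */
    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String getResults(@QueryParam("file") String file) throws Exception {
        return HdfsReader.readFile(file);
    }
}
```

Callers then just hit the HTTP endpoint and never need to know the data lives on HDFS.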