Since Hadoop is a batch-processing software stack, a Liferay portal application has to be designed accordingly.
The Hadoop "driver" mechanism, which configures and controls a job, can be implemented in Liferay. The portal can also coordinate loading the input data into the HDFS filesystem, extracting the output data from HDFS and, of course, interpreting the results.
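A minimal sketch of such a driver is shown below. It follows the standard Hadoop MapReduce `Job` API; the class name `PortalJobDriver`, the job name, and the mapper/reducer classes (`TokenizerMapper`, `IntSumReducer`) are illustrative placeholders for the problem-specific classes, and the code assumes the Hadoop client jars are on the classpath and a cluster (or local job runner) is reachable.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Hypothetical driver that portal code could invoke to control a Hadoop job.
public class PortalJobDriver {

    public static boolean runJob(String inputDir, String outputDir) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "portal-driven-job");
        job.setJarByClass(PortalJobDriver.class);
        job.setMapperClass(TokenizerMapper.class);  // problem-specific mapper
        job.setReducerClass(IntSumReducer.class);   // problem-specific reducer
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(inputDir));
        FileOutputFormat.setOutputPath(job, new Path(outputDir));
        // waitForCompletion blocks until the batch job finishes; in a portlet
        // this would typically run on a background thread, not in the request.
        return job.waitForCompletion(true);
    }
}
```

Because MapReduce jobs are long-running batch work, the portlet would usually only submit the job and poll its status, rather than block a user request on `waitForCompletion`.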
Because the data structures involved in the MapReduce algorithm are problem-specific, the design of the Liferay integration is problem-specific as well. The technical interfacing issues, however, are minor: examine a sample MapReduce application and you will easily find which jars have to be included on the classpath of the Liferay portlet.
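The input loading and output extraction mentioned above can be sketched with the standard Hadoop `FileSystem` API. This is an illustrative fragment, not Liferay-specific code: the class name `HdfsTransfer` and the path arguments are assumptions, and it presumes the Hadoop configuration on the classpath points at the target HDFS cluster.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsTransfer {

    // Copies a local input file into an HDFS directory before the job runs.
    public static void loadInput(String localFile, String hdfsDir) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        fs.copyFromLocalFile(new Path(localFile), new Path(hdfsDir));
    }

    // Reads a reducer output file (e.g. part-r-00000) back line by line,
    // so the portal can interpret and present the results.
    public static void readOutput(String hdfsFile) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(fs.open(new Path(hdfsFile))))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line); // problem-specific interpretation goes here
            }
        }
    }
}
```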