I'm not sure that 60 x 20 = 1,200 data elements is what most of us would call "huge" data, but it depends on the data and on the number of active users. Also, if the data is read-mostly and shared between users, application scope is a better place to keep it than session scope.
For truly huge data - say, 6,000 rows by 20 columns - the common practice is not to keep the data in the session object at all. Instead, the controller re-fetches only as much of that data as will be output on the current page display - which, if you have any consideration for the watering eyes of your users, would be only about 20 rows or so at a time.
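A minimal sketch of the paging arithmetic a controller might use. The page size of 20, the `Pager` class name, and the `LIMIT`/`OFFSET` clause are all illustrative assumptions here (the exact paging syntax varies by DBMS - Oracle and older SQL Server, for example, do it differently):

```java
public class Pager {
    // Hypothetical page size, matching the "about 20 rows" suggestion above.
    static final int PAGE_SIZE = 20;

    // Zero-based offset of the first row on a given 1-based page number.
    static int offsetFor(int page) {
        return (page - 1) * PAGE_SIZE;
    }

    // SQL fragment the controller appends to its query; LIMIT/OFFSET is
    // MySQL/PostgreSQL style and is an assumption, not universal SQL.
    static String pageClause(int page) {
        return " LIMIT " + PAGE_SIZE + " OFFSET " + offsetFor(page);
    }

    public static void main(String[] args) {
        System.out.println(pageClause(1)); // first page: rows 1-20
        System.out.println(pageClause(3)); // third page: rows 41-60
    }
}
```

The point is that only one screenful of rows ever crosses the wire per request, no matter how big the underlying result set is.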
Refetching the data might seem slow, but Enterprise Java and enterprise DBMSes have been crafted with exactly this kind of efficiency in mind. Most of the major DBMSes, both open-source and commercial, cache query plans so that they can handle repeated queries without the overhead of parsing and planning them from scratch. That's especially true when you use Prepared Statements.
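A toy illustration of why Prepared Statements cache so well - this is a simulation of a DBMS statement cache, not real JDBC, and the `orders` table is made up. Concatenated SQL produces a different statement text for every parameter value, so each one is "parsed" anew; a parameterized statement has one constant text that is parsed once and reused:

```java
import java.util.HashMap;
import java.util.Map;

public class PlanCacheDemo {
    // Stand-in for a DBMS statement cache keyed by the SQL text it receives.
    static final Map<String, Integer> planCache = new HashMap<>();
    static int parseCount = 0;

    // "Parsing" a statement is the expensive part; the cache skips it on repeats.
    static int plan(String sql) {
        return planCache.computeIfAbsent(sql, s -> ++parseCount);
    }

    public static void main(String[] args) {
        // Concatenated SQL: every id yields a distinct text, so a fresh parse each time.
        for (int id = 1; id <= 3; id++) plan("SELECT * FROM orders WHERE id = " + id);
        // Parameterized SQL: one text, one parse, re-executed with new bind values.
        for (int id = 1; id <= 3; id++) plan("SELECT * FROM orders WHERE id = ?");
        System.out.println(parseCount); // 4 parses: 3 concatenated + 1 prepared
    }
}
```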
On the J2EE server side there are also caching mechanisms. In addition to hand-managed caching, ORM systems are popular in large part because they can transparently cache query results automatically, as well as manage updates in an efficient, cache-aware way.
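For the hand-managed case, a minimal sketch of an application-scope, get-or-load cache - the `QueryCache` class and its method names are my own invention, not a standard API, and a real ORM's second-level cache would handle eviction and transactions far more carefully:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class QueryCache {
    // Application-scope cache: shared by all sessions, so read-mostly data
    // is fetched from the DBMS once and then served from memory.
    private final Map<String, List<String>> cache = new ConcurrentHashMap<>();
    int misses = 0; // counts actual trips to the loader (i.e., the database)

    // Return the cached result, or run the loader on a miss and remember it.
    List<String> fetch(String key, Function<String, List<String>> loader) {
        return cache.computeIfAbsent(key, k -> { misses++; return loader.apply(k); });
    }

    // Call on update so the next read re-fetches fresh data (cache-aware writes).
    void invalidate(String key) {
        cache.remove(key);
    }
}
```

An object like this would typically be stashed in application scope (e.g., as a `ServletContext` attribute) so every request handler shares the same instance.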