This is the kind of thing that's usually best prototyped and benchmarked to make sure that what you "know" is going to happen is, in fact, what's happening.
As Paul mentioned, bulk retrieval of data is pretty expensive. It's not only RMI overhead: Entity EJBs are going to be loaded one row at a time (one SQL request per row) unless the container has a pretty clever prefetch mechanism.
However, I have to wonder if you REALLY need all 1000 rows right at your fingertips in full detail, or merely accessible. Fetching 1000 rows and making them available to Java is a large enough task that if you try to do it in response to a web browser request, you run the risk of browser timeouts no matter how you manage them.

Fetching 1000 keys and accessing detail data only on the rows of particular interest is a different matter. Statistical clustering of record accesses can play into (or out of) the efficiency of caching mechanisms.

You can also play various tricks, such as pairing a JavaBean with an EJB and passing it as a unit to the EJB for set/get purposes, thus reducing the RMI overhead. And if you need fine-grained transactional support, that too is something EJBs give you that you'd otherwise have to code yourself.
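The JavaBean-pairing trick is essentially a value-object (a.k.a. transfer object) approach: instead of one remote call per getter, the bean carries all the fields across the wire in a single call. Here's a minimal sketch -- the CustomerDetails name and its fields are purely illustrative, not from any particular container or API:

```java
import java.io.Serializable;

// Hypothetical value object. Because it's Serializable, it travels by
// value in one RMI call; the client then reads its fields locally,
// instead of making a remote round trip for each getXxx() on the EJB.
class CustomerDetails implements Serializable {
    private final String id;
    private final String name;
    private final double balance;

    CustomerDetails(String id, String name, double balance) {
        this.id = id;
        this.name = name;
        this.balance = balance;
    }

    String getId() { return id; }
    String getName() { return name; }
    double getBalance() { return balance; }
}
```

The entity bean would then expose something like a getDetails()/setDetails() pair taking this object, so N field accesses collapse into one remote invocation each way. The same idea works in reverse for the "1000 keys first" approach: return a collection of lightweight key objects, and fetch a full CustomerDetails only for the rows the user actually drills into.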
Only one thing can I guarantee -- no matter how rigorous the theoretical analysis is, actual measured results will contain surprises!
The secret of how to be miserable is to constantly expect things are going to happen the way that they are "supposed" to happen.
You can have faith, which carries the understanding that you may be disappointed. Then there's being a willfully-blind idiot, which virtually guarantees it.