
[NX:] caching records?

 
Filip Moens
Greenhorn
Posts: 24
Hi guys,
I've just started the SCJD assignment (Bodgitt and Scarper, Version 2.1.2).
As such, I am new to this forum, so please be lenient if I don't stick to the conventions that are appropriate here.
Moreover, English isn't my native language, so you (and I) may get confused at times.
Now, I haven't seen a lot of posts regarding the caching of database records.
It seems that most of you are not implementing it this way.
Any particular reason for this?
The way I see it, keeping records in memory shouldn't really be a performance problem.
Let's say a client wants to see all contractors. The server would have to lock the db file (for the duration of the entire read operation), read all records (I/O, so relatively slow), and build some kind of record Collection or array (in memory!) to send back to the client. That Collection or array would be of comparable size to the records kept in memory when caching (although it only lives for a short while).
The way I see it, caching seems the better approach.
Then again, I haven't given it too much thought.
(For now, I'm proceeding with a single Data class that implements the DB interface given by Sun.)
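The simplest form of the cache being described - read every record once at startup and serve all reads from memory - could be sketched like this. The record schema (a 1-byte deleted flag, a 6-character name, an 8-character city) is invented for illustration; a real solution would take the field widths from the db file's header, and would read the bytes from the file rather than a byte array:

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

// Sketch of the simplest cache: parse every record once, keep them in memory,
// and serve all reads from the in-memory list. The schema below is invented
// for illustration; the real one comes from the db file's header.
public class RecordCache {
    private static final int FLAG_LEN = 1;
    private static final int NAME_LEN = 6;
    private static final int CITY_LEN = 8;
    private static final int RECORD_LEN = FLAG_LEN + NAME_LEN + CITY_LEN;

    private final List<String[]> records = new ArrayList<>();

    // In the real assignment the bytes would come from the db file;
    // taking a byte[] keeps the sketch self-contained.
    public RecordCache(byte[] fileData) {
        for (int pos = 0; pos + RECORD_LEN <= fileData.length; pos += RECORD_LEN) {
            String raw = new String(fileData, pos, RECORD_LEN, StandardCharsets.US_ASCII);
            boolean deleted = raw.charAt(0) == '1';
            String name = raw.substring(FLAG_LEN, FLAG_LEN + NAME_LEN).trim();
            String city = raw.substring(FLAG_LEN + NAME_LEN).trim();
            records.add(deleted ? null : new String[] { name, city });
        }
    }

    // read() never touches the file: it is a pure in-memory lookup.
    public String[] read(int recNo) {
        String[] rec = records.get(recNo);
        if (rec == null) {
            throw new IllegalArgumentException("Record " + recNo + " is deleted");
        }
        return rec.clone();
    }

    public int size() {
        return records.size();
    }
}
```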
Any comments would be highly appreciated.
 
Philippe Maquet
Bartender
Posts: 1872
Hi Filip,
Filip Moens wrote:
> Now, I haven't seen a lot of posts regarding the caching of database records. It seems that most of you are not implementing it this way.

I think the contrary: many people here seem to implement a cache, at least in its simplest form (caching all records in memory).
Filip Moens wrote:
> The way I see it, keeping records in memory shouldn't really be a performance problem. Let's say a client wants to see all contractors. The server would have to lock the db file (for the duration of the entire read operation), read all records (I/O, so relatively slow), and build some kind of record Collection or array (in memory!) to send back to the client. That Collection or array would be of comparable size to the records kept in memory when caching (although it only lives for a short while).

Correct.
Best,
Phil.
 
Andrew Monkhouse
author and jackaroo
Marshal Commander
Posts: 12014
Hi Filip,
Welcome to JavaRanch.
Phil is correct - there are quite a few people caching their records. It is not really necessary, but it is a nice touch. Performance is one of those issues that is deliberately not detailed in the instructions - you won't get extra marks for a high-performance solution, but you could lose marks if your high-performance solution is too difficult for a junior programmer to understand, or if you write code that performs really badly.
If you were concerned about potential memory issues, you could build your cache using SoftReference objects, which allow the JVM to reclaim the memory if it really needs to.
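A minimal sketch of that idea: each record is held through a SoftReference, so the garbage collector may reclaim it under memory pressure, and a miss falls back to re-reading the record. Note that loadFromFile() here is a hypothetical stand-in for the real db file access in the assignment:

```java
import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;

// Sketch of a SoftReference-based cache: the JVM may clear the references
// under memory pressure, in which case read() falls back to the file.
public class SoftRecordCache {
    private final Map<Integer, SoftReference<String[]>> cache = new HashMap<>();

    public String[] read(int recNo) {
        SoftReference<String[]> ref = cache.get(recNo);
        String[] record = (ref == null) ? null : ref.get();
        if (record == null) {                  // never cached, or reclaimed by the GC
            record = loadFromFile(recNo);      // slow path: go back to the db file
            cache.put(recNo, new SoftReference<>(record));
        }
        return record.clone();
    }

    // Hypothetical placeholder for the real file access.
    protected String[] loadFromFile(int recNo) {
        return new String[] { "record-" + recNo };
    }
}
```

In a real solution this would sit behind the Data class, so callers never know whether a read was served from memory or from disk.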
Regards, Andrew
 