Bodgitt and Scarper 2.2.1: Caching

 
Adam Dray (Greenhorn, Posts: 7)
To make my life easier, I've decided to cache the entire database in memory. I've read a bit about others' success with similar approaches.

I'm trying to keep the file-level class (ContractorFileAccess) separate from the memory-caching class (ContractorFileCacheAccess). ContractorFileCacheAccess extends ContractorFileAccess with cached versions of its methods.

It seems I can get away with not creating sequential record numbers at all at the db level. I can just use the (long) offset in the file for each record as the recNo. The memory cache doesn't care at all. It can have its own numbering scheme. Whatever.

When I write the database to disk from memory at shutdown, is there any compelling reason to write records back into the same slots they occupied before? I could just rewrite the file from scratch, "compressing" the removed records as I go.

Can anyone think of a reason not to do that?
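
For concreteness, the compacting write-back I have in mind looks roughly like this. It's only a sketch: I'm modelling records as fixed-length byte arrays keyed by their old file offset, using null for a logically deleted record, and ignoring the file header.

```java
import java.io.BufferedOutputStream;
import java.io.DataOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.Map;
import java.util.TreeMap;

public class CompactingFlush {

    // Rewrite the data file from scratch, skipping logically deleted
    // records. The TreeMap copy gives a stable ascending-offset write order.
    // (File header handling omitted; byte[] stands in for a real record.)
    public static void flush(File dbFile, Map<Long, byte[]> cache) throws IOException {
        DataOutputStream out = new DataOutputStream(
                new BufferedOutputStream(new FileOutputStream(dbFile)));
        try {
            for (byte[] record : new TreeMap<Long, byte[]>(cache).values()) {
                if (record != null) {   // null marks a deleted record
                    out.write(record);  // surviving record gets a NEW offset
                }
            }
        } finally {
            out.close();
        }
    }
}
```

The comment in the loop is exactly what I'm asking about: every surviving record lands at a new offset, so the old recNos mean nothing after a flush.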
 
Roel De Nijs (Sheriff, Posts: 11604)
Hi Adam,

Welcome to JavaRanch!

I also used the cache approach (and it made my life a whole lot easier), but I didn't compact the file like you are planning to do. The only reason I can think of not to go that way is a possible future enhancement. Because you use a cache, you could lose all data if the server crashes, for example. To keep data loss to a minimum, you would write to the data file every hour or so (instead of only at server shutdown). And if you use a compacting scheme, you will end up with a mess, because the record numbers shift every time you write.
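
Just to illustrate that hourly write, a rough sketch (not my actual code; the flush routine itself is whatever you come up with):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class HourlyFlush {

    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    // flushTask is whatever routine writes the cache back to the data file.
    // For this to be safe, it must write records back into their old slots.
    public void start(Runnable flushTask) {
        scheduler.scheduleAtFixedRate(flushTask, 1, 1, TimeUnit.HOURS);
    }

    // Call at server shutdown, after one final flush
    public void stop() {
        scheduler.shutdown();
    }
}
```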

Another thing, of course: it's not required, so don't do it.

Kind regards,
Roel
 
Adam Dray (Greenhorn, Posts: 7)
Thanks for the reply, Roel!

I don't mean literal compression. I mean database compaction, where you remove logically deleted records from the physical file: I just write out the good records and skip the deleted ones.

The main issue is that I can't guarantee the order of the records without a little work, so the 10th record might end up getting written where the 1st record used to be. Since there is no primary key stored on the record (the recNos are purely an internal thing), it doesn't seem to matter, but the recNo could differ from one run to the next.
 
Roberto (Bartender, Posts: 2292)
Howdy, Adam!

Champion, what is the data structure you are using for caching? For instance, I used a Map<Integer, Room>, where Integer is the record number and Room is a class that represents a database record. You can treat the position of the record in the database file as the record's primary key. As for deleted records, I kept them in the Map with a null value, so when an entry has a null value, it means the record is deleted.
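
A bare-bones sketch of that idea, with Room reduced to an empty stub:

```java
import java.util.HashMap;
import java.util.Map;

public class RoomCache {

    // Stub; the real Room holds the fields of a database record
    public static class Room { }

    // Key = record number (the record's position in the file),
    // value = the record, or null if it has been logically deleted
    private final Map<Integer, Room> cache = new HashMap<Integer, Room>();

    public void create(int recNo, Room room) {
        cache.put(recNo, room);
    }

    public void delete(int recNo) {
        cache.put(recNo, null);   // keep the entry so the slot is remembered
    }

    public Room read(int recNo) {
        Room room = cache.get(recNo);
        if (room == null) {       // absent or deleted: same answer
            throw new IllegalArgumentException("Record " + recNo + " not found");
        }
        return room;
    }
}
```

Notice that delete only nulls the value, so all the other record numbers stay exactly where they were.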
 
Roel De Nijs (Sheriff, Posts: 11604)
Hi Adam,

I know you meant that, but it makes no difference. Say you have 3 records in the JTable and the 2nd one is deleted. When you write them back to the file (with that future enhancement of saving to the database file every hour, so you lose less data if the server crashes), the offset of the 3rd record changes, and the user gets an error that the record doesn't exist anymore (because it has become the 2nd record).

Kind regards,
Roel
 
Adam Dray (Greenhorn, Posts: 7)
Roberto,

My cache data structure is a Map<Long, Contractor> (a HashMap). The Long is the record number and the Contractor is the full record value object. I guess you're right that I can just keep my record offset as the record number up through the caching layer and then use it to write back into the same position. It is so obvious now that you've said it. ;)
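
So the write-back boils down to something like this (again just a sketch with placeholder names: raw byte arrays stand in for my serialized Contractor objects):

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.Map;

public class SlotPreservingFlush {

    // Each key is the record's original file offset, so flushing means
    // seeking to that offset and writing the record back into its old slot.
    public static void flush(String dbPath, Map<Long, byte[]> cache) throws IOException {
        RandomAccessFile file = new RandomAccessFile(dbPath, "rw");
        try {
            for (Map.Entry<Long, byte[]> entry : cache.entrySet()) {
                byte[] record = entry.getValue();
                if (record != null) {                      // real code would write
                    file.seek(entry.getKey().longValue()); // the "deleted" flag here
                    file.write(record);                    // recNo *is* the offset
                }
            }
        } finally {
            file.close();
        }
    }
}
```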



Roel,

That makes perfect sense. I get you now.


Thanks to both of you. I'm sure I'll have more questions later on!
 