Writing to DB file

 
Sham Delaney
Ranch Hand
Posts: 39
When writing ALL records to the database, is it better to create a new file and write all the header information back out to the file, then write all the record data?

Or is it better to overwrite the existing file, i.e. seek to the data section of the file and overwrite the existing records in place?

What are your views?
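For concreteness, here is a minimal sketch of the second option, overwriting the data section of the existing file in place with java.io.RandomAccessFile. The header and record lengths are made-up values, not from any particular assignment; a real implementation would read them from the file header.

import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.List;

public class OverwriteInPlace {

    // Hypothetical layout; real values come from the file's header section.
    private static final long HEADER_LENGTH = 70;
    private static final int RECORD_LENGTH = 160;

    /** Overwrites the data section of the existing file, leaving the header untouched. */
    public static void writeAllRecords(String dbPath, List<byte[]> records) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(dbPath, "rw")) {
            raf.seek(HEADER_LENGTH);                 // skip past the header
            for (byte[] record : records) {
                raf.write(record);                   // each array is one RECORD_LENGTH-byte record
            }
            // Trim the file in case the new data section is shorter than the old one.
            raf.setLength(HEADER_LENGTH + (long) records.size() * RECORD_LENGTH);
        }
    }
}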
 
Jeroen T Wenting
Ranch Hand
Posts: 1847
What makes you write the entire file in one run?
You'd more likely be writing a single record at a time, and (if needed) call that routine repeatedly, once for each record.
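Something along those lines might look like this rough sketch of a per-record write using RandomAccessFile; the header and record lengths are assumptions, and a real implementation would take them from the file header.

import java.io.IOException;
import java.io.RandomAccessFile;

public class RecordWriter {

    // Hypothetical layout values; read them from the file header in practice.
    private static final long HEADER_LENGTH = 70;
    private static final int RECORD_LENGTH = 160;

    private final RandomAccessFile raf;

    public RecordWriter(RandomAccessFile raf) {
        this.raf = raf;
    }

    /** Writes one fixed-length record into its slot; call once per changed record. */
    public synchronized void writeRecord(long recNo, byte[] record) throws IOException {
        if (record.length != RECORD_LENGTH) {
            throw new IllegalArgumentException("record must be exactly " + RECORD_LENGTH + " bytes");
        }
        raf.seek(HEADER_LENGTH + recNo * RECORD_LENGTH);   // slot for this record number
        raf.write(record);
    }
}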
 
Sham Delaney
Ranch Hand
Posts: 39
I keep all the data in a cache and don't write to the DB after each update/delete. I wait and write out all the records at specified intervals.
The cached data is the DB and it is current; I write to the DB file as a backup.

Should I be updating the DB file constantly instead?
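If you do stick with the interval approach, scheduling the flush could look something like this sketch; the actual flush logic is just passed in as a Runnable, since it depends on how your cache is structured.

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class CacheFlushScheduler {

    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    /** Runs the supplied flush task repeatedly at a fixed interval. */
    public void start(Runnable flushTask, long intervalMinutes) {
        scheduler.scheduleAtFixedRate(flushTask, intervalMinutes, intervalMinutes, TimeUnit.MINUTES);
    }

    /** Stops the schedule and flushes one last time, for an orderly shutdown. */
    public void stop(Runnable flushTask) {
        scheduler.shutdown();
        flushTask.run();
    }
}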
 
Ranch Hand
Posts: 243
Hi,

Keeping the records in memory is certainly one approach to implementing the database server. MySQL Cluster does this, for example, so the approach definitely exists in the real world.

But I think doing all operations against the file instead is a better long-term solution. Some things to think about:

1. Will you have enough memory as the data grows?
2. What happens if you lose power? You lose all your data from volatile memory.
3. In my assignment, a legacy reporting application needs to run reports against the data, which it may not be able to do if all the data is held in memory.

I have gone for always writing updates to the main database file, that is, not caching updates in memory.

Cheers, Jared.
 
Greenhorn
Posts: 17
Hi,

I agree with Jared in that I think a data cache for the assignment is over-engineering. The spec has no specific performance requirements, so I think it just makes things more complicated for little benefit. In the real world it would of course be an entirely different matter (but then again, it's pretty likely we'd be using a database). There are major issues around maintaining data consistency in the event of something unexpected happening, so for me, unless someone can persuade me that I have to do it here, I'd say avoid it and write to the file as and when.
 
Ranch Hand
Posts: 89

Originally posted by Sham Delaney:
I keep all the data in a cache and don't write to the DB after each update/delete. I wait and write out all the records at specified intervals.
The cached data is the DB and it is current; I write to the DB file as a backup.

Should I be updating the DB file constantly instead?



Writing out all records, whether they changed or not, kind of dampens the performance gains of using a cache. You could mark records 'dirty' when they change and write only those records out when you flush the cache (a small sketch of this follows below).

I also think using a cache is overkill for the assignment.
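Still, for completeness, a minimal sketch of that dirty-flag idea, assuming records are keyed by record number and reusing the hypothetical per-record writer from the earlier sketch as the persister:

import java.io.IOException;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class DirtyTrackingCache {

    private final Map<Long, byte[]> records = new HashMap<Long, byte[]>();
    private final Set<Long> dirty = new HashSet<Long>();

    /** Updates the cached record and remembers that it still needs writing out. */
    public synchronized void update(long recNo, byte[] record) {
        records.put(recNo, record);
        dirty.add(recNo);
    }

    /** Writes only the records that changed since the last flush, then clears the flags. */
    public synchronized void flush(RecordWriter writer) throws IOException {
        for (Long recNo : dirty) {
            writer.writeRecord(recNo, records.get(recNo));
        }
        dirty.clear();
    }
}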
 
Jeroen T Wenting
Ranch Hand
Posts: 1847
The only cache (of sorts) I use is a cache of the IDs of deleted records, to prevent a complete table scan whenever a new record is to be inserted (as my assignment states that space taken up by deleted records is to be reused).

And that cache isn't deliberately filled. Only when a deleted record is encountered during a read operation is its location added to the list (and it is removed, of course, when that slot is reused for a new record).

For the reasons already mentioned I decided not to employ a write cache (or even a read-ahead cache) on the database.
It's beyond the scope of the assignment (though certainly an interesting exercise) and poses the risk of data loss in the case of a server crash.
I'd rather have slightly lower potential performance than a deduction because my caching mechanism isn't working perfectly (for example, my read cache being out of sync with my write cache, oops).
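For anyone curious, a rough sketch of how such a lazily filled list of free slots might look; the class and method names are made up for illustration.

import java.util.ArrayDeque;
import java.util.Deque;

public class DeletedRecordCache {

    // Record numbers of deleted records noticed while reading the file.
    private final Deque<Long> freeSlots = new ArrayDeque<Long>();

    /** Called by the read path whenever it encounters a deleted record. */
    public synchronized void noteDeleted(long recNo) {
        if (!freeSlots.contains(recNo)) {
            freeSlots.add(recNo);
        }
    }

    /** Hands out a reusable slot for a new record, or -1 if none has been seen yet
        (in which case the caller falls back to scanning or appending to the file). */
    public synchronized long takeFreeSlot() {
        Long slot = freeSlots.poll();
        return (slot == null) ? -1 : slot;
    }
}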
 
Sham Delaney
Ranch Hand
Posts: 39
Has anyone else used the type of cache that I am using?

I am wondering if I am doing the right thing now; I have it working fine.
I was going to update the DB file when the server crashes or shuts down, and after a specific time interval, say every 15 minutes.
 
Jeroen T Wenting
Ranch Hand
Posts: 1847
How would you ensure the cache is written out in the event of a crash?
 
Sham Delaney
Ranch Hand
Posts: 39
I don't have that implemented yet. At the minute I've just got the cache writing out to the file properly. I haven't implemented writing in the case of a crash or at a specific time interval, and I'm not sure how I'm going to go about it either.
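For the orderly-shutdown part (though not a hard JVM or power crash), one option might be a JVM shutdown hook; a sketch, where the Runnable stands in for whatever your cache's write-out method is.

public class ShutdownFlush {

    /** Registers a hook that flushes the cache when the JVM exits normally or is
        interrupted with Ctrl-C. It will NOT run after a power failure or kill -9. */
    public static void registerShutdownHook(final Runnable flushCache) {
        Runtime.getRuntime().addShutdownHook(new Thread(flushCache, "cache-flush"));
    }
}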
 
Ranch Hand
Posts: 54
I don't think you can manage that without a UPS.

I suggest you take the others' advice.
 
Jeroen T Wenting
Ranch Hand
Posts: 1847
Even a UPS won't save you from an operating system crash or a JVM crash...

We did manage at one point to create a batch process which would recover gracefully from a power failure, using elaborate logging and transaction recovery systems, but that's WAY beyond the scope of this assignment.
 
Sham Delaney
Ranch Hand
Posts: 39
In my database file it seems that spaces fill the gap at the end of each field. When writing these fields back out to the database, I am following the approach Andrew describes in his book: instead of writing out spaces to fill the gap, I write out the field value and then skip to the location of the next field.

My code reads the updated fields back in with no problem, the same as it does for the other fields.
Is it safe to assume that it is OK to do this?
 
Ranch Hand
Posts: 58

Originally posted by Sham Delaney:
In my database file it seems that spaces fill the gap at the end of each field. When writing these fields back out to the database, I am following the approach Andrew describes in his book: instead of writing out spaces to fill the gap, I write out the field value and then skip to the location of the next field.

My code reads the updated fields back in with no problem, the same as it does for the other fields.
Is it safe to assume that it is OK to do this?



If I understand you correctly, you do not fill the remainder of the space for each field with spaces. That means you terminate using \0, I assume? If so, I don't think that's a good idea. Even if your code reads it correctly next time, remember the assignment says there's "another custom-written application" that uses this file. So I don't think the way you store the data is allowed to deviate from the format you find in the file.
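For example, one way to keep the on-disk format identical to what the other application expects is to pad every value out to its full field length before writing; a small sketch, with the single-byte charset and truncation behaviour as assumptions.

import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class FieldPadding {

    /** Returns the value as a fixed-length, space-padded byte array,
        truncating anything longer than the field. */
    public static byte[] pad(String value, int fieldLength) {
        byte[] padded = new byte[fieldLength];
        Arrays.fill(padded, (byte) ' ');                        // pre-fill with spaces
        byte[] raw = value.getBytes(StandardCharsets.US_ASCII); // assumed charset
        System.arraycopy(raw, 0, padded, 0, Math.min(raw.length, fieldLength));
        return padded;
    }
}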
 
Jeroen T Wenting
Ranch Hand
Posts: 1847
Not a good idea indeed. If you overwrite a longer existing value with a shorter one and don't pad it out, you get serious potential for corrupt data, since leftover bytes of the old value remain in the field.
 
Andrew Monkhouse
author and jackaroo
Posts: 12200

Originally posted by Sham Delaney:
Instead of writing out spaces to fill the gap, I write out the field value and then skip to the location of the next field.

Hmmm - that is not quite what we do. We have a block that is pre-populated with spaces, so we can put the fields into that block and then write the entire block in one step. This avoids the problems of counting spaces for each field, and of dealing with individual fields at the file level.

Regards, Andrew
[ June 20, 2006: Message edited by: Andrew Monkhouse ]
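To illustrate the pre-populated block idea (not Andrew's actual code, just a sketch with made-up field offsets, lengths, and charset):

import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class BlockWriter {

    // Hypothetical schema: two fields of 32 and 64 bytes, giving a 96-byte record.
    private static final int[] FIELD_LENGTHS = {32, 64};
    private static final int RECORD_LENGTH = 96;
    private static final long HEADER_LENGTH = 70;   // also hypothetical

    /** Builds one record block pre-filled with spaces, copies each field value in at
        its offset, and writes the whole block to the record's slot in a single call. */
    public static void writeRecord(RandomAccessFile raf, long recNo, String[] values)
            throws IOException {
        byte[] block = new byte[RECORD_LENGTH];
        Arrays.fill(block, (byte) ' ');                      // pre-populate with spaces

        int offset = 0;
        for (int i = 0; i < FIELD_LENGTHS.length; i++) {
            byte[] raw = values[i].getBytes(StandardCharsets.US_ASCII);
            System.arraycopy(raw, 0, block, offset, Math.min(raw.length, FIELD_LENGTHS[i]));
            offset += FIELD_LENGTHS[i];
        }

        raf.seek(HEADER_LENGTH + recNo * RECORD_LENGTH);
        raf.write(block);                                    // one write for the whole record
    }
}

Short values come out space-padded automatically, so a shorter update can never leave stale bytes from the previous value behind.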