Normalised/denormalised database tables

 
Ranch Hand
Posts: 1970
I was reading Steve Souza's FAQ about performance on this site, and I read that there could be a performance difference between normalised and denormalised database tables. That could be very interesting to me, if I knew what it meant!

Can anyone help me with a succinct explanation?
 
Bartender
Posts: 2661
Database normalization is the process of structuring your data according to relational theory. Wikipedia has a nice explanation of normalization, and of the steps to normalize a database.

Normalization focuses on the structure of your database. It may be that this optimally structured database does not meet your performance requirements. You can then take a step back and make a trade-off between optimal structure and better performance.
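To make the trade-off concrete, here is a minimal sketch using Python's built-in sqlite3 module. The tables and values (customers, orders, cities) are hypothetical, just to show the shape of each design: the denormalized table repeats the customer's city on every order row, while the normalized design stores it once and pays for that with a join on reads.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Denormalized: the customer's city is repeated on every order row.
cur.execute("""CREATE TABLE orders_denorm (
    order_id INTEGER PRIMARY KEY,
    customer_name TEXT,
    customer_city TEXT,
    amount REAL)""")

# Normalized: each customer fact lives in exactly one row.
cur.execute("""CREATE TABLE customers (
    customer_id INTEGER PRIMARY KEY,
    name TEXT,
    city TEXT)""")
cur.execute("""CREATE TABLE orders (
    order_id INTEGER PRIMARY KEY,
    customer_id INTEGER REFERENCES customers(customer_id),
    amount REAL)""")

cur.execute("INSERT INTO customers VALUES (1, 'Alice', 'Leiden')")
cur.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [(1, 1, 9.50), (2, 1, 12.00)])

# A city change is a single-row update in the normalized design...
cur.execute("UPDATE customers SET city = 'Utrecht' WHERE customer_id = 1")

# ...but reading an order back now needs a join.
row = cur.execute("""SELECT o.order_id, c.name, c.city
                     FROM orders o JOIN customers c USING (customer_id)
                     WHERE o.order_id = 2""").fetchone()
print(row)  # (2, 'Alice', 'Utrecht')
```

In the denormalized table, the same city change would have to touch every order row for that customer, but a read would need no join.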

Regards, Jan
[ April 16, 2007: Message edited by: Jan Cumps ]
 
author
Posts: 4335
The simple answer, from a programming perspective, is that data exists in only one place in fully normalized systems and is duplicated in denormalized ones. So you can think of it in terms of unique or duplicate data. Unique data has the organizational convenience that updates only need to be made to one table, whereas with duplicate data you may have to update multiple fields.

Think of a column that keeps a count of the number of similar records. You could update this column every time you add/update a record in the table, or you could drop it and recompute the count every time someone asks for it. There's no clear advantage either way. Performance-wise, fully normalized can be awful, often joining dozens of tables per query. The goal is usually to get as close to fully normalized as possible while still allowing quick access to aggregates like counts. Often such data can be maintained by triggers and/or what are called "materialized views".
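A small sketch of that count-column idea, again with sqlite3 and hypothetical tables (`categories`, `items`): triggers keep the duplicated `item_count` column in step with the source rows, so a read can skip the `COUNT(*)` recomputation.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE categories (
    category_id INTEGER PRIMARY KEY,
    name TEXT,
    item_count INTEGER DEFAULT 0   -- denormalized: derivable from items
);
CREATE TABLE items (
    item_id INTEGER PRIMARY KEY,
    category_id INTEGER REFERENCES categories(category_id)
);
-- Triggers keep the duplicate count in step with the source rows.
CREATE TRIGGER items_ins AFTER INSERT ON items BEGIN
    UPDATE categories SET item_count = item_count + 1
    WHERE category_id = NEW.category_id;
END;
CREATE TRIGGER items_del AFTER DELETE ON items BEGIN
    UPDATE categories SET item_count = item_count - 1
    WHERE category_id = OLD.category_id;
END;

INSERT INTO categories (category_id, name) VALUES (1, 'books');
INSERT INTO items (category_id) VALUES (1), (1), (1);
DELETE FROM items WHERE item_id = 3;
""")

# Cheap read from the maintained column...
fast = cur.execute(
    "SELECT item_count FROM categories WHERE category_id = 1").fetchone()[0]
# ...matches the normalized recomputation.
slow = cur.execute(
    "SELECT COUNT(*) FROM items WHERE category_id = 1").fetchone()[0]
print(fast, slow)  # 2 2
```

The cost moves from read time to write time: every insert and delete now fires a trigger, which is the usual price of denormalized summary data.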
 
Peter Chase
Ranch Hand
Posts: 1970
OK, thanks. I was aware of those types of issues and practices, but didn't know it was called "normalisation". Now I do.
 