Originally posted by P Lavti:
A normalized design will often store different but related pieces of information in separate logical tables. If these tables are stored physically as separate disk files, completing a database query that draws information from several tables (a join operation) can be slow. If many tables are joined, it may be prohibitively slow.
Well, sort of. It's not really performance you should worry about (though it may be a factor) but the accuracy of your data. In fact, denormalization can be used to solve performance problems caused by too many joins, not just create them. If you have redundant data in more than one table, your business logic has to maintain it, since you are deliberately not using the logic a relational database provides to do this for you. If you have a teacher table that looks like this:

teacher
  teacher_id (PK)
  teacher_name

And a class table that looks like this:

class
  class_id (PK)
  teacher_id (FK to teacher)
  teacher_name

Assuming teacher_id is an FK to teacher, why do we need teacher_name in the class table? If the teacher record changes, something else must remember to change the teacher_name field in the class table as well.
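To make the update anomaly concrete, here is a small sketch using Python's sqlite3 module. The table layouts are reconstructed from the discussion (class_id is a hypothetical extra column); the point is that after updating the teacher record, the redundant copy of teacher_name in class is stale unless application code remembers to fix it too:

```python
import sqlite3

# Hypothetical schemas from the discussion: class redundantly copies
# teacher_name alongside its teacher_id foreign key.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE teacher (teacher_id INTEGER PRIMARY KEY, teacher_name TEXT);
    CREATE TABLE class (class_id INTEGER PRIMARY KEY,
                        teacher_id INTEGER REFERENCES teacher(teacher_id),
                        teacher_name TEXT);  -- redundant copy
    INSERT INTO teacher VALUES (1, 'Smith');
    INSERT INTO class VALUES (10, 1, 'Smith');
""")

# The teacher record changes...
con.execute("UPDATE teacher SET teacher_name = 'Jones' WHERE teacher_id = 1")

# ...but nothing updated the copy in class: the two values now disagree.
row = con.execute("""
    SELECT t.teacher_name, c.teacher_name
    FROM class c JOIN teacher t ON t.teacher_id = c.teacher_id
""").fetchone()
print(row)  # ('Jones', 'Smith') -- an update anomaly
```

In a normalized design, teacher_name would live only in teacher, and the join would always return the current name with no extra maintenance code.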
Originally posted by Arun Kumarr:
Well, if you look at Paul's reply, you'll probably find that the schema fails 3NF. It's always best practice to design a normalized database and then denormalize it. Don't consider denormalization while you are designing; leave it as the last step, for when you hit performance issues.
Agreed. However, one place you could use denormalization in the design is with historical data. Suppose you decide you need to keep the name of the first teacher assigned to a class; then the teacher_name field has meaning. Not a particularly good way to do it, perhaps, but a shortcut in place of proper history tables.