use OOP to design database table schemas?

 
Ranch Hand
Posts: 798
This question has haunted me for a while.

Usually, we first check the project requirements and set up the tables, and then do 1NF/2NF/3NF normalization.

I don't like this approach, because it is not object-oriented. Could anybody share experience on how to use OOP to design a complicated table schema and its relationships?

Even a link or a book ISBN is welcome. This is very important to me.

Thanks
 
author
Posts: 11962
5
Start by designing your domain model and use an ORM tool for mapping your domain objects to the database afterwards. By making the domain model take precedence over the database schema, you make your life a lot easier in the long run.
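As a concrete illustration of "domain model first" (the class names here are hypothetical, not taken from any particular ORM), the objects are designed around behaviour, and an ORM tool such as Hibernate would later map them to tables without the classes knowing anything about SQL:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical domain model sketch: behaviour lives on the objects;
// an ORM (Hibernate, Castor, ...) maps them to tables afterwards.
class OrderLine {
    private final String product;
    private final int quantity;
    private final double unitPrice;

    OrderLine(String product, int quantity, double unitPrice) {
        this.product = product;
        this.quantity = quantity;
        this.unitPrice = unitPrice;
    }

    double total() { return quantity * unitPrice; }
}

class Order {
    private final List<OrderLine> lines = new ArrayList<>();

    void add(OrderLine line) { lines.add(line); }

    // No table layout or SQL in sight; persistence is someone else's job.
    double total() {
        double sum = 0;
        for (OrderLine line : lines) sum += line.total();
        return sum;
    }
}
```

Whether `Order` and `OrderLine` end up as one table or two is then a mapping decision, made after the model exists.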
 
author
Posts: 14112
Database tables are not objects and their design is driven by different forces, so it doesn't make much sense to apply OO design principles to them.

What you should do is design your data model, driven by data needs, in parallel design your object model, driven by object oriented needs, and *then* provide the mapping between the two.

See http://www.agiledata.org/essays/drivingForces.html
 
Edward Chen
Ranch Hand
Posts: 798
Dear Lasse,

1. What is an "ORM tool"?
2. Could you give me more details? Just a simple little example would do.

Thank you very much.
 
(instanceof Sidekick)
Posts: 8791
This will be old news to the regulars ... Robert Martin's Agile Software Development has a long case study that designs, tests and builds a fairly complex payroll system with no database. At the end it's pretty obvious how you'd hook one up for persistence, but db design is certainly not on the critical path. Reading it might change the way you see things.

Scott Ambler - site referenced above - has written the book on object relational mapping. Look through his sites, articles, books, etc. There's a wealth of good stuff there.

Object models and database models are seductively similar looking, but starting with the data will give you a lot of data objects that you can act upon, procedural stuff. Very common in stateless web designs. Starting with objects and their responsibilities and finding the data as you go gives you much more robust, object-ish software.
 
Ranch Hand
Posts: 193

Originally posted by Edward Chen:
Dear Lasse,

1. what is "ORM tool" ?
2. could you give me more details? just give me a little simple example.

Thank you very much.



Lots of ORM tools are out there. One of the best is Castor.

Originally posted by Ilja Preuss:
Database tables are not objects and their design is driven by different forces, so it doesn't make much sense to apply OO design principles to them.

What you should do is design your data model, driven by data needs, in parallel design your object model, driven by object oriented needs, and *then* provide the mapping between the two.

See http://www.agiledata.org/essays/drivingForces.html



I agree with you exactly, and it's proven by my experience. Now I always start with a good database design. That's much better than creating a domain model and object model and starting with those. Practically, the base should be the database design for an application that uses database persistence.
 
Lasse Koskela
author
Posts: 11962
5

Originally posted by Murasoli Maran:
Castor


Let's not forget Hibernate, which seems to have a lot bigger following than Castor does. Apache also has ObjectRelationalBridge. I'd suggest turning to our Object-Relational Mapping forum for follow-up questions.

Originally posted by Murasoli Maran:
Now I always start with a good database design. That's much better than creating a domain model and object model and starting with those. Practically, the base should be the database design for an application that uses database persistence.


I have to disagree with you here. While I acknowledge that the database is perhaps the most critical part of your system regarding performance tuning, let's not forget that persistence is just infrastructure. Your customer is not paying you for persistence -- he's paying you for features. Yes, one of those is probably that all application data should be persistent, but if you raise the database design to be the most important model you have, you'll easily dig yourself into a hole labeled maintenance nightmare. A good domain model lets you move quickly when new features need to be incorporated into the system and lets you design your database (or other means of achieving persistence) as you see best.
 
Lasse Koskela
author
Posts: 11962
5

Originally posted by Ilja Preuss:
See http://www.agiledata.org/essays/drivingForces.html


Also see http://www.agiledata.org/essays/databaseRefactoring.html
 
Murasoli Maran
Ranch Hand
Posts: 193

Originally posted by Lasse Koskela:

I have to disagree with you here. While I acknowledge that the database is perhaps the most critical part of your system regarding performance tuning, let's not forget that persistence is just infrastructure. Your customer is not paying you for persistence -- he's paying you for features. Yes, one of those is probably that all application data should be persistent, but if you raise the database design to be the most important model you have, you'll easily dig yourself into a hole labeled maintenance nightmare. A good domain model lets you move quickly when new features need to be incorporated into the system and lets you design your database (or other means of achieving persistence) as you see best.



For me,

I mean that when a database-driven system is being designed, creating the DB design first is the best and easiest way to build the application. Theoretically, in OOAD, one may be convinced that creating a domain model and class design model and then a persistence model is ideal, but in practical terms the easiest and most effective method is creating the database design first and creating the object model according to the DB design. I am not degrading OOAD; it's a very good methodology. But as Ilja Preuss said, DB tables and object models are not in sync (at least at the design stage), so starting with the DB design always seems better and less time-consuming.

Your customer is not paying you for persistence -- he's paying you for features.



You can include features in the design approach (DB design first) too. The DB design is created after getting the requirement list.

I don't know whether I'm correct or not, but that's just my opinion on this subject.
 
Stan James
(instanceof Sidekick)
Posts: 8791
Starting from data and starting from behavior will give you very different results. Starting from behavior is more likely to give you the long term benefits of OO such as reusability and ability to survive change. It's especially good for a stateful fat client such as a Swing app to have a rich object model first. On the other hand starting from data is dead simple, fast and hard to resist for stateless CRUD applications that project a database onto the screen with little behavior. Use the style that fits the task.
 
blacksmith
Posts: 1332
2
Ilja Preuss:

Database tables are not objects and their design is driven by different forces, so it doesn't make much sense to apply OO design principles to them.

What you should do is design your data model, driven by data needs, in parallel design your object model, driven by object oriented needs, and *then* provide the mapping between the two.


Yes, exactly. The corollary is that it doesn't matter whether you do the data model or the object model first, because you have to have done them both before you can connect them.

I do think that the Scott Ambler links - or at least the database refactoring link - miss an important implication of a point he himself makes.

He points out, quite correctly, that database refactoring is much more difficult than code refactoring, both because database refactorings must preserve what he calls "informational semantics" as well as behavioral semantics, and because databases tend to be coupled to a lot more stuff. He goes so far as to say that database refactoring "is incredibly difficult in practice".

But then he goes ahead and recommends refactoring freely in small increments.

It seems to me that if refactoring is much more expensive for databases, it makes sense to take steps to avoid having to do it as much. In particular, it might be worthwhile to spend a little more time getting the schema closer to correct at the beginning, as he mentions data professionals are wont to do - especially since, while database refactoring is more expensive, initially writing a database's worth of schema and scripts is much cheaper than writing an application's worth of code.
 
Ranch Hand
Posts: 5093
He points out, quite correctly, that database refactoring is much more difficult than code refactoring, both because database refactorings must preserve what he calls "informational semantics" as well as behavioral semantics, and because databases tend to be coupled to a lot more stuff. He goes so far as to say that database refactoring "is incredibly difficult in practice".

And therefore it makes a lot of sense to design the database first. If you get the database hammered out and working correctly it will be less likely to need major changes afterwards during application development.

If you start with the application and design the database (or rather not-design the database) to match your ever changing needs for persistence as you go through the development of the application the database is more likely to become a mess of data duplication and redundancy.
 
Lasse Koskela
author
Posts: 11962
5

Originally posted by Jeroen Wenting:
And therefore it makes a lot of sense to design the database first. If you get the database hammered out and working correctly it will be less likely to need major changes afterwards during application development.

If you start with the application and design the database (or rather not-design the database) to match your ever changing needs for persistence as you go through the development of the application the database is more likely to become a mess of data duplication and redundancy.


So your reasoning is that you'll encounter less change in requirements during the project if you start with the database design? Surely that can't be the case...
[ June 25, 2004: Message edited by: Lasse Koskela ]
 
Ilja Preuss
author
Posts: 14112

Originally posted by Jeroen Wenting:
He points out, quite correctly, that database refactoring is much more difficult than code refactoring, both because database refactorings must preserve what he calls "informational semantics" as well as behavioral semantics, and because databases tend to be coupled to a lot more stuff. He goes so far as to say that database refactoring "is incredibly difficult in practice".

And therefore it makes a lot of sense to design the database first. If you get the database hammered out and working correctly it will be less likely to need major changes afterwards during application development.



Unless you find out that the database design doesn't meet the needs of your application, which is quite likely if you design the database before the application, isn't it?
 
Ilja Preuss
author
Posts: 14112

Originally posted by Warren Dew:
I do think that the Scott Ambler links - or at least the database refactoring link - miss an important implication of a point he himself makes.

He points out, quite correctly, that database refactoring is much more difficult than code refactoring, both because database refactorings must preserve what he calls "informational semantics" as well as behavioral semantics, and because databases tend to be coupled to a lot more stuff. He goes so far as to say that database refactoring "is incredibly difficult in practice".

But then he goes ahead and recommends refactoring freely in small increments.

It seems to me that if refactoring is much more expensive for databases, it makes sense to take steps to avoid having to do it as much.



Well, the problem with refactorings is that they get much more costly the longer you wait to do them; at least that's the case with code - I guess it's true for database design, too.

So *if* we need to refactor, the more costly it is, the more frequently we should do it, to be able to do smaller steps.

In particular, it might be worthwhile to spend a little more time getting the schema closer to correct at the beginning, as he mentions data professionals are wont to do - especially since, while database refactoring is more expensive, initially writing a database's worth of schema and scripts is much cheaper than writing an application's worth of code.



Yes, I think it is worthwhile to wait as long as possible before committing to a first database design, so that we have as much information as possible. But once we do have a database, we need to refactor it as mercilessly as the code, else cruft will build up and will cost even more than the small refactoring steps. Or so it seems to me...
 
Ilja Preuss
author
Posts: 14112

Originally posted by Murasoli Maran:
I agree with you exactly. [...] Now I always start with a good database design.



Sorry, then you don't agree with me at all...

That's much better than creating a domain model and object model and starting with those. Practically, the base should be the database design for an application that uses database persistence.



As someone else already said, the database is just an implementation detail of persistence. What the customer really cares about is the behaviour of the system. The latter is the critical part and should shape the system.
 
Sheriff
Posts: 7001
6
Ilja wrote: As someone else already said, the database is just an implementation detail of persistence.

I agree that this is true in a lot of "stand-alone" applications. On the other hand I've often worked on systems where the database (either by design or by accident) also forms a key communication protocol. Several applications with different change rates, different development teams, and different technologies rely on SQL access to the same database.

I'll admit this is poor design, equivalent to providing public read/write access to variables, but the truth is that you can make SQL calls from just about any language on any platform, while the more refactorable alternatives (such as web services, CORBA and the like) are clumsy and limited.

In situations such as this, you simply cannot change the database structure without preserving backward compatibility.
 
Warren Dew
blacksmith
Posts: 1332
2
Lasse Koskela:

So your reasoning is that you'll encounter less change in requirements during the project if you start with the database design? Surely that can't be the case...

I think the reasoning is that the vast majority of code refactorings aren't responses to changes in the requirements, but are instead responses to changes in the implementation.

In fact, some people consider that part of the definition of refactoring - refactoring is when you change the code around to clean it up specifically without changing the behavior. By that definition, you should almost never have to refactor a proper database schema, since if you start out with a proper database design, and maintain the normal forms whenever it gets changed, it never gets dirty in the first place. The only exception I can think of would be breaking up tables for performance reasons.
 
Lasse Koskela
author
Posts: 11962
5

Originally posted by Frank Carver:
On the other hand I've often worked on systems where the database (either by design or by accident) also forms a key communication protocol. Several applications with different change rates, different development teams, and different technologies rely on SQL access to the same database.


Yes. Data level integration can get nasty. Scott Ambler talks about this as well in his book, Agile Database Techniques.

Originally posted by Warren Dew:
So your reasoning is that you'll encounter less change in requirements during the project if you start with the database design? Surely that can't be the case...

I think the reasoning is that the vast majority of code refactorings aren't responses to changes in the requirements, but are instead responses to changes in the implementation.


The term "refactoring" wasn't mentioned in the post I was replying to.

The problem with "If you get the database hammered out and working correctly it will be less likely to need major changes afterwards during application development" is that there will be major changes afterwards (that's what the CHAOS reports have been telling us since 1995...) and that has nothing to do with how much time you have spent designing your schema before you knew about the upcoming change in requirements.

I do acknowledge that for a simple-enough application, pretty much any approach works out fine. However, those simple-enough applications aren't too interesting, are they?
[ June 25, 2004: Message edited by: Lasse Koskela ]
 
Warren Dew
blacksmith
Posts: 1332
2
Hm ... I think I read, perhaps misread, "less likely to need major changes" as "likely to need less major changes". If you read the phrase to be saying you have a good chance of getting through the whole process without changing the database at all, I agree that seems unrealistic.

When I read "to match your ever changing needs for persistence as you go through the development of the application", though, I definitely think of refactoring as part of what's "ever changing". And my experience is indeed that if people approach the database as being as mutable as code, "a mess of data duplication and redundancy" will be the result.
 
Ilja Preuss
author
Posts: 14112

Originally posted by Frank Carver:
I agree that this is true in a lot of "stand-alone" applications. On the other hand I've often worked on systems where the database (either by design or by accident) also forms a key communication protocol. Several applications with different change rates, different development teams, and different technologies rely on SQL access to the same database.

[...]

In situations such as this, you simply cannot change the database structure without preserving backward compatibility.



True, but even in this situation we don't want the object model to be driven by the database structure. We don't even want it to reflect the fact that communication is going through a database, but we want the latter to be encapsulated in a persistence/communication layer, don't we?
 
Ilja Preuss
author
Posts: 14112

Originally posted by Warren Dew:
I think the reasoning is that the vast majority of code refactorings aren't responses to changes in the requirements, but are instead responses to changes in the implementation.



They are responses to changes in the implementation *because of* changes in the requirements. More specifically, it is a response to the fact that most often the design of the system degenerates when code is changed due to changes in the requirements.


In fact, some people consider that part of the definition of refactoring - refactoring is when you change the code around to clean it up specifically without changing the behavior.



Yes, but it's almost always done directly after the behaviour *was* changed. Refactoring is the last step of implementing a new requirement, done when the code already works.


By that definition, you should almost never have to refactor a proper database schema, since if you start out with a proper database design, and maintain the normal forms whenever it gets changed, it never gets dirty in the first place.



I don't have experience with database design, but I assume that it's similar to code design: mixing making something work and making it pretty is hard. Therefore after having made it work, we take a look at the resulting design, spot flaws that got introduced while making it work and fix them - we refactor.
 
Ilja Preuss
author
Posts: 14112

Originally posted by Warren Dew:
And my experience is indeed that if people approach the database as being as mutable as code, "a mess of data duplication and redundancy" will be the result.



This couldn't happen if "always refactor until there is no more duplication or redundancy" were part of their professional behaviour, could it?
 
Warren Dew
blacksmith
Posts: 1332
2
Ilja Preuss regarding refactoring:

Yes, but it's almost always done directly after the behaviour *was* changed. Refactoring is the last step of implementing a new requirement, done when the code already works.

Interesting. I usually take just the opposite approach - I first do the refactoring needed to support making the changes cleanly, then I make those changes. But I agree that either way works, for code.

I don't have experience with database design, but I assume that it's similar to code design: mixing making something work and making it pretty is hard.

I do have some experience with database design, and in my experience, it's different from code design. Making a database schema work - including being free of data duplication, or more explicitly, meeting third normal form - automatically makes it pretty. Unlike code, you can't do the one without the other. There is then no refactoring required.
 
Ilja Preuss
author
Posts: 14112

Originally posted by Warren Dew:
Interesting. I usually take just the opposite approach - I first do the refactoring needed to support making the changes cleanly, then I make those changes. But I agree that either way works, for code.



Yes, agreed - I sometimes do this, too.

Making a database schema work - including being free of data duplication, or more explicitly, meeting third normal form - automatically makes it pretty.



Why does a database schema need to meet third normal form to *work*? Isn't that exactly the "make it pretty" part?
 
Warren Dew
blacksmith
Posts: 1332
2
Ilja Preuss:

Why does a database schema need to meet third normal form to *work*?

Because a database that has duplicated locations for the same data has the following feature: you can put the data in, and then you can take the data out, and you can get something different from what you put in. That, to me, is not just "not pretty" - it's "not working".
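A minimal sketch of that failure mode (the two maps here stand in for hypothetical tables that both store a customer's address; the names are made up for illustration):

```java
import java.util.HashMap;
import java.util.Map;

// Two "tables" that both carry the same fact: the customer's address.
// This violates 3NF-style "one location per fact".
class DenormalizedDb {
    final Map<String, String> customerAddress = new HashMap<>();
    final Map<Integer, String> orderShipTo = new HashMap<>();

    void placeOrder(int orderId, String customer) {
        // The address is copied into the second table at order time.
        orderShipTo.put(orderId, customerAddress.get(customer));
    }
}
```

Update `customerAddress` after an order exists and the two copies silently disagree: you put one address in, and depending on which table you read, you get a different one back out. Keeping each fact in exactly one place removes the anomaly.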
 
Lasse Koskela
author
Posts: 11962
5

Originally posted by Warren Dew:
Why does a database schema need to meet third normal form to *work*?

Because a database that has duplicated locations for the same data has the following feature: you can put the data in, and then you can take the data out, and you can get something different from what you put in. That, to me, is not just "not pretty" - it's "not working".


IANADBA*, but isn't it pretty common to move towards a denormalized schema when doing performance tuning? (surely that doesn't count as the database "stopping working" even if it does result in an ugly schema?)

* = I Am Not A DBA
 
Warren Dew
blacksmith
Posts: 1332
2
Lasse Koskela:

IANADBA*, but isn't it pretty common to move towards a denormalized schema when doing performance tuning? (surely that doesn't count as the database "stopping working" even if it does result in an ugly schema?)

Do you mean "when doing performance tuning when it's actually needed", or do you mean "when doing performance tuning because some guy who thinks optimizing is a cool thing to do wants to do it, even if it breaks the code/database"? The latter case might be common, but I suspect it usually involves a broken database to begin with, as such guys are unlikely to actually understand third normal form....

Denormalizing a database - at least for third normal form, I can't vouch for sixth and seventh normal form - is cognate to removing locks between contending threads. Yes, it can result in faster performance. On the other hand, the resulting code is arguably broken. It's just that in sufficiently extreme cases, it might conceivably be justified to break the code for performance.

Note that there are legitimate schema optimizations, such as splitting tables, that don't denormalize with respect to third normal form. These do tend to be used where the databases are very large, such as at financial institutions. As for all optimizations, though, they are better done at the end of the process, not early - that's one respect in which databases don't differ from code.
 
Ilja Preuss
author
Posts: 14112

Originally posted by Warren Dew:
Denormalizing a database - at least for third normal form, I can't vouch for sixth and seventh normal form - is cognate to removing locks between contending threads. Yes, it can result in faster performance. On the other hand, the resulting code is arguably broken.



You must be using a different definition of "broken" than I do. For me, a system is broken when it doesn't work as expected (when it fails its tests). Code can contain duplication and not be broken, and I think the same is true for a database design.
 
Stan James
(instanceof Sidekick)
Posts: 8791
I like to get to 3rd normal first because it improves understanding and is nicely self-consistent. Then denormalize only when somebody proves there is a performance problem that can be solved this way.
 
Warren Dew
blacksmith
Posts: 1332
2
Ilja Preuss:

You must be using a different definition of "broken" than I do. For me, a system is broken when it doesn't work as expected (when it fails its tests). Code can contain duplication and not be broken, and I think the same is true for a database design.

Databases are about the data, code is about doing stuff with the data. Duplicated data in a database is like duplicated threads in a program. In my experience, when I fire off two threads where I really only wanted one, the result is generally unexpected behavior.
[ July 02, 2004: Message edited by: Warren Dew ]
 
Lasse Koskela
author
Posts: 11962
5
Warren, are you saying that there is no mid-ground between "broken" and "perfect"? Something named "good enough", perhaps?

By the way, my stance is with what Stan said -- optimize only when you need to -- but I do believe optimizing by denormalizing does not mean "breaking the system".
 
Warren Dew
blacksmith
Posts: 1332
2
Lasse Koskela:

Warren, are you saying that there is no mid-ground between "broken" and "perfect"? Something named "good enough", perhaps?

Of course there's a middle ground. In database design, third normal form is generally considered to be that middle ground. "Perfect" would require seventh normal form.

I also don't think that "broken" and "good enough" are mutually exclusive. There are major commercial web sites that crash every couple of months, but they seem to have decided it's cheaper to reboot and continue every now and then than to find and fix the bugs, and I can imagine they may well be right. What's "good enough" depends on the application.

For some applications, hiring someone who doesn't understand databases and just tosses a bunch of fields together may indeed be "good enough", just as hiring someone who doesn't understand object oriented programming and writes his application as a zillion global functions and variables may be "good enough". However, in any project where it makes sense to take the effort to get good programmers who can write quality code, I have a hard time believing that it makes sense to accept inferior quality in the database.
 
Sheriff
Posts: 17644
300
Just thought I'd add my own experience to the mix here. This has more to do with the database design driving the OO model being a Bad Thing. One bad effect I have seen (and which I'm currently having to deal with) is that of leaky abstractions whereby one-to-one mapping of business objects to tables results in the business objects having inappropriate knowledge of the persistence details. Leaking of persistence details into the object model is bad because it can steer you into making bad design decisions.

For example, one symptom of this type of leaky abstraction that I have learned to look for is the existence of a type field in the business objects. This "smell" can indicate that the class that has "types" should be refactored into subclasses, one subclass for each "type". After all, isn't that what OO is for? It's easy to fall into this trap when you are coding straight JDBC, because you can usually persist instances of a class and its subclasses into the same table. Adding a type field to the object model makes persistence straightforward. However, your business logic usually ends up with a lot of "if (type == this) else if (type == that)" non-OO code.

Tools like Hibernate can help you plug this leak and avoid screwing up your object design by providing the capability to map the class of an object to an appropriate value in a "type" column in the database.
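A minimal sketch of the "type field" smell described above, and the subclass-per-type refactoring that removes it. The account classes and interest rules are invented for the example; the point is that polymorphic dispatch replaces the if/else chain on a type code.

```java
// Before: non-OO dispatch on a type code.
class AccountWithTypeField {
    static final int CHECKING = 0;
    static final int SAVINGS = 1;

    final int type;
    final double balance;

    AccountWithTypeField(int type, double balance) {
        this.type = type;
        this.balance = balance;
    }

    double monthlyInterest() {
        // The "if (type == this) else if (type == that)" chain.
        if (type == CHECKING) {
            return 0.0;
        } else if (type == SAVINGS) {
            return balance * 0.01;
        }
        throw new IllegalStateException("unknown type " + type);
    }
}

// After: each "type" becomes a subclass; polymorphism replaces the ifs.
abstract class Account {
    final double balance;
    Account(double balance) { this.balance = balance; }
    abstract double monthlyInterest();
}

class CheckingAccount extends Account {
    CheckingAccount(double balance) { super(balance); }
    double monthlyInterest() { return 0.0; }
}

class SavingsAccount extends Account {
    SavingsAccount(double balance) { super(balance); }
    double monthlyInterest() { return balance * 0.01; }
}
```

With an ORM's inheritance mapping, both subclasses can still be persisted into one table with a discriminator column, so the database layout need not dictate the object design.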
 
Ilja Preuss
author
Posts: 14112

Originally posted by Junilu Lacar:
For example, one symptom of this type of leaky abstraction that I have learned to look for is the existence of a type field in the business objects. This "smell" can indicate that the class that has "types" should be refactored into subclasses, one subclass for each "type".



Or a Strategy or something...
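Ilja's Strategy alternative can be sketched like this (names invented for illustration): instead of one subclass per "type", the varying behavior is pulled out into a small policy object that the class delegates to.

```java
// The varying behavior lives behind an interface...
interface InterestPolicy {
    double interestFor(double balance);
}

// ...and a single Account class delegates to whichever policy it was given,
// so no type field and no subclass explosion is needed.
class Account {
    private final double balance;
    private final InterestPolicy policy;

    Account(double balance, InterestPolicy policy) {
        this.balance = balance;
        this.policy = policy;
    }

    double monthlyInterest() {
        return policy.interestFor(balance);
    }
}
```

The Strategy variant is handy when the behavior can change at runtime or when subclassing is already spent on another axis of variation.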
 
Ilja Preuss
author
Posts: 14112

Originally posted by Warren Dew:
For some applications, hiring someone who doesn't understand databases and just tosses a bunch of fields together may indeed be "good enough"



Warren, I don't see anyone advocating doing this.

However, in any project where it makes sense to take the effort to get good programmers who can write quality code, I have a hard time believing that it makes sense to accept inferior quality in the database.



And still, even the best database developer we can get for a project is only human - he might make mistakes, he might make compromises because of an important deadline, or he might simply learn something new while working on the project.

Therefore I have a hard time believing that on a real project the database design will always be in a perfect state - that we will never find a single flaw, or a single place where the design could be improved a little. Database refactoring is exactly about not accepting such a flaw once it has been introduced, but about correcting it.
 
Ilja Preuss
author
Posts: 14112

Originally posted by Warren Dew:
Databases are about the data, code is about doing stuff with the data. Duplicated data in a database is like duplicated threads in a program. In my experience, when I fire off two threads where I really only wanted one, the result is generally unexpected behavior.



First, I don't fully buy this analogy. Second, if the two threads don't cause the system to malfunction, the system isn't broken.
 
Greenhorn
Posts: 9
Methods & Tools published an article about "Database Modelling in UML". The article offers a step-by-step approach to transforming a UML object-oriented class model into a purely relational database schema.

http://www.methodsandtools.com/archive/archive.php?id=9
 
Murasoli Maran
Ranch Hand
Posts: 193

Originally posted by Stan James:
On the other hand starting from data is dead simple, fast and hard to resist for stateless CRUD applications that project a database onto the screen with little behavior. Use the style that fits the task.



If there is a choice between a simple, effective way and an ambiguous way in which decisions can be argued endlessly, the wiser choice is the former. But the database design should be solid.

If something is simple, that doesn't mean it's not effective.

Remember, I am talking about database-driven applications. When designing games and other projects that are more functional and less data-centric, starting with OOAD will be better. Otherwise it's simply foolishness to start with OOAD.

The ER and relational models have their own strengths. Tables are designed around specific entities. Entities are clearly distinct from each other, so there is no ambiguity: they are separate things, related only through foreign keys.

And you can create objects according to a table, because tables are distinct entities. That is easier than designing OOAD objects with polymorphism. Polymorphism is good, but on the database side it causes more problems than it solves, and it requires corrections again and again. Time is also very important; one should think in all dimensions. Productivity is related to time, speed of thinking, and speed of decision making. If a decision is simple, making it still requires maturity. Fools realize the value of simplicity only after making lots of errors and doing tons of work. Selecting a simple and effective approach needs its own intelligence; running behind a trend doesn't require that.

What I suggest is: make JavaBeans according to the database tables, put these objects in a separate package, and use a DatabaseFacade as the interface to that package.

Design the domain model according to the functionality related to the business, and think in terms of patterns - patterns are cool, and I am not against OOAD; domain design will require it. But only use it from back to front: from the database to OOAD, not from OOAD to the database. Integrating with OOAD is easy, because JavaBeans created from database tables are very OOAD-friendly. Then connect to the database through the DatabaseFacade.

So the only entry point to the database from the domain model is the DatabaseFacade interface.

That way, supporting different databases doesn't require changes in the domain model; use the Adapter pattern for that.

Break a project into many small independent parts and keep coupling as low as possible; then redesigning will be very simple.

I have said what I am thinking. I have just started designing small-scale applications and have only two and a half years of experience in the software field. If anybody can convince me otherwise, good - but the discussion above hasn't convinced me to start with OOAD. I think that's only a pro-trend psychology.

Design is not completely science. It is a combination of talent and science.

MM
[ July 14, 2004: Message edited by: Murasoli Maran ]
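The DatabaseFacade idea in the post above can be sketched as follows. All names here are invented for illustration; a real implementation would delegate to JDBC or an ORM, and a per-database Adapter would take the same shape as the in-memory stand-in.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;

// The domain model talks only to this interface, so swapping databases
// means swapping implementations, not changing domain code.
interface DatabaseFacade {
    void saveCustomer(String id, String name);
    Optional<String> findCustomerName(String id);
}

// An in-memory stand-in: useful for tests, and the shape an Adapter
// for a specific database would take.
class InMemoryDatabaseFacade implements DatabaseFacade {
    private final List<String[]> rows = new ArrayList<>();

    public void saveCustomer(String id, String name) {
        rows.add(new String[] { id, name });
    }

    public Optional<String> findCustomerName(String id) {
        for (String[] row : rows) {
            if (row[0].equals(id)) {
                return Optional.of(row[1]);
            }
        }
        return Optional.empty();
    }
}
```

Because domain code depends only on the interface, a MySQL-backed or Oracle-backed adapter can replace the in-memory one without touching the domain model.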
 
Ilja Preuss
author
Posts: 14112

Originally posted by Murasoli Maran:
Remember,i am talking abt database driven applications.



Can you please explain what you mean by "database driven applications"? Thanks!
 