OOP considered harmful (was: C. Date on UML book)

 
Mapraputa Is
Leverager of our synergies
Posts: 10065
I think this discussion could be much more constructive if we could talk about *your* *experience* regarding OOP - what projects you did and where OOP failed *you*. Currently it is my impression that you *don't have* any experience.
"*don't have* any experience with OOP" - almost true, not any serious experience anyway, but with "*don't have* any experience" I would utterly disagree. Part of misunderstanding seem stem from the fact that I had different kind of experience, and from this my alternative viewpoint OOP seem overcomplicated. Too much craft and too little science. I do not know what kind of experience you have other than OOP, I was surprised that you found it natural that fundamentals get revisioned. What would you say if someone claimed he has better understanding of multiplication table every time he use it? Relational model is close to multiplication
table.
You said:
Well, relational normalization does only normalize data, so it seems to me. A big part of the OOP movement is the goal to also normalize behaviour. You are right that till now there isn't a mechanical way to do that, so we have to resort to patterns, refactorings and other design principles to guide us until we find one (if there even *is* one to find).
The relational model defines data as well as queries and updates on that data. What is left out? C. Date said not much - presentation and communication with the user. What do you think?
"You are right that till now there isn't a mechanical way to do that" - their normalization theory is rather mechanical; that's why I wondered how much of what OOP currently does could be covered by their model. I am sure there are applications like the aforementioned XML parsing, or scientific calculations, or compiler design, that are not a good fit for the RM, but a large part of mainstream programming should be amenable to it, it seems.
They said that the public is unaware of what the RM in fact is, and what I see only proves their claim. I checked the "Oracle DBA Bible" - a huge book - and there was one paragraph of four sentences about checking data integrity other than keys (they called it a "check constraint", if I am not mistaken): a check constraint can hold an arbitrary logical expression - something like that. Date claims that this is a central concept of his model!
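To make that concrete, here is a minimal sketch of such a constraint expressed through JDBC; the connection URL, the table and the columns are my own assumptions for illustration, not anything from the book:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

// Minimal sketch: a CHECK constraint holding an arbitrary logical expression,
// enforced by the database itself rather than by application code.
// The connection URL, table and column names are assumptions for illustration.
public class CheckConstraintSketch {
    public static void main(String[] args) throws SQLException {
        String jdbcUrl = args[0]; // pass a URL for any DBMS that supports CHECK constraints
        try (Connection con = DriverManager.getConnection(jdbcUrl);
             Statement st = con.createStatement()) {
            st.execute("CREATE TABLE employees ("
                     + " employee_id INT PRIMARY KEY,"
                     + " salary NUMERIC(10,2),"
                     + " bonus NUMERIC(10,2),"
                     + " CHECK (salary > 0 AND bonus <= salary * 0.5))");
            try {
                // violates the constraint, so the DBMS rejects it on its own
                st.executeUpdate("INSERT INTO employees VALUES (1, 1000, 900)");
            } catch (SQLException e) {
                System.out.println("Rejected by the database: " + e.getMessage());
            }
        }
    }
}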
 
Mapraputa Is
Leverager of our synergies
Posts: 10065
OO is a science, like biology or quantum physics. There was a time when a doctor was as likely to kill you as help you...
Good example. Perhaps putting it as "science vs. no science" states the distinction in too extreme a form. Yet there is a distinction. Until it was discovered that blood comes in different groups, transfusing blood from one person to another behaved erratically. Sometimes it worked OK, sometimes not. I even believe that some people were better at it than others. After it was discovered that only certain groups are compatible, the problem was basically solved. What it takes now is some training and hygiene.
In my experience the RM is this theory of blood groups. If it has a different domain than OOP - fine. But where they overlap, I believe the RM (at least as a theory) is superior.
 
Ilja Preuss
author
Posts: 14112
Mhh, now that I have slept on it and the thread seems to be regaining some deeper seriousness, I'd like to join again and contribute...

- I think this could be a nice way to develop what are called "domain-specific languages" - yet another alternative to OOP.

In Smalltalk you could easily write things like
[whatever processing we want to put here] while: '<(.*)>(.*)</\\1>' matches: reader getUserInput
You would do this by adding the while:matches: method to the Block class (I don't know Smalltalk very well, so the above probably isn't the proper syntax - some punctuation is missing, for example).

IMO, the OO solution is very clean for this problem. It's certainly much harder in Java - but just because it *isn't* a fully OO language; blocks aren't objects, for example.
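To see why it's harder in Java, here is a rough sketch of the same idea; because blocks aren't objects, both the "block" and the input source have to be simulated with interfaces, and the Body/InputSource names below are my own invention for illustration:

import java.util.regex.Pattern;

// Rough sketch of the Smalltalk idea in Java. Body plays the role of the block,
// InputSource the role of "reader getUserInput"; both are invented for illustration.
interface Body { void run(); }
interface InputSource { String next(); }

class WhileMatches {
    // keep executing the body as long as the latest input matches the pattern
    static void run(String regex, InputSource input, Body body) {
        Pattern pattern = Pattern.compile(regex);
        while (pattern.matcher(input.next()).matches()) {
            body.run();
        }
    }
}

A caller then has to write something like WhileMatches.run("<(.*)>(.*)</\\1>", reader, body), which is noticeably noisier than the Smalltalk one-liner.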


to quotes from books which often seem to be taken out of context.
Quotes always are taken out of context, by definition.


Ah, yes, of course...
I meant that it seemed to me as if you were also *interpreting* them out of the context of the book. I might be wrong...


It seems to me that Mr. Bloch basically said that composition is in all ways superior to inheritance, the only disadvantage is that it precludes callback functionality.


I think he said that it is superior *in flexibility*. OTOH, if you don't need that flexibility, inheritance can in fact be superior to composition in complexity (I don't know whether he mentions that).


Does that mean we get rid of inheritance now? It would simplify the language greatly.


Simplifying the language doesn't necessarily also simplify programming in it - otherwise we would all start programming in terms of Turing machines, wouldn't we?


Here I am going far outside my competence, so hopefully somebody will correct me if I miss something.
First, we get rid of the class-object distinction, since the only purpose of "instantiating" a class as an object was to call the superclass's constructor


No, the purpose of instantiating a class is to create an entity which represents a state. In a typical program, we have a fixed number of classes, but need a variable number of entities. Therefore we can't get rid of instantiating in some form.

we get rid of constructors. If we need to provide initialization, simple assignment will work. PL/1-like structures will do the job. Or are there other benefits of using constructors?


The main benefit of constructors I see is that they communicate what you have to do to get an object in a consistent state. In Smalltalk they are implemented as simple class methods.
I don't know about PL/1 structures and it seems hard to find something on the net - can you provide a small example?
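In the meantime, a small Java sketch of the point about constructors - the Account class and its fields are made up for illustration; the constructor states what a consistent object needs, and the Smalltalk-style class method corresponds to a static factory:

// Made-up example: the constructor spells out what a consistent Account needs.
class Account {
    private final String owner;
    private final long balanceInCents;

    Account(String owner, long balanceInCents) {
        if (owner == null || balanceInCents < 0) {
            throw new IllegalArgumentException("owner required, balance must be >= 0");
        }
        this.owner = owner;
        this.balanceInCents = balanceInCents;
    }

    // the Smalltalk "class method" style translates to a static factory in Java
    static Account emptyFor(String owner) {
        return new Account(owner, 0);
    }
}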


The hierarchy of interfaces is there because through interfaces we get polymorphic calls.


We can get rid of that, too - as all the dynamically typed languages show. You can resolve polymorphic method calls simply by matching method signatures.

There are mountains of problems we free ourselves from (the default no-argument constructor and what happens if the superclass doesn't have one, constructors must not invoke overridable methods, etc. etc. etc.)


These are again Java issues, not OO issues, I think.


I am going to walk through an example to see if this will work, so as not to be too theoretical... Ilja, you do not have to respond; if you think it's too theoretical and you are not interested, maybe somebody else will.


Well, currently it seems as if I were interested... :-)

Why my posts are always so long, I have no idea...


Might be worthwhile to think about it...
[ August 26, 2002: Message edited by: Ilja Preuss ]
 
Ilja Preuss
author
Posts: 14112

Originally posted by Mapraputa Is:
Currently it is my impression that you *don't have* any experience.
"*don't have* any experience with OOP" - almost true, not any serious experience anyway, but with "*don't have* any experience" I would utterly disagree.


Of course I meant "experience with OOP"! I never wanted to imply anything about your overall experience, sorry.

Part of the misunderstanding seems to stem from the fact that I have had a different kind of experience, and from that alternative viewpoint OOP seems overcomplicated. Too much craft and too little science. I do not know what kind of experience you have other than OOP


Well, many years of procedural programming (Basic, Pascal, C, Modula, Perl etc.), some minor excursions into functional (Opal) and declarative (Prolog) programming, as well as a little bit of Assembler.


I was surprised that you found it natural that fundamentals get revised.


I think what you need to take into account is that - though OOP isn't really *that* new - it has only recently become part of the mainstream, and not surprisingly the fundamentals get interpreted differently by people coming from different directions. OOP simply isn't well-explored enough yet for those things to have settled down.

What would you say if someone claimed he had a better understanding of the multiplication table every time he used it?


I would congratulate him, as I think learning is a good thing. But I have the feeling that I don't understand what you are getting at here...?

Well, relational normalization does only normalize data, so it seems to me. A big part of the OOP movement is the goal to also normalize behaviour. You are right that till now there isn't a mechanical way to do that, so we have to resort to patterns, refactorings and other design principles to guide us until we find one (if there even *is* one to find).
Relational model defines data as well as queries and updates on data. What is left out?


It's more a question of maintainability, extensibility and testability.
For example, how do you resolve duplicated logic in queries/updates? (I am thinking of the type of logic as in your while-matches example here.)
 
Ilja Preuss
author
Posts: 14112

Originally posted by Mapraputa Is:
Map, I don't see why you keep quoting Bloch to make a point -- he doesn't agree with you at all.
This is an interesting theore... ah, sorry, interesting practical problem. Is "agreement" symmetric, i.e. does
M a B => B a M
Or is it OK when M a B but not B a M?


Well, Bloch probably wouldn't agree that you fully agree with him, so if you still would fully agree with him, you would contradict yourself...

In the case of OOP (I should probably say, more carefully, the "Java version" of OOP) it struck me that you get problems even if you do not do anything that is advised against. And some problems only got discovered later, so the creators of the Java API thought they were doing perfectly good things, yet later their classes exhibit "erratic behavior", as Bloch put it, or, in other words, are incorrect.


I don't think that the goal of Java ever was to become the perfect OO language - after all, they were not targeting OO specialists, but the myriad of (mostly procedural) programmers struggling with the complexity of C++. There are many languages with much better implementations of OO concepts - but they are also commercially less successful.

I worked with relational databases and had a very satisfying experience; I've been missing SQL functionality ever since. Part of my rant against OOP stems from its apparent desire to do in a very complicated way what could be done with a DB with far less effort.


Time and again I read reports about problems with testing and maintaining stored procedures and the like. Possibly this is only because of a lack of experience and/or tools. I don't know anyone who is categorically dismissing RDBMSs as useful tools, OTOH.

Particularly, because I tend to trust people who can explain things in such a clear and strict manner, rather than define fundamental concepts as "something with responsibilities".


Interestingly, in my daily programming efforts I find this definition to be very helpful - much more effective than any more formal one I might have heard of.
[ August 26, 2002: Message edited by: Ilja Preuss ]
 
Frank Carver
Sheriff
Posts: 7001
I'm always a little wary of stepping into someone else's battle, but I have been thinking about this for the last few days and have come to a few tentative thoughts. Feel free to laugh, shout, throw things, etc.
First, Map and Ilja obviously each have a somewhat different "take" on the topic, and some of the heat in this discussion seems to come from confusion, despite them both being precise and eloquent in their own areas. I apologise in advance if I misunderstand either of the positions.
Map, you seem to be concentrating on programming as modelling (and the Relational Model in particular). In these terms a software system is defined as a description of a solution to a problem. This has a lot of support from literature, as it has been the predominant view throughout most of the history of software. (I shall call this the "static" view).
Ilja, you seem to be concentrating on programming as the actual process of development, which starts from nothing and produces systems to meet, and continue to meet, evolving requirements and environments. (I shall call this the "dynamic" view.)
Each of these two viewpoints embodies assumptions, which serve in part to "muddy the waters".
The static view embodies the assumption that there exists some way of evaluating whether one solution is more "correct" than another, hence Map's appeals to math, logic and set theory. From the dynamic view this is almost irrelevant. It might take ten times as long to decide on the "best" data model for a problem than it would to code an "almost the best" solution, and during that time the problem might have changed anyway.
The dynamic view embraces the idea that producing software is about solving real world problems in a "good enough" way. The assumption is that any workable solution is better than no solution or a late solution.
"dynamic" advocates have a tendency to think of "static" advocates as impractical, ivory-tower academics, who never actually get around to producing any code.
"static" advocates have a tendency to think of "dynamic" advocates as clueless hackers who churn out any old rubbish without thinking about it.
Neither of these stereotypes is wholly true, of course.
Now, on to the subject at hand. While relational data modelling may be both theoretically and practically powerful, it does have one immense drawback. Changing such a model is not something that can be done lightly. If you've ever had to maintain and update a complex relational data model, and all the supporting code which assumes its structure you'll know what I mean.
The well-known "third normal form" is a great way to model a fully-understood problem. But if that problem changes, then huge swathes of table definitions and data will probably also need to change. If you decide to plan for that sort of change, you may decide to choose one of the less-specific higher normal forms, but these come with an overhead of less understandability and decreased performance.
After a bit of this, it's easy to become sick of tiny changes in the model rippling throughout the code, and want to design systems which isolate such decisions as much as possible. One of the destinations of this train of thought is the idea of describing interfaces between areas of design/code in terms of the behaviour they offer rather than the data they model.
This sort of experience is why many "dynamic" advocates favour Object-Oriented, Functional, or List-Processing styles. They all share this ability to concentrate on behaviour in an attempt to limit unnecessary code and data changes.
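A tiny Java illustration of that contrast (the names are invented): the interface below commits only to the behaviour it offers, so the tables or objects behind it can change without the change rippling outward.

// Invented example: callers depend only on the behaviour offered here, not on
// whether it is backed by one table, three tables, or no database at all.
interface CustomerDirectory {
    boolean exists(String customerId);
    void recordPurchase(String customerId, long amountInCents);
}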
To summarise:
A Relational Model is a fine thing for an essentially static problem. If the data is well understood and unlikely to change, then it may well be worth the effort building a good relational model.
If the problem is (and be honest here) less than fully understood, transient, or likely to change, then a style which emphasises simplicity of changes may well be a better approach.
Now, what have *I* misunderstood?
 
Ilja Preuss
author
Posts: 14112

Originally posted by Frank Carver:
I'm always a little wary of stepping into someone else's battle, but I have been thinking about this for the last few days and have come to a few tentative thoughts. Feel free to laugh, shout, throw things, etc.


Well, may I also feel free to thank you for sharing your elaborate thoughts?


Ilja, you seem to be concentrating on programming as the actual process of development, which starts from nothing and produces systems to meet, and continue to meet, evolving requirements and environments. (I shall call this the "dynamic" view.)


Well, I would prefer to start at least from some sort of "vision" instead of nothing, but OK...

The dynamic view embraces the idea that producing software is about solving real world problems in a "good enough" way. The assumption is that any workable solution is better than no solution or a late solution.


Here I disagree. I don't think that "good enough" really is good enough. But I also think that you can't get to the ideal solution by pure thinking - you have to try solutions to learn from your experiences. Well, possibly in RM the experiences are so well understood that you really can formalize them and apply them mechanically (though I'd rather doubt it).
Certainly in OOP we don't have this kind of formalism, so you don't get to the ideal solution by purely mechanical thinking - you rather converge on it by analyzing your current solution and adjusting it according to what you learned.
OTOH, *there are* some principles I like to apply rather stringently when doing OOP. http://www.objectmentor.com/resources/listArticles?key=topic&topic=Design%20Principles is a good resource in this regard. Additionally, there are very helpful heuristics, such as Code Smells and the accompanying Refactorings. All of this is still under discussion in the community, admittedly.

Now, on to the subject at hand. While relational data modelling may be both theoretically and practically powerful, it does have one immense drawback. Changing such a model is not something that can be done lightly. If you've ever had to maintain and update a complex relational data model, and all the supporting code which assumes its structure you'll know what I mean.
The well-known "third normal form" is a great way to model a fully-understood problem. But if that problem changes, then huge swathes of table definitions and data will probably also need to change. If you decide to plan for that sort of change, you may decide to choose one of the less-specific higher normal forms, but these come with an overhead of less understandability and decreased performance.


Of course that might be just because of a lack of "refactoring" tools for relational models.
In an XP context, Ron Jeffries once reported on a project where they started to refactor the database model from day one, over time building the tools they needed to make that a painless effort. It sounded as if the team got very good at transforming the database at will.
OTOH, that *was* an OO project - and probably there wasn't much logic in the database...
 
Frank Carver
Sheriff
Posts: 7001
Ilja wrote: I don't think that "good enough" really is good enough.
I guess this is an example of problems with communication. In purely grammatical terms, "good enough" is always good enough - they are the same words! I didn't intend any slur, but somehow "good enough" seems to have taken on aspects of an insult.
For me, "good enough" is the goal to aim for. I want any system I produce to be good enough that the customer is satisfied. I want any code I produce to be good enough that it can be maintained and updated as needs change. I want any process I use to be good enough to produce product on time and in budget.
How or why I might manage to make it "better than good enough" in one area without compromising another, and how I might honestly know that it is "better" is something which is still not clear to me.
To me, "good enough" is "good" with acceptance tests
OTOH, *there are* some principles I like to apply rather stringently when doing OOP. http://www.objectmentor.com/resources/listArticles?key=topic&topic=Design%20Principles is a good resource in this regard.
Looks interesting. I shall spend some time studying that list.
 
Mapraputa Is
Leverager of our synergies
Posts: 10065
I'm always a little wary of stepping into someone else's battle,
It's not someone else's battle, everybody is invited to participate! I was worried that it was starting to look like "Ilja vs. Map" only - thank you for your contribution!
I'll write more later, but one observation... As usual, after all misunderstandings and miscommunications are cleared up, there is almost nothing left to argue about
 
Ilja Preuss
author
Posts: 14112

Originally posted by Frank Carver:
Ilja wrote: I don't think that "good enough" really is good enough.
I guess this is an example of problems with communication. In purely grammatical terms, "good enough" is always good enough - they are the same words! I didn't intend any slur, but somehow "good enough" seems to have taken on aspects of an insult.


Well, I didn't take it as an insult, I just disagreed... But I think I have to qualify:
When we are speaking about the *external* qualities of software, I am OK with "good enough". After all, as a developer it isn't my responsibility to tell the business people what to spend their money on.
But it is *my* responsibility to be as effective as possible when working for their money. Now, some people might claim that the internal quality also only has to be "good enough" - that working hard at preventing bugs in a module the customer doesn't care that much about, or improving a design that already works, is wasting time and money. It is my belief - backed up by my slight experience and by statements of much more experienced people I highly respect - that producing high quality software is more cost effective than producing "good enough" software.
I think there are several forces at work:
- you don't really know how the project will evolve, what will be important to the customer in a week. Probably even the customer doesn't know for sure. Fixing the then-important things later will cost much more than fixing them now - possibly even more than fixing *them all* now. (Or, in other words: having high internal quality makes it possible to be very flexible about the external quality.)
- the "Broken Window" effect: http://c2.com/cgi/wiki?FixBrokenWindows
- pride in the product - knowing that you produce something of high quality - is a big motivating factor for me (and, things I read suggest, for many other developers too), probably more motivating than anything else


OTOH, *there are* some principles I like to apply rather stringently when doing OOP. http://www.objectmentor.com/resources/listArticles?key=topic&topic=Design%20Principles is a good resource in this regard.
Looks interesting. I shall spend some time studying that list.


I think it is time to confess that I haven't read them all in their entirety yet (though I know the concepts at least sketchily from other sources), let alone incorporated them into my working habits - so that should rather have read "principles I *would* like to apply"... :roll:
 
David Weitzman
Ranch Hand
Posts: 1365
OO code can be proven to work, and in fact they kept making me do these awful proofs by structural induction all summer long. OO need not be limited to experimentation and 'good enough'.
The dudes at NASA probably don't do a huge amount of OO for performance reasons, but the dudes at companies like Microsoft and Sun are betting everything on the fact that their OO software for managing an enterprise won't suddenly lose track of all their money.
 
Frank Carver
Sheriff
Posts: 7001
Ilja wrote: I think there are several forces at work:
- you don't really know how the project will evolve, what will be important to the customer in a week. Probably even the customer doesn't know for sure. Fixing the then-important things later will cost much more than fixing them now - possibly even more than fixing *them all* now. (Or, in other words: having high internal quality makes it possible to be very flexible about the external quality.)
- the "Broken Window" effect: http://c2.com/cgi/wiki?FixBrokenWindows
- pride in the product - knowing that you produce something of high quality - is a big motivating factor for me (and, things I read suggest, for many other developers too), probably more motivating than anything else

Ah, but that just shows that you have a set of "acceptance tests" for your own code which include these things. All I'm claiming is that there has to be some sort of "enough" for every "good". Otherwise you'd never know when to stop.
How do you, personally, know when to stop refactoring and move on to new functionality? Any answer other than "I never stop" (which I wouldn't believe anyway), shows you have some sort of "enough" for that "good".
How do you know when to stop writing unit tests for a piece of new functionality and deliver the code?
And so on.
 
Ilja Preuss
author
Posts: 14112

Originally posted by David Weitzman:
OO code can be proven to work, and in fact they kept making me do these awful proofs by structural induction all summer long. OO need not be limited to experimentation and 'good enough'.


My point is exactly that code isn't good enough just because it works - except when it really won't be touched anymore. In fact, most of the code is not for the computer, but for people - the computer doesn't care how you name your classes, how big your methods are and how your code is formatted. But developers care - and those who have to maintain your code certainly do!
 
Ranch Hand
Posts: 86
My 2 pence
OOP makes it easier for humans to reason about systems and programs that in some way model them.
I think a system with large amounts of SQL would be really hard to support.
Tom
 
Ilja Preuss
author
Posts: 14112

Originally posted by Frank Carver:
How do you, personally, know when to stop refactoring and move on to new functionality? Any answer other than "I never stop" (which I wouldn't believe anyway), shows you have some sort of "enough" for that "good".


Well, when I am working on already well factored code, I stop when I don't see any more ways to improve it. I don't stop because I think it shouldn't be improved more, but because I don't know how to do it. Of course, possibly this is because it *is* in the best possible state, but how should I know? In fact, that is probably not the way to bet, anyway...
When I get an idea of how the code could be improved further (say, when standing in the shower in the morning), I will grab the next opportunity to do the refactoring.
When working on a big ball of mud (as I currently do), it's basically the same, though I also pay attention to frequently delivering value in the form of new functionality.

How do you know when to stop writing unit tests for a piece of new functionality and deliver the code?


That is a little bit more tricky - I have to think about it...

And so on.


Yes - please go on!
 
Frank Carver
Sheriff
Posts: 7001
Well, when I am working on already well factored code, I stop when I don't see any more ways to improve it. I don't stop because I think it shouldn't be improved more, but because I don't know how to do it. Of course, possibly this is because it *is* in the best possible state, but how should I know? In fact, that is probably not the way to bet, anyway
So, you admit that you stop refactoring when you reach some personal limit.
I didn't really want to get into a definition match with this. My intended contrast was between the sort of practical approach which admits that sometimes you must stop before perfection in order to proceed, and the converse, that perfection is a more important end in itself.
Maybe it's just that I've seen too many projects repeatedly delayed by people fussing with models based on abstract principles rather than learning by doing.
That is a little bit more tricky - I have to think about it
If it helps, the stock answer from the XP folks seems to be "keep writing tests until fear turns to boredom". It's very subjective, but illustrates the point, I guess.
 
Mapraputa Is
Leverager of our synergies
Posts: 10065
IMO, the OO solution is very clean for this problem. It's certainly much harder in Java - but just because it *isn't* a fully OO language; blocks aren't objects, for example.
Not sure if this should be considered part of a fully OOP language... I read that this is called "meta-level programming". Perhaps these are orthogonal concepts. Microsoft had an interesting project called "Intentional Programming", but then they seem to have cancelled it for unknown reasons, and I cannot find any information on the Internet besides "does anybody know what happened to IP?"
Charles Simonyi. "The Death Of Computer Languages, The Birth of Intentional Programming".
ftp://ftp.research.microsoft.com/pub/TR/Tr-95-52.doc
or
http://citeseer.nj.nec.com/simonyi95death.html
"Finally the death of syntax?"
Smalltalk's Reflection system is wonderful. And Java's Reflection is like a virgin - read-only
I think he said that it is superior *in flexibility*. OTOH, if you don't need that flexibility, inheritance can in fact be superior to composition in complexity (I don't know whether he mentions that).
It doesn't look that way to me... There is a whole section about what to do if you decide to allow inheritance, and it definitely doesn't look "simpler" to me, not from the developer's side anyway. For a client of the class it *is* easier to say "extends" and get all instance methods "for free" rather than to copy method definitions and insert method calls, but if we decided to get rid of inheritance, I suppose we could redefine the "extends" clause semantics so that it would do all this automatically.
I think we should call our refactored version of Java "Josh" - from "Java, Joshua" and with overtones of "Gosh!!!..."
Thinking about an example: the "Animal" class/interface is too contrived; we could argue for a long time about whether it should be extended or not and whether it should allow treating "people" as "animals" or not... I was thinking about extending an XML parser. There is such a thing as a CDATA section in XML, which according to the specification must not be parsed and must be sent to the application basically "as is". CDATA sections are often used to hold JavaScript, so we want our parser to perform a check on whether the JavaScript syntax is correct :roll:
That I am not sure how DOM parsers work should definitely help :roll:
Simplifying the language doesn't necessarily also simplify the programming in it - otherwise we would all start programming in terms of Turing machines, wouldn't we?
Sure! But this deserves a separate post...
No, the purpose of instantiating a class is to create an entity which represents a state. In a typical program, we have a fixed number of classes, but need a variable number of entities. Therefore we can't get rid of instantiating in some form.
A state of what? The class? Why can't variables represent it? And why do we need a variable number of entities?
The main benefit of constructors I see is that they communicate what you have to do to get an object in a consistent state. In Smalltalk they are implemented as simple class methods.
Ah, I meant: what distinguishes constructors from "normal" methods technically? A simple initializing method will work, of course.
I don't know about PL/1 structures and it seems hard to find something on the net - can you provide a small example?
Just named combinations of other variables, which can be arrays or other structures. Finkel gives this definition:
"Structured types are built out of other types. Arrays, records, and pointers are structured types.
record
    Third: character;
    Fourth: integer;
    Fifth: (Apple, Durian, Coconut, Sapodilla, Mangosteen)
end;"
We can get rid of that, too - as all the dynamically typed languages show. You can resolve polymorphic method calls simply by matching method signatures.
Actually this was my initial idea, but then I thought interfaces can serve as a level of indirection, isolating us from knowing the concrete implementation. Instead of referring to methods by the classes they belong to, we refer to them by interfaces, so implementations can vary independently. I am not sure I understood you right, but it seems that "resolving polymorphic method calls simply by matching method signatures" doesn't provide this effect.
For example, how do you resolve duplicated logic in queries/updates? (I am thinking of the type of logic as in your while-matches example here.)
What is "duplicated logic"? Ah, you mean if I have a query and I want to use its result in other queries?
Well, Bloch probably wouldn't agree that you fully agree with him, so if you still would fully agree with him, you would contradict yourself...
I am in no position either to agree or to disagree with Bloch; all I can do is quote him with awe. It's like having a reference to a "Bloch" object and redirecting all my method calls to it...
Time and again I read reports about problems with testing and maintaining stored procedures and the like. Possibly this is only because of a lack of experience and/or tools. I don't know anyone who is categorically dismissing RDBMSs as useful tools, OTOH.
"Stored procedures" are officially considered harmful! People use them to improve performance, and Date argues that this is not how performance should be improved. Unfortunately, my knowledge ends here, so I am unsure how exactly performance should be improved. "Categorically dismissing" - no, the point was that the application programmer should not handle business rules; this whole task should be left to the RDBMS, and currently it is not so.
[ August 26, 2002: Message edited by: Mapraputa Is ]
 
Mapraputa Is
Leverager of our synergies
Posts: 10065
The static view embodies the assumption that there exists some way of evaluating whether one solution is more "correct" than another, hence Map's appeals to math, logic and set theory. From the dynamic view this is almost irrelevant. It might take ten times as long to decide on the "best" data model for a problem than it would to code an "almost the best" solution, and during that time the problem might have changed anyway.
First, we need to separate two tasks:
1) developing a programming language
2) developing an application in this language
My appeal to math, logic and set theory is particularly strong when we are talking about task #1. I do not see any reason not to put language development on solid ground. What is there in a language that warrants fuzziness? A language is a formal system and as such is susceptible to examination and proof by formal methods.
Application development is different in many ways. Its practitioners aren't expected to have a good knowledge of "math, logic and set theory"; requirements are often fuzzy, incorrect, incomplete, ever-changing, etc. etc. Finally, there may simply not be enough time to develop the best possible design. There are many other problems, I am sure. Application developers use "math, logic and set theory" indirectly, maybe without thinking or even knowing about it - like that character in literature (I have no idea how to spell his creator's name) who was speaking prose and did not know it. Application developers use "math, logic and set theory" when their tools - languages - are based on them. Look at XPath, look at RegExp: they all provide abstractions built as combinations of a few logical/set operations, and those abstractions haven't departed that far from their "theoretical" ground; they are still recognizable. OOP artifacts are also built of sets (what else could they be built of?), but they have departed so far from the language of sets that it is very hard to trace them back.
The well-known "third normal form" is a great way to model a fully understood problem.
I do not see a big difference between modeling data in "normal forms" and modeling it in terms of OOP. In both cases you need to decide which fields (pieces of data) are there, what the relations between them are, what should happen if this value changes, what should happen if that entity gets deleted - you do all the same with OOP, I believe. The only difference is that with the RM you express constraints in declarative form, while with OOP you need to program all the logic yourself. And in my impression the process of normalization is an order of magnitude simpler than OOP design.
But if that problem changes, then huge swathes of table definitions and data will probably also need to change.
Did you ever have to perform radical changes in a design? In my experience, some details always change, but the main entities and the relationships between them survive. And if you need to perform drastic changes in a design with OOP, I suspect it's not a trivial task either?
And incidentally, speaking about changes (maybe OOP will change this, I do not know), the main problem I had was not changing the software - that wasn't so difficult - but migrating megabytes of data to the new version; this was a really labor-intensive task. Ironically, the easiest time I had was when we worked with truly relational DBs - you could use all the power of SQL to construct new tables out of the old ones, and then, if everything was OK, delete the old ones. The further from SQL, the more "data conversion" programs I had to write: open a file, set up a loop, read a record, do some processing... OMG. It would be OK if I did not know how much easier it could be.
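A minimal sketch of that kind of set-based migration, assuming an open java.sql.Statement and made-up table and column names; many DBMSs accept this CREATE TABLE ... AS SELECT form, though the exact syntax varies:

import java.sql.SQLException;
import java.sql.Statement;

// Sketch of a set-based migration: build the new table from the old one in a
// single statement instead of a hand-written read-record/convert/write loop.
// Table and column names are invented; the DDL syntax varies between DBMSs.
class MigrationSketch {
    static void migrate(Statement st) throws SQLException {
        st.execute("CREATE TABLE customers_v2 AS "
                 + "SELECT customer_id, UPPER(country) AS country_code, balance "
                 + "FROM customers");
        // ...verify the new table, then retire the old one:
        st.execute("DROP TABLE customers");
    }
}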
If you decide to plan for that sort of change, you may decide to choose one of the less-specific higher normal forms, but these come with an overhead of less understandability and decreased performance.
The main problem with "less-specific normal forms" is that your database is in a logically inconsistent state because of redundancy, which will easily yield incorrect results on queries. To keep the data intact you now need to implement checks yourself, instead of the database doing it for you. I am not sure this helps to plan for changes...
Speaking about changes, there is one interesting difference: whether you consider changes in the RM easy or difficult, they are all equally easy or equally difficult - there is no intrinsic mechanism to plan for certain kinds of changes. In contrast, patterns in Java seem to make certain kinds of changes easier than others.
"Another problem of design patterns is that they allow for adaptability, but only if the need for certain adaptability was anticipated and the appropriate design patterns were applied in the design phase. This is referred to as "the preplanning problem."
Krzysztof Czarnecki, Ulrich W. Eisenecker, "Generative programming", p. 295
- I read this only today and was puzzled by how different it is from all the hype that "patterns allow for changes" or some such. Definitely a more critical approach wouldn't hurt our industry
So in this sense Thomas Paul was very right in what he said about up-front design.
Later the book shows that "the preplanning problem" doesn't exist in Smalltalk because of its wonderful reflection system.
(By the way, if you are interested in all kinds of exotic languages and concepts, then it's a great book! Hm, maybe I need to write a review for our bunkhouse... Unfortunately, it requires some theoretical knowledge, and a lot of the stuff there is simply above me.)
If the problem is (and be honest here) less than fully understood, transient, or likely to change, then a style which emphasises simplicity of changes may well be a better approach.
I wrote a lot here and then deleted it. The problem is that we have no scale to measure and compare "easiness". In particular, I believe that a well-designed OOP application is easy to maintain, extend and all that. What about a badly designed one? I have heard the opinion that there is a huge difference in quality between OOP applications, and that badly designed ones are a nightmare to maintain. Now the question: how many well/badly designed OOP applications are out there? Ilja blamed college education and said he expects students to actively look for better information. How many will, do you think? The top 10%? My impression is that OOP is really for the top 10% of programmers; the rest will only bring havoc...
[ August 27, 2002: Message edited by: Mapraputa Is ]
 
Mapraputa Is
Leverager of our synergies
Posts: 10065
Turing machines and OOP languages
What are the most important things that distinguish programming languages? What sort of
1) basic abstractions (building blocks)
2) glue - ways to combine these abstractions into new ones
they provide.
How can we decide what kind of "basic abstractions" is best? I can sketch three categories:
a) most basic = most universal in a purely theoretical sense. Nothing is more universal than sets, so sets, their immediate offshoots (lists, trees...), and set operations (projection, union) = the Relational Model and Relational Algebra. XPath, the language for querying trees, and RegExp, the language for querying otherwise unstructured text, fit here.
b) abstractions specific to certain domains. All so-called "domain-specific languages" fit here. It is said that using a DSL (if there is one) significantly increases productivity.
c) abstractions "natural" to people's way of thinking. Here I must say that abstract thinking isn't natural to people at all. The most natural way of dealing with an abstraction is a metaphor, which often expresses something abstract in terms of something more concrete. Very often this "concrete" is spatial relations. "Metaphors We Live By" gives this example:
"Harry is in the kitchen
Harry is in Elks
Harry is in love
The sentences refer to three different domains of experience: spatial, social and emotional. <...> the concept "IN" in the first sentence emerges directly from spatial experience in a clearly delineated fashion. ... The other two sentences, however, are instances of metaphorical concepts."
My main problem with OOP languages is they apparently do not fit any of the categories above.
Their abstractions are too far from basic, not specific to any particular domain, and too abstract (=unnatural) to people's way of thinking.
 
Ilja Preuss
author
Posts: 14112

Originally posted by Mapraputa Is:
[b]"Another problem of design patterns is that they allow for adaptability, but only if the need for certain adaptability was anticipated and the appropriate design patterns were applied in the design phase. This is referred to as "the preplanning problem."
Krzysztof Czarnecki, Ulrich W. Eisenecker, "Generative programming", p. 295
- I read this only today and was puzzled by how different it is from all the hype that "patterns allow for changes" or some such. Definitely a more critical approach wouldn't hurt our industry
So in this sense Thomas Paul was very right in what he said about up-front design.


I disagree. The conclusion would be correct if you had to stick with the patterns you decided upon upfront. But you don't - well designed and -tested OO systems are rather easy to refactor.

Later the book shows that "the preplanning problem" doesn't exist in Smalltalk because of its wonderful reflection system.


I don't get this - could you elaborate?

Ilja blamed college education and said he expects students to actively look for better information. How many will, do you think? Top 10%? My impression is that OOP is really for top 10% of programmers; rest will only bring havoc...


Well, a simple solution would be to educate them yourself. All it would take is having at least *some* experienced OO programmers and a collaborative working environment where the others could learn from them.
 
Ilja Preuss
author
Posts: 14112

Originally posted by Mapraputa Is:
My main problem with OOP languages is they apparently do not fit any of the categories above.
Their abstractions are too far from basic, not specific to any particular domain, and too abstract (=unnatural) to people's way of thinking.


And nevertheless there are many people (including me) who find the abstractions of OOP to be very helpful for writing good software. I wonder whether there might be a fourth category missing...
Perhaps we can get to it by thinking about how the following (more or less random) concepts fit into the categories:
- for-loops
- procedures
- modules
- function-pointers
- if-statements
- IO handling
- multi-threading
[ August 27, 2002: Message edited by: Ilja Preuss ]
 
Ilja Preuss
author
Posts: 14112


I think he said that it is superior *in flexibility*. OTOH, if you don't need that flexibility, inheritance can in fact be superior to composition in complexity (I don't know whether he mentions that).
It doesn't look that way to me... There is a whole section about what to do if you decide to allow inheritance, and it definitely doesn't look "simpler" to me, not from the developer's side anyway. For a client of the class it *is* easier to say "extends" and get all instance methods "for free" rather than to copy method definitions and insert method calls, [...]


But the client of the class will be a developer, too - most often from the same team, often even *the same* developer!

but if we decided to get rid of inheritance, I suppose we could redefine "extends" clause semantics so it would do all this automatically.


I am confused - I think this is exactly what "extends" is doing: implicitly forwarding all calls to inherited methods to the superclass.
Again, I suspect that it is rather hard to speculate about the balance between inheritance and composition without having experienced it...
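Still, a minimal Java sketch of the trade-off may help; the classes are invented for illustration. Inheritance gets the delegation implicitly, while composition has to spell out each forwarding method - more code, but no hidden coupling to the superclass's own internal calls:

import java.util.ArrayList;
import java.util.List;

// Invented example: inheritance gets the behaviour implicitly...
class CountingListA extends ArrayList<String> {
    int adds = 0;
    @Override public boolean add(String s) { adds++; return super.add(s); }
}

// ...while composition must forward every call it wants to expose,
// but no longer depends on how ArrayList's methods call each other.
class CountingListB {
    private final List<String> delegate = new ArrayList<String>();
    int adds = 0;
    boolean add(String s) { adds++; return delegate.add(s); }
    int size() { return delegate.size(); }
    // every other List method we want has to be forwarded explicitly
}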

There is such a thing as a CDATA section in XML, which according to the specification must not be parsed and must be sent to the application basically "as is". CDATA sections are often used to hold JavaScript, so we want our parser to perform a check on whether the JavaScript syntax is correct :roll:
That I am not sure how DOM parsers work should definitely help :roll:


The responsibility of a DOM parser is to parse XML - nothing more. It shouldn't have to know anything about JavaScript. So here, delegation certainly would be the proper solution.
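A rough sketch of that delegation in Java - everything here is invented for illustration (neither CdataListener nor ScriptChecker is a real parser API): the decorator hands the CDATA text to a separate checker instead of teaching the parser about JavaScript.

// Invented for illustration: a listener the (hypothetical) parser calls for each
// CDATA section, plus a decorator that delegates the JavaScript check to a
// separate collaborator instead of building it into the parser.
interface CdataListener {
    void cdata(String text);
}

interface ScriptChecker {
    boolean looksLikeValidJavaScript(String source);
}

class ScriptCheckingListener implements CdataListener {
    private final CdataListener next;     // the listener being decorated
    private final ScriptChecker checker;  // the delegated responsibility

    ScriptCheckingListener(CdataListener next, ScriptChecker checker) {
        this.next = next;
        this.checker = checker;
    }

    public void cdata(String text) {
        if (!checker.looksLikeValidJavaScript(text)) {
            System.err.println("Suspicious CDATA section: " + text);
        }
        next.cdata(text); // the parsing behaviour itself stays untouched
    }
}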

No, the purpose of instantiating a class is to create an entity which represents a state. In a typical program, we have a fixed number of classes, but need a variable number of entities. Therefore we can't get rid of instantiating in some form.
A state of what? Class? Why variables cannot represent it?


A state of anything the system is about - the state of a customer, for example.
Yes, a variable can represent this. A variable *is* an instance of some form of class.

And why do we need a variable number of entities?


Because, in the above example, we don't know how many customers the system will have to handle.

I don't know about PL/1 structures and it seems hard to find something on the net - can you provide a small example?
Just named combinations of other variables, which can be arrays or other structures. Finkel gives this definition:
"Structured types are built out of other types. Arrays, records, and pointers are structured types.
record
    Third: character;
    Fourth: integer;
    Fifth: (Apple, Durian, Coconut, Sapodilla, Mangosteen)
end;"


Oh, like a C struct. Simply put, a class without behaviour...
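For comparison, the nearest Java equivalent of the record above would be something like this (field names taken from the Finkel fragment; the enum is my own rendering of the enumerated type):

// Roughly the Java equivalent of the record above: data only, no behaviour.
class RecordExample {
    enum Fruit { APPLE, DURIAN, COCONUT, SAPODILLA, MANGOSTEEN }

    char third;
    int fourth;
    Fruit fifth;
}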


We can get rid of that, too - as all the dynamically typed languages show. You can resolve polymorphic method calls simply by matching method signatures.
Actually this was my initial idea, but then I thought interfaces can serve as a level of indirection, isolating us from knowing the concrete implementation. Instead of referring to methods by the classes they belong to, we refer to them by interfaces, so implementations can vary independently. I am not sure I understood you right, but it seems that "resolving polymorphic method calls simply by matching method signatures" doesn't provide this effect.


Yes, it does, as we no longer refer to either interfaces or classes, but only to the methods we expect an object to have.
For example, a method that simply sends close() to whatever it is given could be called with any object that understands the close() message - a Stream, a Socket, a Reader etc. - without the need for implementing a common Closeable interface.
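In Java, the closest approximation is reflection; a hedged sketch (closeQuietly is an invented helper, not a library method):

import java.lang.reflect.Method;

// Invented helper: approximates "send close() to whatever understands it" by
// looking the method up at runtime instead of requiring a Closeable interface.
class DuckClose {
    static void closeQuietly(Object resource) {
        try {
            Method close = resource.getClass().getMethod("close");
            close.invoke(resource);
        } catch (Exception e) {
            // either the object has no close() method or closing failed;
            // for this sketch we simply ignore it
        }
    }
}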

For example, how do you resolve duplicated logic in queries/updates? (I am thinking of the type of logic as in your while-matches example here.)
What is "duplicated logic"? Ah, you mean if I have a query and I want to use its result in other queries?


No, I mean what do you do if two queries have similar, but not identical structure?
[ August 27, 2002: Message edited by: Ilja Preuss ]
 
Ilja Preuss
author
Posts: 14112
Editing my post somehow lost this part, so again...

Originally posted by Mapraputa Is:
IMO, the OO solution is very clean for this problem. It's certainly much harder in Java - but just because it *isn't* a fully OO language; blocks aren't objects, for example.
Not sure if this should be considered a part of a fully OOP language...


I think it is useful enough that it should.
BTW, who said that in OO "*everything* is an object"?
 
Ilja Preuss
author
Posts: 14112

Originally posted by Frank Carver:
Well, when I am working on already well factored code, I stop when I don't see any more ways to improve it. I don't stop because I think it shouldn't be improved more, but because I don't know how to do it. Of course, possibly this is because it *is* in the best possible state, but how should I know? In fact, that is probably not the way to bet, anyway
So, you admit that you stop refactoring when you reach some personal limit.


Yes, but it is a limit I find in *me*, not in the code...


I didn't really want to get into a definition match with this. My intended contrast was between the sort of practical approach which admits that sometimes you must stop before perfection in order to proceed, and the converse, that perfection is a more important end in itself.
Maybe it's just that I've seen too many projects repeatedly delayed by people fussing with models based on abstract principles rather than learning by doing.


OK, I think I see the misunderstanding here: for me, modeling is not something I do *before* writing code, but mainly *by* writing (and refactoring) code. So I don't have to stop before perfection to proceed - I converge on perfection *by* proceeding. I wouldn't even have a clue about which direction perfection lies in without getting "real world feedback" on my current not-so-perfect solution.
There are times, for example, when I don't see clearly in which direction the design wants to be refactored. In such situations most often it helps just to implement the next little bits of functionality, factor the new code into the design - and the desire of the code becomes more and more obvious.


That is a little bit more tricky - I have to think about it
If it helps, the stock answer from the XP folks seems to be "keep writing tests until fear turns to boredom". It's very subjective, but illustrates the point, I guess.


Mhh, the point is so subtle that it took me some "warming up" to find it: Someone thinking that "good enough" software is sufficient probably doesn't fear bugs as much as I think he should.
For me, every bug that is not found by my tests is an indication that the tests obviously were not good enough - so I try to learn how to write tests that will catch these types of bugs in the future.
[ August 27, 2002: Message edited by: Ilja Preuss ]
 
Thomas Paul
mister krabs
Posts: 13974
I just wanted to comment on Date's article. I think that Date is looking for generalities where application-dependent implementations are called for. The ELLIPSE-CIRCLE debate is interesting but purely academic and meaningless. Whether CIRCLE should inherit from ELLIPSE depends on the business requirements more than anything else. Do I actually need to implement a CIRCLE object? Do my business requirements say that all CIRCLE objects must be treated differently from ELLIPSE objects? (For example, every CIRCLE must be green and centered on the page.) If not, why even have a CIRCLE object? Isn't a CIRCLE simply an ELLIPSE that has two variables equal?
All this stuff is rather amusing, but last I checked we write applications for real users with real business requirements. Don't their requirements get factored into our application design? Or are we all just too advanced to worry about real-life implementations?
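For what it's worth, that reading translates into Java roughly like this - a sketch of the "a circle is just an ellipse with equal axes" idea, not a claim about how Date would model it:

// Sketch: no Circle subclass, just an Ellipse whose two axes happen to be equal.
class Ellipse {
    private double a;   // semi-major axis
    private double b;   // semi-minor axis

    Ellipse(double a, double b) { this.a = a; this.b = b; }

    static Ellipse circle(double radius) {
        return new Ellipse(radius, radius);
    }

    boolean isCircle() { return a == b; }

    void stretchHorizontally(double factor) {
        a = a * factor;   // may turn a circle into a mere ellipse, and that's fine
    }
}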
 
Thomas Paul
mister krabs
Posts: 13974
Well, when I am working on already well factored code, I stop when I don't see any more ways to improve it.
Yikes! I don't want you working on my systems.
Managers have to worry about real-life things like time and money that get in the way of producing the little gems of code that programmers like to write. I write code that passes QA and is the best that I can produce given the constraints of time and budget.
OOP helps here because my code is much more likely to be maintainable if I use standard OOP techniques.
[ August 27, 2002: Message edited by: Thomas Paul ]
 
Mapraputa Is
Leverager of our synergies
Posts: 10065
"No, I mean what do you do if two queries have similar, but not identical structure?"
I suspect we may be talking about the same thing. And this is a very interesting question!
"Similar but not identical" - in other words, there are parts that are the same and parts that are not? Mmmm... it may look different from your perspective... Let's see how it would be in SQL.
I have a query to find all employees working in IT department:
SELECT employee_id FROM employees WHERE department IN (SELECT department_id FROM departments WHERE name = 'IT')
(my SQL is rusty so there may be some inaccuracies, and "real" SQL could be more complex, but as an illustration this should work OK)
Now suppose I need, for some reason, to find all employees who not only work in the IT department but are also younger than 21.
1) I can simply copy-and-paste my old SQL, which I tested and know works correctly, and add the new part:
SELECT employee_id FROM employees WHERE department IN (SELECT department_id FROM departments WHERE name = 'IT') AND age < 21
The disadvantages of this solution are so obvious that we won't spend much time on it.
2) I can "refactor" common part as a macro and use something like
&IT_Department instead of "FROM employees WHERE department IN (SELECT department_id FROM departments WHERE name="IT")
Advanced macros even allow to use parameters so we do not need to stick with "IT" department.
I must admit, this looks ugly, because it intrudes into the language syntactic structure and breaks it.
3) As an example of built-in features, XSLT has "include" and "import" declarations, so common parts can be factored out and included; when the "import" declaration is used, the importing XSLT's statements take priority if they clash with imported ones.
DTD's extension mechanism is similar; it's interesting how, with a minimal set of tools for including and overriding declarations, adjustable and extensible constructions can be built - I have found parallels with OOP many times.
These are mechanisms that do not break the language's syntax.
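As promised above, here is a minimal Java sketch of the same factoring done with ordinary application code: the shared subquery text lives in exactly one place and both queries reuse it. All the names (EmployeeQueries, employeesIn, the table and column names) are made up for illustration; this is not code from any particular project.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class EmployeeQueries {

    // The common fragment is written exactly once.
    private static final String IN_DEPARTMENT =
        "SELECT employee_id FROM employees WHERE department IN "
        + "(SELECT department_id FROM departments WHERE name = ?)";

    // All employees of the given department.
    public static List<Integer> employeesIn(Connection con, String dept) throws SQLException {
        return run(con, IN_DEPARTMENT, dept, null);
    }

    // Same query, narrowed to employees younger than maxAge.
    public static List<Integer> youngEmployeesIn(Connection con, String dept, int maxAge) throws SQLException {
        return run(con, IN_DEPARTMENT + " AND age < ?", dept, maxAge);
    }

    private static List<Integer> run(Connection con, String sql, String dept, Integer maxAge) throws SQLException {
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, dept);
            if (maxAge != null) {
                ps.setInt(2, maxAge);
            }
            List<Integer> ids = new ArrayList<>();
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    ids.add(rs.getInt("employee_id"));
                }
            }
            return ids;
        }
    }
}

Of course this only moves the duplication problem into the host language, which is exactly the point of the next paragraph: the query "idiom" now lives as a user-defined extension (a class and some methods) rather than in SQL itself.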
This brings us to an idea.
Besides its "fixed" set of features, every language has a history of use. Being used, it acquires idioms, patterns, etc. All of those are built "on top" of the language and are not language constructs. This introduces many problems, which I won't list since they are listed in the paper mentioned many times already, whose main purpose is to "extend" an OOP language with constructs that express "patterns" and thus make them first-class citizens.
This is how a language can evolve - it doesn't need to be the same after years of use; it should incorporate pearls of wisdom.
Now back to our "duplicated logic" - these are "idioms at the level of the application". Another application won't need them. Currently they are accumulated as "user-defined" extensions to the language: classes and methods in OOP, functions, templates, etc., but it would be interesting to think about what benefits their inclusion in the language itself could give... They should not go into the "main" language, but they could form a "dialect" of the main language. One of the developers' goals would then be to find these "idioms" and include them in their "dialect" of the "mother language".
For that goal the language should provide a mechanism for extensions. Smalltalk has a built-in reflection mechanism; other languages have to resort to macros. From what I understood, the "Intentional Programming" project went as far as possible in this direction...
[ August 28, 2002: Message edited by: Mapraputa Is ]
 
Mapraputa Is
Leverager of our synergies
Posts: 10065
I disagree. The conclusion would be correct if you had to stick with the patterns you decided upon upfront. But you don't - well-designed and well-tested OO systems are rather easy to refactor.
I would only note here that probably any well designed system is rather easy to refactor, and ease of refactoring is pretty much orthogonal to OOP...
Later the book shows that "the preplanning problem" doesn't exist in Smalltalk because of its wonderful reflection system.
I don't get this - could you elaborate?

They call it "noninvasive, dynamic composition based on intercepting messages". As far as I understood, it means you can dynamically attach objects to other objects at run time, while the existing code stays untouched. You really do not need up-front design for it.
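For readers more at home in Java: dynamic proxies are only a rough analogue of Smalltalk's message interception, but they give a flavour of "composition by intercepting calls" without touching the wrapped class. This is a minimal sketch; the Greeter and SimpleGreeter names are invented for the example.

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

interface Greeter {
    String greet(String name);
}

class SimpleGreeter implements Greeter {
    public String greet(String name) {
        return "Hello, " + name;
    }
}

public class InterceptionDemo {
    public static void main(String[] args) {
        final Greeter target = new SimpleGreeter();

        // Wrap the existing object without changing its source code:
        // every call to the proxy is intercepted, logged, then forwarded.
        Greeter wrapped = (Greeter) Proxy.newProxyInstance(
            Greeter.class.getClassLoader(),
            new Class<?>[] { Greeter.class },
            new InvocationHandler() {
                public Object invoke(Object proxy, Method method, Object[] methodArgs)
                        throws Throwable {
                    System.out.println("intercepted: " + method.getName());
                    return method.invoke(target, methodArgs);
                }
            });

        System.out.println(wrapped.greet("world"));  // prints the interception line, then "Hello, world"
    }
}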
And nevertheless there are many people (including me) who find the abstractions of OOP to be very helpful for writing good software. I wonder whether there might be a fourth category missing...
Rats, I wanted to leave the fourth category undefined for extensibility, and I forgot Bad design. So what do you think it is? Anything that proved to be useful in practice?
Perhaps we can get to it by thinking about how the following (more or less random) concepts fit into the categories:
"for-loops", "if-statements" - basic blocks
"procedures, modules" - I forgot, what's the difference? A module is a container for procedures?
"function-pointers” - not sure what is it.
"IO handling", "multi-threading" - good candidates for domain-specific abstractions.
I am confused - I think this is exactly what "extends" is doing: implicitly forwarding all calls to inherited methods to the superclass.
Not "exactly". With how inheritance is implemented now, if a superclass added a method, a subclass will get it without recompilation. If we re-define "extends" to provide explicit method calls at the code level, the automatic inheritance of new methods will be prevented.
The responsibility of a DOM parser is to parse XML - nothing more. It shouldn't have to know anything about Javascript. So, here delegation certainly would be the proper solution.
Then what would be an example of an "is a" concept? Isn't that what specialization is about - adding extra services a base class doesn't provide? Like Frame extends Window in Java AWT... If the problem is only that JavaScript's nature is tangential to the task of XML parsing, let's try this: it's possible to include binary objects in XML if they are base64 encoded, so our parser will decode such objects and send them to the application. Is that better?
Because, in the above example, we don't know how many customers the system will have to handle.
I did not understand that... How many customers - isn't that orthogonal to the task of parsing? Or are you talking about another example?
Yes, it does, as we no longer refer to interfaces or classes, but only to the methods we expect an object to have.
Ah, I see. I mentally already got rid of instances and was left only with classes and interfaces, and you insist on instances being present. Not sure if I am opposed to instances or not. If we do not have inheritance,
SomeObject.someMethod(...) is the same as someMethod(pointer to SomeObject's data, ...)?
 
Ilja Preuss
author
Posts: 14112

Originally posted by Thomas Paul:
Well, when I am working on already well factored code, I stop when I don't see any more ways to improve it.
Yikes! I don't want you working on my systems.


Mhh, if you are always that fast at judging someone, I wouldn't want to work for you...

Managers have to worry about real life things like time and money that get in the way of producing little gems of code that programmers like to write. I write code that passes QA and is the best that I can produce given the restraints of time and budget.


I am doing the same. It is my experience, though, that keeping the code as clean as possible makes me the most productive.
Perhaps I should mention that I am using the XP-definition of "simple code" here:
The code is "perfect" when it (in order of priority):
1. does all the customer asked for (runs all the tests, if you are doing full XP)
2. contains no duplicated code
3. expresses all the ideas that need to be expressed
4. has minimal number of classes and methods
2 & 3 constitute the "once and only once" rule - everything the code contains is expressed as clearly as possible at exactly one location.
1 & 4 assure that the code exactly contains the functionality the customer asked for - no more, no less.
So, when I am speaking about "improving the code", I am *not* speaking about gold-plating or the like, but about finding the simplest, cleanest (and therefore most extensible and maintainable) solution to the customer's currently most important problem.

OOP helps here because my code is much more likely to be maintainable if I use standard OOP techniques.


Yes, that is my experience, too.
 
Mapraputa Is
Leverager of our synergies
Posts: 10065
Honest warning: inflammatory content inside.
This time I refactored all inflammatory responses into this post :roll:
Not sure if this should be considered a part of a fully OOP language...
I think it is useful enough that it should.

OMG...
Uncle Bob wrote: "And even those things are result of the yin and the yang in proper combination"
If we think further, what is the single idea behind those two? Orthogonality! They are orthogonal! Otherwise there would not be "yin and yang"; there would be only one yin-yang. Orthogonality is the main driving force in the universe!
It should go in, but that doesn't mean OOP without it is not "full". Of course, nothing in OOP has a strict definition, so how can we know which OOP is full and which is not...
BTW, who said that in OO "*everything* is an object"?
OMG-2. You are not saying that it was me, are you? :roll: I quoted the expression to say that this is a bad idea and now my words were taken out of context!
Who first said it - I dunno. Grady Booch? My textbook?
Again, I suspect that it is rather hard to speculate about the balance between inheritance and composition without having experienced it...
That's like saying that in order to understand the processes that happen during a thunderstorm, you need to be struck by lightning! But this deserves a dedicated post...
 
Mapraputa Is
Leverager of our synergies
Posts: 10065
Inner rant: "Experience considered harmful"
1. Experience is fragmentary. For example, you say "it's easy to refactor" from the point of view of a developer who probably doesn't create APIs for broad public consumption. Joshua Bloch creates APIs for broad public consumption, and I observe that his notion of inheritance is different.
2. Experience can be misleading. We all have experience that the Sun rotates around the Earth, but...
3. It's unclear how much experience is enough. I wrote a lot of small programs, but I guess that's not enough. How long should I experience it? How big should the project be?
4. Experience, being personal, makes us biased. We are talking about the mechanism; why do I need to experience the consequences? To become passionate and partial about it? That is precisely what I am trying to avoid.
5. "Learning by experiencing" is dangerous. I found it tremendously useful to get an idea of what language features exist before actually using them. All my previous life I did otherwise and learnt "how it should be done" after the project was finished. And you know what? Tomorrow there was another language, so we had to start from scratch again. As soon as one becomes an expert, he has to learn something new. Fundamental knowledge would be a salvation in this situation, but we sniff scornfully at it.
6. Experience helps to understand each other only if we have similar experience. Otherwise we can argue forever what is easier to maintain, OO program or RM.
7. Solid theory is free of these shortcomings. It encompasses different kinds of experience and systematizes them, finds hidden regularities and thus simplifies routines, and provides a common language and a common conceptual frame to discuss different kinds of experience...
8.
 
Ilja Preuss
author
Posts: 14112

Originally posted by Thomas Paul:
I just wanted to comment on Date's article. I think that Date is looking for generalities where application dependent implementations are called for. The ELLIPSE - CIRCLE debate is interesting but purely academic and meaningless. Whether CIRCLE should inherit from ELLIPSE depends on the business requirements more than anything else. Do I actually need to implement a CIRCLE object? Do my business requirements say that all CIRCLE objects must be treated differently than ELLIPSE objects? (For example, every CIRCLE must be green and centered on the page.) If not, why even have a CIRCLE object? Isn't a CIRCLE simply an ELLIPSE that has two variables equal?
All this stuff is rather amusing, but last I checked, we write applications for real users with real business requirements. Don't their requirements get factored into our application design? Or are we all just too advanced to worry about real-life implementations?


Well, OTOH the Liskov Substitution Principle *is* important for the development of reliable (OO-) software: http://www.objectmentor.com/resources/articles/lsp.pdf
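To make the connection to the CIRCLE/ELLIPSE question concrete, here is a small hypothetical Java sketch (the class names are invented, and this is only an illustration of the substitutability problem, not anyone's production design):

// A mutable Ellipse with independently settable semi-axes.
class Ellipse {
    protected double a, b;

    void setA(double a) { this.a = a; }
    void setB(double b) { this.b = b; }
    double area() { return Math.PI * a * b; }
}

// Option 1: inherit the mutators as-is. Nothing enforces a == b,
// so a "circle" can end up with two different radii.
class SloppyCircle extends Ellipse {
}

// Option 2: override the mutators to preserve the circle invariant.
// Now code written against Ellipse (which assumes setting A leaves B alone)
// behaves differently when handed a Circle - the broken expectation the
// LSP article is about.
class StrictCircle extends Ellipse {
    @Override
    void setA(double a) { this.a = a; this.b = a; }

    @Override
    void setB(double b) { this.a = b; this.b = b; }
}

public class ShapeDemo {
    public static void main(String[] args) {
        Ellipse e = new StrictCircle();
        e.setA(3);                     // an Ellipse client expects b to stay untouched...
        e.setB(5);                     // ...but StrictCircle keeps a == b, so a is now 5 too
        System.out.println(e.area());  // pi * 25, not the pi * 15 the client expected
    }
}

Whether that matters is, as Thomas says, a business question - if the application never mutates its shapes, CIRCLE extends ELLIPSE may be perfectly fine.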
 
Frank Carver
Sheriff
Posts: 7001
Whooee, this is getting complicated now ...
Map wrote: (about whether code blocks should be first-class objects) Not sure if this should be considered a part of a fully OOP language...
This is definitely one of the things I miss most in Java. After using a language with first-class code blocks, Java's "create a whole pointless class with just one method" seems so clumsy and (worse) confusing to a reader.
Ilja wrote: (about knowing when to stop refactoring)Yes, but it is a limit I find in *me*, not in the code...
Really, that doesn't matter. From a philosophical point of view "the code" doesn't actually exist. It's just a bundle of characters. "The code" as a concept only exists in the understanding of the person working with it (which is why readability, unambiguity and clarity are so important). Expressions like "the code is too complex" or "I'm not happy with the complexity of the code" or "I don't fully understand the code" are all aspects of the same thing. They are not entirely equivalent, sure - you might feel you understand something but worry that others might not, and so on. Essentially, though, code just "is". All of this reasoning is about what takes place in the minds of developers.
Ilja wrote: (about knowing when to stop writing unit tests) Someone thinking that "good enough" software is sufficient probably doesn't fear bugs as much as I think he should.
I still feel we have a basic difference of opinion of what "good enough" means. You still seem to be using "good enough" to mean something like "not really good", or "partially good", or "good only in some aspects". Do you refute my statement that ' "good enough" is "good" with acceptance tests ' ?
Map wrote: (about trying to refactor SQL using macros) I must admit, this looks ugly, because it intrudes into the language syntactic structure and breaks it.
And that's the point about why SQL (and SQL-based languages such as PL/SQL) is not well suited to maintenance and changing requirements. Can you tell me with your hand on your heart that you are not tempted to just "cut-n-paste" because of this linguistic clumsiness? Even though you yourself have admitted the dangers of it.
Map wrote: They should not go into the "main" language, but they could form a "dialect" of the main language. One of the developers' goals would then be to find these "idioms" and include them in their "dialect" of the "mother language". For that goal the language should provide a mechanism for extensions. Smalltalk has a built-in reflection mechanism; other languages have to resort to macros. From what I understood, the "Intentional Programming" project went as far as possible in this direction...

Don't forget that there are other languages (such as FORTH), which have been quite successful in their own niches. FORTH is a fine example of such a "meta" language. Solving a problem in FORTH is often described as a process of developing a vocabulary so that you can express the answer in a single word.
Map wrote: With how inheritance is implemented now, if a superclass adds a method, a subclass will get it without recompilation. If we re-defined "extends" to require explicit method calls at the code level, the automatic inheritance of new methods would be prevented.
You seem to be implying that this is a bad thing. To me it's just a choice. Both options seem equally irritating if your application would be simpler with the other behaviour.
It might be useful at this point to consider how C++ allows modifiers to its "extends" (":") operator. You can choose to automatically inherit all the public features of the parent class as public features of the child class (closer to Java's inheritance model), or to inherit them as private in the child class (closer to what Map and Bloch might prefer ?)
Map wrote: Ah, I see. I mentally already got rid of instances and was left only with classes and interfaces, and you insist on instances presence. Not sure if I am opposed to instances or not. if we do not have inheritance,
Now I am puzzled. Although I can see how you can get rid of interfaces (using "per message" reflection as in Smalltalk) and classes (using "prototype" semantics as in Self and Brain etc.), getting rid of instances seems really pointless. Please explain how you would be able to keep arbitrary collections of any even slightly complex data groupings.
Map wrote: SomeObject.someMethod(...) is the same as someMethod(pointer to SomeObject's data, ...)?
Well, almost. The reason a different syntax is used is to distinguish the idea of a "message" from that of a "method". A message sent to an object is conceptually subject to processing to choose which actual method to invoke (one defined in a child class, a parent class, or whatever), or maybe to throw some sort of "unknown message" or "invalid privileges" exception. This sort of processing is hard to understand and communicate to a reader or programmer if the syntax is the same as for a regular, direct, method call.
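A hypothetical Java snippet to make the distinction visible (Talker and say are invented names): the direct call is checked by the compiler, while the reflective version spells out the "message send" - the method is looked up by name at run time, and an unknown name surfaces as an exception, roughly the "unknown message" case above.

import java.lang.reflect.Method;

class Talker {
    public String say(String name) {
        return "Hi, " + name;
    }
}

public class MessageVsMethod {
    public static void main(String[] args) throws Exception {
        Talker t = new Talker();

        // Direct method call: the compiler verifies that say(String) exists.
        System.out.println(t.say("world"));

        // The "message send" spelled out: resolve the method by name at run time,
        // then invoke it. getMethod("fly", ...) would throw NoSuchMethodException.
        Method m = t.getClass().getMethod("say", String.class);
        System.out.println(m.invoke(t, "world"));
    }
}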
Map wrote: For example, you are talking that "it's easy to refactor" from POV of developer who probably doesn't create API's for broad public consumption. Joshua Bloch creates APIs for broad public consumption and I observe his notion of inheritance is different.
Incisive point. Thanks.
Map wrote: Inner rant: "Experience considered harmful" ...
Equivalent rant: "Thought Experiments considered harmful"
For each point in your list it would be equally possible to come up with a justification why purely sitting there and thinking about potential issues without experience is also just as bad. Applying experience without thought is as bad as applying thought without experience. None of us is "Mentor of Arisia", after all.
 
Thomas Paul
mister krabs
Posts: 13974
I just wanted to comment on Map's "the sun revolves around the earth" example as proof that experience is not always helpful.
When we write code, we are trying to solve a business problem. If the user has a need to calculate the area of a circle and using pi as around 3 is good enough for their needs, then we don't go out and buy a super-computer to calculate pi to 30 trillion digits. We provide a solution that solves the business need.
It used to be back in the old days that the sun was thought to revolve around the earth. And with the observations that were available and the needs of the community at the time, that was good enough. As observations became more accurate and the business requirements changed, the model became increasingly complex but the solutions it provided were still good enough. When Copernicus and Galileo came out with their radical notion of the earth going around the Sun, the math failed. If I had written a program using their theories it would have failed the "business" needs of the day (think sailing ships). A program based on the idea that the earth revolves around the Sun would not have been good enough. It wasn't until Kepler figured out that planetary orbits were ellipses that the math finally worked. At that point we could have refactored our code and released "Planetary System 2.0".
One more point about experience. There is no way to become good at something without experience. Would you want a doctor who has never even performed an appendectomy to do open heart surgery? Even if he read all the books and did all the research?
 
Ilja Preuss
author
Posts: 14112

Originally posted by Frank Carver:
Ilja wrote: (about knowing when to stop refactoring) Yes, but it is a limit I find in *me*, not in the code...
Really, that doesn't matter. From a philosophical point of view "the code" doesn't actually exist. It's just a bundle of characters. "the code" as a concept only exists in the understanding of the person working with it (which is why readability, unambiguity and clarity are so important).


OK, then let me rephrase: It is a limit I find in my ability to improve what I think of as code, not in what I think of as code itself.
I never think "this code is good enough, I should stop improving it so as not to waste time". Rather, I find myself thinking "I don't know how to further improve this code yet, so let's stop for now".
Don't you see a qualitative difference?


Ilja wrote: (about knowing when to stop writing unit tests) Someone thinking that "good enough" software is sufficient probably doesn't fear bugs as much as I think he should.
I still feel we have a basic difference of opinion of what "good enough" means. You still seem to be using "good enough" to mean something like "not really good", or "partially good", or "good only in some aspects".


For me, "good enough" is implying that I can reach a state where I no longer have to think about improving the quality of the code - why should I do if it already is good enough?
It also seems to imply to me that you could say something like "the code would be even better if we did XYZ, but it is already 'good enough', so we don't have to do it". Probably sometimes I fall into this trap, but it isn't something I would advise.
 
Thomas Paul
mister krabs
Posts: 13974

Originally posted by Ilja Preuss:
For me, "good enough" is implying that I can reach a state where I no longer have to think about improving the quality of the code - why should I do if it already is good enough?
It also seems to imply to me that you could say something like "the code would be even better if we did XYZ, but it is already 'good enough', so we don't have to do it". Probably sometimes I fall into this trap, but it isn't something I would advise.

I have lots of code that I know I can improve. But it's already gone to QA and if I make a change now it will cost thousands of dollars to retest it. And for what? The code produces the results the users want, performs well, and is stable. Sure I could have written it better had I more time and more upfront knowledge about the changes that would be required as we moved along the development path. But the code is good enough. It does exactly what it is supposed to do and the users are happy. Can it be refactored to make it better? Absolutely. And my comments left in the code explain exactly how I would make it better if I had the time and money. Next time the code is worked on maybe we can squeeze those changes in.
 
Ilja Preuss
author
Posts: 14112

Originally posted by Mapraputa Is:
Inner rant: "Experience considered harmful"
1. Experience is fragmentary. For example, you say "it's easy to refactor" from the point of view of a developer who probably doesn't create APIs for broad public consumption. Joshua Bloch creates APIs for broad public consumption, and I observe that his notion of inheritance is different.


Well, I don't have as much experience as Bloch in developing published APIs, but some I have (in the Banner example, the base class was published, whereas the subclasses were internal - the external clients used a factory method to get the correct instance, based on information from a database).
So, from my limited experience, and from what I read from more experienced people, it certainly is more complicated to refactor in the presence of a published interface, that is, clients which aren't under your control. So in those circumstances some more pre-planning probably is necessary (though it doesn't necessarily all have to be done as pre-designing - scheduling the development work so that the published interfaces stabilize early might be even more critical).
IME I would even say that in general OOP *helps* here (compared to procedural code): because of the better decoupling, necessary changes have a less far-reaching impact.
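For illustration, a minimal sketch of the kind of factory Ilja describes - a published base class, internal subclasses, and a factory method choosing the subclass from external information (a plain string stands in here for the database lookup; all the names are invented):

// Published to external clients:
public abstract class Banner {
    public abstract void render();

    // The only way clients obtain an instance. Which subclass they get is
    // decided here, so the subclasses can stay internal and free to change.
    public static Banner forCustomer(String typeFromDatabase) {
        if ("animated".equals(typeFromDatabase)) {
            return new AnimatedBanner();
        }
        return new StaticBanner();
    }
}

// Internal, never referenced directly by client code:
class StaticBanner extends Banner {
    public void render() { System.out.println("static banner"); }
}

class AnimatedBanner extends Banner {
    public void render() { System.out.println("animated banner"); }
}

Clients write Banner b = Banner.forCustomer(type); b.render(); and never name the concrete classes, which is what keeps the internal hierarchy free to be refactored.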


2. Experience can be misleading. We all have experience that the Sun rotates around the Earth, but...


No, that was a conclusion based on insufficient experience. And that conclusion was only eroded by widening the experience, not by purely theoretical thought.


3. It's unclear how much experience is enough. I wrote a lot of small programs, but I guess that's not enough. How long should I experience it? How big should the project be?


Experience isn't something you can impartially get enough from, IMO. That doesn't mean that experience is worthless for assessing something.


4. Experience, being personal, makes us biased. We are talking about the mechanism; why do I need to experience the consequences?


Because we are talking about the practical consequences, or so I hope. If not, I am wrong in this thread...


5. "Learning by experiencing" is dangerous. I
found it tremendously useful to get an idea of what language features are before actually using them. All my previous life I did otherwise and learnt "how it should be done" after the project was finished.


The value of "Learning by experiencing" lies in rapid feedback, IMO. Giving this feedback only at the end of a project certainly isn't effective - working together with an experienced developer as mentor might be.


6. Experience helps to understand each other only if we have similar experience. Otherwise we can argue forever what is easier to maintain, OO program or RM.


We can do that anyway. But the less experienced we were, the earlier the debate would become meaningless, IMO.
As you might have noticed, I am less interested in your theoretical arguments than in your practical experiences using relational programming. If those experiences don't match my mental model, I will accept that the model might be wrong, and that I can only get more sure about it by (preferably) becoming more experienced myself, or by comparing it with the experiences of other people I respect. YMMV.


7. Solid theory is free of these shortcomings. It encompasses different kinds of experience and systematizes them, finds hidden regularities and thus simplifies routines, and provides a common language and a common conceptual frame to discuss different kinds of experience...


And nevertheless, if theory and experience contradict each other, I know which to bet on...
 
Thomas Paul
mister krabs
Posts: 13974
A comment on Map's issue with automatic overriding of methods:
This is a potentially serious problem. Imagine we have two classes, A and B. B is a child of A. A has two methods run() and walk(). B inherits these two methods and adds skip() and hop(). The person who wrote A decides to add a new method to A called hop(). hop() has nothing to do with hop() from B. They just happen to be named the same thing. We have created a potentially serious error because B is overriding a method without actually meaning to. This means that polymorphism may force the wrong version of hop() to run under certain circumstances.
C# responds to this issue by forcing an "override" contract between A and B: the two classes must agree on which methods can be overridden by B, using the "virtual" and "override" keywords.
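Here is that scenario in plain Java, just to show how silently it happens (A, B and the method names follow Thomas's description; the main class is added for illustration). Java's @Override annotation lets a subclass declare that it intends to override something, but unlike C# the base class cannot demand that declaration, so this accidental capture compiles without complaint:

class A {
    void run()  { System.out.println("A.run"); }
    void walk() { System.out.println("A.walk"); }
    // Added later, with no knowledge that B already has a hop():
    void hop()  { System.out.println("A.hop"); }
}

class B extends A {
    void skip() { System.out.println("B.skip"); }

    // Written before A.hop() existed; it now silently overrides it.
    void hop()  { System.out.println("B.hop - meant something entirely different"); }
}

public class AccidentalOverride {
    public static void main(String[] args) {
        A a = new B();
        a.hop();  // polymorphism picks B.hop, which was never meant as an override
    }
}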
 
Ilja Preuss
author
Posts: 14112

Originally posted by Thomas Paul:
But the code is good enough.


Frank, do you now understand where I am coming from?
 
Thomas Paul
mister krabs
Posts: 13974
The code is "perfect" when it (in order of priority):
1. does all the customer asked for (runs all the tests, if you are doing full XP)
2. contains no duplicated code
3. expresses all the ideas that need to be expressed
4. has minimal number of classes and methods

I just wanted to comment on this list. 1 is mandatory. No system is worth anything if it fails 1. No matter how well it fits in 2,3,and 4, if it fails 1 it is worthless. That being said, 2, 3, and 4 are all elements that we work on given the restraints of time and budget. If we have to sacrifice something (and I have never worked on a system where I had unlimited time and money) then 2, 3, and 4 are where we make sacrifices. At some point we say that it is good enough and move on to the next project.
 