Banning concrete inheritance

 
Ernest Friedman-Hill
author and iconoclast
Posts: 24207
46
Mac OS X Eclipse IDE Chrome
In another thread we were starting to talk a little about the design of an ideal language. One of the problem features of Java is that it allows concrete inheritance, which is pretty much always a bad idea. In fact, I'd go a little further and suggest that inheritance of type from an actual class, concrete or even abstract, is a bad thing. Try to imagine a Java-like language in which inheritance from a class was always C++-like "private inheritance", and the only way to inherit type was by implementing an interface. In other words, you can inherit methods (public or private) from a class, but the LSP doesn't apply for inheritance from classes. A Child is-not-a Parent.

Now, superclasses become either adapters or tools. You never inherit an unwanted bequest like Serializability via inheritance. It actually becomes impossible to paint yourself into a concrete-inheritance corner. You suddenly have a very strong incentive to define all methods to accept interfaces as parameters.

I wonder if even this is missing a crucial final step. Maybe classes aren't types at all -- only interfaces are. What would the language look like if you couldn't even declare a variable of class type?
[ September 05, 2005: Message edited by: Michael Ernest ]
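In current Java this discipline can only be approximated, but the flavour of it might look something like the following sketch (all names are hypothetical): the only type is the interface, and the would-be superclass is consumed purely as a tool via composition.

```java
import java.util.ArrayList;

// The type: callers may only ever name IStack.
interface IStack<T> {
    void push(T item);
    T pop();
}

// The "superclass as tool": reused for its methods only, never as a type.
class ArrayStorage<T> {
    private final ArrayList<T> items = new ArrayList<>();
    void add(T item) { items.add(item); }
    T removeLast() { return items.remove(items.size() - 1); }
}

// Inherits behaviour from ArrayStorage by delegation; inherits *type*
// only from IStack. A Stack is-not-an ArrayStorage.
class Stack<T> implements IStack<T> {
    private final ArrayStorage<T> storage = new ArrayStorage<>();
    public void push(T item) { storage.add(item); }
    public T pop() { return storage.removeLast(); }
}
```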
 
Tony Morris
Ranch Hand
Posts: 1608
Without revealing anything about how I think such a language might look, I have spent an enormous amount of time just thinking about how I can convince an arbitrary software developer of what I assume you are also discovering. Joshua Bloch makes a small mention of it in his book (Effective Java), but comes nowhere near what I believe is a satisfactory justification. I vaguely recall a mention of it in GoF, but I can't be too sure (Kai, are you reading this? I'm sure you found a reference to it once?). Allen Holub goes a few steps further, but I believe he has inadvertently fallen victim to some of the attached preconceptions. I cover this point here: http://www.jtiger.org/articles/why-extends-is-not-evil.html

In any case, I don't believe that there exists a thorough and justified explanation of this concept, since it must be understood that there is a huge critical mass of people who will dispute everything that is said without thinking about it, or certainly not in any great detail - at least, this is my experience (ironically, these same people will call *me* religious - go figure?). The ultimate proof for me is that you can trace the reasoning all the way back to software requirements: the very fundamental problem statement directly contradicts the validity of concrete inheritance as a solution (further explanation omitted for brevity). It's also important to note, however, that 'concrete inheritance' is not sitting out there on its own on Planet Evil. There are many other concepts, particularly in Java, that are 'just as bad' for the same reasons that 'concrete inheritance' is bad. I have thought about writing a book, but I play squash three times a week, chess on weekends, full-time job with IBM, part-time job lecturing at a university, two kids, <insert-more-excuses-here/>.

As far as concrete inheritance goes (to distinguish it from other related flawed concepts), I believe I have a workaround that is optimal. However, *every* Java class implies the use of concrete inheritance (e.g. equals/hashCode/toString), so you cannot escape it. I have written a half-decent collections API (sorry, it's proprietary) simply because I got tired of all the nastiness that comes with the language and the existing flawed implementation (sorry Josh). For example, another 'nasty' is "declaring two or more methods on an interface when they are not symbiotic (belong together or not at all)". equals/hashCode are not symbiotic; even if they were on an interface (where they belong), they should still be kept apart. To give you a snippet, I have:
interface Equalable<T> { boolean isEqual(T t) throws NullPointerException; } // always gotta handle that dirty null case, yuk!
interface Hashable { int getHashCode(); }
interface EqualHashable<T> extends Equalable<T>, Hashable {}
So a proper hashtable contract looks something like this:
HashTable<EqualHashable, Equalable> - keys must be equal-and-hashable, while lookups only need to be equalable - thus proving that they are not symbiotic operations.

To see where this rule has been violated, ask: how do you declare a java.util.List immutable? ... by having it throw UnsupportedOperationException from mutating operations (puke yuk!). Compile-time type safety is preferable to runtime safety. To take a specific example from the interface, the "add" and "get" operations are not symbiotic (your requirements say so whenever you need an immutable List). They should be split into separate interfaces, and when it is time to expose only the immutable view, expose as little as is required. Wouldn't it be nice if the "add" operation just wasn't there at all, giving you compile-time safety?
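As a sketch of how such a split might look (interface names are my own invention, not from any real API):

```java
import java.util.ArrayList;

// Read-only view: no mutating operations exist at all.
interface ReadableList<T> {
    T get(int index);
    int size();
}

// Mutation is a separate, opt-in contract.
interface WritableList<T> extends ReadableList<T> {
    void add(T item);
}

class SimpleList<T> implements WritableList<T> {
    private final ArrayList<T> items = new ArrayList<>();
    public T get(int index) { return items.get(index); }
    public int size() { return items.size(); }
    public void add(T item) { items.add(item); }
}
```

A method that returns ReadableList<T> gives its callers compile-time immutability: a call to add simply does not compile, instead of throwing UnsupportedOperationException at runtime.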

The optimal workaround to the existence of concrete inheritance is to isolate it to a single case, apply rigorous testing to it, and expose an appropriate interface to the underlying implementation. Unfortunately, it becomes very laborious, and oftentimes I simply concede (the shining case is javax.swing.*, where concrete inheritance is prevalent - on that note, anyone ever tried to write unit tests for a Swing app? Ever wondered why it is close to impossible? hint: C******e I*********e, plus the failure to expose all public methods on an interface - another nasty). Alternatively, one could rewrite whatever dependency it is that mandates the use of concrete inheritance, but as noted, no matter how hard you try, it's intrinsic to Java - it cannot be escaped.

How's that for an intro? I hope you understand now why I hold back on providing justification - I've barely even started, and my fingers (all 6 of them) are out of breath.
[ August 31, 2005: Message edited by: Tony Morris ]
 
Mr. C Lamont Gilbert
Ranch Hand
Posts: 1170
Hibernate Eclipse IDE Ubuntu
I think the problem is more the conflation of implementation inheritance and interface inheritance. If you could only use one at a time, then things would be a bit cleaner. Doing away with concrete inheritance would effectively just divide implementation inheritance from interface inheritance, since using composition is basically implementation inheritance without the interface inheritance, as Mr. Friedman-Hill alluded to.

I find in my code I inherit only from abstract classes. And the more I refactor, the more I do away with inheriting both implementation and interface. I put the interface on the childmost class. I wouldn't even use abstract classes for parents if it were not for the headache of forwarding calls and, occasionally, the SELF problem of the composition technique.

Also, an alternative to mad forwarding is to hand out a reference to the composed object. But as easy as this makes my code, it's supposedly a violation of encapsulation. So instead I end up with huge classes, so as not to hand out references to the things which make up the functionality. There could be an easier way, no doubt.
[ September 01, 2005: Message edited by: Mr. C Lamont Gilbert ]
 
Rick O'Shay
Ranch Hand
Posts: 531
The pencil-eye-tus cures are always the most dangerous IMO. We can all agree that it hurts when you stick a pencil in your eye. The solution is not to eliminate pencils.

Allowing concrete inheritance is not a problem. Abusing that facility is. You can use a decorator pattern to shield yourself from concrete inheritance when appropriate.

Now, if you do something stupid like, oh, create a class called Stack that extends Vector, that's just asking for trouble. A stack isn't a vector, so don't do that. Likewise, a Sandwich isn't a Fruit Salad, whether or not it contains most of your implementation needs. When you have a bona fide IS-A relationship, you benefit both from immediate re-use and from refactored/enhanced code in the base class.

I say, look to your current language first (Java, C#, C++...) and declare it broken if and only if it prevents you from implementing a "correct" system.
[ September 01, 2005: Message edited by: Rick O'Shay ]
 
Don Kiddick
Ranch Hand
Posts: 580
Hola! Interesting topic. I certainly take your point, Ernest, that most concrete inheritance is either downright bad practice or a typing saver. But I think maybe there are some valid reasons for it. For instance, the Template Method pattern - where, for example, I wanted to ensure implementors of a particular method in my interface met certain pre- and post-conditions. For example:



You could do this with composition, but it would not be very elegant. I guess you could argue this pattern is a kludge for getting around the language flaw that Java does not let you specify pre- and post-conditions at a language level.
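(Don's code snippet did not survive, but a Template Method enforcing pre- and post-conditions typically looks something like this sketch - the names and the conditions are invented for illustration:)

```java
// The template method is final so the contract cannot be bypassed;
// subclasses only fill in the doProcess hook.
abstract class CheckedProcessor {
    public final int process(int input) {
        if (input < 0)                       // precondition
            throw new IllegalArgumentException("input must be >= 0");
        int result = doProcess(input);       // subclass-supplied step
        if (result < input)                  // postcondition
            throw new IllegalStateException("result must be >= input");
        return result;
    }

    protected abstract int doProcess(int input);
}

class Doubler extends CheckedProcessor {
    protected int doProcess(int input) { return input * 2; }
}
```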

anyway, some thoughts,
Don.
 
Ernest Friedman-Hill
author and iconoclast
Posts: 24207
46
Mac OS X Eclipse IDE Chrome

Originally posted by Don Kiddick:
For instance the Template Method pattern.



My proposed language would let you do this, but the idea is that the children of Super would not be Supers themselves. Super might be an implementation of ISuper. Children could extend Super, but they would neither be an instance of Super nor ISuper unless they declared themselves to implement ISuper. Having done so, you'd end up with child objects just like you'd want. If you don't do that, then you get something like C++ "private inheritance" -- you get the methods of Super, but not the types Super or ISuper.

What this does is force you to use interfaces to express type inheritance, and make it impossible for you to paint yourself into any concrete-inheritance corner -- even if Super is a concrete class!
 
Tony Morris
Ranch Hand
Posts: 1608


For instance the Template Method pattern.


The Template Method pattern can be replaced by a more appropriate set of dependencies, and still be the Template Method pattern (for those who are somehow religiously bound to it). GoF set an excellent precedent, but not an eternal truth.

Ernest, are you proposing a language? I already have one underway.
 
Ernest Friedman-Hill
author and iconoclast
Posts: 24207
46
Mac OS X Eclipse IDE Chrome

Originally posted by Tony Morris:

Ernest, are you proposing a language? I already have one underway.



I guess I should have said "my hypothetical language" instead of "my proposed language". I am just interested in talking about this idea. The more I blather on here, the more interested I'm becoming in this imaginary language that completely separates inheritance of type from inheritance of implementation. It seems to me that it might capture some of the best of what strong typing can do, while still giving you a lot of the flexibility that you get from a dynamically-typed language in the Smalltalk/Ruby mould.
 
Mr. C Lamont Gilbert
Ranch Hand
Posts: 1170
Hibernate Eclipse IDE Ubuntu
I agree with you again, Mr. Friedman-Hill. Separation of the types of inheritance, so it can only be stated and never implied, is a good thing.

Interfaces only inherit interfaces; classes only inherit classes. That's a really solid idea. Unfortunately it will never happen in Java. Even worse is this Java 1.5 thing, which I don't really want to bring up here because I know it's a big mess.
 
Ilja Preuss
author
Posts: 14112
I have the strong feeling that this is an instance of different Software Development Attitudes.

If you have a Directing Attitude, concrete inheritance probably looks evil, because you can do some very bad things using it - as the Sun developers did. (java.util.Date vs. java.sql.Date is another pet peeve of mine... On the other hand, I suspect that the Sun developers would have been creative enough to produce an even bigger mess with Swing if they hadn't had concrete inheritance at their disposal. *sigh*)

If you have an Enabling Attitude, concrete inheritance just is a feature that, if used wisely, allows for simpler code reuse in some instances.

Now guess what camp I'm in in this case...
 
Ilja Preuss
author
Posts: 14112

Originally posted by Ernest Friedman-Hill:

I wonder if even this is missing a crucial final step. Maybe classes aren't types at all -- only interfaces are.



No, interfaces aren't really types, either. Interfaces plus contracts are.

Take the Comparator interface, for example. It is only a complete type together with the JavaDoc which explains that implementations have to define a total ordering.
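To make that point concrete: the compiler happily accepts a Comparator that breaks the documented total-ordering contract, because the contract lives only in the JavaDoc.

```java
import java.util.Comparator;

// Type-checks fine, yet violates Comparator's documented contract:
// compare(a, b) and compare(b, a) are both positive, so this is
// not a total ordering - and the type system cannot object.
class BrokenOrder implements Comparator<Integer> {
    public int compare(Integer a, Integer b) {
        return 1; // always "greater"
    }
}
```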
 
Tony Morris
Ranch Hand
Posts: 1608
There are many ways to arrive at the conclusion that concrete inheritance is an implicit requirement defect. I can think of a few subtle ways, and a few not-so-subtle. But let's start with the relatively obvious. First, one must define an axiom, since I'm sure most people agree that "X is evil" is an incredibly broad statement that is often an over-simplification. In fact, who is to say a requirement defect is evil? The axiom is that software requirements are specified completely, without omission or excess, and that the software implementation meets those requirements, without omission or excess. Sound reasonable? I think so. The precise definitions are often what go unnoticed, but I don't think I'll change that today. Let's just leave it at that for now.

Now let's take a common problem that has drastic implications, despite often not being recognised. Developers write "unit tests", which is an unfortunate name. I prefer the term "software requirements". For example, assertEqual(2 + 2, 4); can be approximately interpreted in English as "a requirement of the addition operator is that when the operand 2 and another operand 2 are applied, the result evaluates to 4". I use the term "approximately" because the English language can never correctly be used to express software requirements; this is proven (through inference, not experience) to be impossible due to the ambiguity of the language. The common problem is that software that is "under test" (being validated against its requirements) will often not meet 100% coverage; in fact, oftentimes it's impossible to achieve 100% coverage. Let's distinguish two kinds of coverage: 'code coverage', where every piece of code is executed, versus 'system state coverage', where the system enters every possible state. For example, given the code boolean b = x == 0; to achieve 100% code coverage during test, this code would have to execute with a value of x that is 0, and a value that is not 0. For this code to meet 100% system state coverage, it would have to execute with all (2^32) values of x. Sometimes (almost always) it is impractical to achieve 100% system state coverage; for example, if you write a SHA-256 hash algorithm implementation, the very nature of SHA-256 mandates that 100% system state coverage is unachievable in a reasonable amount of time. Therefore, one should always achieve 100% code coverage, then extrapolate the requirements from there.
For example, in an expression that depends on an integer value (such as the earlier example), one might use the values Integer.MAX_VALUE, Integer.MIN_VALUE, 0, -1, 1, and a few random numbers in between, and from there state that "since the requirement is met for these values, it is reasonably assumed that for all other values, the unstated, but assumed, requirement is also met". We (developers) do this all the time in our unit tests, even if inadvertently; I have simply formalised it slightly in a few sentences. Now the problem is simple: what if you cannot achieve 100% code coverage? That is, under no circumstance are you able to hit a piece of code that you have under test? Do you not agree that there is either a requirement defect (the "unit tests" do not completely, without omission or excess, specify the software requirements) or an implementation defect (the software requirements are fully stated, and the implementation does not meet all requirements without excess or omission)? If so, the remainder of your steps to clarity are relatively trivial - it's only the formalisation and the abandoning of preconception and doctrine that seem to be the difficult part (from my experience, at least). It's important to note here that I refer to an "implementation defect" as a "requirement defect", since under a formalised (but very verbose) axiom, an implementation defect cannot exist. The reasoning behind this is omitted from this rant for brevity; I encourage anyone to think about it in a time of reflection anyway. For your own amusement, you might want to look at the JTiger code coverage report. For every case where 100% code coverage is not achieved, I can refer you to the implicit requirement defect that mandates it. For example, JTiger depends on some flawed software requirements: the Java language itself, Apache Ant, the JavaBeans specification.
Without these dependencies, I could guarantee 100% code coverage, but this is a concession that I was willing to accept. I challenge readers to find all of these software requirement defects: http://www.jtiger.org/test-coverage-report/index.html It's important to note that this is one of many (I went through about 15 or so with a colleague just yesterday) ways to arrive at the same conclusion: that concrete inheritance is an implicit requirement defect, assuming sound requirements <insert-formalisation-of-definition-here/>. I am 99.99% convinced of this, and I yearn for a sound refutation, but I'm even more confident that if one does indeed exist, it's certainly not going to occur on an internet forum, since all of the refutations that get proposed were invalidated a long time ago after some very deep analysis with my trusted critics (who all proposed those same refutations, over and over, until they finally conceded - it took hours on my behalf to prove it to them).
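The code-coverage vs. state-coverage distinction above can be illustrated with the boolean b = x == 0; example itself (the wrapping method name is mine):

```java
class CoverageDemo {
    // The expression under discussion. Two executions - one with x == 0
    // and one with x != 0 - give 100% *code* coverage. 100% *state*
    // coverage would need all 2^32 int values, which is impractical,
    // so boundary values are tested and the rest is extrapolated.
    static boolean isZero(int x) {
        return x == 0;
    }
}
```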

Now, given this assumption - that 100% code coverage, with stated extrapolations due to the impracticality of achieving 100% system state coverage, achieved without omission or excess, is a formal specification of requirements, and anything to the contrary is a requirement defect - which I hope all readers have accepted, let's move on to concrete inheritance. I realise that I have left a huge amount of reasoning out, but as I've said before, it's extremely verbose, and I only have 6 functional typing fingers. Have you ever attempted to achieve 100% code coverage and not been able to? Have you ever written software using the javax.swing package and had difficulty achieving 100% code coverage? Have you ever asked yourself why that is (what concept is prevalent in javax.swing? Concrete inheritance is everywhere!)? Now, the ban on concrete inheritance is one of many rules that follow from the same (though very much abbreviated) reasoning that I provide above. For example, did you know that you should never pass parameters to a constructor? Yes, that's right, it's an implicit requirement defect. Declare all constructors private; implement only one interface; an interface must contain one or more methods, or it must inherit from two or more interfaces. Never use a singleton. Two or more methods on the same interface must be symbiotic (the perfect example of breaking this rule is java.util.List - ever tried to return an immutable List?). All public methods must exist on an interface. The list goes on; and if I weren't so busy, I'd write the book that I pretend to be writing with complete reasoning - as in, absolutely complete. Until then, I'll continue to encourage deviation from what I perceive only as the result of propaganda, marketing campaigns and the like, since once this hurdle is behind you, the remaining analysis is really quite trivial (relatively). The problem is, how do I force objectivity onto the reader? That is, how do I say, "think for yourself, but don't think like me"?
I'm still trying to come up with an introduction...

Please, please, please don't make me get any more verbose. Send me a chocolate cookie so that I get off my butt and formalise this entire concept completely and accurately instead. My fingers are sore, bye.
 
Michael Ernest
High Plains Drifter
Posts: 7289
Netbeans IDE VI Editor
I can't even read a block paragraph that large. Rather, I don't want to; it looks too much like work, of which I have more than enough. Maybe you could spend a moment and break it up for us?
 
Michael Ernest
High Plains Drifter
Posts: 7289
Netbeans IDE VI Editor

Originally posted by Ernest Friedman-Hill:

I wonder if even this is missing a crucial final step. Maybe classes aren't types at all -- only interfaces are. What would the language look like if you couldn't even declare a variable of class type?

Cosmetically, I don't think it would change the coding process for people who haven't dug deeply into the language. Your savvy programmers with a few years of experience? I bet their code isn't that far off.

When I first dove into Java I showed some code to my father-in-law, who had been around the block more than once in programming. He rattled off analogs in C that structurally matched what I had shown him. His own exception facility, his own ADT library -- it was an impressive body of work. The idea that all this had been formalized in Java didn't escape him, but since he'd done the majority of that work himself, it hardly blew him away either.

I think it'd be fun to map out a language like that, in sketch terms of course. I've been thinking a while myself about writing a language that starts with what good sample code would look like, then working back to the structure needed to support it. A somewhat different project than yours, EFH, but one that speaks to the final question you posed. Why not decide how it should look, then work backwards?
 
Tony Morris
Ranch Hand
Posts: 1608
For those with a more open mind: it is not always obvious that English is ambiguous to the point that it cannot be used to specify software requirements. In fact, I know of one such person who spent a number of research years proving it! Or more specifically, researching the true nature of specifying software requirements.
 
Ernest Friedman-Hill
author and iconoclast
Posts: 24207
46
Mac OS X Eclipse IDE Chrome
I cleaned this thread up, as it contained several posts that strongly violated the "be nice" rule. In truth, none of these posts were without merit, but they made their points in a way which isn't acceptable here on the Ranch, and so had to be deleted.

If you had a post of yours deleted, now you know why. This document is explicitly about the "Meaningless Drivel" forum, but it's really about the kind of discussions we want to see everywhere in the Saloon. If you'd like to rejoin this discussion, you're more than welcome to -- but please try to follow our community standards as outlined there. We'd appreciate it.
 
Ernest Friedman-Hill
author and iconoclast
Posts: 24207
46
Mac OS X Eclipse IDE Chrome

Originally posted by Ilja Preuss:
I have the strong feeling that this is an instance of different Software Development Attitudes.



Interesting link, and point well taken.

My recent worry about this whole "concrete inheritance" issue is actually not due to mistakes made by Sun in the APIs, although those are legion. My interest is due to design mistakes I myself made as a younger man, when Java itself was quite new, that I even now still have to live with. Ten years ago, I was, unsurprisingly, ten years less experienced, and didn't know the headaches I was causing. I feel the need to do some directing of my own sweet self, via a time machine. Barring that, a language that stops other younger programmers from making similar mistakes seems quite appealing to me.
 
Ernest Friedman-Hill
author and iconoclast
Posts: 24207
46
Mac OS X Eclipse IDE Chrome

Originally posted by Michael Ernest:
I've been thinking a while myself about writing a language that starts with what good sample code would look like, then working back to the structure needed to support it. A somewhat different project than yours, EFH, but one that speaks to the final question you posed. Why not decide how it should look, then work backwards?



Funny you should say that. I'm actually working on a "quick start" guide for a certain software product using this same sort of methodology. Writing some nice, clean, simple examples, then adding the necessary convenience APIs and/or syntactic sugar to realize them. I have Perl's "make simple things simple, and difficult things possible" mantra in mind.
 
(instanceof Sidekick)
Posts: 8791

I've been thinking a while myself about writing a language that starts with what good sample code would look like, then working back to the structure needed to support it.



Exactly the path taken by Mike Cowlishaw with REXX. He and his friends wrote hundreds of programs with pencil & paper before implementing one line of the interpreter. The first interpreter was like version 3 of the language. In fact the first version had only one X; the second X was added to distinguish the changes, I think.
 
Ilja Preuss
author
Posts: 14112

Originally posted by Ernest Friedman-Hill:
My recent worry about this whole "concrete inheritance" issue is actually not due to mistakes made by Sun in the APIs, although those are legion. My interest is due to design mistakes I myself made as a younger man, when Java itself was quite new, that I even now still have to live with.



To live with in which ways? Why can't you refactor, for example?

Ten years ago, I was, unsurprisingly, ten years less experienced, and didn't know the headaches I was causing. I feel the need to do some directing of my own sweet self, via a time machine.



Well, yes, that feels familiar...

Barring that, a language that stops other younger programmers from making similar mistakes seems quite appealing to me.



I have the vague feeling that making mistakes simply is part of the learning experience. So I'm not sure that a language should protect us from that. I feel more inclined to believe that beginners should be guided by experienced mentors. Might just be me, though...
 
Ernest Friedman-Hill
author and iconoclast
Posts: 24207
46
Mac OS X Eclipse IDE Chrome

Originally posted by Ilja Preuss:

To live with in which ways? Why can't you refactor, for example?



Public APIs are forever
 
Ilja Preuss
author
Posts: 14112

Originally posted by Ernest Friedman-Hill:


Public APIs are forever



Unless you do something about it.

We have a policy that allows us to remove deprecated API one year after it was deprecated.

It's sometimes painful to have to maintain the deprecated API for a full year, but that arguably works better than having to maintain it until the end of the world...
 
Ranch Hand
Posts: 657
Spring VI Editor Clojure
Well, at least now we know who's a Director and who's an Enabler around here...

 
Tony Morris
Ranch Hand
Posts: 1608
By ensuring symbiotism in your interface operations and only exposing as much implementation detail as you need to (such as declaring only final classes and private constructors, and avoiding other language constructs that violate encapsulation), you would never have the problem of a broken public API. In fact, that is one of the many joys of inheritance; if you need to add an operation, you simply inherit from existing operations while maintaining type safety. If you need to *remove* an operation, you didn't have symbiotism in the first place, and as a result (though not directly), it's an invalid software requirement, and the whole thing should be rewritten. Of course, in practice this may be unacceptable, in which case you provide the optimal workaround, which is context dependent.

It seems to me that we continually talk about the problems we encounter without acknowledging the underlying fundamental flaws. Concrete inheritance is an implicit software requirement defect. Does that mean you endeavour to avoid it? Yes, but in practice you have third-party dependencies that mandate the use of concrete inheritance, so you implement the optimal workaround. In fact, the Java language itself mandates the use of concrete inheritance: class X{} implies concrete inheritance (from java.lang.Object); it's totally unavoidable. Hence this is one of thousands of reasons for the need for a new language with a migration path (which is simple - compile to VM bytecode, or more specifically, remain bytecode independent - tricky, but possible).
 
Michael Ernest
High Plains Drifter
Posts: 7289
Netbeans IDE VI Editor
Hm. Did C# make that improvement?
 
Ilja Preuss
author
Posts: 14112

Originally posted by Tony Morris:
By ensuring symbiotism in your interface operations and exposing only as much implementation detail as you need to (for example, declaring only final classes, using private constructors, and avoiding other language constructs that violate encapsulation), you would never have the problem of a broken public API.



My preferred approach to public APIs is to only publish interfaces and factories.

Still, I don't see how any approach could fully prevent the need to change a public API. After all, the requirements on the API change with time, and as the requirements change, so possibly does what is considered implementation detail.
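As a rough sketch of the "interfaces and factories only" style (Parser and ParserFactory are invented names for illustration):

```java
// Illustrative API: only the interface and the factory are public; the
// implementation is an anonymous class that clients never see.
interface Parser {
    int parse(String input);
}

final class ParserFactory {
    private ParserFactory() {}

    // The implementation behind this factory can be swapped at any time
    // without breaking the published API.
    public static Parser newParser() {
        return new Parser() {
            public int parse(String input) {
                return Integer.parseInt(input.trim());
            }
        };
    }
}

public class ParserDemo {
    public static void main(String[] args) {
        Parser p = ParserFactory.newParser();
        System.out.println(p.parse(" 42 "));   // prints 42
    }
}
```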


In fact, that is one of the many joys of inheritance; if you need to add an operation, you simply inherit from existing operations while maintaining type safety. If you need to *remove* an operation, you didn't have symbiotism in the first place, and as a result (though not directly), it's an invalid software requirement, and the whole thing should be rewritten; of course, in practice, this may be unacceptable, in which case, you provide the optimal workaround, which is context dependant.



I'm not sure I'm following you - just want to note that "adding an operation" is not my typical use of inheritance.


It seems to me that we continually talk about encountered problems without acknowledging the underlying fundamental flaws. Concrete inheritance is an implicit software requirement defect.



You keep saying that, and I keep not understanding what you mean by it. So until we find a way for me to understand what you mean by "concrete inheritance being an implicit software requirements defect", I feel unable to discuss in which ways I see it as a fundamental flaw.

Part of my problem might be related to the fact that I don't see requirements as a strong driver in deciding about the usage of inheritance. To me, maintainability and extensibility are much stronger forces on the structure of the solution domain (i.e. the system that is supposed to solve the problem) than the structure of the problem domain (the requirements) is.
[ September 10, 2005: Message edited by: Ilja Preuss ]

 
Tony Morris
Ranch Hand
Posts: 1608


My preferred approach to public APIs is to only publish interfaces and factories.

Still, I don't see how any approach could fully prevent the need to change a public API. After all, the requirements on the API change with time, and as the requirements change, so possibly does what is considered implementation detail.


I assume you prefer factories because you worked out that Java constructor semantics are a violation of encapsulation (and again, can digress to a requirement defect). This is a good thing. However, the point I am making is that symbiotism must exist in the case of interface operations; otherwise, you have a horrible flaw. The most obvious case is java.util.List, which contains a whole bunch of operations that are not symbiotic. How often have you wanted to erase some of those operations - for example, to provide immutability? This is a perfect example of the need to separate symbiotic operations. Technically and pedantically speaking, symbiotism can never exist between two operations, but in practice, it is often safe to assume that the non-symbiotic relationship will never be a requirement of clients. The important question is, "where is the threshold?". I'll let you be the judge, but make sure you are a well-informed judge first.
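The java.util.List example is easy to demonstrate: because the interface bundles queries and mutations together, an immutable list can only refuse the mutating operations at runtime, never at compile time.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class ImmutableListDemo {
    public static void main(String[] args) {
        List<String> names = new ArrayList<String>();
        names.add("a");

        // List declares add(), so this call compiles fine - the
        // rejection only happens when it actually executes:
        List<String> readOnly = Collections.unmodifiableList(names);
        try {
            readOnly.add("b");
        } catch (UnsupportedOperationException e) {
            System.out.println("add() rejected at runtime, not compile time");
        }
    }
}
```

A read-only interface containing only the query operations would turn this runtime failure into a compile error.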

Almost always my interfaces contain one and only one operation, unless they are subinterfaces, in which case they inherit from two or more interfaces and declare no additional methods (though I don't have any practical examples). Following the rule of symbiotism never causes the 'broken public API' syndrome, by definition. Another way to look at it: declaring two operations on an interface implies that you have the ability to foresee the future - that is to say, that all clients who want Operation1 will always want Operation2 as well. That's a very bold claim, even if it appears sensible on first intuition. The "how things are" versus "how things appear" design pattern (shall I patent that?). Declaring a single operation on an interface declares that today (at this point in time), this operation belongs on this interface; if this operation dies, so does the whole interface. It makes no sense to say this about two operations unless they are extremely tied together intrinsically (approaching absolute, but of course, never arriving, since symbiotism requires specification of implementation detail).
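A sketch of the single-operation-interface idea (the interface names are invented for the example): each interface carries exactly one operation, and a subinterface only aggregates.

```java
// One operation per interface; the subinterface declares nothing new.
interface Source {
    String read();
}

interface Sink {
    void write(String s);
}

interface Channel extends Source, Sink {}

final class Buffer implements Channel {
    private final StringBuilder data = new StringBuilder();
    public String read()        { return data.toString(); }
    public void write(String s) { data.append(s); }
}

public class ChannelDemo {
    public static void main(String[] args) {
        Buffer b = new Buffer();
        b.write("hi");
        // A client that only needs reading depends only on Source:
        Source readOnlyView = b;
        System.out.println(readOnlyView.read());   // prints "hi"
    }
}
```

Erasing write() would kill Sink and Channel, but every client that declared its dependency as Source is untouched.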


I'm not sure I'm following you - just want to note that "adding an operation" is not my typical use of inheritance.


I understand that, and I'm sure a lot of people would agree with you. I blame marketing campaigns, books, etc. for the portrayal of this false doctrine.


You keep saying that, and I keep not understanding what you mean by it. So until we find a way for me to understand what you mean by "concrete inheritance being an implicit software requirements defect", I feel unable to discuss in which ways I see it as a fundamental flaw.

Part of my problem might be related to the fact that I don't see requirements as a strong driver in deciding about the usage of inheritance. To me, maintainability and extensibility are much stronger forces on the structure of the solution domain (i.e. the system that is supposed to solve the problem) than the structure of the problem domain (the requirements) is.


Believe me, I'm just as frustrated about it as you are, if not more. I have discussed it in great detail with my trusted critics (most of whom once held similar opinions to yours) and I have won the debate; we all agree on this. I have explained how it can digress to software requirements (within a reasonably assumed axiom). However, one thing I have not done is formally express it in writing, and every day I think about how I would go about that to ensure I cover all the important points, without boring the reader, and while being able to convince even the most ignorant audience (like I did with my said critics). That's why I got annoyed at Holub's lame attempt: http://www.jtiger.org/articles/why-extends-is-not-evil.html

Anyway, I apologise for leaving out what is clearly an important detail in my argument, but I don't feel that I would do the topic justice on an internet forum just by continuous babbling - I prefer a formal proof, since I also know that one exists.
 
Mr. C Lamont Gilbert
Ranch Hand
Posts: 1170
Hibernate Eclipse IDE Ubuntu
In reviewing my code I notice all (slight exaggeration) my classes are either abstract or final. I guess I don't have much concrete inheritance. It's probably from my attempt to use composition. I'm not so rigid in my approach anymore, but some things linger.
 
Michael Ernest
High Plains Drifter
Posts: 7289
Netbeans IDE VI Editor

Originally posted by Ilja Preuss:

I have the vague feeling that making mistakes simply is part of the learning experience. So I'm not sure that a language should protect us from that. I feel more inclined to believe that beginners should be guided by experienced mentors. Might just be me, though...


I'm reminded of some corny wisdom one sees now and then: "There are two kinds of fools in the world; one who never does what he's told, and one who never does anything else."

Making mistakes is one way to call it; experimentation is another. Finding fault in the process seems to me needlessly harsh. Finding the edge at which things seem to work by themselves or don't is, outside the realm of personal safety, a critical step. People who aren't willing to experiment never get the insights that come from that.

It's a far worse crime as a teacher, in my opinion, to insist students do as you do, think as you think. If they're not making choices, you're not helping them do anything but copy you. And while that may be fine for mastering the fundamentals of any practice, it's no good for teaching people to go out and learn on their own.

I like your word guidance. That's the key.
 
Ernest Friedman-Hill
author and iconoclast
Posts: 24207
46
Mac OS X Eclipse IDE Chrome
I can't find anything specific to disagree with here, but this line of argument still makes me uneasy. If giving the programmer all the rope they need to hang themselves is a good idea, why are we programming in Java rather than C++? Surely C++ lets you do many things that Java doesn't.

Good programming tools let you think about the problem, rather than about your process. Rather than asking people to avoid certain possibilities because they lead to trouble, doesn't simply making those troublesome possibilities illegal make everyone ultimately more productive? It's one less thing to worry about. Isn't that what Java was designed to accomplish?

So is Java Newspeak already? Or is it not Newspeak enough?
 
Wanderer
Posts: 18671
[CLG]: In reviewing my code I notice all (slight exaggeration) my classes are either abstract or final. I guess I don't have much concrete inheritance.

No, concrete inheritance refers to inheriting an implementation, as opposed to inheriting only a declaration. Whenever you extend a class, you're inheriting method implementations - that's what's meant by concrete inheritance here. Conversely, if you implement or extend an interface (with a class or interface, respectively), you're not inheriting implementations, only declarations. If you've got abstract classes, you must be extending them - thus, you're using concrete inheritance.
[ September 12, 2005: Message edited by: Jim Yingst ]
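In code, the distinction Jim describes looks like this:

```java
// Concrete inheritance: Child gets describe()'s implementation for free.
class Parent {
    public String describe() { return "parent behavior"; }
}

class Child extends Parent {}   // inherits an implementation

// Interface inheritance: only the declaration is inherited,
// so the implementing class must supply its own body.
interface Describable {
    String describe();
}

class Standalone implements Describable {
    public String describe() { return "own behavior"; }
}

public class InheritanceDemo {
    public static void main(String[] args) {
        System.out.println(new Child().describe());      // prints "parent behavior"
        System.out.println(new Standalone().describe()); // prints "own behavior"
    }
}
```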
 
Ilja Preuss
author
Posts: 14112

Originally posted by Ernest Friedman-Hill:
I can't find anything specific to disagree with here, but this line of argument still makes me uneasy. If giving the programmer all the rope they need to hang themselves is a good idea, why are we programming in Java rather than C++? Surely C++ lets you do many things that Java doesn't.



Good point. And I think the answer is: there needs to be a balance. Don't keep something just because *someone* might find it to be useful; don't drop something just because *someone* might misuse it. I'd even guess that the balance has to be different for different needs - that might be one of the reasons why there is more than one language...

I think a good question to ask could be "how much would an experienced developer miss a feature after one year working without it?"

Being bold enough to talk about myself: I'm sure I don't miss pointer arithmetic. I'm not sure whether I'd miss multiple inheritance if I had had more experience with it. I'm quite sure I'd miss concrete inheritance (though it might be possible to design a language feature that could replace it). I'm not at all sure I'd miss static type checking.


Good programming tools let you think about the problem, rather than about your process. Rather than asking people to avoid certain possibilities because they lead to trouble, doesn't simply making those troublesome possibilities illegal make everyone ultimately more productive? It's one less thing to worry about.



Personally I think that *avoiding* concrete inheritance is throwing out the baby with the bath water. My current approach is to use it "wisely".

Isn't that what Java was designed to accomplish?



If so, it certainly had to be balanced with other goals - market acceptance, time to market, development costs...


So is Java Newspeak already? Or is it not Newspeak enough?



Sorry, my dictionary doesn't know "newspeak"...
 
Ilja Preuss
author
Posts: 14112

Originally posted by Michael Ernest:

Making mistakes is one way to call it; experimentation is another. Finding fault in the process seems to me needlessly harsh. Finding the edge at which things seem to work by themselves or don't is, outside the realm of personal safety, a critical step. People who aren't willing to experiment never get the insights that come from that.



Just today I read in "Lean Software Development" that science is most efficient when around half of the experiments fail, because failed experiments generate way more new knowledge than successful ones.

The more we need software development to be innovative, the more we probably should allow software development to be experimental?
 
Ilja Preuss
author
Posts: 14112

Originally posted by Ernest Friedman-Hill:
Good programming tools let you think about the problem, rather than about your process. Rather than asking people to avoid certain possibilities because they lead to trouble, doesn't simply making those troublesome possibilities illegal make everyone ultimately more productive?



I'm a trained precision mechanic. Most of the time in the first year of the apprenticeship we learned how to rasp.

Of course rasping isn't very productive for most things. At the end of the apprenticeship, you don't rasp very much - but you don't totally ban rasping either; you just know when to do it and when not to. And of course you've learned about some things you should *never* do, such as using a drill to deburr a brass component (I had to learn that the hard way - fortunately, no one got hurt).

OK, this isn't a perfect analogy. But perhaps there is *something* to think about in it...
 
Michael Ernest
High Plains Drifter
Posts: 7289
Netbeans IDE VI Editor
EFH: I can't find anything specific to disagree with here, but this line of argument still makes me uneasy. If giving the programmer all the rope they need to hang themselves is a good idea, why are we programming in Java rather than C++? Surely C++ lets you do many things that Java doesn't.

ME: Can you make a rope that's good for lariats but not for nooses?

EFH: Good programming tools let you think about the problem, rather than about your process.

ME: And whenever they come out, I'll be in line for a set!

EFH: Rather than asking people to avoid certain possibilities because they lead to trouble, doesn't simply making those troublesome possibilities illegal make everyone ultimately more productive?

ME: Giving up power for safety is a touchy and subjective topic. If we can express that trade-off as improved ease or convenience - or less hair-pulling around the debugger - it's easy to make the case. I remember very few of my early Java-adopter students missing pointer arithmetic. I remember lots of them mourning the loss of operator overloading. The loss of multiple inheritance was also a substantial point of contention for some power users.

EFH: It's one less thing to worry about. Isn't that what Java was designed to accomplish?

ME: I'm sure you meant to say XML, not Java.

Seriously though: by way of promoting rapid development, yes. Another key Java goal was to induce massive migration from C/C++. I don't imagine Java could have succeeded on the second point without supporting concrete inheritance. But because Java has removed some problematic aspects of C/C++, it's easier to see what else we could also profit from removing.

I think general-purpose programming is still more art than science and will be for some time. Art, in this sense, means making effective choices from the materials at hand. Peter van der Linden said back in '94 that program proofs aren't ready for prime time and I don't see what's changed since. We still need to counsel new programmers on good choices until the tool that paves not the road to hell is realized.
 
Ilja Preuss
author
Posts: 14112

Originally posted by Michael Ernest:
I think general-purpose programming is still more art than science and will be for some time.



My gut feel is that designing innovations always will be - be it software, cars, buildings, whatever...
 
Ernest Friedman-Hill
author and iconoclast
Posts: 24207
46
Mac OS X Eclipse IDE Chrome

Originally posted by Ilja Preuss:
Sorry, my dictionary doesn't know "newspeak"...



George Orwell, 1984. A state-engineered human language in which it is impossible to express ideas the totalitarian state doesn't want you to have. Obviously this is usually used disparagingly, although there's an obvious positive parallel to computer language design.
 
Mr. C Lamont Gilbert
Ranch Hand
Posts: 1170
Hibernate Eclipse IDE Ubuntu

Originally posted by Jim Yingst:
[CLG]: In reviewing my code I notice all (slight exaggeration) my classes are either abstract or final. I guess I don't have much concrete inheritance.

No, concrete inheritance refers to inheriting an implementation, as opposed to inheriting only a declaration. Whenever you extend a class, you're inheriting method implementations - that's what's meant by concrete inheritance here. Conversely, if you implement or extend an interface (with a class or interface, respectively), you're not inheriting implementations, only declarations. If you've got abstract classes, you must be extending them - thus, you're using concrete inheritance.

[ September 12, 2005: Message edited by: Jim Yingst ]




Interesting. I don't see a problem with inheriting from classes designed exclusively for that purpose. As stated earlier, I think the problem comes in when you inherit interfaces through your extension. Basically, I don't have any classes that appear both as inheritable and as types used anywhere. So you can call it concrete inheritance, but it would be tough to prove, as you would never see the parent class anywhere.

I hope you get my meaning here?
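If I read this right, the discipline looks something like the following sketch (names invented): the concrete class is final and appears only at the `new` expression, while every declared type is an interface.

```java
interface Shape {
    double area();
}

final class Circle implements Shape {   // final: can never be a parent
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    public double area() { return Math.PI * radius * radius; }
}

public class ShapeDemo {
    // Parameters, fields, and locals all use the interface type:
    static double total(Shape... shapes) {
        double sum = 0;
        for (Shape s : shapes) sum += s.area();
        return sum;
    }

    public static void main(String[] args) {
        Shape unit = new Circle(1.0);   // Circle is named only here
        System.out.println(total(unit));   // area of the unit circle, about 3.14159
    }
}
```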
 
Ernest Friedman-Hill
author and iconoclast
Posts: 24207
46
Mac OS X Eclipse IDE Chrome
C.L., I think we're on the same wavelength. All the evil, in my view, comes from extending a class and making use of the type this bestows. Inheriting implementation from a class is one thing, but inheriting type is quite another.
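A sketch of "inheriting implementation but not type": Java has no private inheritance, but delegation gets close - here a stack (all names invented) reuses ArrayList's implementation without ever *being* a List.

```java
import java.util.ArrayList;

interface Stack {
    void push(String s);
    String pop();
}

final class ListBackedStack implements Stack {
    // The ArrayList is a tool, not a parent: its type never leaks out,
    // so no client can treat this stack as a List.
    private final ArrayList<String> items = new ArrayList<String>();

    public void push(String s) { items.add(s); }
    public String pop()        { return items.remove(items.size() - 1); }
}

public class StackDemo {
    public static void main(String[] args) {
        Stack st = new ListBackedStack();
        st.push("a");
        st.push("b");
        System.out.println(st.pop());   // prints "b"
    }
}
```

Had ListBackedStack extended ArrayList instead, it would have inherited both unwanted operations (get, clear, ...) and the List type itself.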
 
Ilja Preuss
author
Posts: 14112

Originally posted by Ernest Friedman-Hill:
Inheriting implementation from a class is one thing, but inheriting type is quite another.



Can you give us an example of an occasion where inheriting the type from a class got you into trouble? Did you violate LSP, or something?