
Why use assertions rather than exception handling?

 
Tony Morris
Ranch Hand
Posts: 1608


The conditions you are asserting must be tested as rigorously. You are picking up where the compiler leaves off.


This is the important point right here. You're right, you're using runtime safety where compile-time type safety cannot be used. I use the term "cannot" here because the language does not allow it, or if it does, it is so vastly misunderstood that most developers don't know how to apply it. For example, the very existence of 'null' violates the very fundamental objectives of the Java programming language; that is, compile-time type safety. What do you do if you're passed a null reference to a public API? You're forced to handle it at runtime, optimally with immediate failure (e.g. NullPointerException). I understand that this point is glossed over, since I haven't seen it sprawled across the front page of sun.com, nor do I believe it ever will be. I can think of hundreds more points just like it. (While on the topic, and to answer Ilja's question on another forum regarding the language... without going into too much detail, it is a type-safe object-oriented language specification and API specification that doesn't have a primary agenda of repairing the thousands of mistakes of Java/C#/LISP/Smalltalk/Scheme, but does so as a mere consequence. To fully answer your question, I'd prefer to present you with the API specification, which is incomplete (to be honest, it is mere scratch notes in a friend's SVN repos). The language and API specification are bound (dependent) at well-defined points, but there is no dependency on a bytecode representation specification. This means that the language can compile to JVM bytecode, .NET bytecode, native bytecode, etc. The language is vastly different to Java, much simpler, and I predict much easier to learn and use. I theorise in this way because most of the Java that I write is working around its brokenness, which is extremely annoying.)

In any case, that Java is an extremely suboptimal example of a type safe language aside, the very existence of the assert keyword again violates the same objective. It makes no sense to have "fail early, but only when I switch it on". The optimal place to fail is at compile/build time. If you cannot do that for whatever (and it better be good) reason, you fail *immediately* at runtime. Not because "maybe I'll fail", not because it's a "developer error, not an application error" (where have I seen that before?). When it comes down to requirement analysis, you'll find that these terms turn into nothing more than artificial replacements for concepts that have non-trivial explanations. They are ill-defined at best (please don't reiterate the marketing literature here).
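Tony's "fail *immediately* at runtime" point, applied to the null-passed-to-a-public-API example, might look like this in code. This is only an illustrative sketch; the Account class and its names are made up:

```java
// Fail-fast null handling at a public API boundary: reject the bad
// argument immediately rather than letting it surface later in some
// unrelated place.
final class Account {
    private final String owner;

    Account(String owner) {
        // The compiler cannot rule out null here, so the best we can
        // do is fail at the earliest possible moment at runtime.
        if (owner == null) {
            throw new NullPointerException("owner must not be null");
        }
        this.owner = owner;
    }

    String owner() {
        return owner;
    }
}
```

The failure point is then the call site that supplied the null, not some distant method that happened to dereference it first.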

I hope this is enough explanation to satisfy those requesting it.
[ August 30, 2005: Message edited by: Tony Morris ]
 
Ranch Hand
Posts: 531
Insofar as we are all using Java here, the discussion of a potentially superior language is moot. Smalltalk had extremely limited success, and with good reason. Eiffel is still a colossal dud ten-plus years after the fact. They can tell you how great it is knowing there's little chance you use it or ever will. In reality the problems are many and deep.

C++ leveraged the massive C language culture and cultural artifacts and was wildly successful from a commercial perspective. Java did the same, leveraging the C++ culture and its artifacts. Successful doesn't begin to describe the impact of Java.

So, here on Earth the dominant business language is Java -- I won't presume to speak for other planets, just Earth, tee hee. We used to have a version that had no assert facility. That changed, and I started using it to great effect. It is responsible for a materially significant increase in agility. Yes, it does require extralinguistic gymnastics in the form of ant-driven unit tests, but last time I checked our customers don't care whether we used a language facility or waved a rubber chicken over the build machine while chanting.
[ August 31, 2005: Message edited by: Rick O'Shay ]
 
Ernest Friedman-Hill
author and iconoclast
Posts: 24207
Not to hijack too much, but Tony, tell us what language (real or imaginary!) you'd program in, given a perfect world? Of all the Java experts I know, you seem to be the one that likes Java the least...
 
Ilja Preuss
author
Posts: 14112

Originally posted by Tony Morris:
You're right, you're using runtime safety where compile-time type safety cannot be used. I use the term "cannot" here because the language does not allow it, or if it does, it is so vastly misunderstood that most developers don't know how to apply it. For example, the very existence of 'null' violates the very fundamental objectives of the Java programming language; that is, compile-time type safety. What do you do if you're passed a null reference to a public API? You're forced to handle it at runtime, optimally with immediate failure (e.g. NullPointerException).



I agree that a language in which you could declare a variable to not allow the null value would be more elegant (do you know Nice?).

On the other hand, I'm not sure that all things that you might want to assert for could effectively be checked at compile time by any language.

(While on the topic, and to answer Ilja's question on another forum regarding the language... without going into too much detail, it is a type-safe object-oriented language specification and API specification that doesn't have a primary agenda of repairing the thousands of mistakes of Java/C#/LISP/Smalltalk/Scheme, but does so as a mere consequence. To fully answer your question, I'd prefer to present you with the API specification, which is incomplete (to be honest, it is mere scratch notes in a friend's SVN repos). The language and API specification are bound (dependent) at well-defined points, but there is no dependency on a bytecode representation specification. This means that the language can compile to JVM bytecode, .NET bytecode, native bytecode, etc. The language is vastly different to Java, much simpler, and I predict much easier to learn and use. I theorise in this way because most of the Java that I write is working around its brokenness, which is extremely annoying.)



I would be very interested in learning about such a language. (Of course it could also be somewhat frustrating, because having a good language and making it a successful mainstream language are two totally different things, obviously.)

In any case, that Java is an extremely suboptimal example of a type safe language aside, the very existence of the assert keyword again violates the same objective.



I really don't think that "being statically type safe" was a main objective of Java.

It makes no sense to have "fail early, but only when I switch it on". The optimal place to fail is at compile/build time. If you cannot do that for whatever (and it better be good) reason, you fail *immediately* at runtime.



That I fully agree with!


I hope this is enough explanation to satisfy those requesting it.



I'm satisfied for now, thanks! Time will tell for how long...
 
Ilja Preuss
author
Posts: 14112

Originally posted by Rick O'Shay:
>> Anyway, I don't think that having bugs being concealed and hoping for the best is very professional.

In addition to the mountain of best practices I am adding asserts with rigorous unit tests. What you are suggesting is that regardless of how well you build your system and how rigorous your testing, if you turn off asserts it's unprofessional.



I don't think that turning off asserts is unprofessional. I think there are probably valid reasons for disabling asserts, but "we should simply hope that bugs correct themselves" isn't one.

All the asserts that are made dispensable at production time by rigorous testing can obviously safely be enabled, as they can't possibly fail. Those that will fail at production time are obviously those that slipped through our testing net, so it seems to me that "I have tested enough" can't possibly be a good excuse for disabling them. What am I missing?


In fact disabling is the recommended practice.



I know that it is a common practice, which doesn't make it a *good* practice.

Who does recommend it? A quick google search only brought up articles that gave the performance argument for disabling. Actually one of them argued that checks that aren't expensive shouldn't be implemented using the assert feature, because they shouldn't be disabled in production.

You can do that without killing a user's thread when a constraint violation slips through the cracks. I think that is the point of contention. Is it better to crash the thread or continue on? In the case of assertions (keeping in mind we still have all of the business logic and exceptions) the consensus appears to be continue. I agree it is not a cut and dried issue.



I don't see that consensus - where do you see it?

In my experience, continuing simply doesn't make sense in most cases - it just leads to unwanted system behaviour, probably even other exceptions.

Now what I didn't think about (due to missing experience) is that assertions actually throw an Error instead of an Exception. Is that what causes your concern of "killing the thread"?

If you argue that I should throw a regular exception instead of using an assert, I could certainly live with that...

>> When disabling the assert, the problem is still there, it's just not as visible.

Of course. Nobody disputes that. See previous paragraph.



OK. I like problems to be visible. That allows me to take actions in response. I think I should know about them, as early as possible.

You have to have your business logic to take care of expected "errors" and exceptions for unexpected environmental errors. The assertions test errors that the programmer makes that are outside the scope of the compiler. If the unit test fails the build fails just as hard as some source not compiling.



I fully agree up to this point!

So if we assumed that we don't have any bugs in the system in production, I'd agree that we could disable the asserts. I still wouldn't say "should", simply because I don't see how that would make a difference.

But in the case that there still is a bug, I want to know about it, and I actually want the user to know about it, so that we both can take proper actions. That's why I want the asserts to be enabled in production.

>> Yes. My argument is that disabling asserts doesn't make a product more robust. It just conceals problems

If you fail to adequately test then we agree. Testing and asserts are inextricably bound.



I don't follow this reasoning. If there is a problem that would be covered by an assert, of course we didn't test adequately for that problem. That doesn't mean we didn't try. A failing assert just is valuable feedback in that case, so it seems to me - we can use it to learn about the weak spots in our tests, and fix them.

And a system that simply cancels the current action and notifies the user of a failed assert doesn't typically cause more havoc than a system that is allowed to continue to work in an invalid state, in my experience.

It is my (quite subjective) impression that the typical *real* reason for disabling asserts is the assumption that a failed assert causes more havoc regarding the user's trust in the system than a system that "just" behaves strangely. But that's caused more by systems of crappy quality than by asserts in general, in my not so humble opinion.


>> It still seems to me that "turn all on" could be the default without affecting the other two use cases at all.

Agreed, as long as you have the option of turning them all off, too.



Well, yes, that's what I assumed!
 
Rick O'Shay
Ranch Hand
Posts: 531
In a nutshell, you believe assertions should be left on in production. In my opinion they should be left off. We have to agree to disagree on this point.

Perhaps consensus is the wrong word but the recommended practice is to turn off in production. I agree with that recommendation. I see a danger of using them like exceptions when they are not. I see a danger of side effects bleeding through assert statements and changing runtime behavior.

Asserts add value, but you still have the same responsibilities in the runtime you always had. If you are depending on asserts in production you aren't using them correctly, from my perspective. They should be used as compiler extensions in conjunction with unit tests. Clearly subject to interpretation and varying opinion, but that's mine.
[ August 31, 2005: Message edited by: Rick O'Shay ]
 
Rick O'Shay
Ranch Hand
Posts: 531
>> But in the case that there still is a bug, I want to know about it, and I actually want the user to know about it, so that we both can take proper actions. That's why I want the asserts to be enabled in production. <<

You may be thinking, gosh, how can anybody disagree with that? So I think I should expand on how disagreement is possible here. Back up to 1.3 when you didn't have asserts. How would this issue manifest? How would it be handled? Now, you still have all of those responsibilities. An error will be manifest and reported or not. Fast-forward and add constraint checks using assertions. Add unit tests to validate those assertions, or use functional tests, or even keyboard monkeys if that's what you have.

Let's analyze. You added considerable value with your fail-early strategy based on assertions. Go through your test cycle and ship it. Now, you turn off assertions. You are back to the standard facilities for error handling. Because assertions are turned off, you aren't abusing them as cheap exception management. You also don't have assert expression dependencies. In fact, you should repeat your functional/integration tests without the assert facilities.

Your strategy will toast threads at runtime if an assertion fails, mine will yield the natural production result, whatever that may be, and be handled or not. I'm never wholly dependent on asserts, you could slip in that direction.

Let's say that both points of view have merit -- even though secretly I know you are wrong. No wait! That just slipped out, sorry
 
Mr. C Lamont Gilbert
Ranch Hand
Posts: 1170
The only value in asserts is turning them off at runtime. If you are not turning them off at runtime, you should be using exceptions.

Personally, I don't have any asserts that are not backed up eventually by exceptions. I do asserts within my internal libraries, and exceptions on the API. assert helps me to catch the bug earlier so I can see where it is (and fix it), not so the program crashes gracefully.
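The split described here — asserts inside internal libraries, exceptions on the API — can be sketched like so. The Ledger class and its methods are purely illustrative:

```java
// Public API validates with exceptions (always on); internal code uses
// assert to catch programmer errors during development (on under -ea).
final class Ledger {

    // API boundary: a contract violation is a real exception.
    int balanceAfter(int balance, int delta) {
        if (balance < 0) {
            throw new IllegalArgumentException("balance must be >= 0");
        }
        return applyDelta(balance, delta);
    }

    // Internal helper: our own callers should already have validated,
    // so a failure here is a bug in this library, not in the client.
    private int applyDelta(int balance, int delta) {
        assert balance >= 0 : "caller should have validated balance";
        return balance + delta;
    }
}
```

The assert catches the bug close to where it was introduced during development, while the exception remains the backstop that clients can rely on in production.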
 
Tony Morris
Ranch Hand
Posts: 1608

Originally posted by Ernest Friedman-Hill:
Not to hijack too much, but Tony, tell us what language (real or imaginary!) you'd program in, given a perfect world? Of all the Java experts I know, you seem to be the one that likes Java the least...



I certainly don't dislike Java; I just like to acknowledge its flaws. This at least allows me to make informed decisions, such as "don't use the assert mechanism". The language is full of flaws, and it surprises me that many of the workarounds commonly applied are responses to a flaw that goes unrecognised. I can't "dislike" Java; I implement the 1.5 specification as part of my day job every day!! I also believe that what we have today is extremely suboptimal, and that the critical mass beliefs are nothing more than a product of propaganda (maybe that's too strong a word). For example, nobody out there (in Sun marketing) will tell you that the autoboxing of integer values between -128 and 127 is a dirty hack that is a result of exposing a constructor (which should always be declared private). There are *hundreds* of examples.

In any case, I've started working on my own language and API specification along with another person who has a similar outlook as I do, but admittedly, time is rare for both of us these days, so it has stalled somewhat. The important point is that I believe it will highlight all the existing flaws in other languages through its simplicity and through a practical demonstration of its API specification. These two must be written entirely from scratch since the JLS and J2SE APIs are both intrinsically tied to some very fundamentally flawed concepts. For example, I claim without a doubt that using concrete inheritance is an implied design flaw, since it cannot fit into valid requirements (assuming a very broad axiom). After some very thorough discussion (many hours) with some of my biggest and, in my opinion, most intelligent and objective critics (i.e. in person, not on the internet, since it is too tiring), it has been determined, even by those who once religiously believed otherwise, that the best possible refutation to my claim is that it "saves typing in a Java context". I eagerly await anything more substantial, since I too enjoy learning, but until then, my assertion stands. Now if the axiom of requirements is "to save as much typing as possible", then the use of concrete inheritance may well be valid; however, my claim assumes no such axiom. I also claim that requirements cannot be accurately expressed in English - they can only approach accuracy - which makes it difficult to discuss. In any case, with a relatively superior language specification, this refutation becomes moot, since there is no typing to save; it simply isn't a valid construct or concept.
 
Ranch Hand
Posts: 688
Hi Punts,
This seems to be one really classic, high-quality discussion going on.
Rather than adding something I just like to sit aside and read the thread every day.
 
Rick O'Shay
Ranch Hand
Posts: 531
>> nobody out there (in Sun marketing) will tell you that the autoboxing of integer values between -128 and 127 is a dirty hack...

It keeps those wrappers cached. You will not notice it unless you are in the habit of comparing objects with == for some reason. The same "problem" exists with the string pool, or user-defined pools.
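The caching behaviour is observable with ==, which compares references rather than values. Integer.valueOf guarantees caching only for -128 to 127; a quick sketch:

```java
// Demonstrates the -128..127 wrapper cache via reference comparison.
final class CachePeek {
    public static void main(String[] args) {
        Integer a = Integer.valueOf(100);   // inside the cached range
        Integer b = Integer.valueOf(100);
        Integer c = Integer.valueOf(1000);  // outside the cached range
        Integer d = Integer.valueOf(1000);

        System.out.println(a == b);      // true: same cached instance
        System.out.println(c == d);      // typically false: distinct instances
        System.out.println(c.equals(d)); // true: equals() compares values
    }
}
```

Which is exactly why comparing wrappers with == instead of equals() is the habit that gets you into trouble.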

I am sure that if you look very hard you can find microscopic flaws along the outer edges of anything.
 
Tony Morris
Ranch Hand
Posts: 1608
I'm well aware of the hacked-up caching mechanism, but I pose the question to you: "why -128 to 127?" In fact, who has a crystal ball that determines what the optimal caching scheme is? Is it fair to say "oh, it will handle X% of the general case"? If so, who did the survey? I certainly wasn't asked.

The fundamental problem is that the new keyword is guaranteed to return a new instance, which is incredibly unfortunate. A decent language (with the same stated objectives as Java) would have explicit return values from constructors (thus 'new' returns a contractual type only, with no semantics about whether or not it is the same instance, and so not violating encapsulation), or in the case of Java, the optimal workaround is to declare all constructors private and use static creation methods. It's incredibly verbose, but it's necessary in order to ensure at least partially correct design. This allows you (the client of the contract) to configure the cache to your needs. It's incredibly arrogant of any API to do this for you in whatever way it sees fit. A non-private constructor violates encapsulation in the same way as a non-private field, only worse, since it also exposes, and binds the client to, implementation detail.
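The workaround described — all constructors private, creation through static methods — might be sketched as follows. The Temperature class is a made-up example; behind its creation method it is free to cache or not, and the client cannot tell:

```java
// Private constructor plus static creation method: the caller gets a
// contractual value, with no promise about instance identity.
final class Temperature {
    private static final Temperature ZERO = new Temperature(0);

    private final int degrees;

    private Temperature(int degrees) {
        this.degrees = degrees;
    }

    // May return a cached instance or a fresh one; that choice is an
    // implementation detail the client is never bound to.
    static Temperature of(int degrees) {
        return degrees == 0 ? ZERO : new Temperature(degrees);
    }

    int degrees() {
        return degrees;
    }
}
```

Because no constructor is exposed, the caching policy can change later without breaking any client.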

If you wish to use Agile processes, or Test Driven Development, as your basis for correctness, then I don't see why these issues are overlooked; they appear to be at the forefront! ...at least to me and a few others I discuss it with.
 
Ernest Friedman-Hill
author and iconoclast
Posts: 24207
Tony, I share your negative feelings toward concrete inheritance, because I've gotten majorly burned by the legacy it can leave you. I would love to talk about this more -- I'm going to start another thread in this forum. Please look for it.
 
Rick O'Shay
Ranch Hand
Posts: 531
>> why -128 to 127?

The language requires these wrappers be cached for performance reasons. A single byte index has obvious performance advantages assuming those values represent the most commonly used integers. You can skip bounds checking if your index is a byte and your array has 256 values. The JVM requires this cache so plenty of optimization is possible.
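The byte-index argument can be sketched as a 256-entry array cache, where the computed index provably never needs a bounds check. This is an illustrative sketch, not the actual JVM implementation:

```java
// A 256-entry cache of wrappers for -128..127. A byte is always in
// -128..127, so (b + 128) is always in 0..255: no range check can fail.
final class SmallIntCache {
    private static final Integer[] CACHE = new Integer[256];
    static {
        for (int i = 0; i < 256; i++) {
            CACHE[i] = Integer.valueOf(i - 128); // -128..127
        }
    }

    static Integer get(byte b) {
        return CACHE[b + 128]; // index provably in bounds
    }
}
```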

4 out of 5 dentists surveyed agreed: 256 is the right number of integer wrappers to cache. That's good enough for me.
[ September 01, 2005: Message edited by: Rick O'Shay ]
 
Tony Morris
Ranch Hand
Posts: 1608
OK, let's look at it from a requirements perspective. I realise I'm in dangerous waters here because the complete explanation is extremely verbose. Java has another language deficiency - specifically, the existence of primitive data types. Nonetheless, it provides "wrapper" types that allow you to interface to a primitive type through a reference. Now, English requirements are implicitly deficient and can only approach completeness - unit testing is one way to improve on this, i.e. to express requirements through unit tests. In any case, when you, for example, "wrap an int", ask yourself, what is your requirement? Is it *really* to "represent an int as a java.lang.Integer"? I'll bet not; in fact, who cares what type it is? And then, who cares if it is a new instance or not (since they are immutable types by contract)? What you really want is "to represent an int as a reference type", nothing more; anything more is exceeding requirements. The best (though not optimal) way you can meet this requirement in Java is with a method, such as TheIntegerContract get(int i). TheIntegerContract would typically declare all the public methods of java.lang.Integer. Then you could interface with the int through the contract. You don't care if it came from a cache, if it's the same or a new instance, whatever. Your requirement is met, and not exceeded. Of course, the method could always return 'null', so you must stipulate that it "shouldn't", and that if it does, you get runtime safety only with an explicit fail early (i.e. if (ret == null) fail()).

By the same token, by exposing the java.lang.Integer constructor, you're forcing clients to accept the implementation detail of "a new instance is returned" (implied by the new keyword). You're exceeding requirements - you don't care, and you shouldn't *ever* care. It's the job of the contract implementation to care about that, not the client of the contract.
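A rough sketch of the TheIntegerContract idea, with hypothetical names (the post only names the get(int) method): the point is that get() promises a contract only and says nothing about instance identity.

```java
// The client sees only a contract for "an int as a reference type".
interface IntValue {
    int intValue();
}

final class IntValues {
    private IntValues() {}

    static IntValue get(final int i) {
        // Free to return a cached, pooled, or fresh object;
        // the client cannot observe which.
        return new IntValue() {
            public int intValue() {
                return i;
            }
        };
    }
}
```

Swapping in a cache behind get() later would be invisible to every caller, which is exactly the encapsulation being argued for.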
 
Mr. C Lamont Gilbert
Ranch Hand
Posts: 1170
Hibernate Eclipse IDE Ubuntu
First, your complaints about ints and the constructor hack: is this in the specification, or is this part of a single vendor's implementation? I have also found that private inner classes use hidden exposed constructors, but how does this affect me? Security is not achieved through hiding constructors, and security is the only time to be concerned with such hidden details.

And about this 'exposed constructor' I simply cannot agree. I can't think of any reason why 'new' should not produce a new object. If it didn't, then it should not be called new; call it 'get'. Besides, I fail to see your problem with this. If you want to hide such detail you can easily use a factory method. Is that not enough, or do you want to force us all to do the same? To what end?

I have read many of your posts and have yet to find a concrete example of a flaw and its corresponding drawbacks. All I see is 'philosophizing.'
 
Rick O'Shay
Ranch Hand
Posts: 531
I think everybody agrees that exceeding requirements is generally poor form and that exposing implementation details is to be avoided. I do not see any practical problem in this particular example. As a practitioner I depend on the JCP to deliver features that solve more problems than they create, while the problems that are created are either inconsequential or utterly trivial to work around. Autoboxing delivers benefits that on balance are better than what we had without it.
 
Tony Morris
Ranch Hand
Posts: 1608


I have read many of your posts and have yet to find a concrete example of a flaw and its corresponding drawbacks. All I see is 'philosophizing.'


I am merely attempting to encourage the required mindshift and objectivity before one can even begin starting to reason, and look at concrete examples, etc.

As I said, the full explanation, with examples, indisputable proof (at least, against the very weak arguments that are presented here, and that were refuted and agreed upon as superfluous many eons ago by my trusted critics), alternatives, clear definitions of an axiom, how it relates to software requirements, exceptions (a portion of a stated axiom), optimal workarounds and their drawbacks, how to assess the practical approach (e.g. banning concrete inheritance implies that you do not use Java, but non-technical reasons may mandate that you do) etc. is extremely verbose and worthy of a text on its own. I'd write a book, but I'm too busy procrastinating on Java forums. I certainly have no intention of writing that book on this forum. I'll just continue to encourage analysis on your (the arbitrary reader) behalf. If it makes any difference, it's extremely enlightening - I remember the day that I believed all the orthodoxical bollocks that seems to get thrown around as if it were some kind of eternal truth and I thank our creator, whoever that might be, for giving me the ability to think for myself.
 
Ilja Preuss
author
Posts: 14112

Originally posted by Rick O'Shay:
Perhaps consensus is the wrong word but the recommended practice is to turn off in production. I agree with that recommendation.



I was just skimming through "The Pragmatic Programmer" and by chance on page 120 I found chapter 22 "Dead Programs Tell No Lies", with Tip 32 "Crash Early", followed by a section titled "Crash, Don't Trash". The last sentence reads "A dead program normally does a lot less damage than a crippled one." I couldn't agree more...


I see a danger of using them like exceptions when they are not. I see a danger of side effects bleeding through assert statements and changing runtime behavior.



Having them disabled by default doesn't remove that danger, as far as I can tell, does it?
 
Rick O'Shay
Ranch Hand
Posts: 531
If you unit test post-disable (and test failure response in addition to correct behavior) it does help prevent use of assertions as exceptions rather than the programmer errors they are supposed to be testing. That will also expose any assert statement dependencies.

OK, so here is a counter argument. You leave them enabled and you use them as exceptions and you inject application dependencies in the assert statements from time to time. If they are always on then so what? The "side effects" are now just operational code. The assert exception is just another unchecked exception.

Here is my response: you just lost the ability to sprinkle your code with assertions without regard to performance. You also lost the intent: check for programmer errors not exceptional conditions.

This is a value-add mechanism that does not absolve you of your current responsibilities in terms of exceptions and error handling in general. IMO disabling in production promotes the true raison d'être for asserts. There is zero performance penalty for saturating your code with assertions.

Assertions will help you crash early and often in new and interesting ways, but only if you don't have to worry about the effect they will have on production. No footprint. No performance impact. Sweet.
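The "side effects bleeding through assert statements" danger mentioned earlier can be made concrete. In this sketch (an illustrative example, not anyone's real code) the asserted expression mutates state, so the method's result depends on whether assertions are enabled:

```java
import java.util.ArrayList;
import java.util.List;

// A side-effecting assert: behaviour differs between -ea and default runs.
final class SideEffectDemo {
    static int logSizeAfterAssert() {
        List<String> log = new ArrayList<String>();
        // BAD: the asserted expression has a side effect. List.add
        // returns true, so the assert never fires, but the add only
        // *runs* when assertions are enabled.
        assert log.add("checked");
        return log.size(); // 1 under -ea, 0 without
    }

    public static void main(String[] args) {
        System.out.println(logSizeAfterAssert());
    }
}
```

Testing with assertions both enabled and disabled, as suggested above, is what flushes out this kind of dependency.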
[ September 05, 2005: Message edited by: Rick O'Shay ]
 
Ilja Preuss
author
Posts: 14112

Originally posted by Rick O'Shay:
Back up to 1.3 when you didn't have asserts. How would this issue manifest? How would it be handled?



We wrote an Ensure class which had methods such as ensureNotNull(message, object) etc. Those methods throw an Exception if a contract is violated.

If asserts were enabled by default, using them would be more elegant in 1.4+. The way they work now, we will continue using our homegrown solution.
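A minimal sketch of such an Ensure class: the post names ensureNotNull(message, object), while the exception type and the rest are guessed for illustration.

```java
// Homegrown contract checks that, unlike assert, cannot be switched
// off by a JVM flag.
final class Ensure {
    private Ensure() {}

    static void ensureNotNull(String message, Object object) {
        if (object == null) {
            throw new IllegalStateException(message);
        }
    }

    static void ensure(String message, boolean condition) {
        if (!condition) {
            throw new IllegalStateException(message);
        }
    }
}
```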


Your strategy will toast threads at runtime if an assertion fails, mine will yield the natural production result, whatever that may be, and be handled or not.



I'm still not sure what exactly you mean by a "toasted" thread, and why you want to avoid it. I'm still a fan of "fail early".
 
Rick O'Shay
Ranch Hand
Posts: 531
>> ensureNotNull(message, object)

If you are littering your code with checks for whether Colin Coder or Penelope Programmer made a mistake, then this is inappropriate. Ultimately you are creating a mountain of dead code that simply takes up space and time.
 
Ilja Preuss
author
Posts: 14112

Originally posted by Rick O'Shay:
>> ensureNotNull(message, object)

If you are littering your code with checks for whether Colin Coder or Penelope Programmer made a mistake, then this is inappropriate. Ultimately you are creating a mountain of dead code that simply takes up space and time.



Believe me, if it wouldn't provide value, we wouldn't do it. In fact, when we started this practice, I was very skeptical about it.

Of course our situation might be kind of "special", working on an eight-year-old, half-a-million-line legacy application. Every small tool that helps us get it under control is very welcome. Which is not to say that we weren't rather successful in the last three years.

But if you have a suggestion on what might be more appropriate, it would be more than welcome!
 
Rick O'Shay
Ranch Hand
Posts: 531
Well, I believe I understand the motivation and purpose. I also agree that you are adding value there. I would also note that I did the same thing before the assert mechanism was available in Java 1.4. The purpose was the same: discover buggy code before releasing it. Testing was mandatory.

The difference is they were applied at very specific interface points, since performance was a consideration. What was the cost of a given condition expression? Was it called in a loop? Was the same value being checked at several different locations before mine was tested? If so, why add another?

The ability to turn these checks off meant I could all but ignore how long the condition took to evaluate. I did not care if there was a loop. It did not bother me that the same value was checked half a dozen times before it got to my method. I was using these to check for programmer errors, not to replace my existing exception handling and general error checking.

Unit testing should be performed with assertions enabled, then again with assertions disabled. You should have a very high level of confidence at that point: any given assertion is very unlikely to fail in production. The likelihood that you simply neglected to test one condition or another is far higher, and you won't have a hard failure in those cases anyway. Would you prefer to keep those assertions in place in production? Not worth the cost, IMO. That is where we disagree. I'm not saying no way; rather, on balance I want them off, and I benefit from the liberal application that promotes.
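As a sketch, the two-pass run described above might look like this from the command line (the classpath and test class names are hypothetical):

```shell
# Pass 1: run the suite with assertions enabled, so contract violations fail loudly.
java -ea -cp build com.example.AllTests

# Pass 2: run the same suite with assertions disabled (the JVM default),
# verifying behavior is identical and no assert statement has side effects.
java -da -cp build com.example.AllTests
```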
[ September 05, 2005: Message edited by: Rick O'Shay ]
 
Rick O'Shay
Ranch Hand
Posts: 531
>> we were rather successful...

I'm guessing the bulk of the successes attributed to contract enforcement were realized during testing, not in the field. Not entirely, but mostly. Turning them off in production might have been less rewarding, but not significantly so, assuming you have solid testing.
 
Ilja Preuss
author
Posts: 14112

Originally posted by Rick O'Shay:
>> we were rather successful...

I'm guessing the bulk of the successes attributed to contract enforcement were realized during testing, not in the field. Not entirely, but mostly. Turning them off in production might have been less rewarding, but not significantly so, assuming you have solid testing.



Well, part of our (my team's) problem probably is that it takes some time to get a half-a-million-line legacy system under a solid test suite... :sigh:
 
Ilja Preuss
author
Posts: 14112

Originally posted by Rick O'Shay:

The difference is they were applied at very specific interface points, since performance was a consideration. What was the cost of a given condition expression? Was it called in a loop? Was the same value being checked at several different locations before mine was tested? If so, why add another?



Until now, we didn't have to think about performance in this regard. Our performance bottlenecks definitely are elsewhere...
 
Mr. C Lamont Gilbert
Ranch Hand
Posts: 1170
Hibernate Eclipse IDE Ubuntu
Ilja, I would argue that what you are doing is not a candidate for assert. Assert should only be used if you intend to turn it off at runtime, as I have stated.

The only unique feature of assert is that it can be turned off without recompiling. The only benefit of turning them off is to gain some speed boost, not to hide any information or change the program behavior.
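That launch-time switch can be observed with a small probe (a standard idiom, not code from this thread): the identical class file behaves differently under -ea and -da, with no recompilation.

```java
public class AssertProbe {
    public static void main(String[] args) {
        boolean enabled = false;
        // The assignment inside the assert executes only when assertions
        // are enabled for this class; otherwise the statement is skipped.
        assert enabled = true;
        System.out.println("assertions enabled: " + enabled);
    }
}
```

`java AssertProbe` reports false by default; `java -ea AssertProbe` reports true, using the same bytecode both times.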
 
Saloon Keeper
Posts: 28116
198
Android Eclipse IDE Tomcat Server Redhat Java Linux
I've been using assertions in C, C++ and Java for many years. Under absolutely no circumstances will I ever consider an assertion as part of the system design. For me, an assertion has one use and one use only, and that is to catch an error that "can't happen" as soon as possible after the fact and report it as what it is rather than having it cascade into something else and make the problem harder to diagnose.

The Sun spec, in fact, explicitly frowns on any attempt to catch and recover from assertions. Recoverable errors are what Exceptions are for. So are unrecoverable errors, for that matter. Assertions are for forensic use. If I can run a program with assertions on and it doesn't behave exactly (disregarding timing) as it would with assertions turned off, that program is improperly designed. And if it asserts, it's buggy.
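A small illustration of that forensic role (the class is invented for the example): the assert's detail expression, including the string concatenation, is evaluated only when assertions are enabled, so the diagnostic costs nothing in a normal production run.

```java
public class Account {
    private long balanceCents;

    public Account(long initialCents) {
        balanceCents = initialCents;
    }

    public long balanceCents() {
        return balanceCents;
    }

    public void withdraw(long amountCents) {
        balanceCents -= amountCents;
        // "Can't happen" invariant: report the full state at the point of
        // failure instead of letting a bad balance cascade downstream.
        assert balanceCents >= 0
            : "negative balance " + balanceCents
              + " after withdrawing " + amountCents;
    }
}
```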

For me, the benefit of assertions is that I can load up lots of antibugging information, ship a product, and if the thing misbehaves, have the user switch on assertions to get a clearer picture of what went wrong without suffering the overhead that such a fine-grained level of checking would otherwise impose. Before I had that ability (and, more recently, selectable logging levels), I'd have to build a special version of the app with extra checking code and ship it to the user (pre-Internet).

In the original C implementation, assert() was a macro and unless you compiled with the debug switch on, the error-checking code would never make it into the object or executable files at all. In Java it's not as black-and-white, but I don't figure I'm hurting myself any by acting otherwise.

The selectable assertion facility is absurd, as far as I'm concerned, since I either want everything finely-checked or nothing. I suppose that a really resource-intensive app might make me think otherwise, but nothing less.
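For reference, the selectable facility in question is the set of arguments the java launcher's assertion switches accept (the package and class names below are illustrative):

```shell
java -ea MyApp                          # enable assertions in all non-system classes
java -ea:com.example.dao... MyApp       # enable for one package and its subpackages
java -ea -da:com.example.HotLoop MyApp  # enable everywhere except one class
java -esa MyApp                         # enable in system classes as well
```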
 
Mr. C Lamont Gilbert
Ranch Hand
Posts: 1170
Hibernate Eclipse IDE Ubuntu
Well said Tim.
 
Rick O'Shay
Ranch Hand
Posts: 531
OK, let's say you have a system that has little unit testing and you sprinkle it with assertions. I can see how leaving those assertions on would be a very tempting strategy. You are actually relying on the customers for part of your testing. Shocking!

Now, consider an application with excellent unit test coverage. It is also slathered with asserts. That application would be better off with the assert function disabled because the chances are slim to none that any of them will trigger. What if one does? Well, what if a method gets called with bad data that has no assert? I would worry about those first.

IMO, the benefit of turning them off is not just a performance boost (which may or may not be significant) but a usage boost. They are more likely to be used because you are confident they are just checking for programmer errors and that they will evaporate in production. Asserts should be actively tested rather than lying around waiting for somebody to step on one and spring the trap in production. Now, I know Ilja is not suggesting that. Rather, he is saying go ahead and leave them in, in addition to testing, just in case lightning strikes. That's where we disagree.
[ September 06, 2005: Message edited by: Rick O'Shay ]
 
Ilja Preuss
author
Posts: 14112

Originally posted by Rick O'Shay:
OK, let's say you have a system that has little unit testing and you sprinkle it with assertions. I can see how leaving those assertions on would be a very tempting strategy. You are actually relying on the customers for part of your testing. Shocking!



Yes, it's not a situation I like to be in. And we are working on improving the test coverage.

On the other hand, our customers need new features, so they also wouldn't be comfortable with us spending time solely on writing tests.

And when they find a bug, they want us to fix it as fast as possible. Having "asserts" helps us in doing so.
 
Ilja Preuss
author
Posts: 14112

Originally posted by Tim Holloway:
For me, the benefit of assertions is that I can load up lots of antibugging information, ship a product, and if the thing misbehaves, have the user switch on assertions to get a clearer picture of what went wrong without suffering the overhead that such a fine-grained level of checking would otherwise impose. Before I had that ability (and, more recently, selectable logging levels), I'd have to build a special version of the app with extra checking code and ship it to the user (pre-Internet).



As I already said, until now we haven't had to care about the "overhead" at all.
 
Ilja Preuss
author
Posts: 14112

Originally posted by Mr. C Lamont Gilbert:
Ilja, I would argue that what you are doing is not a candidate for assert. Assert should only be used if you intend to turn it off at runtime, as I have stated.



The way they are implemented in Java, I'd certainly agree - and it's exactly what we do.

Still, I feel that Rick is right with something he said early in this thread: using asserts would be more expressive.
 