(Level: intermediate)

Often on this website you'll see phrases like "thinking objectively" or "thinking in objects", but I've yet to see a book or article that actually describes the process. They teach you what classes and objects look like and how to use them mechanically, but very few that I've seen actually delve into the "objective" thought process. And I suspect that's because it's very difficult to describe a thought process.

Java also complicates the purity of this process by being a hybrid language: classes and objects are Object-Oriented, but the code we write in methods is almost entirely procedural, so it's perfectly possible to write programs - even quite good ones - that are more "procedural" than "objective".

.

This is an essay I wrote back in 2008 that tries to describe my "light-bulb" moment in terms of objective thinking. It is NOT meant to be a guide; simply a description of my thought processes before and after my "moment of clarity".

For those of you who started out with Object-Oriented languages, or who came from a functional or "pure object" (eg, Smalltalk) background, it's probably old hat; but I'm hoping that those who, like me, came from a procedural background, may find something in it to help them get over the "objective hump".

Here goes:




I've been programming for nearly 40 years, 32 of them as a professional and, as you may imagine, have seen a fair few changes in my time. Possibly the most significant of these is "Object Orientation" - a subject, I've always suspected, that many more claim to understand than actually do. I know, because I was one of them.

Yesterday, that all changed.

.

To explain my "Eureka" moment, I need to describe the problem I was working on; so I hope you can bear with me while I go through it.

The problem was this:

Write a parser that can go through a source file made up of lines of text and remove all comments and quoted strings "intelligently".

I won't go into the 'whys' of the problem, save to say that it was one that I'd already solved with awk, and had a feeling that it might lend itself to an Object-Oriented solution.

The definition of "intelligent" can be taken as meaning simply: "has rules". The solution I came up with was written in Java, but could have been written just as easily in any OO language.

.

In my initial draft, I came up with the following Objects:

  • Source: Since we're dealing with line-oriented character data, this is simply an array of Strings. Files and streams are converted as required.
  • Parser: Takes a Source and spits back a "stripped" version of it.
  • QuotationRule: Defines the rules for quoting a String from the source, including specific start and end characters.
  • CommentRule: Very similar to a QuotationRule, except that comments are often signalled by a multi-character string (eg, "/*" or "<!--") and terminated by a different one. Comments can also be:
      • 'single-line' (eg, "#" in many scripting languages, or "//" in Java/C++).
      • 'stacked' (ie, a comment can contain another comment).

    The last two objects together make up the "parsing rules".
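
    (Just to make those last two concrete, here is roughly how I pictured them as simple data holders; the field names below are purely illustrative.)

    // Illustrative skeletons only - the real fields depend on your rules.
    class QuotationRule {
        private final char startChar;    // eg, '"' or '\''
        private final char endChar;      // usually, but not necessarily, the same character
        private final char escapeChar;   // eg, '\\'

        QuotationRule(char startChar, char endChar, char escapeChar) {
            this.startChar = startChar;
            this.endChar = endChar;
            this.escapeChar = escapeChar;
        }
    }

    class CommentRule {
        private final String start;        // eg, "/*", "//", "#" or "<!--"
        private final String end;          // eg, "*/" or "-->"
        private final boolean singleLine;  // terminated by end-of-line?
        private final boolean stacked;     // can it contain another comment?

        CommentRule(String start, String end, boolean singleLine, boolean stacked) {
            this.start = start;
            this.end = end;
            this.singleLine = singleLine;
            this.stacked = stacked;
        }
    }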

    .

    I won't bore you with all the intricacies of escape characters and the like, save to say that they complicate things and, for an old fart like me, masked the solution. I'm also going to leave out the output side of the problem, since it's the parsing part that is important.

    OK, so we've loaded our lines to be parsed. Now comes crunch time:

      parse(): This method needs to go through all lines and determine which bits of text are "quoted" and which aren't.

    .

    And this is where my problems started.

    .

    My first crack was to solve the problem like an old C programmer:

    "I have a load of characters to parse. I need to start at the beginning and go through, character by character, keeping track of the current parsing state, and do whatever needs to be done."

    Reasonable '70s/'80s logic and, using my pseudo-code (my apologies to those who are well-versed in this stuff), the solution looks something like this:
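
    (Rendered here in Java rather than raw pseudo-code, and handling just two rules - double quotes and /* ... */ comments - to keep it short; the names are purely illustrative.)

    import java.util.List;

    // One big loop: every character goes through the same ladder of checks,
    // and every new rule means another flag and another branch.
    class LinearParser {
        String strip(List<String> lines) {
            StringBuilder out = new StringBuilder();
            boolean inQuote = false;
            boolean inComment = false;
            for (String line : lines) {
                for (int i = 0; i < line.length(); i++) {
                    char c = line.charAt(i);
                    if (inQuote) {
                        if (c == '\\') {
                            i++;                                    // skip the escaped character
                        } else if (c == '"') {
                            inQuote = false;                        // closing quote
                        }
                    } else if (inComment) {
                        if (c == '*' && i + 1 < line.length() && line.charAt(i + 1) == '/') {
                            inComment = false;                      // end of comment
                            i++;
                        }
                    } else if (c == '"') {
                        inQuote = true;                             // start of quoted string
                    } else if (c == '/' && i + 1 < line.length() && line.charAt(i + 1) == '*') {
                        inComment = true;                           // start of comment
                        i++;
                    } else {
                        out.append(c);                              // a "regular" character: keep it
                    }
                }
                if (!inComment) {
                    out.append('\n');
                }
            }
            return out.toString();
        }
    }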



    Leaving aside the complexities of the checks themselves, can you see the problem?

    .

    While providing a solution, the code is by no means optimal and cannot be, because it deals with all characters the same way, regardless of the current situation.

    Basically, it's a linear solution. It works, but it's tightly coupled to current requirements - and who doesn't remember all the books that warned us about that?

    .

    Imagine what the code might look like if we added 5 new types of things to look for ... or 10 ... especially if they aren't simply exclusive.

    For example, the 'grave' quotes (`) used in some shell scripting languages were a particularly thorny problem for my original awk program, since they can appear inside quoted strings.

    We need meta-data to help us determine what the current situation is and deal with it; and a procedural language like C simply doesn't have the tools to handle it gracefully.

    .

    And now we come to my moment of clarity.

    .

    Like a lot of such moments, it's very difficult to explain exactly how I came by it. I suspect there's a lot of thinking and repetition involved before you actually get there; but my hope is that following the way I was thinking before and after my "moment" may help you find yours quicker.

    All I can tell you is that I'd been mulling over all of the above for quite a while, and then suddenly turned the problem on its head:

    What if a QuotationRule (or CommentRule) could do this for us? After all, it "knows" its own rules. Indeed: What if a Rule is simply an interface that QuotationRule and CommentRule implement?

    And what if, instead of going through the text and trying to decide what to do next, we pass the text (along with its current position) to a Rule - the Object that we know for certain does know what to do with it.

    .

    Now the solution becomes (again, in my pseudo-code):
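
    (Again rendered in Java-flavoured form rather than raw pseudo-code; the Rule interface and its method names are just an illustrative sketch of the idea.)

    import java.util.List;

    interface Rule {
        boolean startsAt(String text, int pos);   // "am I at the start of you?"
        int parse(String text, int pos);          // deal with your own text and tell me where you finished
    }

    class RuleDrivenSketch {
        int parse(String text, int pos, List<Rule> rules) {
            while (pos < text.length()) {
                Rule handler = null;
                for (Rule rule : rules) {
                    if (rule.startsAt(text, pos)) {   // ask each rule, rather than working it out ourselves
                        handler = rule;
                        break;
                    }
                }
                if (handler != null) {
                    pos = handler.parse(text, pos);   // it knows its own escapes, terminators and nesting
                } else {
                    pos++;                            // just a "regular" character
                }
            }
            return pos;
        }
    }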



    Much simpler.

    I don't need to know how many characters the quotation/comment/whatever starts with any more, or whether it allows escape characters or duplicates or embedding - my Rule object knows.

    I simply hand it the text and a current position and ask: "am I at the start of you?". If I am, I hand off the same information to its parse() method and let it do its thing.

    Furthermore, that method can do its job unencumbered by all the other rules that a procedural parser has to deal with.

    Now that's modularity.

    .

    For those of you who are still awake, the first draft of my final code (in Java this time) was:
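
    (What follows is a reconstruction rather than the code itself, so treat the names and signatures as illustrative; I've included a single QuotationRule implementation to give the flavour.)

    import java.util.List;

    interface Rule {
        /** Does this rule start at the given position in the text? */
        boolean startsAt(String text, int pos);

        /** Consume this rule's text, write anything worth keeping to out, and return the new position. */
        int parse(String text, int pos, StringBuilder out);
    }

    /** A quoted string: knows its own delimiter and escape character. */
    class QuotationRule implements Rule {
        private final char quote;
        private final char escape;

        QuotationRule(char quote, char escape) {
            this.quote = quote;
            this.escape = escape;
        }

        public boolean startsAt(String text, int pos) {
            return text.charAt(pos) == quote;
        }

        public int parse(String text, int pos, StringBuilder out) {
            pos++;                                       // step past the opening quote
            while (pos < text.length()) {
                char c = text.charAt(pos++);
                if (c == escape) {
                    pos++;                               // skip the escaped character
                } else if (c == quote) {
                    break;                               // closing quote: we're done
                }
            }
            return pos;                                  // quoted text is stripped, so nothing goes to out
        }
    }

    class Parser {
        private final List<Rule> rules;                  // our QuotationRules and CommentRules

        Parser(List<Rule> rules) {
            this.rules = rules;
        }

        String strip(String text) {
            StringBuilder out = new StringBuilder();
            int pos = 0;
            while (pos < text.length()) {
                Rule rule = getRule(text, pos);
                if (rule != null) {
                    pos = rule.parse(text, pos, out);    // the rule does its own thing, unencumbered
                } else {
                    out.append(text.charAt(pos++));      // a "regular" character: keep it
                }
            }
            return out.toString();
        }

        private Rule getRule(String text, int pos) {
            for (Rule rule : rules) {
                if (rule.startsAt(text, pos)) {
                    return rule;
                }
            }
            return null;                                 // nobody claims this position
        }
    }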



    Pretty straightforward, and not much chance of it needing much change if we add some new rules.

    .

    Now I'm going to stop here just for a moment, because I can almost hear some people saying: "but isn't that just breaking the problem up into smaller pieces?".

    And the answer is: YES. Absolutely.

    However, we haven't just broken it up structurally - and there are plenty of reasons for doing that alone - we've broken it up empirically.

    Note that the program is no longer a single parser, it's many; and they work independently of each other - unless requirements dictate otherwise. So we now have a structure of objects doing the work that are designed along the lines of our actual parsing requirements. Basically, we've broken out of the procedural cycle of a central "mill" that has to keep track of everything.

    Remember: Objects have their own state

    .

    Needless to say, that isn’t where it finished. Now that I was finally thinking about the problem the right way, development was rapid:

    What if "lines" are stored in an object that keeps track of its current position?

    What if the "regular character" logic is simply another Rule, akin to the 'default' clause of a switch statement?

    Now the logic becomes something like:
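
    (Another illustrative sketch with my own names, but it shows the shape: Lines carries its own position, and DefaultRule plays the part of the 'default' clause.)

    import java.util.List;

    /** A source that keeps track of its own current position. */
    class Lines {
        private final String text;   // the lines, already joined with '\n'
        private int pos = 0;

        Lines(String text)          { this.text = text; }
        boolean hasNext()           { return pos < text.length(); }
        char next()                 { return text.charAt(pos++); }
        boolean lookingAt(String s) { return text.startsWith(s, pos); }   // eg, how a CommentRule might spot "/*"
    }

    interface Rule {
        boolean startsAt(Lines lines);               // "am I at the start of you?"
        void parse(Lines lines, StringBuilder out);  // consume your own text
    }

    /** The 'default' clause: any character nobody else claims is simply kept. */
    enum DefaultRule implements Rule {
        INSTANCE;
        public boolean startsAt(Lines lines)              { return true; }
        public void parse(Lines lines, StringBuilder out) { out.append(lines.next()); }
    }

    class Parser {
        private final List<Rule> rules;   // the QuotationRules and CommentRules
        Parser(List<Rule> rules) { this.rules = rules; }

        String parse(Lines lines) {
            StringBuilder out = new StringBuilder();
            while (lines.hasNext()) {
                getRule(lines).parse(lines, out);   // every position is *somebody's* job ...
            }
            return out.toString();
        }

        Rule getRule(Lines lines) {
            for (Rule rule : rules) {
                if (rule.startsAt(lines)) {
                    return rule;
                }
            }
            return DefaultRule.INSTANCE;            // ... even if it's only the default rule's
        }
    }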



    And there you have it: a 5-line text parser (OK, 12 with getRule()).

    For those of you who grew up writing OO code, this is probably old hat; but it was a major breakthrough for an old procedural guy like me.

    Furthermore:

  • DefaultRule is a good candidate for a "singleton" class (or an enum).
  • Rules can be distinguished by their start String, which allows them to be stored in Collections such as Maps that provide fast searches - and a by-product of that is that it also prevents us from setting up "ambiguous" rules.
  • Lines is basically an Iterable: a very familiar interface to most Java-ites.
  • New Rule classes can be easily tested with a framework like JUnit (there's a small example after this list).


  • This lends itself to the possibility of a very generic framework in which we simply define a Collection with a few extra goodies as a Parsable, and can then parse all kinds of things according to rules we create. I haven't got that far ... YET.
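
    For example, a test for the String-and-position style QuotationRule sketched in the first draft above might look something like this (JUnit 5; purely illustrative):

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.junit.jupiter.api.Assertions.assertTrue;

    import org.junit.jupiter.api.Test;

    class QuotationRuleTest {
        @Test
        void consumesAWholeQuotedStringIncludingEscapes() {
            Rule rule = new QuotationRule('"', '\\');
            String text = "\"he said \\\"hi\\\"\" rest";

            assertTrue(rule.startsAt(text, 0));
            int end = rule.parse(text, 0, new StringBuilder());
            assertEquals(" rest", text.substring(end));   // the rule stops just past the closing quote
        }
    }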

    .

    It also raises some interesting points:

    Parsing is a sequential process (my first mistake; the problem is sequential but the solution is not). You can't, for example, hand a "parser" half a file and expect it to do its job properly. On the other hand, you could hand it half a file, do something else, and then hand it the rest of the file and expect it to complete the job (in my old database parlance, we would call this "restartable" or "resumable").

    Parsers can be chained. All that needs to happen is for a new Parser to be given the rules and position of a previous one, and it can continue the process. The "take a position, return a position" style also lends itself to recursion, which could be very useful for a parser that needs to, for example, process include or header files, or scripts that call scripts that call scripts ...
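
    (Purely as an illustration of that "take a position, return a position" style - the PositionParser interface here is hypothetical - chaining just means feeding one parser's finishing position to the next.)

    import java.util.List;

    class ChainingSketch {
        interface PositionParser {
            int parse(String text, int pos);    // start here, and tell me where you finished
        }

        static int parseWith(String text, List<PositionParser> parsers) {
            int pos = 0;
            for (PositionParser parser : parsers) {
                pos = parser.parse(text, pos);  // each parser resumes exactly where the last one stopped
            }
            return pos;
        }
    }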

    .

    At the risk of teaching grandmothers to suck eggs, I will add this point:

    A good program is the result of thought; so resist the urge to code.

    Dylan Thomas once said when asked how he wrote such effortless prose (paraphrased):

    "Ah, but you didn't see the stuff I threw away. What takes you 3 minutes to read took me 3 months to write."

    Now, I don't for one minute see myself as a programming 'poet'; I'll stick with trying to be a good journeyman - and this "moment of clarity" was a step along that road.

    And it only took me 32 years ...

    .

    Winston Gutkowski, September 2008.

    (With thanks to my Mum, an 85-year-old English teacher who knows nothing about programming, who had to put up with me while I chuntered on about my "Eureka" moment on the phone to her 5,000 miles away. She gave me the best piece of advice anyone could have: Write it down.)


