Rich Object Model versus Anemic Object Model

 
Don Kiddick
Ranch Hand
Posts: 580
We're writing a fairly large 'enterprise' system. We are quite near the start: we have a service layer which currently contains mainly CRUD operations, an object model, a GUI, and a small server. We're now getting to the stage of adding more complex behaviour. We had a discussion the other day about whether our data model should be essentially dumb, that is, have no behaviour other than getters and setters. I was against it, as it goes against one of the main tenets of OOD: put data and behaviour together. However, the consensus was that this was a good idea because the behaviour implementation will vary. The idea is to build a 'workflow engine' (does this actually mean anything?) which we can tailor for different clients by 'plugging in' different behaviours.

For example, one of the things the system does is order processing, in this case for pharmaceutical companies. An order is made up of several items, e.g. 100 packs of ibuprofen, 200 of paracetamol and 200 packs of viagra. In our object model, we have an Order class which has a collection of OrderItem objects. One of the things we will want to do is 'price' the order. There will be various strategies for pricing an order, the simplest being to add up the values of the order's items. Another common one is to give a percentage discount over a certain volume or monetary value. More exotic strategies could include looking at previous order history, or even perhaps interfacing with a CRM system.
In this example, externalizing the behaviour means using a Pricer object, something along the lines of:
Money price = new PricingFactory().getPricingStrategy().price(order);
Having a rich object model would mean something like:
Money price = order.price();
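To make the comparison concrete, here is a rough Java sketch of both shapes. Everything except Order, OrderItem and Money is a made-up name, and it assumes a Money class with a ZERO constant and an add() method, and an OrderItem with a getValue():

import java.util.ArrayList;
import java.util.List;

// Externalized behaviour: the pricing logic lives outside the domain object.
public interface PricingStrategy {
    Money price(Order order);
}

public class SumItemsPricer implements PricingStrategy {
    public Money price(Order order) {
        Money total = Money.ZERO;
        for (OrderItem item : order.getItems()) {
            total = total.add(item.getValue());
        }
        return total;
    }
}

// Rich domain object: the Order prices itself, delegating to an injected
// strategy, so the behaviour can still vary per client.
public class Order {
    private final List<OrderItem> items = new ArrayList<OrderItem>();
    private final PricingStrategy pricingStrategy;

    public Order(PricingStrategy pricingStrategy) {
        this.pricingStrategy = pricingStrategy;
    }

    public Money price() {
        return pricingStrategy.price(this);
    }

    public List<OrderItem> getItems() {
        return items;
    }
}

Note that the two styles aren't necessarily mutually exclusive: order.price() can still delegate to a pluggable strategy internally.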

After the discussion I had a search on the internet and came up with Martin Fowler's article on the Anemic Domain Model. As I understand it, he is saying:
* if we externalize the behaviour we negate the benefits of our domain model.
* one of the main costs of a domain model is the ORM mapping, which we've already invested a lot of time in and solved to a large degree.
* Eric Evans suggests that there is room for a service layer (aka Application Layer) but it is very thin and is more to do with "directing the expressive domain objects to work out problems".

But what I don't understand fully is:
* What are the main benefits of a domain model and keeping behaviour with data? Unfortunately I need something more compelling than "it's the OO way".
* What is really meant by Eric Evans' vision of a service layer? (I guess I should buy the book!)

any insight gratefully received,
thanks a lot, Don
 
Peer Reynders
Bartender
Posts: 2968
Granted, Object-Orientation doesn't solve every problem; in some cases it's overkill, in others it's simply inappropriate. However, in many cases it can be quite useful.

Originally posted by Don Kiddick:
However the consensus was that this was a good idea because the behaviour implementation will vary.


Object-orientation isn't often as well understood as it could be (see Naked Objects). In many cases it's not understood that commonalities are captured in interfaces, while variability is captured in the different implementations of the classes that implement those interfaces. To identify where to place the various responsibilities, and what collaborations are necessary, you can use CRC cards.

Originally posted by Don Kiddick:
For example, one of the things the system does is order processing, in this case for pharmaceutical companies. An order is made up of several items, e.g. 100 packs of ibuprofen, 200 of paracetamol and 200 packs of viagra. In our object model, we have an Order class which has a collection of OrderItem objects. One of the things we will want to do is 'price' the order. There will be various strategies for pricing an order, the simplest being to add up the values of the order's items. Another common one is to give a percentage discount over a certain volume or monetary value. More exotic strategies could include looking at previous order history, or even perhaps interfacing with a CRM system.
In this example, externalizing the behaviour means using a Pricer object, something along the lines of:
Money price = new PricingFactory().getPricingStrategy().price(order);
Having a rich object model would mean something like:
Money price = order.price();



In fact you should probably have an OrderItem interface, so that when you price the order you can just iterate over all of the items, supplying the item count to each so that they can price themselves, and then accumulate the whole lot. The pricing strategy object should be injected into the class that implements the OrderItem interface. That way you can implement simple pricing strategies, strategies that compose existing pricing strategies in different ways, or even strategies that behave according to dynamically loaded rules. These strategies could vary their behavior based on product characteristics, customer/location characteristics, customer history, etc.

The strategy classes you end up with will be nice little concise units that do exactly one thing: price one item's quantity in a specific context.
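A minimal sketch of that shape. ItemPricingStrategy, StandardOrderItem and VolumeDiscount are names I just made up, and a Money class with a multiply() method is assumed:

// Each item prices its own quantity; the strategy is injected.
public interface ItemPricingStrategy {
    Money price(Product product, int quantity);
}

public class StandardOrderItem implements OrderItem {
    private final Product product;
    private final int quantity;
    private final ItemPricingStrategy strategy;

    public StandardOrderItem(Product product, int quantity, ItemPricingStrategy strategy) {
        this.product = product;
        this.quantity = quantity;
        this.strategy = strategy;
    }

    public Money price() {
        return strategy.price(product, quantity);
    }
}

// A composing strategy: a volume discount wrapped around any other strategy.
public class VolumeDiscount implements ItemPricingStrategy {
    private final ItemPricingStrategy base;
    private final int threshold;
    private final double rate; // e.g. 0.10 for 10% off

    public VolumeDiscount(ItemPricingStrategy base, int threshold, double rate) {
        this.base = base;
        this.threshold = threshold;
        this.rate = rate;
    }

    public Money price(Product product, int quantity) {
        Money full = base.price(product, quantity);
        return quantity >= threshold ? full.multiply(1.0 - rate) : full;
    }
}

Order.price() then just iterates over its items, calling price() on each and accumulating the results.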

Originally posted by Don Kiddick:
What is really meant by Eric Evans' vision of a service layer? (I guess I should buy the book!)


In essence he suggests categorizing the domain classes into Entities, Value Objects, and Services, defined as:

  • ENTITY: An object fundamentally defined not by its attributes, but by a thread of continuity and identity.
  • SERVICE: An operation offered as an interface that stands alone in the model, with no encapsulated state.
  • VALUE OBJECT: An object that describes some characteristic or attribute but carries no concept of identity.

Service objects encapsulate behavior that can't naturally be the responsibility of any of the identified Entities or Value Objects. So your pricing strategies should be service objects, because any "state" they contain should be related to configuration only, and not to the item being priced. He further suggests partitioning services into layers: Application, Domain, and Infrastructure services.
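Roughly, in code (just a sketch; the names are mine, not Evans'):

import java.math.BigDecimal;

// VALUE OBJECT: immutable, no identity; two equal amounts are interchangeable.
public final class Money {
    private final BigDecimal amount;

    public Money(BigDecimal amount) { this.amount = amount; }

    public Money add(Money other) { return new Money(amount.add(other.amount)); }

    public boolean equals(Object o) {
        return o instanceof Money
            && amount.compareTo(((Money) o).amount) == 0;
    }

    public int hashCode() { return amount.stripTrailingZeros().hashCode(); }
}

// ENTITY: defined by its identity (the order number), which persists
// while its attributes change over time.
public class Order {
    private final long orderNumber;

    public Order(long orderNumber) { this.orderNumber = orderNumber; }

    public boolean equals(Object o) {
        return o instanceof Order && orderNumber == ((Order) o).orderNumber;
    }

    public int hashCode() { return (int) (orderNumber ^ (orderNumber >>> 32)); }
}

// SERVICE: a stand-alone operation with no encapsulated domain state.
public interface PricingService {
    Money price(Order order);
}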
    [ April 21, 2006: Message edited by: Peer Reynders ]
     
Ilja Preuss
author
Posts: 14112

    Originally posted by Don Kiddick:
* What are the main benefits of a domain model and keeping behaviour with data? Unfortunately I need something more compelling than "it's the OO way".



One reason for keeping the data and the operations on that data close together is that it localizes change: whenever one of the two has to change, the other needs to be considered, too, so having them close together simply helps here.

Another one is that if some operations need to work differently on different data structures, you can make use of simple polymorphic method calls.

Of course *some* of your operations will differ for reasons other than the data they work on. Those operations probably should live elsewhere. There are OO patterns for that, too: Strategy, Visitor, etc.

How do you find out which operations to put where? A good start would be to first put the operations with the data, and only extract them if you find that you actually need more flexibility. Often it is also possible to extract just parts of the operations: those small parts that actually differ.
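For instance (a sketch with made-up names, assuming the Money and OrderItem types from earlier in the thread): the bulk of the pricing stays on the Order, and only the small part that actually varies is extracted behind an interface:

import java.util.ArrayList;
import java.util.List;

public class Order {
    private final List<OrderItem> items = new ArrayList<OrderItem>();
    private final DiscountRule discountRule; // the small part that varies

    public Order(DiscountRule discountRule) {
        this.discountRule = discountRule;
    }

    public Money price() {
        Money subtotal = Money.ZERO;
        for (OrderItem item : items) {
            subtotal = subtotal.add(item.price()); // simple polymorphic call
        }
        return discountRule.apply(subtotal); // only this step is pluggable
    }
}

// Extracted variability: these stay small because they do only the part that differs.
public interface DiscountRule {
    Money apply(Money subtotal);
}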

    It helps if you are good at noticing "code smells" and refactoring.
     
    Don Kiddick
    Ranch Hand
    Posts: 580
Thanks a lot everyone. That really helped me understand the issues better and (unbelievably) has helped me change my team members' minds: we're going with an architecture similar to the one Eric Evans proposes.
    Don.
     
Stan James
(instanceof Sidekick)
Posts: 8791


  • ENTITY: An object fundamentally defined not by its attributes, but by a thread of continuity and identity.
  • SERVICE: An operation offered as an interface that stands alone in the model, with no encapsulated state.
  • VALUE OBJECT: An object that describes some characteristic or attribute but carries no concept of identity.


    Not to judge it good or bad, but I don't see anything very object-y in those descriptions. I'm pretty sure the last COBOL CICS system I did could be described that way. I think it's just a fact of life in most stateless server designs. They really are a lot like CICS.

For my first web app, I tried a rich domain model with J++. I serialized the model to a db blob to passivate it between calls. It worked, but fortunately it had a very small user population and soon went away. Whew.

Has anybody tried a continuation framework? Apparently you write code as if it's a desktop app with a screen-IO module.

Mainframe CICS has exactly this mode, too, called Fully Conversational. It takes far less code to get things done, but programs are locked in memory while the user looks at the screen.
     
    Ilja Preuss
    author
    Posts: 14112
    I have a friend who swears he knows two guys who make a living by implementing websites for banks in a small number of weeks using Seaside, for which Java teams take a year or longer.
     
    Peer Reynders
    Bartender
    Posts: 2968

    Originally posted by Stan James:
    Not to judge it good or bad, but I don't see anything very object-y in those descriptions.


If you could distill the quintessential truth of object-oriented design into a few lines, how would they sell a 500-page book? The draft of Domain-Driven Design: Tackling Complexity in the Heart of Software was first referenced in Patterns of Enterprise Application Architecture and has since been referenced by many new additions to the literature, so it can't be that bad, despite some of the frustrated reader reviews.

    Originally posted by Stan James:
    I serialized the model to a db blob to passivate it between calls. It worked, but fortunately it had a very small user population and soon went away.


Spring/Hibernate apps often take a similar approach. With Hibernate it is common practice to make domain objects serializable in order to make DTOs unnecessary. That means that detached domain objects can actually be stored in the user session between requests.
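A rough sketch of that pattern, assuming Hibernate 3's Session API and the servlet API (OrderSessionHelper and the attribute name are made up):

import javax.servlet.http.HttpServletRequest;
import org.hibernate.Session;

public class OrderSessionHelper {

    // First request: load the object, then stash it in the HTTP session.
    // Once the Hibernate Session closes, 'order' is detached.
    public void stash(HttpServletRequest request, Session db, Long orderId) {
        Order order = (Order) db.get(Order.class, orderId);
        request.getSession().setAttribute("order", order); // Order must be Serializable
    }

    // Later request: pull it back out and reattach it to a new Session.
    public Order restore(HttpServletRequest request, Session db) {
        Order order = (Order) request.getSession().getAttribute("order");
        db.update(order); // or db.merge(order) if another copy may already be loaded
        return order;
    }
}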

    Originally posted by Ilja Preuss:
    Implementing websites for banks in a small number of weeks using Seaside, for which Java teams take a year or longer.


Yes, there has been some noise around Seaside (see also Secrets of lightweight development success, Part 8: Seaside), though that may very well come from a vocal minority. If it moves Squeak/Smalltalk closer to the mainstream, great, but that's not likely. The Xerox Alto was too pricey for Smalltalk to catch on in the 1970s. By the time hardware development caught up with the needs of Smalltalk's virtual machine, it had been around for a while, at which point the mainstream figured that it couldn't be any good, otherwise it would have caught on by now. Chances are that those websites, if they enjoy any longevity, will be re-implemented in technologies that are part of a more prevalent and cheap skill-set. It happened to Paul Graham's Viaweb and it will happen again: Paul Graham claims that Lisp was one of Viaweb's competitive advantages (Beating the Averages), yet it was re-implemented in C++ and Perl after he sold it to Yahoo.
     
    Ilja Preuss
    author
    Posts: 14112

    Originally posted by Peer Reynders:
    Chances are that those websites, if they enjoy any longevity, will be re-implemented in technologies that are part of a more prevalent and cheap skill-set.



    Well, yes. Which doesn't mean that it would be a smart move...
     
    Peer Reynders
    Bartender
    Posts: 2968

    Originally posted by Ilja Preuss:
    Well, yes. Which doesn't mean that it would be a smart move...


You try and explain to an accountant the dynamics of software development that allow a smaller team of competent, skilled and talented designer/programmers (who have a significantly higher charge-out rate), using the right (advanced) development tools, to deliver a system cheaper and faster than a significantly larger team of individuals with average or less competence (who have a significantly lower charge-out rate), using mainstream development tools (many of which pander to the lowest common denominator in one way or another).

    Accountants live in an "Economies of Scale" world. I had to chuckle when I realized that "Diseconomies of Scale" was the most recurring phrase in Steve McConnell's Software Estimation: Demystifying the Black Art.

    Ultimately the relative scarcity of "highly productive, competent, skilled and talented analyst/designer/programmers" is a reality.
     
    Ilja Preuss
    author
    Posts: 14112
    Peer, I'm not sure what you are arguing.

    I'm sure that using a powerful language such as Smalltalk or LISP can give you a competitive advantage. If I understand correctly, you are agreeing with that?

    Now I'm also sure that there are a lot of people who think that it is for some perceived reason better to not use those languages. I'm sure that they think they are right, and that it would be hard to persuade them otherwise.

    All I'm questioning is whether they actually are right.
     
    Peer Reynders
    Bartender
    Posts: 2968

    Originally posted by Ilja Preuss:
    I'm sure that using a powerful language such as Smalltalk or LISP can give you a competitive advantage. If I understand correctly, you are agreeing with that?


    Correct.
I think I was trying to make too many points, which made it confusing.

    Originally posted by Ilja Preuss:
    Now I'm also sure that there are a lot of people who think that it is for some perceived reason better to not use those languages. I'm sure that they think they are right, and that it would be hard to persuade them otherwise.

    All I'm questioning is whether they actually are right.



    Stop questioning and start collecting evidence (anecdotal or otherwise) why "they" make sub-optimal decisions so that you can present that evidence to "them".

For example, Philip Wadler summarizes some of the obstacles faced when trying to use functional languages in the "Real World" in How enterprises use functional languages, and why they don't (An angry half dozen, Why no one uses functional languages).

With respect to my previous remarks (allow for some exaggeration to make the point):
  • People holding the purse strings (business owners, accountants, etc.) often project their own paradigm onto software development. So they do not understand that throwing more labor at a software problem doesn't make it go any faster, nor that using cheaper labor won't make it cost less. The facts that Frederick Brooks laid out back in 1975 are just too strange to be believed by non-software people. The major contributing factor to the "diseconomies of scale" of software development is the intense need for accurate communication between all of the participants. The "diseconomies of scale" can be counteracted by using a smaller number of (more competent) participants who use a more effective level of communication (i.e. they say what they mean, they comprehend what is meant, and they keep communication concise by using standardized idioms that would require pages of explanation to the uninitiated; this is starting to look like a personnel corollary of Fowler's "Don't distribute your objects"). The "communication overhead" also plays into the failure of the construction-paradigm-inspired waterfall process: it fosters the type of specialization where people working in the early phases may not recognize issues that are obvious downstream. "Software design is hard" (Robert C. Martin, "Designing Object-Oriented C++ Applications Using the Booch Method", p. vii). Hard work is often mistaken for grunt work (i.e. unskilled work). Whatever grunt work can be found in software development can often be automated; what is left is the "hard" work that needs to be addressed by competent personnel.
  • Just because technologies/methodologies are mainstream doesn't mean (1) they're good or (2) they're well suited to implementing the solution to your particular problem. Many technologies are popular because they are perceived as "easy to use"; that does not necessarily imply that the technology satisfies any particular need efficiently, effectively and productively. And just because it "works" in one domain doesn't imply that it's appropriate for any other.
  • Just because some technologies/methodologies have been around for a while and haven't entered the mainstream doesn't mean (1) they're good or (2) they're bad. Some ideas die a deserved death. Others are so far ahead of their time that initially the level of technology was insufficient to feasibly utilize and sustain support for the idea. Sometimes there weren't enough of the right people around to recognize the potential, or the learning curve is perceived as too steep to collect the critical mass of people needed to get it going (instead, "easier to use" but less productive tools are opted for).
  • Employers hire "mainstream skills" IT people (especially when IT isn't their core business) because they need them to be (1) replaceable and (2) cheap (mainstream does imply a ready supply, which drives down the price). This sets up a significant barrier to the entry of non-mainstream technologies/methodologies.
  • Employees acquire "mainstream skills" to make themselves marketable (even if it drives down the market price, because "less" is better than "zero"). There is significant resistance to learning non-mainstream technologies/methodologies.
  • Most educational institutions try to deliver what their clients (the students) want: the skills that employers are looking for now (as future needs are way too hard to predict). Personally I don't see the point of teaching students BOTH Java and C++. Moving from C++ to Java should not be difficult, and if students have problems with languages other than Java they are probably having problems with Java, they just don't know it. It is more important that an individual can function in most if not all programming paradigms, i.e. imperative vs. declarative; procedural, object-oriented, functional and logical programming. The selection of programming languages taught should span the entire spectrum of programming paradigms. But some want ads directly require either Java or C++ or both, and that's what drives the demand.
  • Employers tend to move people who have (finally) accumulated the development experience to become effective (and to use more advanced tools/methodologies) into "higher business value", non-development positions (like project management) to justify their higher salaries. So "the majority of those actively developing software are not the best ones qualified to do so, and worse, don't even know it" (as observed by Scott Ambler in Ending the Blame Game).
  • People in this business had better have a (burning) passion for it. People who are competent should get the pay they deserve. Unfortunately the 1:20 productivity ratio between the worst and the best developers implies not only that the best are underpaid but that the worst are overpaid. Local economic spikes of sudden demand (pre-dotcom in North America then, offshore work in India now) unnaturally raise a developer's worth (including the worst ones) in-spike, dragging people into the profession who are looking for a "good living" but have very little interest in the business, and depress the developer's worth post-spike. So even if they are "good", they may still be in for a ride. Furthermore, as a career they have to keep up with emerging technologies and methodologies (regardless of whether their employer gives them any incentive), their employer's/client's business knowledge, project management, and their own interpersonal communication and writing skills; they should aim to become a Generalizing Specialist. Anybody venturing into this field should read My Job Went To India first and The Pragmatic Programmer second, to get a sense of the work that's ahead of them; there must be an easier way to "just make a living".
  • Employers look for tools that will make "good developers" out of "cheap" ones, often making up for some kind of knowledge gap that would otherwise require time-consuming and tedious learning. Well, if that were possible we wouldn't need developers at all; we would have AIs writing all the programs. Actually, AIs would be running everything (for better or worse), and there would be no need for programming. It isn't possible to replace knowledge or education with tools.
  • As employers look for these tools, tool makers try to oblige. But often these tools turn out to be nothing more than fancy cookie cutters that the "temporarily enhanced" developer can't wield once they tackle their first "real world" problem that doesn't neatly fall into those fanciful "commonsense defaults"; they need to obtain the knowledge anyway. The tools that are incredibly helpful are the ones that make the competent, in-the-know developers more productive; however, these tools may come with a steeper learning curve and may not be immediately accessible to the uninitiated.

    Ilja Preuss
    author
    Posts: 14112

    Originally posted by Peer Reynders:
    Stop questioning and start collecting evidence (anecdotal or otherwise) why "they" make sub-optimal decisions so that you can present that evidence to "them".



    I currently don't have a strong need to do that - there are other things that have priority to me. Frankly, I'm not even sure that it would work - "evidence" often is far from being the top decision driver.


For example, Philip Wadler summarizes some of the obstacles faced when trying to use functional languages in the "Real World" in How enterprises use functional languages, and why they don't (An angry half dozen, Why no one uses functional languages).



Well, yes, he has a good list. So, what do we do with it?

With respect to my previous remarks (allow for some exaggeration to make the point):



    Allowed. And not much to disagree with. I don't see much that helps me really decide whether it would be a good idea to use a more arcane language, though.

As you wrote, things don't become mainstream because they are the best thing available. To me, it seems to follow that if you want to be better than your competition, it might be a good strategy to try something non-mainstream. Of course there is also a risk involved. But things without risk typically also have a lower profit margin. The skill is not to avoid risk, but to manage it.

Fortunately, there *are* business people who understand that. Otherwise Ruby wouldn't be catching on, for example.
     
    Peer Reynders
    Bartender
    Posts: 2968

    When others do a foolish thing, you should tell them it is a foolish thing. They can still continue to do it, but at least the truth is where it needs to be.


People can always rationalize suboptimal decisions, and you may have to accept those decisions, but that does not mean you have to accept them in silence (another example: Sony cans GOAL at Naughty Dog, Lisp is for Entrepreneurs). If you strongly believe that a decision is foolish, go on record stating this. If time proves you wrong, gracefully admit that you were wrong and explain how you learned from the experience. However, if time proves you right, nobody can blame you, and maybe next time you'll have the power to effect the optimal decisions. Also, you never know when somebody with an open mind and the right connections to stop the foolishness is within earshot.


    Originally posted by Ilja Preuss:
Well, yes, he has a good list. So, what do we do with it?


Well, for one it raises awareness. The next time somebody tries to veto the right tool because of its relative unpopularity, it's good to argue for the right tool by dispelling the misconceptions (non-reasons), highlighting the real reasons for the tool's unpopularity, and outlining how those reasons are irrelevant to the project, or how the risks may be mitigated in order to capitalize on the benefits. Often you will find that the unwillingness to adopt new or unconventional tools capable of generating significant rewards is actually deeply rooted in other problems internal to the performing organization. The risk of choosing any particular implementation tool may be of little consequence compared to the risk of those internal problems. Furthermore, as technology advances, some of the more legitimate reasons against an unconventional tool may simply disappear.

One of the more interesting points that Philip Wadler makes is that "Much of the attractiveness of Java has little to do with the language itself, but with the associated graphics, networking, databases, telephony, and enterprise servers. (Much of the unattractiveness of Java is due to the same libraries.)" Sun Microsystems keeps harping on the "write-once-run-anywhere" (WORA) and "write-once-deploy-anywhere" (WODA) features; infuriatingly, this seems to have degenerated into a "one-language-for-any-problem" mindset (usually prevalent in the VB community). The different programming paradigms (procedural, object-oriented, functional, logical) are optimally effective at tackling different kinds of problems, hence no language can be equally good and effective at solving all types of problems.

Have a look at the Haskell versus C implementation of quicksort shown in the Haskell Introduction. Wouldn't the agile principle of "do the simplest thing that works" dictate that you use the Haskell implementation until you have conclusive evidence (memory footprint, performance, etc.) that you need the more complex C version? Haskell has been successfully used with COM libraries through interaction via monads (HaskellDirect, HaskellScript). There was also Lambada (Lambada, Haskell as a better Java), which allowed Haskell to interact with the Java environment. A Haskell/Java combo could be quite powerful if it were possible to generate Java byte-code from Haskell code. Haskell could leverage the existing Java libraries, and Java could use libraries that were more effectively coded in Haskell; finally there would be a choice (of consequence), so you could choose the right (programming) tool for the job and still enjoy WORA and WODA.

Sun Microsystems really needs to consider opening up the JVM specification to the hosting of other languages. Squeak/Smalltalk would be great, but then we would still be stuck in the object-oriented paradigm; they need to be able to push into the other paradigms of functional and logical programming. So far, the only initiative I am aware of that does something like this is the Kawa language framework, which compiles Scheme to Java byte-code. That project, however, will always be hampered, as they have no control over the JVM to make Scheme as effective as it could be on a virtual machine.

(Side note: It's interesting to note that Microsoft managed to "sideline" Erik Meijer, one of the co-developers of Lambada, by hiring him as an architect for the SQL Server group. Plenty of food for conspiracy theories. End side note.)

Sun Microsystems (and the remaining members of the anti-Microsoft alliance) should be careful that they don't get out-maneuvered on the multi-language-support front. They managed to get one up on MS because MS didn't initially pay attention to the Internet and Java, "the new internet language". .NET/WinFX can be seen as a long-term effort to make Windows "the OS (that) is the Virtual Machine", similar to the way Windows Server is "the OS (that) is the Application Server"; i.e. they are trying to make Windows re-targetable to non-Intel hardware architectures (NT's Hardware Abstraction Layer (HAL) was an earlier, not-so-successful attempt). Another aspect of the .NET strategy is the multi-language support. As long as C#, J#, C++.NET, and VB.NET were the only languages, the whole thing wasn't worth talking about, because the only languages supported were in the procedural and object-oriented paradigms; C++.NET is needed by Microsoft to port its own applications, but do we really need C#, J#, and VB.NET?

In 1998 Microsoft Research managed to hire Simon Peyton Jones, the principal developer of the Glasgow Haskell Compiler. In 2002 Microsoft launched the F# research project (F# - A New .Net language). Microsoft's primary interest in functional programming was its use as an effective tool for XML processing, as already borne out by XSLT. F# languished for a long while, and it looked like MS had lost commercial interest, but F# seems to be gaining traction again (hubFS).

(Side note: Given the date of publication, this could be an April Fools' joke: F# for games and machine learning: .NET + performance + scripting. However, it does raise the interesting question of whether Microsoft plans to port .NET/WinFX to a future iteration of the Xbox. Looking at the technical specs of the first-generation Xbox (it's a 733 MHz PC running NT), you could be excused for suspecting that this isn't a gaming platform at all. You might suspect that it is just the first iteration towards a (consumer electronics) networked personal computing appliance that will use "No Touch Deployment" to rent (not buy) Microsoft software over the internet, aimed to displace current consumer PC desktop/laptop products that could otherwise use a Linux-style/non-Microsoft OS (I know, some people put Linux on the Xbox, but to what end?). Whether the Xbox 360's technical specs and development direction bear this particular hypothesis out, I don't know. End side note.)

(Exaggeration coming up.) Microsoft's Achilles heel is their (marketing) need to appeal to the largest possible market segment for their products, so they often design and market towards a more "unsophisticated" target audience even if effective use of the product absolutely requires a sophisticated and informed user. This has led to "insecure default settings" (the secure ones were perceived as too restrictive) on some products, and to a Volks-Basic mentality that "everybody" can program, which is then enabled in development tools by exercising "default behavior" behind your back that may be entirely inappropriate for your particular project. Therefore it's not likely that F# or something like it will become part of the supported core toolset (lack of mass-appeal); if it does, AND it is a halfway decent implementation, AND they successfully push the productivity aspect: oh boy.


    Originally posted by Ilja Preuss:
    I don't see much that helps me really decide whether it would be a good idea to use a more arcane language, though.


    If you learn a functional language you'll have another tool in your tool-belt, so you'll be even more capable of choosing The Right Tool For The Job.

It is often stated that learning a functional programming language will make you a better programmer. In Why Functional Programming Matters (1984), John Hughes argues that structured programming improved software development because it introduced modular design. He writes: "First of all, small modules can be coded quickly and easily. Secondly, general purpose modules can be re-used, leading to faster development of subsequent programs. Thirdly, the modules of a program can be tested independently, helping to reduce the time spent debugging." He then writes "... to increase one's ability to modularize a problem conceptually, one must provide new kinds of glue in the programming language." He argues that higher-order functions and lazy evaluation, as first-class features of a language, are the kind of glue that functional programming brings to the table to improve modularization even further.
Higher-order functions allow functions to take other functions as parameters, so that the outer function can apply an "arbitrary" function passed in as a parameter. Using this technique recursively, you can readily compose "arbitrary" functions into a pipeline, which encourages reuse of even the smallest code fragment. Note that when Hughes talks about "reuse" he is not talking about library or class level reuse; he is talking about function reuse. Look at it as "Don't Repeat Yourself" (DRY) taken to the extreme. No more three strikes, then refactor; no more "Rule of Three" by Don Roberts ("The first time you do something you just do it. The second time you do something similar, you wince at the duplication, but you do the duplicate thing anyway. The third time you do something similar, you refactor"): everything exists only once!
Lazy evaluation allows the function at the end of such a "function pipeline" to act as the execution driver, "pulling" output out of the previous stages and shutting down processing as soon as a certain objective is met, making it possible to curtail unnecessary processing that would occur in conventional languages, where input is pushed into the first function of the pipeline. This feature is in the same spirit as C/Java short-circuit evaluation of logical expressions, or green-cut pruning in logic programming languages, only it happens at a much higher level.
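To get a feel for what higher-order functions buy you, here is a rough Java 5 approximation, in the spirit of the generics article mentioned below. Java has no first-class functions, so a one-method Function interface has to stand in for them (all the names here are made up):

import java.util.ArrayList;
import java.util.List;

interface Function<A, B> {
    B apply(A input);
}

public class HigherOrder {

    // A higher-order function: 'map' takes another function as a parameter
    // and applies it to every element of the list.
    static <A, B> List<B> map(List<A> input, Function<A, B> f) {
        List<B> result = new ArrayList<B>();
        for (A a : input) {
            result.add(f.apply(a));
        }
        return result;
    }

    // Composition glue: build a new function out of two existing ones.
    static <A, B, C> Function<A, C> compose(final Function<A, B> f,
                                            final Function<B, C> g) {
        return new Function<A, C>() {
            public C apply(A input) {
                return g.apply(f.apply(input));
            }
        };
    }

    public static void main(String[] args) {
        Function<Integer, Integer> twice = new Function<Integer, Integer>() {
            public Integer apply(Integer n) { return n * 2; }
        };
        Function<Integer, String> show = new Function<Integer, String>() {
            public String apply(Integer n) { return "#" + n; }
        };
        List<Integer> numbers = new ArrayList<Integer>();
        numbers.add(1);
        numbers.add(2);
        numbers.add(3);
        // Pass the composed function into the higher-order 'map'.
        System.out.println(map(numbers, compose(twice, show))); // prints [#2, #4, #6]
    }
}

Lazy evaluation has no direct analogue in Java; the closest you get is hand-rolling Iterators that compute each value on demand.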

In "Chapter 15: Sequences" of his book Ruminations on C++, Andrew Koenig implements a sequence class "which is patterned after lists in the vintage-1960 LISP". At the end of the chapter he suggests it might be worth investigating whether the style of applying "the RISC approach to other data structures", while accepting immutability (which "the functional programming community has shown" to be possible), can be transplanted to C++.

    Alexander A. Stepanov was most likely heavily influenced by his earlier development experience with "a large library of algorithms and data structures in Scheme" when he contributed to the development of the C++ Standard Template Library (STL). The STL advanced the capabilities of C++ to a new level. (An Interview with Alexander A. Stepanov, Al Stevens Interviews Alex Stepanov).

    Andrei Alexandrescu was most likely guided by functional programming principles when he designed his traits type list (Traits: The else-if-then of Types) and when he designed his Loki Library (of designs, containing flexible implementations of common design patterns and idioms), showcased in his book Modern C++ Design. Loki pushed the C++/Template envelope even further.

The last two examples, however, highlight a potential obstacle to adopting functional programming (FP) inspired idioms in conventional languages. Many C++ developers initially had problems adopting the STL, as it is neither procedural nor object-oriented: it embraced Generic Programming, which some see as the next evolutionary step beyond Object-Oriented Programming. Many people comfortable with the STL could not initially wrap their heads around Loki. To those unfamiliar with FP, FP idioms look strange to begin with, and FP idioms implemented in conventional languages will look downright obfuscated. So if you use FP idioms in conventional code, are you actually sacrificing clarity, or is the apparent lack of clarity simply a reflection of the code reader's/reviewer's inexperience or lack of exposure?

Both Java and .NET now support generics, but those "generics" were deliberately constrained not to support the type of meta-programming that is possible with C++ templates (Todd Veldhuizen: Template Metaprograms (1995)), possibly because that type of meta-programming was perceived by some as a hack (which it is, when you compare it to the meta-programming capabilities of Lisp). However, that does not stop others from attempting to use generics to mimic FP: Functional Programming in Java: Greater expressiveness through higher order functions.

    Becoming competent in a functional programming language will probably improve your XSLT skills significantly, as XSLT shares many aspects with functional programming (Dimitre Novatchev: The Functional Programming Language XSLT - A proof through examples).

    Finally, some studies suggest that functional programming is more productive than other programming paradigms. Haskell vs. Ada vs. C++ vs. Awk vs. ..., An Experiment in Software Prototyping Productivity:

    The results indicate that the Haskell prototype took significantly less time to develop and was considerably more concise and easier to understand than the corresponding prototypes written in several different imperative languages, including Ada and C++.

    See also Point of View: Lisp as an Alternative to Java (2000). I'm not quite sure if they sufficiently compensated for the possibility that programmers who choose to use functional languages may be the more productive programmers to begin with.

There is some friction within the functional programming community as to which type of language is best: pure functional languages (e.g. Haskell, Clean, Miranda) vs. non-pure functional languages (e.g. Lisp, Scheme, ML). This is basically their version of the strongly typed vs. dynamically typed language debate. (FAQ for comp.lang.functional)

    Originally posted by Ilja Preuss:
To me, it seems to follow that if you want to be better than your competition, it might be a good strategy to try something non-mainstream. Of course there is also a risk involved. But things without risk typically also have a lower profit margin. The skill is not to avoid risk, but to manage it.

Fortunately, there *are* business people who understand that. Otherwise Ruby wouldn't be catching on, for example.


Don't overlook that Ruby is as old as Java! Yukihiro Matsumoto released Ruby to the public back in 1995. While support was slowly building over the years, it was ultimately its combination with Rails that created the "killer app" that brought Ruby into the limelight. (Unfortunately) that type of event can make even something as mundane as BASIC popular: it was Alan Cooper's shell construction set (ironically called "Ruby"; earlier named "Tripod"), turned into a "visual programming language for professional programmers" by adding QuickBasic, that brought Visual Basic upon us. Maybe Seaside will be Squeak's/Smalltalk's killer app; one never knows. But as Paul Graham's example shows, sometimes you have to be in total control of the business that you are trying to enable through "optimal" IT decisions, and even then you can't ensure that they won't later be overridden by "less optimal" ones.
     