When others do a foolish thing, you should tell them it is a foolish thing. They can still continue to do it, but at least the truth is where it needs to be.
People can always rationalize suboptimal decisions, and you may have to accept these decisions, but that does not mean you have to accept them in silence (another example: Sony cans GOAL at Naughty Dog,
Lisp is for Entrepreneurs). If you strongly believe that a decision is foolish, go on record stating this. If time proves you wrong, gracefully admit that you were wrong and explain how you learned from the experience. However, if time proves you right, nobody can blame you, and maybe next time you'll have the power to effect the optimal decisions. Also, you never know when somebody with an open mind and the right connections to stop the foolishness is within earshot.
Originally posted by Ilja Preuss:
Well, yes, he has a good list. So, what do we do with it?
Well, for one it raises awareness. The next time somebody tries to veto the right tool because of its relative unpopularity, it's good to argue for the right tool by dispelling the misconceptions (non-reasons) and highlighting the real reasons for the tool's unpopularity - and then to outline how these reasons are irrelevant to the project, or how the risks may be mitigated in order to capitalize on the benefits. Often you will find that the unwillingness to adopt new or unconventional tools capable of generating significant rewards is actually deeply rooted in other problems internal to the performing organization. The risk of choosing any particular implementation tool may be of little consequence compared to the risk of these internal problems. Furthermore, as technology advances, some of the more legitimate reasons against an unconventional tool may simply disappear.
One of the more interesting points that Philip Wadler makes is that "Much of the attractiveness of Java has little to do with the language itself, but with the associated graphics, networking, databases, telephony, and enterprise servers. (Much of the unattractiveness of Java is due to the same libraries)". Sun Microsystems keeps harping on the "write-once-run-anywhere" (WORA) and "write-once-deploy-anywhere" (WODA) features; infuriatingly, this also seems to have degenerated into a "one-language-for-any-problem" mindset (usually prevalent in the VB community). The different programming paradigms (procedural, object-oriented, functional, logical) are optimally effective at tackling different kinds of problems - hence no single language can be equally good and effective at solving all types of problems. Have a look at the
Haskell versus C implementation of a Quick-Sort shown in the
Haskell Introduction. Wouldn't the agile principle of "do the simplest thing that works" dictate that you use the Haskell implementation until you have conclusive evidence (memory footprint, performance, etc.) that you need the more complex C version? Haskell has been successfully used with COM libraries, interacting through monads (
HaskellDirect,
HaskellScript). There also was Lambada (
Lambada, Haskell as a better Java) that allowed Haskell to interact with the Java environment. A Haskell/Java combo could be quite powerful if it were possible to generate Java byte-code from Haskell code. Haskell could leverage the existing Java libraries, and Java could use libraries that were more effectively coded in Haskell - finally there would be a choice (of consequence), so you could choose the right (programming) tool for the job and still enjoy WORA and WODA. Sun Microsystems really needs to consider opening up the JVM specification to the hosting of other languages. Squeak/Smalltalk would be great - but then we are still stuck in the object-oriented paradigm; they need to push into the other paradigms of functional and logical programming. So far there is only one initiative that I am aware of,
The Kawa language framework, that does something like that: it compiles Scheme to Java byte-code. That project, however, will always be hampered, as its developers have no control over the JVM and so cannot make Scheme as effective as it could be on a virtual machine.
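For reference, the Haskell side of the quick-sort comparison mentioned above looks roughly like this (a from-memory sketch of the textbook version, not the exact code from the Haskell Introduction):

```haskell
-- Quick-sort in the style shown in introductory Haskell texts:
-- the pivot is the head of the list; elements smaller than the
-- pivot are sorted to its left, the rest to its right.
qsort :: Ord a => [a] -> [a]
qsort []     = []
qsort (p:xs) = qsort [x | x <- xs, x < p]
            ++ [p]
            ++ qsort [x | x <- xs, x >= p]
```

The elegance comes at a cost: each recursive call allocates fresh lists, whereas the C version sorts in place - exactly the kind of conclusive evidence (memory footprint, performance) that might eventually justify the more complex implementation.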
(Side note: It's interesting that Microsoft managed to "sideline"
Erik Meijer, one of the co-developers of Lambada, by hiring him as an Architect for the SQL Server group. Plenty of food for conspiracy theories. End side note.)
Sun Microsystems (and the remaining members of the anti-Microsoft alliance) should be careful that they don't get out-maneuvered on the multi-language support front. They managed to get one up on MS because MS didn't initially pay attention to the Internet and Java - "the new internet language". .NET/WinFX can be seen as a long-term effort to make Windows "the OS (that) is the Virtual Machine", similar to the way that Windows Server is "the OS (that) is the Application Server"; i.e. they are trying to make Windows re-targetable to non-Intel hardware architectures (NT's Hardware Abstraction Layer (HAL) was an earlier, not-so-successful attempt). Another aspect of the .NET strategy is the multi-language support. As long as C#, J#, C++.NET, and VB.NET were the only languages, the whole thing wasn't worth talking about, because the only languages supported were in the procedural and object-oriented paradigms. C++.NET is needed by Microsoft to port its own applications, but do we really need C#, J#, and VB.NET?
In 1998 Microsoft Research managed to hire
Simon Peyton Jones, the principal developer of the
Glasgow Haskell Compiler. In 2002 Microsoft launched the
F# research project (
F# - A New .Net language). Microsoft's primary interest in functional programming was its use as an effective tool for XML processing, as already borne out by XSLT. F# languished for a long while, and it looked like MS had lost commercial interest, but it seems to be gaining traction again (
hubFS).
(Side note : Given the date of publication this could be an April Fools joke:
F# for games and machine learning: .NET + performance + scripting. However, it does raise the interesting question of whether Microsoft plans to port .NET/WinFX to a future iteration of the Xbox. Looking at the technical specs of the first-generation Xbox (it's a 700 MHz PC running NT), you can be excused for suspecting that this isn't a gaming platform at all. You might suspect that it is just the first iteration towards a (consumer-electronics) networked personal computing appliance that will use "No Touch Deployment" to rent (not buy) Microsoft software over the internet - aimed to displace current consumer PC desktop/laptop products that otherwise could use a Linux-style/non-Microsoft OS (I know, some people put Linux on the Xbox - but to what end?). Whether the Xbox 360 technical specs and its development directions bear this particular hypothesis out, I don't know. End side note.)
(Exaggeration coming up.)
Microsoft's Achilles heel is their (marketing) need to appeal to the largest possible market segment for their products, so they often design and market towards a more "unsophisticated" target audience even if effective use of the product absolutely requires a sophisticated and informed user. This has led to "insecure default settings" (the secure ones were perceived as too restrictive) on some products, and to a Volks-Basic mentality that "everybody" can program, which is then enabled in development tools by exercising "default behavior" behind your back that may be entirely inappropriate for your particular project. Therefore it's not likely that F# or something like it will become part of the supported core toolset (lack of mass appeal) - but if it does AND it is a halfway decent implementation AND they successfully push the productivity aspect - oh boy.
Originally posted by Ilja Preuss:
I don't see much that helps me really decide whether it would be a good idea to use a more arcane language, though.
If you learn a functional language you'll have another tool in your tool-belt, so you'll be even more capable of choosing
The Right Tool For The Job.
It is often claimed that learning a functional programming language will make you a better programmer. In
Why Functional Programming Matters (1984), John Hughes argues that structured programming improved software development because it introduced modular design. He writes: "First of all, small modules can be coded quickly and easily. Secondly, general purpose modules can be re-used, leading to faster development of subsequent programs. Thirdly, the modules of a program can be tested independently, helping to reduce the time spent debugging." He then writes "... to increase one's ability to modularize a problem conceptually, one must provide new kinds of glue in the programming language." He argues that higher-order functions and lazy evaluation, as first-class features of a language, are the kinds of glue that functional programming brings to the table to improve modularization even further.
Higher-order functions allow functions to take other functions as parameters, so that the outer function can apply an "arbitrary" function passed in as a parameter. Using this technique recursively, you can readily compose "arbitrary" functions into a pipeline, which encourages reuse of even the smallest code fragment. Note that when Hughes talks about "reuse" he is not talking about library- or class-level reuse - he is talking about function reuse; look at it as "Don't Repeat Yourself" (DRY) taken to the extreme. No more three strikes, then refactor; no more "Rule of Three" by Don Roberts ("The first time you do something you just do it. The second time you do something similar, you wince at the duplication, but you do the duplicate thing anyway. The third time you do something similar, you refactor") - everything exists only once!
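As a minimal sketch of that kind of glue (the function names here are mine, not from Hughes' paper), small single-purpose functions are written exactly once and composed into pipelines with the higher-order composition operator:

```haskell
-- Each building block (sum, map, filter, length) exists once;
-- pipelines are assembled from them with (.) instead of being
-- re-coded by hand at every use site.
sumSquaresOfEvens :: [Int] -> Int
sumSquaresOfEvens = sum . map (^ 2) . filter even

-- The same pieces recombine into a different pipeline for free.
countOdds :: [Int] -> Int
countOdds = length . filter odd
```

Every stage is an independent, reusable fragment; the "glue" is the ability to pass and compose functions themselves.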
Lazy evaluation allows the function at the end of such a "function pipeline" to act as the execution driver, "pulling" output out of the previous stages and shutting down processing as soon as a certain objective is met - making it possible to curtail unnecessary processing that would occur in conventional languages, where input is pushed into the first function of the pipeline. This feature is implemented in the same spirit as C/Java short-circuit evaluation of logical expressions or green-cut pruning in logic programming languages - only it happens at a much higher level.
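A small illustrative sketch of this pull-driven behavior (names are mine): in Haskell the consumer at the end of the pipeline decides how much work the earlier stages perform, even over a conceptually infinite input:

```haskell
-- An infinite "pipeline source": all squares.
squares :: [Integer]
squares = map (^ 2) [1 ..]

-- The consumer drives execution: 'take' pulls only as many
-- elements as it needs, so the infinite list is never fully
-- evaluated and processing stops as soon as the objective
-- (four results) is met.
firstFourBigSquares :: [Integer]
firstFourBigSquares = take 4 (filter (> 10) squares)
```

In a strict language the `filter` stage would have to run to completion before `take` ever saw its input - here it runs only far enough to satisfy the final consumer.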
Andrew Koenig implements in "Chapter 15: Sequences" of his book
Ruminations on C++, a sequence class "which is patterned after lists in the vintage-1960 LISP". At the end of the chapter he concludes that it might be worthwhile to investigate transplanting this style to C++, applying "the RISC approach to other data structures" while accepting immutability, which "the functional programming community has shown" to be possible.
Alexander A. Stepanov was most likely heavily influenced by his earlier development experience with "a large library of algorithms and data structures in Scheme" when he contributed to the development of the C++ Standard Template Library (STL). The STL advanced the capabilities of C++ to a new level. (
An Interview with Alexander A. Stepanov,
Al Stevens Interviews Alex Stepanov).
Andrei Alexandrescu was most likely guided by functional programming principles when he designed his traits type list (
Traits: The else-if-then of Types) and when he designed his
Loki Library (of designs, containing flexible implementations of common design patterns and idioms), showcased in his book
Modern C++ Design. Loki pushed the C++/Template envelope even further.
The last two examples, however, highlight a potential obstacle to adopting functional programming (FP) inspired idioms in conventional languages. Many C++ developers initially had problems adopting the STL, as it is neither procedural nor object-oriented: it embraced Generic Programming, which some see as the next evolutionary step beyond Object-Oriented Programming. Many people comfortable with the STL could not initially wrap their heads around Loki. To those unfamiliar with FP, FP idioms look strange to begin with, and FP idioms implemented in conventional languages will look downright obfuscated. So if you use FP idioms in conventional code, are you actually sacrificing clarity, or is the apparent lack of clarity simply a reflection of the code reader's/reviewer's inexperience or lack of exposure?
Both Java and .NET now support Generics, but those "Generics" were deliberately constrained not to support the kind of meta-programming that is possible with C++ Templates (Todd Veldhuizen:
Template Metaprograms (1995)) - possibly because that type of meta-programming was perceived by some as a hack (which it is, when you compare it to the meta-programming capabilities of Lisp). However, that does not stop others from attempting to use Generics to mimic FP:
Functional Programming in Java: Greater expressiveness through higher order functions.
Becoming competent in a functional programming language will probably improve your XSLT skills significantly, as XSLT shares many aspects with functional programming (Dimitre Novatchev:
The Functional Programming Language XSLT - A proof through examples).
Finally, some studies suggest that functional programming is more productive than other programming paradigms.
Haskell vs. Ada vs. C++ vs. Awk vs. ..., An Experiment in Software Prototyping Productivity:
The results indicate that the Haskell prototype took significantly less time to develop and was considerably more concise and easier to understand than the corresponding prototypes written in several different imperative languages, including Ada and C++.
See also
Point of View: Lisp as an Alternative to Java (2000). I'm not quite sure if they sufficiently compensated for the possibility that programmers who choose to use functional languages may be the more productive programmers to begin with.
There is some friction within the functional programming community as to which type of language is best - pure functional languages (e.g. Haskell, Clean, Miranda) vs. non-pure functional languages (e.g. Lisp, Scheme, ML); this is basically their version of the statically typed vs. dynamically typed language debate. (
FAQ for comp.lang.functional)
Originally posted by Ilja Preuss:
To me, it seems to follow that if you want to be better than your competition, it might be a good strategy to try something non-mainstream. Of course there is also a risk involved. But things without risk typically also have a lower profit margin. The skill is not to avoid risk, but to manage it.
Fortunately, there *are* business people who understand that. Otherwise Ruby wouldn't catch up, for example.
Don't overlook that Ruby is as old as Java! Yukihiro Matsumoto released Ruby to the public back in 1995. While support was slowly building over the years, it was ultimately the combination with Rails that created the "Killer-App" that brought Ruby into the limelight. (Unfortunately) that type of event can make even something as mundane as BASIC popular - ultimately it was
Alan Cooper's shell construction set (ironically called "Ruby"; earlier named "Tripod"), turned into a visual programming language for professional programmers by the addition of QuickBasic, that brought Visual Basic upon us. Maybe
SeaSide will be Squeak's/Smalltalk's "Killer-App" - one never knows. But as Paul Graham's example shows, sometimes you have to be in total control of the business that you are trying to enable through "optimal" IT decisions - and even then you can't ensure that they won't later be overridden by "less optimal" ones.