Showing posts with label Haskell. Show all posts

Wednesday, September 5, 2012

Models of human skills: gaussian, bimodal or power-law?

The first thing I have to say is that this post contains my own reflections: I have not found clear scientific evidence that proves these ideas right or wrong. On the other hand, if you have evidence (in either direction), I'd be glad to have it.

One of my first contacts with probability distributions was probably around 7th-8th grade. My maths teacher was trying to explain the concept of the Gaussian distribution to us and, as an example, he took the distribution of grades. He claimed that most students get average results, that comparatively fewer students get better or worse grades, and that the most "extreme" grades are reserved for a few.

I heard similar claims from many teachers of different subjects and I was not surprised. It looks reasonable that most people are "average", with fewer exceptionally good or exceptionally bad individuals. The same applies to lots of other features (height, for example).



Years later, I read a very interesting paper claiming that the distribution of programming results instead usually has two humps. For me, the paper was something of an epiphany. In fact, my own experience points in that direction (though here I may be biased, because I have not collected data). The paper itself comes with quite a lot of data and examples, though.

So apparently there are two models to explain the distribution of human skills: the bell and the camel. My intuition is that for very simple abilities the distribution is expected to be Gaussian. Few people have the motivation to become exceptionally good, but the thing is simple enough that few remain exceptionally bad either. On the other hand, sometimes there is simply some kind of learning barrier: a big step that not everybody is able to make. This is common for non-trivial matters.

In this case, the camel is easily explained: there are people who climbed the step and people who did not, and both groups are internally Gaussian-distributed. An example could be an introductory calculus exam. If the concepts of limit and continuity are not clear, everything is just black magic; still, some people may get decent marks through luck or some lower-level form of intuition. Among those who have the fundamental concepts clear, there is the usual Gaussian distribution.

However, both models essentially assume that the maximum amount of skill is fixed. In fact, they are used to fit grades, which by definition have a maximum and a minimum value. Not every human skill has that property. Not every human skill is graded or examined in a school-like environment, and even for those that are, grades are usually coarse-grained. There is probably a big difference between a programming genius (like some FOSS superstars) and a regular excellent student. Or between, say, Feynman and an excellent student getting full marks. The same thing may be even more visible for skills usually perceived as artistic. Of course, it is hard to devise a metric that can actually rank such skills.

My idea is that for very hard skills the distribution is more like a power law (at least in the tail). Very few people are good, most people are not even comparable to those, and the probability of finding someone twice as good as an already very good candidate is only a small fraction.

To conclude, I believe that for very simple skills we mostly have a Gaussian. If there are "learning steps" or some related, distinctly non-continuous feature, then we have multi-modal distributions (like the camel for programming: those who are able to build mental models of programs are successful, those who only use trial and error until the programs "compile" are not). Finally, for skills related to exceptionally hard (and challenging and gratifying) tasks, we have power-law distributions. Could we even use such distributions to classify skills and tasks? And in those cases, is a Gaussian distribution a good thing or not?
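The three models can be sketched with a small simulation. All the numbers below are illustrative assumptions of mine, not data from any study; the point is only the shape of the tails. The tail-to-median ratio shows why the power law behaves so differently: its top performers are not a few standard deviations away, but many times better than the median.

```python
import random

random.seed(42)
N = 100_000

# The bell: results clustered around an average (parameters made up).
gaussian = [random.gauss(70, 10) for _ in range(N)]

# The camel: a 50/50 mixture of two bells, one per group
# (those who climbed the "learning step" and those who did not).
bimodal = [random.gauss(75, 8) if random.random() < 0.5 else random.gauss(40, 8)
           for _ in range(N)]

# The power law: no fixed maximum skill, a long tail of outliers.
power_law = [random.paretovariate(1.5) for _ in range(N)]

def tail_to_median_ratio(xs):
    """How much better is the top 0.1% than the median individual?"""
    xs = sorted(xs)
    return xs[int(0.999 * len(xs))] / xs[len(xs) // 2]

print(f"gaussian:  {tail_to_median_ratio(gaussian):.2f}")
print(f"bimodal:   {tail_to_median_ratio(bimodal):.2f}")
print(f"power law: {tail_to_median_ratio(power_law):.2f}")
```

With these made-up parameters, the Gaussian and bimodal ratios stay below 2-3, while the power law's is in the tens: the best are not slightly better, they are in another league.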

So, given people who are able to build mental models of programs, maybe Haskell programming remains multi-modal, because there are some conceptual steps that go beyond basic programming, while Basic is truly Gaussian? Is OOP Gaussian? And FP?

And in OOP languages, what about static typing? Maybe dynamic typing is a camel (no, not because of Perl) because those that write tests and usually know what they are doing fall in the higher hump, while those that just mess around fall in the lower hump. Ideas?

Monday, December 12, 2011

Erlang and OTP in Action (review)

My first contact with Erlang was through "Programming Erlang: Software for a Concurrent World" (Joe Armstrong). It is a very nice book, in my opinion, and I enjoyed reading it immensely. That was some 4 years ago. Back then, the functional revolution was just beginning: no widespread Scala, no Clojure at all, essentially no F#. It looked like OO was going to rule the industry for years and years, with no contender of any sort. Rails was fresh, Django was fresher (the Apress book was just being released).

I bought the book because I wanted to see this "brand new" technology (20 years old, but just then making it through in the circles I frequented). And really, the language looked 20 years old: full of Prolog legacy, and Unicode... who's that guy? However, it could do things no other piece of software I knew could do as easily. Massive concurrency, hot code swapping. Wow.

The language, I did not especially like. The runtime… WOW! As a language, I love Python or Clojure because of the way apparently distant functionalities work together and create something even more beautiful. Erlang does not have that at the language level. It has it at the framework level.

Think about hot swapping code in object-oriented software. First, I somewhat believe that object-oriented modules are more tangled than their functional equivalents, partly because of the object reuse OO promotes (which can really work against you if you want to swap code). Then there is the whole problem of references vs. addresses: addresses have an additional level of indirection that makes it far easier to swap a process than to swap an object.

But the very ideas that state lives in function parameters and that processes are linked and easily restarted are at the root of how easy it is to swap code in Erlang. But then… back to the books.

The essential problem was that after seeing a bit of Erlang, I thought OTP was not a big deal. Yes, it is easier to use; but plain Erlang is easy too. And I convinced myself that should I need Erlang, I could just use plain Erlang. Then things changed: this year I read some code using OTP and did not understand much of it. What I did understand was that OTP could be helpful. I had to write a concurrent prototype and I welcomed the idea of writing as little code as possible.

In the meantime, I had forgotten many things about Erlang. In these situations, instead of reading the same book twice, I buy another book to gain perspective. One of the candidates was "Erlang Programming" (Francesco Cesarini, Simon Thompson), which also has excellent reviews. I believe it contains more material than Armstrong's book and is also a bit more recent. However, as far as I understand, it is still a bit terse on OTP.

As a consequence, I bought "Erlang and OTP in Action" (Martin Logan, Eric Merritt, Richard Carlsson) instead. And I'm very happy with this choice. It complements Armstrong's book well and covers OTP extensively. I also find the approach very interesting: introduce OTP first and learn to use it, so that once you know what it can do, you just use it; then learn plain Erlang in order to extend OTP when your use case is not covered. A nice plus was the detailed description of JInterface, which I may need as well.

In fact, just as it may make sense to introduce objects as early as possible in a book on an object-oriented language, starting with OTP is a very big plus from a learning perspective. Perhaps the point is simply that I did not need to get into a functional mindset (which I think Armstrong's book works on with more attention).

If, however, the question is just "learning to think functionally", I believe that "The Joy of Clojure: Thinking the Clojure Way" (Michael Fogus, Chris Houser), LYHFGG or Land of Lisp are probably better alternatives. Another interesting one is "Functional Programming for Java Developers: Tools for Better Concurrency, Abstraction, and Agility" (Dean Wampler), even if it takes a whole different perspective.

Saturday, November 26, 2011

Good and not so good reasons to learn Java (or any other language)

The first thing to consider is that there is really no such thing as a reason not to learn a programming language. Or better: there is only one reason, lack of time. Learning a language is a tricky business[0]. Learning to use a whole platform is way trickier.

Please notice that some of the reasons presented here are general enough to apply to any programming language (especially the bad reasons to learn one). Others are specific to Java (especially the reasons to learn it).

Also keep in mind that the good reasons to learn Java will be presented in another post, because this one is becoming far too long to be readable in the few minutes I feel entitled to ask of you, dear reader. So believe me... you probably should learn Java. Still, not for the reasons I list here.

Not so good reasons to learn Java

It is widespread

You may be led to think that since Java is a very widespread language, it is easier to find jobs if you know it. I am convinced that knowing Java is not a bad thing and that it can have a pleasant effect on finding jobs (more on that later); still, I would not consider this a good reason to learn it.

Learning a widespread language means more jobs but also more applicants. What really matters is the ratio between job demand and supply, and for very widespread languages it is usually not very high. Skilled, experienced and "famous" developers do not care much. The others should.


It is object oriented

This, and the variant "it is more object oriented", are clearly wrong reasons. There are plenty of good object-oriented languages, and it is rather hard to decide which is the most object-oriented (lacking free-standing functions looks more like a wrong design decision than a sign of being more object-oriented).

Besides, I'm not even sure that being more object-oriented is good (or bad, for that matter). Well... I am not sure that being object-oriented is inherently good either. Maybe in ten years the dominant paradigm will be another. Maybe in two. Three years ago "functional programming" was murmured in closed circles, and outsiders trembled in fear at its very sound.

Just joking. Most people thought that "functional" meant having functions (and possibly no objects) and thought of C or Pascal. They did not know it was about having non-crappy functions. There is more to it than that, of course, but that's a start.

It is a good functional language

Just checking if you were reading carefully...
That is a bad reason because it is not true!

It is fast/slow

Oh, come on! Java implementations are decently fast: faster than most other language implementations out there, and usually slower than implementations of languages such as C, C++ or Fortran. Still, for some specific things it may be nearly as fast, or faster. And sometimes CPU efficiency is not the only thing that matters.

It is good for concurrency

The "classic" Java threading model sucks. It's old. Nowadays other ways of doing concurrency are better (more efficient, easier to use).

Such more efficient, easier-to-use models are built into languages such as Clojure (or Scala, or Erlang). Still, Java is probably the most supported platform in the world: these concurrency models are not inside the language, but you may get them through libraries.

Sometimes this is a real PITA (language support is a great thing, because the syntax is closer to the semantics).

Moreover, having the "older" concurrency model visible may cause problems with the newer stuff, and some libraries and frameworks may just assume you want to do things the old way.

It is good for the web

OK... most people do not even know how widespread Java is on the web. In fact, it is very widespread. But is it really good for the web? I do not really think so. There is lots of good stuff for the web in the Java world, of course; the point is that there is for other platforms too. Here Rails and Django spring to mind.

Moreover, there are loads of extremely cool things coming from the functional world. Erlang and Haskell have some terrific platforms. Scala and Clojure also have some extremely interesting stuff, and they can leverage mature Java libraries while making them really easier to use.

Grails (a web framework) may seem on the same wavelength, but I think there is a very important difference. First, I don't like Groovy at all; in fact, Groovy's very author said that he would not have created it had he known Scala. And since I do not like Groovy, I do not see why I should be using Grails, which, as far as I know, does not offer killer features over Rails or Django.

Scala and Clojure are different. They are not just "easier" than Java. They teach a different way of thinking and approaching development. And of course, they are great for the web too.

I already know it a bit

This is quite an interesting point. Why, if you know a language a little, shouldn't you learn it well? This essentially makes clear the difference between a not-so-good reason to learn something and a reason not to learn it.

The point is simple: if you are interested in learning Java (for the good reasons), do it. But the distance between knowing a language "a bit" and knowing it "well" is about the same as the distance between not knowing it at all and knowing it well. So don't do it because you think your prior experience will be extremely relevant.

There are languages which are just easier to master (where easier means that learning them from scratch takes less time than becoming as proficient in Java as you would be in those languages -- yeah, Python, Ruby, [1]...)

Besides, I feel that the second step in Java is rather hard. Java is very easy to learn for experienced developers (this is not a joke): the very first step is relatively easy (apart from lots of useless cruft, like having to declare a class to hold a static main method, and overly complicated streams). The step just after that, the one that takes you from writing elementary useless programs to writing elementary useful programs, is quite a bit harder, since lots of useful stuff in Java requires understanding OO principles quite well to be used without too many surprises.

So, you may have learnt Java in college. Still, if you want to hack something and "grow", there are languages that let you do it faster (Python or Ruby).

I have to

This is the worst possible reason, because I don't see the spark in it. You have to. You are probably an experienced developer who got bored to death reading this post; you did not know Java and you have to learn it. Maybe your boss wants you to do some Java.

I'm sorry for you. Java is not easy to learn (especially to the point where you can work with it), mostly because everybody has been doing Java for the last fifteen years. Smart people and dumb people.

As a consequence, there are truly spectacular pieces of software that are a pleasure to work with, and worthless pieces of over-engineered crap. I truly hope you get to work with the great stuff.

Still, I consider it a "bad reason" to learn a language, because you are probably not going to enjoy the learning process. If you were, you would have used a different sentence... like "I want to".
And perhaps the phrasing would have been "My boss gives me the opportunity to increase my professional assets by learning this new technology, Java".

It was easier to download/find a book/my cousin knows it.

One of the most popular reasons people pick up a language is that they find it installed on their system. That is not usually the case for Java, in the sense that the JDK is usually not installed even if the VM is. Variants of this argument are a friend proficient in the language or, more frequently, the availability of books at the local bookstore. These arguments usually apply to complete programming beginners or to people whose prior experience has not been refreshed for years (e.g., old Basic programmers).

It is true that it is easy to find manuals for Java. The point is that not every manual is a good starting point. Specifically, this kind of user really needs a manual targeted at a true beginner and, unfortunately, that is the category of books where the most crap piles up. First, beginners usually cannot really tell whether the manual they are using is crap.

If their learning process is too slow (supposing they notice it), they usually blame (i) programming in general, (ii) the language or, worse, (iii) themselves. Well, the language may have its faults (especially in the case of Java; still, C++ is worse for a beginner), but it is important to understand that the culprit is usually the manual.

The funny thing is that the people who know how to tell a good manual from a bad one are usually the ones who do not need an introductory book, are probably not going to buy it, and more often than not are not asked about a good manual. So unless their cousin is actually a skilled programmer, beginners really risk buying a bad manual. And please notice that even skilled programmers are susceptible to a similar argument: if you are interested in a new language and you read a terrible manual, you may build a wrong opinion of the language (and perhaps you are not interested in spending another 40 gold coins to convince yourself that the language really is bad).

So please: read reviews on Amazon or a similar site (notice that I'm not suggesting you buy from Amazon, even if I often do and I am an associate; I'm just saying that the number of reviews of books on amazon.com is usually large enough to form an idea). Find people who are experts and have teaching experience: their reviews are probably the ones to base your judgment on. Then buy the book from whatever source you want.

So do not buy a Java book just because you can find books on Java and not on Clojure[2]/Python/Ruby/Whatever[3]. Choose the language (this is hard), choose the book (this is easier), buy it, read it, study it, code code code.


---
  1. I kind of found out that for people really convinced that learning a language is easy, one or more of the following apply:
    1. they have an unjustifiably high opinion of their knowledge of the languages they claim to have learned
    2. they have a very inclusive definition of "learning" a language (e.g., writing hello world in it)
    3. they only know and learn very similar languages (really, if you know C++ it's going to be relatively easy to pick up Java; if you know Scheme, Clojure is probably not going to be a big deal; etc.)
    4. they tend to write "blurb" in every language they know (for example, they write Python as if it were C++ -- and are usually very surprised when things go badly)
    Of course, there is also a lucky minority for which learning languages is very easy.
  2. It is funny that I am suggesting to learn only "object oriented dynamic" languages such as Python or Ruby. I understand that many advocates of functional programming may disagree. But I somewhat think that while some kind of people greatly benefit from learning a functional language as the first language they truly master, many others are just not fit for functional programming because it is too abstract and working imperatively may be easier for them. As a consequence, languages such as Python or Ruby may naturally lead you towards functional programming if it is the kind of stuff you like, but are still usable if you are not that kind of person. I have seen things...
    And yes... if you are the kind of person that likes functional programming, you will get into functional programming sooner or later. This is just a bit later. 
  3. Notice Clojure here...
  4. Whatever is not a language. Still, it would be a great name for a language. Maybe not.

Saturday, September 3, 2011

Too many cool languages (Python, Clojure, ...)

There are plenty of languages I like. I'm naming just a bunch of them here... a non-exhaustive list may be:

  1. Python
  2. Common Lisp
  3. Scheme
  4. Clojure
  5. Erlang
  6. Haskell
  7. Prolog
  8. Ruby

The list is quite incomplete... for example, I left out languages from the ML family. I do not have anything against them; I just think Haskell is nowadays way cooler (cooler != better), and the Haskell Platform is also a definite advantage. I did not mention languages such as Io because, although I'd really like to do something with it, I just do not feel enough motivation to try to do things in Io rather than in one of the languages above.

Same thing for Javascript. It's not that I do not like it... it is just that, when I can, I would rather use one of the languages I mentioned (which, at least for client-side JS, is most of the time a no-go). I also left out things like CoffeeScript and a whole lot of languages that did not even spring to mind right now.

The list is quite representative of my tastes in programming languages. All of them are very high-level languages (I enjoy plain old C, but I tend to use it to extend the languages above and/or for problems that really need C's points of strength). Most of them are declarative languages, and almost every one has a strong functional flavor (basically only Prolog hasn't; the others are either functional languages or received a strong influence from functional programming).

Some of the languages are better for certain tasks, because they are built for them or have widely tested libraries/frameworks. And this somewhat makes it difficult to choose among them. For example, every time a network service has to be written, I consider Python (Twisted/Stackless), Erlang and, recently, Clojure. If the system has to be highly concurrent, Erlang probably has an edge, but Haskell (with STM and lightweight threads) and Scheme (Gambit/Termite) are strong contenders too.

This is essentially about personal projects. For "serious" stuff, I am still using Python or Java (and C/C++ if more muscle is needed), basically because I have far more experience with those technologies. Clojure is probably a viable choice as well, as it interacts easily with Java, which I have been using quite intensively in the past two years (I also used Clojure to extend one of the "serious" projects, though I had to remove it because I had other experimental components and it was tiresome to find out whether bugs were due to those components or to my relative inexperience with Clojure). Besides, I have yet to find winning debugging strategies for Clojure: I'm probably missing something obvious here. I'm also trying to increase my expertise with Erlang.

These are some random thoughts about developing in these environments.

Editing Tools

The first issue is finding the right tools. It is not exactly easy to discover the right tools for developing in non-mainstream languages. With Java, any major IDE is a good enough tool. For Python, both PyCharm and WingIDE are excellent and easy to use, and vim is an excellent tool with only minor tweaking. Oddly enough, I'm still struggling to find a good Python setup for Emacs.

On the contrary, Emacs is quite easily one of the best environments for Scheme/Common Lisp/Clojure (thanks to SLIME), Haskell, Erlang and Prolog (thanks to the improved modes). Still, after years of switching back and forth between Emacs and vim, I think I'm more comfortable with vim. Unfortunately, customizing vim for these languages is not always as easy as it is with Emacs. For example, I found an easy-to-use Erlang plugin, but I have yet to try the advanced plugins for Haskell and Clojure (I know the plugins, I just did not finish the setup). For Clojure it's just easier to fire up La Clojure or Emacs+SLIME. For Haskell, I'm not serious enough to know my own needs. I wrote software for my bachelor's in Prolog with vim, and I have to say that Emacs is quite superior there (especially considering the custom SICStus mode).

Back in the day I used TextMate for Ruby. Right now I guess I'd probably go with RubyMine or vim.

Platform

Now this is a serious point. Many of my "difficulties" with Scheme lie here: choosing one interpreter/compiler with libraries to do most of the things I want to do, plus some tool to ease the installation of other libraries (I should probably just stick with Racket). With Python, for example, it is easy: pick the default CPython interpreter and use pip/easy_install for additional libraries.

Common Lisp is also going well (SBCL + Quicklisp here). I was pleasantly surprised by Haskell: the Haskell Platform is wonderful. Some years ago it was not completely clear which interpreter to use, and the external tooling was not completely immediate. Now with the Haskell Platform you just have everything under control.

About Clojure, I feel rather satisfied. For project-oriented stuff there is Leiningen, which works fine. My main objection is that most of the time I'm not *starting* with project-oriented stuff... which reminds me so much of the Java world, where you first start a project (as opposed to Unix, where you just start with a file). Luckily, in the Clojure community there are also guys like me, and so there is cljr, which fills exactly the area "before" something becomes a project.

Cljr plays very nicely with emacs/swank/SLIME. Unfortunately, I've not yet found a way to make it interact with vim.

Erlang already comes bundled with an interesting platform, and rebar does lots of nice stuff too. Erlang/OTP tends to be a bit too project-oriented, but there it quite makes sense: basically all the time you are setting up systems with many processes, and it just makes sense to do it that way.

Prolog is essentially a complete mess. I advise just using SWI-Prolog and forgetting the rest even exists, unless you need some of the advanced stuff SICStus offers. In that case, you pay and enjoy.

Tuesday, August 30, 2011

Clojure: Unfold and Anamorphisms

I found the apparent lack of an unfold function in Clojure rather unusual. Perhaps I was just not able to find it in the documentation (which I hope is the case); if so, I am happy to have done a useful, although slightly dull, exercise. If not, here is unfold in Clojure.

Unfold is a very powerful function which abstracts the idea of generators. I tried to make it lazy (please, feel free to break it), as we expect from Clojure functions. Some examples follow.

Here, we simply generate the list of numbers from 10 down to 1:

(unfold #(= 0 %) identity dec 10)

The following example is more interesting: here I show that unfold is at least as powerful as iterate. It is possible to regard iterate as a simpler (and easier to use) variant of unfold.

(unfold
      (fn [x] false)
      identity
      inc
      0)

The code is equivalent to (iterate inc 0). Now the idea is that:

  1. The first argument is a predicate that receives the seed (the last argument). If it is true, the function returns the empty list; otherwise it returns a cons whose first cell is the second argument applied to the seed and whose second cell is a recursive call to unfold in which all the arguments but the last are the same, and the last argument becomes g applied to the seed.
  2. The second argument is a unary function which is applied to each individual "value" that is generated. In this case we just use identity, because we want the plain seeds. We could have used #(* %1 %1) to get the squares of the values, or whatever makes sense.
  3. The third argument is the function that generates the next seed from the current one.
  4. The last argument is just the initial seed.

Unfold is extremely general. For example, the classic function "tails", which returns all the non-empty suffixes of a list ((1 2 3) -> ((1 2 3) (2 3) (3))), could be implemented as:

(defn unfold-tails [s]
  (unfold empty? identity rest s))
  

The standard map becomes:

(defn unfold-map [f s]
  (unfold empty? #(f (first %)) rest s))

Now, here is the implementation:
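The Clojure source embedded at this point has not survived the cut and paste. As a stand-in, here is a rough sketch of the same semantics in Python: eager rather than lazy, and entirely my reconstruction from the description above, not the original code.

```python
def unfold(stop, f, g, seed):
    """Emit f(seed) and advance the seed with g until stop(seed) is true."""
    result = []
    while not stop(seed):
        result.append(f(seed))
        seed = g(seed)
    return result

identity = lambda x: x

# The first example above: count down from the seed 10 to 1.
countdown = unfold(lambda x: x == 0, identity, lambda x: x - 1, 10)

# tails and map on Python lists, mirroring unfold-tails and unfold-map:
def unfold_tails(s):
    return unfold(lambda xs: not xs, identity, lambda xs: xs[1:], s)

def unfold_map(f, s):
    return unfold(lambda xs: not xs, lambda xs: f(xs[0]), lambda xs: xs[1:], s)
```

Being eager, this sketch cannot express the (iterate inc 0) example: producing infinite sequences is exactly what the lazy Clojure version buys you.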

Upgrade

With this update, I present an upgraded version with the tail-g parameter. The classic SRFI-1 Scheme version also has this argument. Originally, I thought that in Clojure I could use it to produce different sequence types; on second thought, it can't work because of the way Clojure manages sequences (at least, not in the sense I intended).

On second thought, however, it is highly useful. Consider for example the tails function I defined above. It returns all the non-empty suffixes of a given sequence. This makes sense in lots of contexts; however, it differs from the definition of the Haskell function:

tails                   :: [a] -> [[a]]
tails xs                =  xs : case xs of
                                  []      -> []
                                  _ : xs' -> tails xs'
That could be roughly translated (losing laziness) as:
(defn tails [xs]
  (cons xs
        (when (seq xs)
          (tails (rest xs)))))

This function also returns the empty list. Surely, it would be possible to (cons () (unfold-tails xs)), but that would break the property that each (finite) element in the resulting sequence is longer than the following one. Without that kludge, unfold-tails breaks the property that the resulting list contains xs even when xs is empty. Accordingly, tails should really be defined to include the empty list. Appending the empty list at the end of the list returned by unfold-tails would maintain both properties, but would be excessively inefficient.

On the other hand, specifying the tail-g function allows a cleaner definition:

(defn tails [xs]
  (unfold empty? identity rest xs
          #(list %)))

Essentially, without tail-g it is not possible to include in the resulting sequence elements which depend on the element that makes the sequence stop.
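As an eager Python sketch of this idea (my own reconstruction, not the post's code), tail-g is simply an extra function that produces the final part of the result from the seed that stopped the generation:

```python
def unfold(stop, f, g, seed, tail_g=lambda seed: []):
    """Eager unfold with tail-g: when stop(seed) becomes true, the result
    ends with tail_g(seed) instead of the empty list."""
    result = []
    while not stop(seed):
        result.append(f(seed))
        seed = g(seed)
    result.extend(tail_g(seed))
    return result

def tails(xs):
    # Haskell-style tails: tail-g appends the empty suffix, mirroring
    # the Clojure #(list %) above.
    return unfold(lambda s: not s, lambda s: s, lambda s: s[1:], xs,
                  lambda s: [s])

print(tails([1, 2, 3]))  # ends with [], like Haskell's tails
```

Note how tails([]) correctly yields a list containing just the empty list, matching the Haskell definition tails [] = [[]].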

Updates

  • Substituted (if ($1) (list) ($2)) with (when-not ($1) ($2)) after some tweets from @jneira and @hiredman_.
  • Added version with tail-g
  • Here more functions and experiments with unfold.
  • Removed HTML crap from cut and paste


Sunday, August 28, 2011

On monads and a poor old (dead) greek geek

After immensely enjoying the LYHFGG book, I immediately started playing with Haskell a bit. Since I have lately been implementing sieves in different languages, I found that the Sieve of Eratosthenes was an excellent candidate. In fact, there are quite a few Haskell versions out there (and almost all of them are faster than the version I'm going to write, but this is about an enlightenment process, not an algorithm implementation).

Bits of unrequested and useless background thoroughly mixed with rantings and out of place sarcasm

I also had to reconsider some of my assumptions: I was familiar with Turner's "version" of the sieve and with O'Neill's article, which is basically about Turner's version not being a real implementation of the sieve, and about nobody noticing that for some 30 years (while teaching it to students as a good algorithm). I found this story extremely amusing, in the sense that sometimes functional programmers are a bit too full of themselves and of how cool their language is, and overlook simple things.

Turner's algorithm essentially was this:

primesT = 2 : sieve [3,5..]  where
  sieve (p:xs) = p : sieve [x | x<-xs, rem x p /= 0]
And essentially the point here is that to me it does not really look like a sieve, but like a trial division algorithm. Quite amusingly, I found a lengthy discussion about exactly that here. The good part is that they rather easily (if you read Haskell like you drink water) derive optimized versions of this algorithm with decent algorithmic performance, and then spend some lines trying to convince us that the code above should not be called naive (which I do not intend as an insult, but as a mere observation) and should instead be regarded as a "specification".
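For reference, the kind of optimized version derived in that discussion looks roughly like this (my transcription, not a quote from the thread): each candidate is trial-divided only by the primes up to its square root, using the corecursively defined prime list itself:

```haskell
-- Optimized trial division (sketch): candidate x is tested only against
-- primes p with p * p <= x; laziness lets primesTD refer to itself.
primesTD :: [Int]
primesTD = 2 : [ x | x <- [3,5..]
                   , all (\p -> x `rem` p /= 0)
                         (takeWhile (\p -> p * p <= x) primesTD) ]
```

This is still trial division, not a true sieve, but it already has far better behavior than filtering the whole remaining stream at every step.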

Now I quite understand that declarative languages are excellent tools to write definitions of mathematical stuff (and Haskell is especially good at this), but as far as I know the context in which the one-liner was presented is that of an algorithm, not of a specification.

Essentially this unrequested bit of background is just to retaliate against the world for my not being able to come up with a really fast implementation. Basically, the optimized version of Turner's algorithm is quite a bit faster than my monadic versions. Which is fine, as my goal was to get some insight into monads, which I think I did. More on this here.

On the author's unmistakably twisted mind


So... back to our business. I struggled a lot to use Data.Array.ST to implement the sieve. Essentially the point is that I find the monadic do notation quite a bit less clear than the pure functional notation with >>= and >>. This is probably a symptom of my brain having been corrupted by maths and of myself turning into a soulless robot in human form. Nevertheless, I finished the task, but I was so ashamed that I buried the code under piles of new git commits.

Essentially, I found mixing and matching different monads (like lists) excruciatingly confusing in do notation. Notice that this was just *my* problem, but I had to struggle with the types, write outer helper functions to check their types, and so on. Basically I was resorting to trial and error, and much shame will rain on me for that. So I threw away the code and started writing it simply using >>= and >>. Now everything kind of made sense. To me it looked like a dual of CPS, which is something I'm familiar with. The essential point is that the resulting code was:
  1. Hopefully correct.
  2. Written in one go, essentially easily
  3. Rather unreadable
So the resulting code is:

Essentially because of (3), I decided it was time to use some do notation; perhaps things could be improved. What I came up with is a rather slow implementation, but I am rather satisfied. I think it is extremely readable: it retains a very imperative flavor (which is good, in the sense that the original algorithm was quite imperative) and I believe it could be written and understood even by people not overly familiar with Haskell. It almost has a pseudo-code-like quality, were it not for some syntactic gimmicks like "$ \idx ->".
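Since the original gist is not reproduced here, the following is my reconstruction of what such an ST-array sieve in do notation might look like (the names are mine, not the author's); it keeps the imperative flavor, including the "$ \idx ->" gimmick mentioned above:

```haskell
import Control.Monad (forM_, when)
import Data.Array.ST (runSTUArray, newArray, readArray, writeArray)
import Data.Array.Unboxed (UArray, assocs)

-- Sieve of Eratosthenes over a mutable boolean array, frozen at the end.
-- As in the post, no sqrt optimization: p runs all the way up to n.
primesUpTo :: Int -> [Int]
primesUpTo n = [i | (i, True) <- assocs flags]
  where
    flags :: UArray Int Bool
    flags = runSTUArray $ do
      arr <- newArray (2, n) True
      forM_ [2 .. n] $ \p -> do
        isPrime <- readArray arr p
        when isPrime $
          forM_ [p * p, p * p + p .. n] $ \m ->   -- cross out multiples of p
            writeArray arr m False
      return arr
```

The do blocks read almost like pseudo-code: allocate, loop, test, cross out; runSTUArray then hands back a pure, immutable array.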


Somewhere in the rewrite process I left out the small optimization of running only up to sqrt(lastPrime) instead of all the way to lastPrime, but this is not really the point. Still... I'm quite happy because I feel I now have a better grasp of some powerful Haskell concepts.

However, I feel like Clojure macros are greatly missed. Moreover, I found it much easier to understand the correspondingly "difficult" concepts of Clojure or Scheme (apart from call/cc, which kind of eludes me; I think I have to do some explicitly tailored exercises sooner or later).

I also feel like continuously jumping from language to language could lead to problems in the long run, in the sense that if I want to use some language in the real world I probably need more experience with that particular language; otherwise I will just stick with Python, because even when it is sub-optimal I'm so much more skilled on the Python platform than on the others.

Saturday, August 20, 2011

Excellent Learn You a Haskell for Great Good

This is the third book explicitly about Haskell I have bought (apart from things such as Purely Functional Data Structures or Pearls of Functional Algorithm Design), the others being Haskell School of Expression and Real World Haskell, plus some online tutorials and freely available books. I believe they are all excellent books, although with slightly different focuses. They come from different ages as well (with HSOE being quite a bit older).

Back then I enjoyed HSOE a lot, but I think I missed something. A large part of the book used libraries not easily available on the Mac, for example. Moreover, the book did not use large parts of the Haskell standard library that are very useful. For these and other reasons, I did not go on working with Haskell. Real World Haskell has a very practical focus and I quite enjoyed that. Unfortunately, I still remembered too much Haskell not to skip the initial parts of the book (and that is usually a bad thing, because you don't get comfortable with the author's style before jumping to more elaborate subjects). Moreover, the book is quite massive and I had other stuff to do back then (like graduating).

I did not even want to buy LYAHFGG. After all, I am a bit skeptical about using Haskell for real-world stuff (I prefer more pragmatic languages like Python or Clojure) and so I tried to resist the urge to buy another Haskell book (I could finish RWH, after all). Through a combination of events I do not even remember, I put the book in an Amazon order. Understanding a couple of Haskell idioms could improve my Clojure, I thought, and I started reading the book in no time.

The first part of the book is very well done but somewhat uninteresting. By the time I started reading it, I had forgotten most of the Haskell I knew, and consequently I read it carefully; however, I made the mistake of not starting a small project just to toy with the language. That is the reason I say "somewhat uninteresting": it is very basic, very easy and very clear. Still, it only reminded me of things I knew, without really improving my way of thinking much. Still, the writing was fun and light and I read through it quickly. I consider it a very good introduction to functional programming in Haskell, and to functional programming in general, and as such the tag line "a beginner's guide" is well deserved.

The real meat comes later, in chapter 11: Functors, Applicative Functors, Monoids and then Monads are presented. The order is excellent: after having learned Applicative Functors, Monads require only one small further understanding step. Moreover, repeating all the reasoning on Maybe and on lists really clarifies things. The examples were also in HSOE, but connecting the dots was somewhat left to the reader. This time I did not make the mistake of seeing monads as just a tool to implement the State monad, and I reached a deeper insight into the subject.
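To make that progression concrete, here is the sort of ladder the book climbs, replayed on Maybe (my toy examples, not the book's):

```haskell
-- Functor: lift a pure function into the context.
functorStep :: Maybe Int
functorStep = fmap (+ 1) (Just 41)              -- Just 42

-- Applicative Functor: lift a multi-argument function; any Nothing wins.
applicativeStep :: Maybe Int
applicativeStep = (+) <$> Just 40 <*> Just 2    -- Just 42

-- Monad: sequence computations where each step may fail.
half :: Int -> Maybe Int
half x = if even x then Just (x `div` 2) else Nothing

monadStep :: Maybe Int
monadStep = Just 168 >>= half >>= half          -- Just 42
```

Each level adds just enough power over the previous one, which is exactly why monads feel like a small step once applicatives are digested.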

About the book itself, I just love no starch press...

Thursday, July 21, 2011

Is state really bad?

Prologue

Actually, this post was begun in March, when some facts referred to in the body of the article actually happened. I am resuming the post today, after months of extensive (and rather unbloggable) work.

Intro

Stateful programming is somewhat the default strategy in the industry nowadays, and it has been for the last 60 years. If we consider the 10 most widespread languages in the TIOBE index (one statistic is worth another), they are all imperative languages strongly based on stateful programming. This month Lisp has enormously improved its position (Land of Lisp, anyone?), but even if we counted Common Lisp and Scheme together, they would not enter the first 10 positions (even if only by a few points).

The Haskell case

In fact, in recent years we have witnessed an increase of interest in functional languages. As far as I remember, about 5 years ago there was a lot of hype about Haskell (a language that had already been around for quite a while by then). It was related, as far as I remember, to a famous user-friendly Linux distro having written some of its tools in Haskell, and to the Darcs project. Darcs has almost disappeared, with Git (Perl+C) and Mercurial/Bazaar (Python, Python+C) sharing the market of distributed version control tools.

The case is rather interesting: Darcs was built on a very strong "theory", the algebra of patches. The problem is that the merge process is still exponential in the worst case and that the tool was rather buggy. It is interesting that the software was both slow and bug-ridden. It is tempting to infer something about Haskell, or at least to counter the hypothesis that pure functional programming makes it relatively easy to write bug-free programs. Perhaps this should be taken into account the next time someone looks down on setf. Well, I do like Haskell, just not as much as I love Lisp.

The last thing I would like to mention is that I like the idea of insulating imperative code from functional/algorithmic code (but this is something I suggest for every language). However, being forced to do that is not necessarily a good idea. In fact, it makes Haskell quite a bit harder to pick up, and it has the paradoxical effect that if you are a developer good enough to code in Haskell, you probably already know how to separate side-effecting code from side-effect-free code. Besides, although I like purity (to some extent), I'm not sure whether state monads should be considered stateless. I mean, a state monad implements state in a nice functional construct, mathematically sound and pure. In fact, it is almost an executable translation of the declarative semantics of a stateful program. Consequently, I'm not sure whether frequent use of state monads is really an improvement over carefully used stateful programming.
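To illustrate why a state monad is "mathematically sound and pure": underneath the do notation there is nothing but an ordinary function of type s -> (a, s). A hand-rolled sketch (the real thing lives in Control.Monad.State; the names here are mine):

```haskell
-- A minimal State monad: "stateful" code is a pure function s -> (a, s),
-- with the threading of the state hidden by (>>=).
newtype State s a = State { runState :: s -> (a, s) }

instance Functor (State s) where
  fmap f (State g) = State $ \s -> let (a, s') = g s in (f a, s')

instance Applicative (State s) where
  pure a = State $ \s -> (a, s)
  State f <*> State g = State $ \s ->
    let (h, s')  = f s
        (a, s'') = g s'
    in (h a, s'')

instance Monad (State s) where
  State g >>= k = State $ \s -> let (a, s') = g s in runState (k a) s'

-- Read the counter and increment it: looks imperative, is pure.
tick :: State Int Int
tick = State $ \n -> (n, n + 1)

threeTicks :: (Int, Int)   -- (result, final counter)
threeTicks = runState prog 0
  where
    prog = do
      a <- tick
      _ <- tick
      b <- tick
      return (a + b)
```

Nothing is ever mutated: threeTicks is (2, 3) because the counter values 0, 1 and 2 are threaded through explicitly, which is exactly the "executable translation of a stateful program" mentioned above.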

In other words, although I admit my limited experience with Haskell in real-world projects, I believe that if we think functionally the code is going to be clear (and it will be as clear in Haskell as in Lisp). On the other hand, if we think in a heavily stateful way, our code may be less clear in Lisp and quite messy in Haskell, where we would implement our (wrong) thinking with state monads popping out everywhere. Feel free to correct me, possibly with real experience and examples.

Erlang and Clojure

Erlang is different from Haskell in that functional, stateless programming is required for practical purposes. In order to allow a shared-nothing, process-based architecture (and the actor model), state is essentially banned. Of course, there are solutions which bring back state (ETS and DETS, plus various libraries such as Mnesia). It is worth noting that this kind of state is generally different from the usual imperative state.

In imperative languages everything is stateful by default. Here, on the other hand, we basically have only "db-like" solutions. This does not encourage pervasive stateful programming and, in fact, typical Erlang programs are pretty much state free. The essential difference is that they are not side-effect free.

Being state free and being side-effect free are two very different things. Having no side effects at all is pure functional programming, and it is extremely hard to do. In fact, it would not even make sense in a programming language built around the idea of passing messages between processes: if you send a message, you are not side-effect free anymore.

Somewhat in the same sense, Clojure is stateless but not side-effect free. Most Clojure coding is performed without explicit state change, and there are a few explicit ways to deal with state. This is done in a transactional way, so that state change is allowed where explicitly requested. State change should not be mingled with things that have effects outside the Clojure process: you can "unset" a variable and recover the old value, but you cannot unsend a message.

At the moment I feel that I can easily tolerate (and like) the kind of state-change insulation provided by Clojure, and the pragmatics, typical of all functional languages, of avoiding side-effecting stuff for as long as possible. On the other hand, I find it really hard to accept absolute statelessness.

Lisp and Languages with FOFs

On the other end of the spectrum of "functional languages" we have Common Lisp and the various freedom languages where functions are first-order objects (Python, Ruby, ...), with Scheme leaning a bit more towards the functional side. Technically speaking, Python is not even a functional language. It is surely not stateless (but neither is Lisp). In fact, in Python recursion is frowned upon in favor of iteration (though iteration is often built around generators, which are a user-friendly/dumbed-down[0] instance of anamorphism -- unfold -- and thus strongly functionally flavored). On the other hand, some call Ruby the Matz-Lisp.
In these languages there is no explicit restriction on stateful programming. This means that the usual algorithms and ways of thinking are easy to implement (in Lisp especially, both very functional and very stateful algorithms are easy to express). On the other hand, restricting state change is left to the good sense of the programmer (which is not always a good thing, but I don't really like languages that tell me exactly what to do... they just have to make the better way obvious -- I'm not Dutch, sorry --).
To be sincere, there are things that are just so much easier in stateful programming than in stateless programming, and for other things the converse applies. Consequently, a language should not force me too much towards one style or the other. Moreover, stateless-only languages tend to be harder to pick up for beginners (on the other hand, Python and Ruby are extremely easy to start with, and I believe Scheme is a wonderful first language for computer scientists). Eventually, things like GUIs are really hard to implement in a fully stateless setting (Clojure essentially relies on the underlying Java classes, and I have never actually used Erlang for GUIs -- not that I am a strong GUI developer --).

On performance

One of the consequences is that lots of algorithms, data structures and ideas have just been developed with stateful programming in mind. This is the reason books such as Okasaki's Purely Functional Data Structures have been written: precisely to bridge the gap between mainstream stateful programming and functional reasoning.

Unfortunately, many algorithms have been created with imperative languages in mind, and the data structures supporting those algorithms strongly reflect the imperative setting. Implementing such algorithms in a functional setting is usually inefficient, to say the least. Much theory (and not only practice) is built around state change: The Art of Computer Programming uses a pseudo-assembly language which essentially manipulates a Von Neumann machine.

Something as easy as "modify an array" may be a source of inefficiency in many languages. E.g., Clojure provides transients precisely to make such operations easier to express efficiently. More generally, functional languages usually have some trouble providing a data structure that is immutable, offers O(1) random element access, and allows element modification without copying the whole data structure.
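A Haskell illustration of the point (standard Data.Array, my toy values): a pure update with (//) returns a brand-new array, so changing one slot of an n-element array costs O(n), which is exactly the cost Clojure's transients are designed to avoid:

```haskell
import Data.Array (Array, listArray, (//), (!), elems)

original, updated :: Array Int Int
original = listArray (0, 4) [10, 20, 30, 40, 50]

-- (//) does not mutate: it allocates a new array with the change applied,
-- copying the other four slots; `original` is left untouched.
updated = original // [(2, 99)]
```

After the update, elems updated is [10,20,99,40,50] while original ! 2 is still 30: perfectly immutable, and perfectly linear in the size of the array.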

Books


"Purely Functional Data Structures" (Chris Okasaki)


"Art of Computer Programming, The, Volumes 1-3 Boxed Set (3rd Edition) (Vol 1-3)" (Donald E. Knuth)



Notes

[0] depends on whether you are a python or a scheme programmer: python programmers usually say user-friendly

Tuesday, February 1, 2011

A rant on FP in Python compared to Clojure and Lisp

In a previous post, we started investigating closures in imperative settings. Essentially, we provided a short Python example where things weren't really as we expected. The problem is that in Python you cannot rebind a variable in the enclosing scope. The problem is relatively minor.
  1. It is a problem we usually do not have in functional settings, as rebinding variables (when possible at all) is not a common practice. For example, in Clojure we just can't rebind a variable bound by a let form; in Lisp, on the other hand, it is a legitimate action:
    (defun make-counter (val)
      (let ((counter val))
        (lambda ()
          (let ((old-counter counter))
            (setq counter (+ counter 1))
            old-counter))))
    
    CL-USER> (defparameter c (make-counter 0))
    C
    CL-USER> (funcall c)
    0
    CL-USER> (funcall c)
    1
    
    

  2. In Python 3 the nonlocal statement solved the problem. The following program runs just fine.
    import tkinter as tk
    
    class Window(tk.Frame):
        def __init__(self, root):
            tk.Frame.__init__(self, root)
            value = 0
            def inc():
                nonlocal value
                value += 1
                self.label.config(text=str(value))
            self.label = tk.Label(text=str(value))
            self.label.pack()
            self.button = tk.Button(text="Increment", command=inc)
            self.button.pack()
    
    root = tk.Tk()
    w = Window(root)
    w.pack()
    tk.mainloop()
    
However, this is just the tip of the iceberg. Python is not a functional language, and does not even try to be. Python is built around the idea of mutable state, while, e.g., Clojure is built with the idea that state is immutable and areas where state is expected to change have to be explicitly marked (dosync).

Python fully embraces object-oriented facilities. Most "functional-like" features Python has (e.g., first-order functions) are essentially just byproducts of it being a fully object-oriented language. If everything is an object, then functions are objects too. It would take a special rule to forbid FOFs, and since they are also very nice to use, why explicitly forbid them?

We have lambda (which is severely limited... compare with Lisp's lambda/Clojure's fn: both constructs have the full power of their respective "def(n|un)" forms; in fact, we may say that the def* form is just syntactic sugar over lambda/fn). We have the functools module, which is nice, but of course if the language is dynamic enough we can do that kind of thing... still, it is a library, not a language feature (as in Haskell).

There are two more interesting functional facilities: list comprehensions and lazy evaluation. Both apparently were added to Python under the influence of Haskell.

List comprehensions are a very powerful construct, and Python has expanded them further with set/dict comprehensions. This is somewhat of a unique feature: even Clojure (which enriched Lisp with syntactic dictionaries and sets) does not have syntactic dict/set comprehensions.

To be sincere, I really did not feel something was missing when Python did not yet have set/dict comprehensions. I absolutely did not miss set comprehensions: since Python has lazy evaluation, I did not feel something like set(x for x in seq) was a major drawback (cf. {x for x in seq} is probably clearer, but who cares?). So if in Clojure I have to write:

(set (for [x (range -5 5)] (* x x)))
#{0 1 4 9 16 25}

and if I just feel that this is unbearable:

(defmacro for-set [binds & stmts]
  `(set (for ~binds ~@stmts)))

As both Python and Clojure have lazy stream evaluation, in both cases there is no overhead (that is to say, it is not the case that a list is created first and then passed to set, which builds the set). Yes... perhaps in Lisp some reader forms could add the functionality "fully"... not sure it is worth it, though.

However, now it is time to discuss lazy evaluation. There are different kinds of lazy evaluation. The first is the Haskell way: everything is lazy by default, essentially unless declared strict. This strategy poses serious constraints on the language; in particular, it makes sense only if the language has no side effects. For example, we would not want the following to print nothing because of lazy evaluation:
(defn foo [x] (println x) x)
(def bar (foo 20))
It would be awkward, to say the least. On the other hand, Haskell is designed with these ideas in mind: output (and everything smelling of side effects) is forbidden to mingle with the rest of the code. Consequently, programs do not have to worry about when things are evaluated.
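The same scenario transplanted to Haskell, using Debug.Trace to make evaluation visible (my toy example): binding the traced computation produces no output at all; the message appears only when the value is actually demanded.

```haskell
import Debug.Trace (trace)

-- trace prints its message (to stderr) only when the result is forced.
foo :: Int -> Int
foo x = trace "foo evaluated" x

-- Binding bar triggers no output: nothing has been evaluated yet.
bar :: Int
bar = foo 20
```

Loading this in GHCi and then evaluating bar prints "foo evaluated" at that moment, not at binding time, which is exactly why side effects and default laziness cannot be allowed to mingle.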

This strategy would be plainly impossible in imperative languages such as Python. Moreover, in distributed settings, lazy evaluation by default is a serious problem (which is well known in the Haskell community... consider Wadler's interview in Tate's Seven Languages in Seven Weeks). In Lisp it is very simple to add lazy evaluation on demand (there is an example right in Land of Lisp), and in Clojure we can use delay to great effect (I like batteries included). Here Python lacks the feature: we can use some tricks, but the syntax becomes quite awkward (and nobody would really want to do that).

On the other hand, Python introduced the general public to iterator manipulation. Python treats finite and infinite sequences uniformly (which was natural in Haskell and is now natural in Clojure) using a bunch of functions in itertools. Essentially, these were created to work naturally with lazy sequences and are a great asset in the hands of the skilled Python programmer. Clojure embraced this philosophy (and basically has everything Python has in itertools, perhaps more). We should also remember that here we are in the familiar territory of map/reduce stuff... anyway.

The extremely clever thing Python did was introducing the yield statement. With a very imperative syntax, some of the power of continuations has been brought into the language; actually, we get a limited implementation of coroutines [coroutines are trivially implemented if you have continuations]. As closures in Python are limited, so are coroutines. On the other hand, programmers who could never have profitably used coroutines and closures can now use their tamed (crippled?) cousins. I generally consider this a good thing... I'm not a big fan of full continuations, indeed.

And what about Clojure? Well, Clojure has no "yield" statement. I think that is only a minor issue. Yield is a very functional object trying (and succeeding) to look imperative through a fake beard and nose. On the other hand, most uses of yield are more about building plain generators than about creating "tamed" coroutines. Using a more functional style, along with lazy sequences, it is easy to provide nearly as intuitive variants in Clojure.


Tuesday, January 4, 2011

Seven Languages in Seven Weeks

Today I eventually received my own copy of "Seven Languages in Seven Weeks". To be sincere, I started reading the electronic version some time ago (Pragmatic Programmers had a huge discount some time ago, just after the "Programming Ruby" at 10$).

Seven Languages in Seven Weeks


I'm not going to criticize the language choice. In fact, I think the languages are a wise selection (Ruby, Io, Scala, Clojure, Prolog, Haskell, Erlang). They are all interesting languages and each has something other languages do not have. Although I'm quite into Python, I'm not going to complain about its exclusion: the author is far more confident with Ruby (by his own account), which means his explanations are surely more accurate.

I love the idea of the seven languages in seven weeks: beware, this is not a "Teach Yourself C++ in 21 Bloody Hours". The author is well aware that you could not possibly learn seven languages in seven weeks, but you can understand what those languages are about in seven weeks. I'm thinking about the potential this book has when read by people who grew up in closed Microsoft or Java contexts (or perhaps even C and C++). If after reading this book such a reader is still convinced that all languages are the same, he is either a compiler or a very distracted reader (and probably hasn't understood a thing about the book).

Just seeing the languages is a wonderfully enlightening experience. Realizing how much useless, compiler-friendly boilerplate code some languages force developers to read is the first step towards better ways of programming.

And what about me? Well... I already knew 5 of the 7 languages quite well. I have known Ruby and Haskell for 5 years or so, and a couple of years later I learned Erlang as well. Besides, I also wrote some thousands of lines of Prolog for my BSc, and my MSc dissertation was on logic programming (specifically, on building an efficient "interpreter" for a given logic programming language not too dissimilar from Prolog itself). I started studying Clojure basically when the word of mouth started spreading, which makes it a good 2 years. So what's in it for me? For example, I am sorry to say that I found the part on Prolog rather unsatisfactory, but hey... I don't think the goal of the book is to teach me Prolog. However, it gave me a pleasant introduction to Io (which I would probably never have learned otherwise) and convinced me that my first impressions of Scala were wrong and that perhaps I should give it a second chance.

So, definitely worth reading. And here is a free chapter!



Saturday, December 4, 2010

Pointfree programming

I have to admit that, nowadays, I'm not a huge fan of Haskell. It may be that I learned it when "Real World Haskell" did not exist. I learned from Hudak's excellent book, and although it is a very nice book (as implied by the adjective "excellent"), I think the focus is a bit too academic (which is an awkward criticism, coming from me). When learning a language I like books that give me the pragmatics of the language (that is to say, "how am I expected to write the code") but also practical examples. Something to hack on myself in order to dig into the language.

Unfortunately, back then most of the "nicer" examples could be run easily on Windows or Linux but not on OS X (some graphics library, IIRC). So... I grew tired of Haskell rather soon (it's not nice to code while continuously consulting library indexes to see if what you are doing has already been done, and trying to understand the logic of some very idiomatic piece of code you have not been introduced to).

RWH does a very nice job of teaching Haskell (and perhaps you can read Hudak as a second book, if needed). However, it came somewhat too late. Perhaps some other time I will start with Haskell again.

Moreover, while the code in RWH is idiomatic and readable, I think many programming styles popular in the Haskell community (judging from pieces of code I stumbled upon while surfing the web) tend to be so clever that "I'm not able to debug them". Not unless I have decomposed them first...

For example, I quite like the idea that everything is curried. I got a BSc in maths before turning to computer science, and I have seen plenty of complex functions. I'm used to functions that take other functions as arguments and return functions, and to functions (sometimes called operators) that bridge spaces of functions, and similar stuff.

Consequently I quite like the idea that + is a unary function which returns a unary function (that is to say an "adder").

def add(x):
    def adder(y):
        return x + y
    return adder

This is a bit awkward in Python, though you can use the functools module to do the trick. Haskell, however, is built around this very concept. If you just write (+ 1) you have the "increment" function. While I think this is nice (after all, I don't really like writing a full lambda for such a thing), abuses are easy to spot.

Another thing I rather like is the fact that you can omit "unused" parameters.

E.g., it is perfectly ok to write:

inc = (+ 1)

Which makes sense: (+ 1) is the function which adds 1 and we just give it the name inc. So far so good. If you are not very happy with operators, the classical example is:

sum = foldr (+) 0

instead of

sum' xs = foldr (+) 0 xs

(these examples come from the point-free Haskell guide). And I agree (up to a point). In fact, I still believe there should be just one way to do things; however, Haskell guides make it very clear that the first version is preferred. Moreover, you can always write down the second variant first and then remove the dummy variables.

What I especially like is how it stresses the functional nature of the language: you don't "define" (defun?) functions, you just give them names. Functions just are. OK, too much lambda calculus lately.

The point is that Haskell programmers tell you that:

fn = f . g . h

is clearer than
fn x = f (g (h x))

On that I strongly disagree. Since I was introduced to the function composition operator, I have found that most people get it wrong. I have no idea why (after all, you just substitute each composition operator with an open parenthesis and everything is fine). Now, that is not necessarily a problem: not everyone is destined to become a programmer (or a Haskell programmer).

On this I found some support in the lisp/scheme/clojure community:

Avoid functional combinators, or, worse, the point-free (or `point-less') style of code that is popular in the Haskell world. At most, use function composition only where the composition of functions is the crux of the idea being expressed, rather than simply a procedure that happens to be a composition of two others. Rationale: Tempting as it may be to recognize patterns that can be structured as combinations of functional combinators -- say, `compose this procedure with the projection of the second argument of that other one', or (COMPOSE FOO (PROJECT 2 BAR)) --, the reader of the code must subsequently examine the elaborate structure that has been built up to obscure the underlying purpose. The previous fragment could have been written (LAMBDA (A B) (FOO (BAR B))), which is in fact shorter, and which tells the reader directly what argument is being passed on to what, and what argument is being ignored, without forcing the reader to search for the definitions of FOO and BAR or the call site of the final composition. The explicit fragment contains substantially more information when intermediate values are named, which is very helpful for understanding it and especially for modifying it later on.

which comes from here. Of course, this can be an instance of "Lisp syntax is not very good at doing this kind of thing, so you should not even try". And it is not hard to see that (compose foo bar) could not be used as pervasively as function composition is in Haskell, as it carries more syntactic noise. Clojure actually has built-in comp and partial functions:

clojure.core/comp
([f] [f g] [f g h] [f1 f2 f3 & fs])
  Takes a set of functions and returns a fn that is the composition
  of those fns. The returned fn takes a variable number of args,
  applies the rightmost of fns to the args, the next
  fn (right-to-left) to the result, etc.

clojure.core/partial
([f arg1] [f arg1 arg2] [f arg1 arg2 arg3] [f arg1 arg2 arg3 & more])
  Takes a function f and fewer than the normal arguments to f, and
  returns a fn that takes a variable number of additional args. When
  called, the returned function calls f with args + additional args.

Which are useful, if used sparingly.

However, when you start freely mixing the various features of Haskell syntax, you get a real mess. At least in my opinion. Functions taking multiple parameters in Haskell are really just unary functions returning functions.

E.g., (A x B) -> C is in fact just A -> (B -> C).

From the point of view of a programmer coming from another language, you can "bind" only a subset of the arguments of a function. In Haskell terms, this means that you simply get back a function instead of a value.

For example, (map (*2)) is a function that doubles all the values of a list. It's just like that! And if you don't have the list in a variable, you can compute it on the fly (e.g., filter odd [1..10], which means take only the odd values from the range [1..10] -> 1, 3, 5, 7, 9).

A first attempt fails:

Prelude> map (*2) filter odd [1..10]
<interactive>:1:9:
    Couldn't match expected type `[a]'
           against inferred type `(a1 -> Bool) -> [a1] -> [a1]'
    In the second argument of `map', namely `filter'
    In the expression: map (* 2) filter odd [1 .. 10]
    In the definition of `it': it = map (* 2) filter odd [1 .. 10]

Here the problem is that the "second parameter" of map should be a list; however, the compiler finds filter. filter is a perfectly legal value to pass, only it has type (a1 -> Bool) -> [a1] -> [a1] and not [a]. In other words, we are passing a function where a list is expected.

Essentially, the compiler just reads the arguments and tries to "fill" the parameters. In this case, however, that does not work. We should have written map (*2) (filter odd [1..10]).

See the parentheses? They do the right thing. Now the compiler knows that it is supposed to pass map the return value of the filter call.

However, now we have a "moral" problem: we have parentheses. Point-free... does not work! We would have to write

f lst = map (*2) (filter odd lst)

instead of

f = map (*2) (filter odd)

In fact, the latter is not valid: (filter odd) has type [a] -> [a], not [a]. Still, the version with parentheses is very readable, in my opinion. But we want to use point-free style, remember? "Luckily", map (*2) has type [a] -> [a] and (filter odd) also has type [a] -> [a]. They can be composed! How?

map (*2) . filter odd

Now, if you are a real Haskell programmer, you are going to like it. I somewhat like it... it has some perverse elegance, I have to admit. Then consider that you can compose any functions this way: Haskell programs are highly composable.

In fact, it is not hard to write very complex functions just by chaining other functions with . (and $). E.g., what is the function that computes the sum of the squares of a sequence of numbers, excluding those smaller than 50?

sum . dropWhile (< 50) . map (** 2)
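One caveat worth adding (my observation, not part of the original one-liner): dropWhile only strips a leading run, so the pipeline above silently assumes the squares arrive in increasing order; on unsorted input, filter is the safer combinator:

```haskell
sumBigSquares, sumBigSquaresF :: [Double] -> Double

-- The pipeline from above: correct only when the squares are increasing.
sumBigSquares = sum . dropWhile (< 50) . map (** 2)

-- filter keeps every large square regardless of its position in the list.
sumBigSquaresF = sum . filter (>= 50) . map (** 2)
```

On [1..10] both give the same result, but on [8, 1] the dropWhile version also sums the stray 1 that follows 64, while the filter version does not.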

The point is that there is no limit to the nonsense you can do this way.