Sunday, February 27, 2011

call/cc! I yield! Bah, lazy-seq.

Modifications

  1. 03-01-2011: an additional Scheme variant has been added.
  2. 03-02-2011: minor modifications

Imperative Iterators

The archetypal examples of imperative iterators are Java's implementations of the Iterator interface. Everyone who has ever implemented an Iterator should acknowledge the pain of explicitly maintaining state between calls. That is the essential problem: the state has to be maintained explicitly. In particular, there are two different concepts of state: the iterator's "low level" state and the iteration state. In the iterator we have to save "what we have done so far", and we have to do it explicitly.

Then, every time next() or hasNext() is called, we have to do some more computation, save our internal state and explicitly return control to our caller. This is the implementation of a Javish iterator in Clojure to pre-order visit a tree-like structure.
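
A minimal sketch of such an iterator (a reconstruction, not the original snippet; the map-of-closures shape and the helper names are mine):

(defn make-iterator
  "Java-style stateful iterator over the leaves of a nested structure,
  visited in pre-order. The iterator 'low level' state is an explicit
  stack of pending nodes, kept in an atom."
  [tree]
  (let [stack (atom (list tree))
        settle! (fn []              ; expand branches until a leaf is on top
                  (loop []
                    (let [[top & more] @stack]
                      (when (sequential? top)
                        (reset! stack (concat top more))
                        (recur)))))]
    (settle!)
    {:has-next? (fn [] (boolean (seq @stack)))
     :next      (fn []
                  (let [[top & more] @stack]
                    (reset! stack more)
                    (settle!)
                    top))}))

(let [it (make-iterator '(1 (2 3) (4 (5))))]
  (while ((:has-next? it))
    (println ((:next it)))))   ; prints 1 2 3 4 5

Note how the bookkeeping (settle!) has to run both at construction time and after every next.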

I decided to use Clojure instead of Java because the example structure was easier to write, the example can just be run in a REPL, I prefer Clojure to Java, and other irrelevant details. However, that is essentially the way iterators are written in object-oriented languages (including nice ones, like Python) or in languages that have to interoperate with OO languages.

The principal issue is that the "value returning" logic has to be mingled with the iterator's bookkeeping logic; I believe that this (although more versatile) is far more complicated than simply adding all the values to a (flat) structure and returning that structure. The reason why the latter is not the dominant approach is that it is less flexible and less efficient. We also notice that iterators can be infinite.

Ah, if you like OO mumbo-jumbo, this is of course the Iterator pattern.

Yield Statement

In this section we briefly examine Python's yield statement. The corresponding yield expression (which essentially allows limited coroutines) is not considered here. If a Python function contains the yield keyword, its semantics change.

The yield statement may only be used inside functions. A function that contains a yield statement is called a generator function.

[...]

When a generator function is called, the actual arguments are bound to function-local formal argument names in the usual way, but no code in the body of the function is executed. Instead a generator-iterator object is returned; this conforms to the iterator protocol[6], so in particular can be used in for-loops in a natural way.

[...]

Each time the .next() method of a generator-iterator is invoked, the code in the body of the generator-function is executed until a yield or return statement (see below) is encountered, or until the end of the body is reached.

If a yield statement is encountered, the state of the function is frozen, and the value of expression_list is returned to .next()'s caller. By "frozen" we mean that all local state is retained, including the current bindings of local variables, the instruction pointer, and the internal evaluation stack: enough information is saved so that the next time .next() is invoked, the function can proceed exactly as if the yield statement were just another external call.

Essentially, instead of using the structure of the previous example, we could have written a generator function performing the same traversal, with a yield in place of the explicit bookkeeping.

The idea is that when yield is encountered, the generator "returns" the value, and when next() is subsequently invoked, it resumes execution from that spot. Essentially it is a neat syntactic facility that "saves" the continuation of the computation somewhere, in order to be ready to resume from that point.

call/cc

Ok. I said it. Continuation. That is it. When the goal is to experiment with control, Scheme comes to mind. This is our representation of the tree:

Now I provide an easy example of doing the pre-order traversal "functionally". The idea is to provide a function taking a callback, which is called on each element of the tree when it is visited. The function also returns the return values of the callback (in reverse order); anyway, use display as the callback and we have our familiar example:

Here we did not use continuations. In the following example, I use one continuation to save the point of the computation to resume from (after the yield), and another continuation to jump out (simulating Python's yield statement). I think this code could be written more clearly; I'm no expert with continuations (I usually find them too general), so suggestions are welcome.

When generator is called, it returns a function that, every time it is called, returns the next element. It behaves almost like an iterator and has the structure of Python generator functions. The boilerplate code could be avoided using a macro: essentially, lines 2-6 are boilerplate, and so are lines 11-13 and 18. The difference with a "Javish" iterator is that there is no way to test whether there is a "next" element (no hasNext method: closures are objects with just one method! ;)), and the function returns null forever once the visit is over. So the implicit protocol is "do not put null objects in the tree": if you get null, you are done. Other strategies could be used; e.g., instead of returning the value, a list containing the value could be returned: an empty list would mean that the iteration has completed, and a list containing the empty list would mean that the tree contained a null-valued node.

We could have used continuation-passing style, and that would perhaps have been more similar (even though far more complex to understand and write) to the corresponding Clojure way of doing it.

In the comments, a different Scheme continuation-based solution has been suggested by Marco. It is far more elegant than my original solution, but it is structurally different from the Python code of the sections above. In fact, it does not use an explicit stack to manage the tree traversal (as both the Python version and my Scheme version do), but relies on more recursive calls, essentially using the implicit call stack.

What about lazy-seq?

Essentially, we said that iterators are needed because they are efficient and more general. Indeed, in languages such as Python, generators are an easier way to write iterators. What about Clojure? The example at the very beginning is not the way things should be done in Clojure. On the other hand, the Scheme iterator would be a good API for Clojure as well. In fact, it is the very same pattern that we have with map (Haskell programmers here may want to extend the discussion to "mapping" over different algebraic types, such as Trees, and not only Lists).

Imperative iterators are just a design pattern. Call/cc is a feature so powerful you can use it to build almost every other control abstraction. Yield can be used to implement coroutines (which could also be implemented with call/cc). And what about lazy-seq? Lazy-seq is nothing like that. Many Lisp books show a simple example where macros are used to implement lazy evaluation (or at least lazy list evaluation, which is what matters most). lazy-seq basically does that. The difference is that other Lisps are usually not built around the idea of lazy sequences the way Clojure is.

Consequently, most libraries and APIs do not work (or do not play well) with lazy evaluation. They could be made to work, but it is simply outside Common Lisp pragmatics, and there is nothing wrong with that.

But back to lazy-seq... why is it here? Consider the code below.
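
A minimal sketch consistent with the description that follows (tree-seq-aux is the name used below; the body is my reconstruction, not the original snippet):

(defn tree-seq-aux
  "Lazy pre-order sequence of the leaves of a nested structure; the
  nodes still to be visited travel along as an explicit stack."
  [stack]
  (lazy-seq
   (when (seq stack)
     (let [[node & more] stack]
       (if (sequential? node)
         (tree-seq-aux (concat node more)) ; a branch: expand it
         (cons node (tree-seq-aux more))))))) ; a leaf: yield it

(defn tree-leaves [tree]
  (tree-seq-aux (list tree)))

;; (tree-leaves '(1 (2 3) (4 (5)))) => (1 2 3 4 5)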

I want to point out the structure of the tree-seq-aux function. Indeed, the function is not recursive: it simply returns a lazy-seq. The second element of the cons cell we put in the lazy-seq is another call to tree-seq-aux. However, this is not recursion: when the lazy-seq is evaluated, we have already returned from the previous call.

But things are even more interesting than this. That cons cell is somewhat similar to the iterator we saw at the very beginning. Its car (first) is the current value: it plays the role of next(). And what is its cdr (rest)? Suppose that our iterator were not stateful; suppose instead that calling next() returned both the current value and another iterator providing the successive values. Well... that is basically our cons cell.

Pair<Value, ConstIterator<Value>> v = it.next();

That is essentially our cons cell. In fact, we somehow have the same pattern we had in the continuation-based example: we "save" what to do next (akin to the jump continuation) in the cdr of the cons cell; however, we do not have to evaluate it right away, because of lazy evaluation. As I already said, if we had used continuation-passing style, we would have placed the recursive call (which we put in the cdr of the lazy-seq cons) in the explicit continuation.

The advantage is that it is easier to write, easier to understand, easier to use. Unfortunately, it is much more specific and less powerful. But I don't mind practical tools, when they have the elegance lazy-seq has.




Thursday, February 24, 2011

Clojure: generating sentences given a grammar (part 1)

Introduction

The idea of this post is to write a simple sentence generator based on a given (context-free) grammar. The idea and the program come from PAIP.

However, this is essentially only a starting point to present some philosophical differences between programming languages and to show that Clojure is every bit as versatile as Common Lisp (on this kind of problem), and perhaps some new built-in features make it an even more interesting choice for this problem.

Description of the problem

The problem here is simple: generate sentences in a small subset of English. Small is an overstatement; perhaps diminutive is more appropriate. At the cost of looking snobbish, I believe that many mainstream programmers may already be cowering in fear or, if skilled enough, looking for a library that does the task.

There is nothing wrong with the library approach. Moreover, it is probably the right choice for a "real" project, because a library is certainly more versatile than the few lines we are going to write, and more tested [well, actually not, as our piece of code is just a very close translation of a program that is some 25 years old]. Perhaps more efficient, too.

Now suppose we are not meant to use external libraries. Unrealistic as it may be, consider that we want to learn something besides a new bunch of APIs. Or maybe we just want a lightweight solution. Different approaches are favored by different programming languages.

I skip the stage in which the programmer writes a program specifically translating the example grammar. I believe it is clear from the specs that we want to generate sentences for every grammar satisfying some prerequisites. As a consequence, we just have to represent the grammar in some format, read it, understand it and generate the sentences.

This is essentially a language interpretation problem: we have a "program" written in a given language (the grammar) and we want to interpret it (and generate the sentence). The language is not Turing-complete, of course [as we have decided that it should be a CFG, and consequently our interpreting machine need not be a Turing machine]. I read somewhere (I mistakenly believed it to be The Unix Programming Environment) that the authors strongly advised transforming generic problems into language interpretation problems. The idea is that this way it is easier to think at a higher level of abstraction (the one given by the interpreted language, not C); efficiency is probably not an issue either, as a domain-specific language is likely to be more efficient than a general-purpose interpreted language.

An example grammar is:

Sentence : Noun-Phrase + Verb-Phrase
Noun-Phrase : Article + Noun
Verb-Phrase : Verb + Noun-Phrase
Article : the, a, ...
Noun : man, ball, woman, table, ...
Verb : hit, took, saw, liked, ...

Implementation details

The question is how to represent the grammar. I believe that most Java programmers here are already mentally checking the APIs of their favourite XML parser. Although in Clojure there are more appropriate data structures, I chose to represent it as a list of lists, in the Lisp tradition. This has the advantage that examples prepared for the PAIP program can be used directly without modification. Moreover, the size of the grammar is modest enough not to make it an issue. Besides, switching from lists to vectors does not require changes to the code.

I believe that here we can spot a very evident cultural difference. In the Java/OOP world the problem is two-fold. First we have to translate the grammar into data structures manipulable by Java, then we have to actually interpret them. The first part is essentially the task of a parser: for XML there are many parsers (but the syntax becomes very clumsy, IMHO); on the other hand it is possible to use parser generators such as Yacc or Antlr. The second part is also considered a serious issue: in fact, in the GoF's Design Patterns the Interpreter pattern deals with exactly this kind of problem. Their suggestion is to create a class for every terminal and non-terminal. Please notice that we are not talking about the terminals and non-terminals of the grammar (such as Sentence or Noun-Phrase) but about the terminals and non-terminals of the language in which we express the grammar (such as Rule, Alternative, Head and things like that).

It is interesting to notice how both problems are non-problems in Lisp. Lists and atoms provide a nice representation for the grammar itself:

(def *simple-grammar*
'((sentence -> (noun-phrase verb-phrase))
(noun-phrase -> (Article Noun))
(verb-phrase -> (Verb noun-phrase))
(Article -> the a)
(Noun -> ball man woman table)
(Verb -> hit took saw liked)))

The parser problem is completely solved, and the Interpreter pattern just becomes some list processing. Less fancy for sure, far easier nonetheless. It is just list processing: nothing more, nothing less.
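
A minimal sketch of what the translation might look like (generate and the rule representation follow PAIP; this Clojure rendering is mine, not the original snippet):

(defn rewrites
  "The possible rewrites for symbol in grammar, or nil if it is terminal."
  [grammar symbol]
  (seq (drop 2 (first (filter #(= (first %) symbol) grammar)))))

(defn generate
  "Generates a random sentence (a flat list of words) from grammar."
  [grammar phrase]
  (if (sequential? phrase)
    (mapcat #(generate grammar %) phrase)
    (if-let [choices (rewrites grammar phrase)]
      (generate grammar (rand-nth choices))
      (list phrase))))

;; (generate *simple-grammar* 'sentence) => e.g. (the woman took a ball)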

This is essentially a straightforward translation of the original Common Lisp variant. However, it does the job. You can also experiment with different grammars.

Morale

Notice that the key here was not Lisp homoiconicity: that feature has not been used. A similar approach could have been used in C as well (though grammars would have been impossible to share, unless a specific file-based representation was developed). The key was just thinking in terms of a DSL.

This is not obvious. Some time ago I developed a toy Prolog program which generates imprecations and similar stuff. Unfortunately, this example did not immediately come to my mind. Instead, I assumed that Prolog's ease of development would (alone) be a time saver. It was not. Using the full power of the language to generate sentences proved far more complex than switching to a simple grammar representation plus a bunch of procedures.

Future work

Right now, I'm trying to adapt the generate-all to Clojure. Indeed, although the straightforward translation actually works, it is not lazy, and consequently cannot be used with grammars generating an infinite language.

Books & Amazon associates

"Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp" (Peter Norvig)

"Unix Programming Environment (Prentice-Hall Software Series)" (Brian W. Kernighan, Rob Pike)



Wednesday, February 23, 2011

Is laziness tiring? (part 1)

Introduction

Apart from the obvious pun, I effectively consider laziness tiring. Let me explain: I'm quite used to lazy languages. I've used Haskell for some time and I use Python's lazy features extensively. Moreover, I have been using Clojure as well for a while.

Laziness is an extremely powerful conceptual abstraction: it is like having an optimizing compiler tailored to avoid realizing your sequences more than needed, so that you can write code in a natural way. I would definitely want some form of laziness (especially sequence-based laziness) in every language high-level enough that it does not hurt.

Moreover, lazy sequences do not have many of the usual drawbacks of laziness by default. First, you somehow have to request it explicitly; where you do not request it, it may be there anyway (e.g., map), but it does not hurt either. It does not play extremely well with side effects, but Clojure is designed to limit the scope of expressions with side effects.

However, I decided to understand more fully how laziness works in Clojure.

lazy-seq and cons

This is some simple code I wrote to test multiple functions on a given collection and see how they fare.

Essentially, benchmark is a macro taking a vector of functions. It builds code fragments which call the desired function on the list and print the time needed. We use a slow-computation function which sleeps for 500 ms; this way it is easy to count the calls to slow-computation.
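
A sketch of what is meant (the names benchmark and slow-computation come from the text; binding the collection in a let, untimed, is inferred from the discussion below; the rest is my reconstruction):

(defn slow-computation
  "Identity, but takes 500 ms: one call per element actually evaluated."
  [x]
  (Thread/sleep 500)
  x)

(defmacro benchmark
  "For each function in fns (a literal vector), binds a fresh copy of
  the collection expression and prints the time taken by the call.
  The let binding itself is not timed."
  [fns coll]
  `(do ~@(map (fn [f]
                `(let [coll# ~coll]
                   (println '~f)
                   (time (~f coll#))))
              fns)))

Being a macro, benchmark splices the collection expression into each fragment, so every function is timed against its own fresh copy.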

One of the more interesting examples I found is this one:
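
Something along these lines (a reconstruction, not the original snippet):

(benchmark [list? seq? identity first empty? seq rest next]
           (lazy-seq
            (cons (slow-computation 1)
                  (lazy-seq
                   (cons (slow-computation 2)
                         (lazy-seq
                          (cons (slow-computation 3) nil)))))))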

Here the list is a completely lazy list of three elements. The results are as expected: list?, seq? and identity do not do anything with their argument, which is not evaluated (by the way, list? is false, seq? is true). first has to evaluate the first element (which is expected); empty? evaluates the first element to see whether such an element exists. seq must return nil for empty sequences, and as a consequence it has to evaluate the first element.

rest also evaluates the first element, but not the successive ones. I'm not sure whether it would be possible to return the cdr of the list without evaluating the car; probably not with Clojure's laziness model. next is worse: it evaluates the first element (like rest), but then calls seq on the result, and that leads to the evaluation of one more element.

lazy-seq and list

This example is slightly different, but it is trivial, indeed. There is just one lazy seq: the contained list has to be evaluated all at once by any function that needs at least one element.
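
Something like this (a reconstruction):

(benchmark [identity seq first rest]
           (lazy-seq (list (slow-computation 1)
                           (slow-computation 2)
                           (slow-computation 3))))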

lazy-seq and concat

The following example started to surprise me.
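
Again a reconstruction:

(benchmark [identity seq first list? empty?]
           (concat (list (slow-computation 1) (slow-computation 2))
                   (list (slow-computation 3) (slow-computation 4))))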

As far as I could tell, concat should return a lazy sequence over the lists. It is probably my misunderstanding, but I thought that identity, seq, first, list? and empty? would evaluate at most one element. I decided to write more benchmark functions.

The results are here:

However, when running the first example the perceived time was akin to the second: the evaluation simply was not timed, because it was done (at the latest) when binding in the let form. Then I got the illumination. Laziness is a bitch: the point here is that it is list that is not lazy. By the way, why should it be? So it does not matter what I do: the time is already spent by the time I get into concat. I understand that this is trivial; however, I find it strange to have to reason about "when things happen", the whole point of laziness being "not reasoning about when things happen".

Anyway, using lazy-seq, things work as expected:
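
Along these lines (a reconstruction):

(benchmark [identity seq first list? empty?]
           (concat (lazy-seq (list (slow-computation 1) (slow-computation 2)))
                   (lazy-seq (list (slow-computation 3) (slow-computation 4)))))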

But what about a "warn on eager", which alerts when something that should be lazy is instead eager?



Monday, February 21, 2011

'(leiningen slime Emacs) are the way.

Leiningen really seems to be the easiest way to set up a Clojure development environment.

Up to now, I have worked mainly with IntelliJ + LaClojure. The rest of the project is Java (so having IntelliJ is a huge plus) and it is very "project oriented" stuff. In fact, it is not even possible to think about individual scripts or so: things are meant to work in an environment, and the environment needs to be started beforehand. As a consequence, a solution which does not work well "one file at a time" is not a big issue right now.

Although the LaClojure plugin is very good, I feel more at home with Emacs. Not that I've been an assiduous Emacs user (I use, and have used, pretty much everything); it is just that Emacs has been optimized over the last "don't even know how many" years to work with Lisp. Emacs indentation is beautiful and all. Moreover, Leiningen should be a good choice even for my main project.

As a consequence, I decided to test Leiningen with smaller projects. I started a one-file project as an example. The first step was just lein new'ing; then I had to manually add :dev-dependencies [[swank-clojure "1.2.1"]] to project.clj and run:

$ lein deps
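
For reference, the resulting project.clj might look roughly like this (the project name and versions are illustrative):

(defproject example "1.0.0-SNAPSHOT"
  :description "A small one-file example project"
  :dependencies [[org.clojure/clojure "1.2.0"]
                 [org.clojure/clojure-contrib "1.2.0"]]
  :dev-dependencies [[swank-clojure "1.2.1"]])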

Then it is just a matter of lein swank, and the server is there, waiting for my Emacs to connect. I had already installed SLIME (and swank) for my Common Lisp development. I installed swank-clojure and set some configuration variables in my Emacs configuration:

 
(require 'clojure-mode)
(add-to-list 'load-path
             "~/.emacs.d/swank-clojure/src/emacs")

(setq swank-clojure-jar-path "/usr/local/Cellar/clojure/1.2.0/clojure.jar"
      swank-clojure-extra-classpaths (list
                                      "~/.emacs.d/swank-clojure/src/main/clojure"
                                      "/usr/local/Cellar/clojure-contrib/1.2.0/clojure-contrib.jar"))
And this is the rest of the SLIME-related configuration:
(eval-after-load "slime" 
  '(progn (slime-setup '(slime-repl))))


(setq load-path 
      (cons "~/.emacs.d/slime" load-path))
(cond
 ((macosx-p)
  (setq inferior-lisp-program "/usr/local/bin/sbcl --noinform"))
 ((win-p)
  (setq inferior-lisp-program "C:/Program Files/Steel Bank Common Lisp/1.0.37/sbcl.exe --noinform")))

; (slime-setup)
(require 'slime)
(require 'slime-autoloads)
(add-hook 'slime-repl-mode-hook 'split-window-vertically)

I think that I should improve the configuration, as it is still sub-optimal. However, with M-x slime I just run a new sbcl SLIME server, with the sbcl client; then I can use it to hack with Common Lisp. On the other hand, M-x slime-connect asks me which server to connect to. With the default values, if I have already run lein swank in my project, it connects to that one (but ports and servers are completely configurable).

At that point I have a familiar Lisp development environment. I can easily compile and load with C-c C-k, reload individual functions with C-M-x, and do everything that is standard in SLIME.

It seems that functions in src/projectname/core.clj can be used directly when they are loaded with C-M-k, but not with C-c C-k. However, I find it more useful to do the latter most of the time. Moreover, I believe that files outside src do not load directly into user. Besides, I prefer not to. As a consequence, I switch namespaces in SLIME:

(in-ns 'org.enrico_franchi.paip.simplegrammar.grammar)

I believe that this should make clojure.core functions inaccessible; however, I have the habit of importing the "core" libraries (at least, I did that in Common Lisp packages, as recommended). Consequently, my namespace declaration is, for example:

(ns org.enrico_franchi.paip.simplegrammar.grammar
  (:use
   clojure.core
   clojure.contrib.combinatorics))

And SLIME is in my namespace, so I can use its functions as I want (user is not modified) and I can also call clojure.core and whatever I need.


Saturday, February 19, 2011

Clojure: mdef macro

I don't believe this macro is particularly useful. In principle, this macro should not have been written at all: it fails the very first question we should ask ourselves before writing a macro, that is to say, whether the macro is necessary. If you really need a macro to define multiple special variables, you are probably getting the design wrong.

However, it is easy enough to use it to demonstrate some basic techniques. In particular, the idea is to show how we quote/unquote in nested ways. Most basic examples on macros show how ~ unquotes a symbol. In fact, ~/unquote is more general: it "reverts" to standard Clojure evaluation.

Code in an unquote is just executed regularly at macro-expansion time. A particular case: writing ~foo is equivalent to (unquote foo), where foo is just a symbol which is evaluated to its value, as is the rule in Clojure. The do form is quasiquoted; then we ~@ the list built with the loop.
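
A minimal sketch consistent with the description (the loop-based construction follows the text; the details are mine, not the original snippet):

(defmacro mdef
  "Defines several vars at once:
  (mdef x 1 y 2) expands to (do (def x 1) (def y 2))."
  [& name-value-pairs]
  `(do
     ~@(loop [pairs (partition 2 name-value-pairs)
              defs  []]
         (if (seq pairs)
           (let [[s v] (first pairs)]
             (recur (rest pairs) (conj defs `(def ~s ~v))))
           defs))))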

Inside the loop, we use ` once again: we could have built the list without "templating", just using regular Clojure operations [(list 'def ...)]. In fact, it is entirely possible to build macros without ` and ~, simply "building" the s-expressions representing the code. It is usually believed that code written this way is less clear.

Of course, the macro could have been written in a more compact way. However, I think that this way it is rather readable and clearly shows the point I intended to make.


Thursday, February 17, 2011

Clojure: on the perversion of condp

Yesterday, I discussed here some issues with removing too many parentheses. Today I deal specifically with the awful perversion that is condp. Apparently, removing parentheses from Lisp is a common pattern in modern Lisps. Arc seems to have taken the very same route, though in a somewhat more extreme way (e.g., let binds just one variable -- no parentheses needed -- while with binds more than one and essentially uses Clojure's let syntax). Anyway, back to condp... condp takes a binary predicate, an expression, and a set of clauses. Essentially, instead of yesterday's example:
(cond
  (instance? Integer x) :int
  (instance? String x) :string)
we could have written:
(condp = (class x) String :string Integer :integer)
The first thing is that, in my opinion, it may be rather easy to mistake the first pair (which is semantically the binary predicate and the expression) for the successive clauses. The second issue is that condp can have an odd number of arguments: the last one is the default. Now consider the pathological example:
(condp
  = (str x)
  (str String) (str y)
  (str Integer) (str z)
  (str k))
And suppose that for some unfathomable reason it is written like:
(condp =
  (str x) (str String)
  (str y) (str Integer)
  (str z) (str k))
Ok... it is your mistake if you just don't format things properly, but such a bug is much harder to spot than it should be. I understand that this example is extremely contrived. The problem is having to resort to basically enumerating the arguments to understand their function (that is to say, arguments in even positions play one role, arguments in odd positions play another, and if there is a final unpaired argument things are different again). Could things be worse? They are worse. I forgot to mention that condp clauses have both a binary form and a ternary form. In fact, instead of:
value expression
a clause can be:
value :>> unary-function
The first thing to notice is that enumerating arguments becomes even messier, and a special syntax has to be introduced. Counting becomes messier because you have to remember whether you have seen an even or an odd number of ternary clauses so far, which changes the meaning of being in an odd or in an even position. If we grouped things with parentheses, it would be just a minor issue. Once you are familiar with ternary clauses, you won't be surprised that
((fn [x] (condp = (keyword x) :a :>> :b :>> :<<)) :a)
evaluates to nil. I believe that this is an example of how some good features in a programming language conspire to create very counter-intuitive behaviour (I'm a big fan of the principle of least surprise). Why? If we did not know about the ternary clauses, one would think that if (keyword x) is :a or :b then we get :>>; otherwise, we get :<<. This would be straightforward. Unfortunately[1], we do have ternary clauses. Ok, but this does not happen: :>> is magic. You cannot (easily) have a condp returning :>> [which is something I think would be perfectly legitimate; in that case you have to resort to an expression which evaluates to :>>, like (keyword ">>")].

I love magic in programming languages, but the magic leaning towards abstraction: the kind of magic which makes complex things easy, not the kind which essentially introduces a special case of dubious value. So, back to the expression... :a equals :a, so that clause is chosen. But :>> is magic; consequently :>> is not returned. Rather, (:b :a) is evaluated. But remember, a keyword may be used to test inclusion; this is a nice language feature in many situations, but here it plays awfully badly with the other parts. So we are basically asking... is :b in :a? And the answer is nil. By the way, this does the right thing:
((fn [x] (condp = (keyword x) :a (keyword ">>") :b (keyword ">>") :<<)) :a)
If you are wondering whether parentheses would have fixed the issue... yes, they would.
((fn [x] (condp = (keyword x) (:a :>> :b) (:>> :<<))) :a)
is clearly and immediately distinguishable from:
((fn [x] (condp = (keyword x) (:a :>>) (:b :>>) (default :<<))) :a)

Notes

  1. It is not that I don't like the idea behind ternary clauses: I just believe that they should be in a separate conditional form. And if you just don't need a function for some of the clauses, Clojure has an extremely neat syntax: you can just write #(do % val) or (fn [_] val); I believe that the latter is clearer.


Wednesday, February 16, 2011

Is removing parentheses worth it?

Introduction

The first objection to Lisp is "too many parentheses". It is roughly the same objection that programmers make to Python (well, not "too many parentheses", but the fact that you have to follow minimum decency rules for indentation). And Clojure claims to have reduced the parentheses... let's go into this.

Back to the indentation vs. parentheses thing: the objections are similar because they share a single pattern. People who do not use the language feel that the feature is a major drawback, people using it usually like it, and people who have to use it without particular enthusiasm are usually agnostic about it. That is to say, the language feature under examination looks "worse" from the outside than from the inside.

In fact, Python programmers usually justify it by saying "you would indent it that way in any case", which is true to some degree (programmers may use a slightly different style, but essentially it would not be different enough to make anybody uncomfortable). Besides, as someone who has to read and fix code from complete novices (being an assistant for a basic programming undergrad course), most of the time the first thing I have to do is run the programs through a beautifier (and most of the time even the novices spot the error once the program is properly indented -- but that is another story, and I keep telling them about indenting properly).

Lisp programmers usually answer, objecting that in fact they do not even see the parentheses when typing (in fact, I do see them) and that the parentheses are automatically inserted by Emacs (which is true); as a consequence, they do not have to type more. I add that Emacs is very good at using parentheses to understand the program structure, and it gets the indentation right. I have to say that the major drawback of the Python approach is just that automatic indenters cannot do their job properly (not that I ever felt it was a real issue).

Use all the parentheses!

Common Lisp basically uses only () because the language implementors felt that the other kinds of brackets should be left free for the language user to abuse in defining custom languages. I understand that this may be an advantage in some situations. On the other hand, I also feel that having the other brackets in the language allows interesting possibilities.

For example, I quite like that in R6RS Scheme the possibility of liberally using [] can improve readability, at least to my eyes. For example, I often use them in case expressions and to hold let bindings. In general, when I have multiple S-expressions in the form ((LHS1 RHS1 ...) ... (LHSN RHSN ...)), I often write them as ([LHS1 RHS1 ...] ... [LHSN RHSN ...]), especially in cond/let forms, which is, as far as I can tell, a usage explicitly sanctioned in appendix C of [1].

The good part is that you are free to use square brackets or not. The bad part is that you can choose to use square brackets or not. That is to say, there is a non-normative suggestion, but you are not forced to follow it; as a consequence, code can legitimately use square brackets or avoid them. Moreover, their usage is not standard in R5RS; consequently, if the code is meant to work on implementations of both standards, using square brackets is not an option.

In this, I praise Clojure's choice. Using square brackets and braces is a good idea for a new Lisp dialect, especially for one that is not meant to win over Common Lisp programmers, but to be used in Java environments and to bring object-oriented people into a more functional world.

I really like the idea that vectors are just [foo bar ...], even though it was not particularly hard to write #(foo bar) instead. The syntax for hash maps, however, is really a plus in my opinion, and so is the one for sets. It is not something you leave Common Lisp for, but it is a legitimate choice and a rather wise one: once you decide that you are definitely not going to let users customize the language with reader macros, you had better use the symbols you have! Moreover, considering that the JVM is more optimized for array-like stuff, I believe that the wider usage of vectors also justifies their new syntax. Where a Scheme programmer uses a list of atoms, a Clojure programmer uses a vector of symbols.

I am quite agnostic about using [] to delimit binding forms. I don't think that more than two lines should be written on the subject (even though, depending on the width of your screen, these may be far more than two lines). I don't feel it is less Lisp because we write (let [...] ...) instead of (let (...) ...).

Readability counts

On the other hand, I feel the removal of parentheses is a drawback rather than an advantage. But I may be biased: for example, in languages with infix syntax I tend to use additional parentheses whenever precedence issues are slightly more complicated than trivial (which basically happens as soon as we use more than the elementary arithmetic operations).

The point is that using fewer parentheses decreases the redundancy of the code (which would be a good thing, if it weren't for the fact that redundancy means better error correction). For example, consider this code:

(let [x 1 y 2] (+ x y))

First, I do believe that having the parameters all together makes the code less clear. Syntax can be used to visually aid the programmer: in this case, while it is obvious for a machine to match the arguments pairwise, the eye is not helped at all. Besides, if I do indent it properly, I also feel some sense of irresoluteness because of the way the parameters are divided [notice that the parameters should be lined up; however, it seems that something between Blogger and ScribeFire sometimes randomly destroys proper indentation; when that happens, it is not what I meant]:

(let [x 1
      y 2]
  (+ x y))

I believe that this is far more readable (but that may be my bias, as I said):

(let ([x 1]
      [y 2])
  (+ x y))

because parentheses logically group the arguments. Notice that in C, even if I could write int foo 8;, I would still prefer int foo = 8;. Ok, that syntax would be ambiguous: int foo (8); would be both a valid function declaration and a valid variable definition, as I can place () around expressions... which reminds me of the classic beginner error mytype foo(); -- that is a function declaration, not a variable definition.

Anyway... back to Clojure. Consider the following code:
(let [x 1 y] (+ x y))

It is an obvious mistake (well, not so obvious, IMHO). And the compiler says:
java.lang.IllegalArgumentException: 
  let requires an even number of forms in binding vector 
  (NO_SOURCE_FILE:0)

Good. But what if we are type hinting freaks and write:
(let [^Integer x 1 y] (+ x y))

I know that in this case it would be better to use (int 1), but that's just an example. Ok... the compiler still complains (rightly):
java.lang.IllegalArgumentException: 
  let requires an even number of forms in binding vector 
  (NO_SOURCE_FILE:0)

Now I should check what a "form in a binding vector" is. Perhaps the definition explicitly excludes type hints, so the compiler is not lying. But what the heck... the message is extremely unclear. I believe that the lack of parentheses is rather annoying in this case. But I've seen worse.

The cond statement is not very good either...
(cond
  (instance? Integer x) :int 
  (instance? String x) :string
  :default :dontknow)

Once again, a few more parentheses could group things more tightly. But this is just a matter of taste.
A discussion about the condp form is here.

References

  1. M. Sperber, R. K. Dybvig, M. Flatt and A. van Straaten, "Revised6 Report on the Algorithmic Language Scheme, Non-Normative Appendices".



Monday, February 14, 2011

Macro or not to macro?

Today I was wondering about the code I commented on some days ago. I essentially created a function to transform an array of objects I got in input into something easier to deal with, that is to say a hash-map. Since most of those parameters needed to become properties of the object receiving them, I considered that I could just write a function to avoid repetitive code.

The function approach is useful and it solves my problem: I need a dictionary, I get a dictionary. However, what I am doing is essentially akin to binding/destructuring. I can only rely on the information on the left side of the binding, as the right side completely lacks such information. Moreover, in some situations it is nice to access the "fields" directly. I definitely decided that it was time for a macro.

Notice, I'm quite against macro abuse. I've already abused many nice programming features as soon as I learned them; consequently, I know I can resist this time. But this is definitely too nice not to use a macro. My first attempt was this:
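
A minimal sketch of what that first attempt may have looked like (the name bind-array and the details are mine, reconstructed from the description below):

(defmacro bind-array
  "Binds each symbol in ks to the corresponding element of array,
  falling back on the defaults map; extra elements are ignored."
  [array ks defaults & body]
  (let [arr (gensym "array") defs (gensym "defaults")]
    `(let [~arr ~array
           ~defs ~defaults
           ~@(mapcat (fn [k i]
                       [k `(or (nth ~arr ~i nil) (get ~defs ~(keyword k)))])
                     ks (range))]
       ~@body)))

;; (bind-array args [host port] {:port 8080} (str host ":" port))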



It essentially retains the same structure as the function, but it is a macro. It is slightly more robust because it makes clear what happens when the array is too short (the variables are nil) or when it is too long (the extra values are ignored). It stresses that what matters is just the keys parameter, while the defaults only help. In particular, it is clear that having entries in defaults whose keys are not among the keys is completely useless. Consequently, order in defaults is plainly irrelevant and plain maps (instead of array-maps) can be used.

However, when I started substituting the code in the project, I felt that the code did not look particularly nicer. I immediately started thinking about a new solution. An afterthought is that my typical use case (get a dictionary to merge with .state) is not easily covered.

In "Artificial Intelligence Programming: Case Studies in Common Lisp" some suggestions regarding macros are given at the very beginning of the book. Norvig explains that writing a macro is a four-step process:
  1. Decide if the macro is really necessary.
  2. Write down the syntax of the macro
  3. Figure out what the macro should expand to
  4. Use defmacro to implement the syntax/expansion correspondence
I felt that the syntax of my macro had not been carefully designed. The advice given in the book (apart from not writing macros at all) is to follow established Lisp conventions for macro syntax whenever possible. In this case, the idea was to follow Clojure conventions. In particular, Clojure has destructuring: using {:keys [...], :or {}, :as options} I have all the features I need.

I considered that in order to read the other macro, a programmer would need to look either at the implementation or at the documentation. On the other hand, following Clojure conventions, the meaning of the code would be apparent to everybody who knows about Clojure destructuring.
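
A sketch of the intended usage, following the destructuring convention just described (the macro name with-array-args and the field names are my inventions):

(defn configure [args]
  (with-array-args {:keys [host port], :or {port 8080}, :as options} args
    ;; options is the whole map, ready to be merged into .state
    (str host ":" port)))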



That is an example of how to use the macro. And now, the macro itself!
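
A sketch of such a macro (again, the names are mine): the positional order of the keys is taken from the :keys vector, and everything else is delegated to Clojure's ordinary map destructuring.

(defmacro with-array-args
  "Builds a map from the positional array using the :keys vector of the
  binding form for the order, then applies standard map destructuring,
  so :or defaults and :as work as usual."
  [binding-form array & body]
  (let [ks (:keys binding-form)]
    `(let [~binding-form (zipmap ~(vec (map keyword ks)) ~array)]
       ~@body)))

If the array is too short, zipmap simply omits the missing keys and the :or defaults kick in; extra elements are ignored.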



Not perfect, but it does its job.


Sunday, February 13, 2011

Programming Clojure (opinions)

It was the end of 2009 when I bought "Programming Clojure". In the same order I had Effective Java and Java Puzzlers. The idea was to look for an alternative language to use on the JVM. Scala had already been ruled out... it was either Clojure or JRuby/Jython. In those days, I was a lot into Scheme.

I have to admit that back then I found Clojure extremely unimpressive. On the other hand, when I started working with it more recently, I felt just the opposite: not the sheer sense of wonder and magic I feel when working in Common Lisp (or even Python), but still a peaceful easy feeling. Most of the problems I had were these:
  1. Clojure's library is too different from Scheme's or Common Lisp's to be used without looking at the docs
  2. The official docs do not use examples, which would make it very easy to grasp the syntax of things
It took me some time to figure out that in Clojure cond does not take "enough" parentheses. I will discuss this lack of parentheses another time. I felt that the book I had did not help me at all. Let me explain; this is the book's index:
  1. Getting Started
  2. Exploring Clojure
  3. Working with Java
  4. Unifying data with sequences
  5. Functional Programming
  6. Concurrency
  7. Macros
  8. Multimethods
  9. Clojure in the Wild

What I did not like

Chapter 2 is a rather introductory chapter, but it is all in the wrong order. It starts with basic data, and that is ok. Then basic reader macros (quote) and functions are introduced. To my understanding, conditionals should have been dealt with before functions, so that recursion could be explained along with functions: without conditionals, it is quite hard to write most recursive functions. ;)
But between functions and conditionals, bindings and namespaces are explained. Destructuring, for example, is a terrific feature. I completely missed it (my fault) because the first time I read about it, it was too advanced (it is not what you look for when just starting to write the early test programs one writes when learning a new language). On the other hand, when I later went looking for such a feature, I did not imagine it appeared so early in the book.
Then comes the discussion on namespaces. We have not even been introduced to if at this point. The discussion is too terse (in my opinion) and utterly premature. The rest of the chapter makes sense, but it is still somewhat too short.
Although Clojure is not as small as Scheme, it would still be possible to write a very short introductory chapter presenting the "core" language. This is not what happened. On the other hand, Chapter 2 should have been more detailed, in my opinion.

Then comes Chapter 3, which is about Java. I feel that this is a terrible mistake. The impression I got (and, as far as I could see from the comments on the web, I was not alone) is that, in fact, Clojure cannot be used without Java.
Let me explain: you have just been taught the core language, more or less. At this point you expect either some example program putting together the notions you have, or a chapter on some more advanced features.
You get Java. I understand that Clojure relies on Java for many basic features and that part of its strength lies in reliably and easily calling Java, but if it were just that, I would plainly use Java alone. You have to make me love the language on its own; then using Java is "why you can use it in the enterprise".

I know I have not been very informative: the point is that Clojure is more than (.calling (Java.)); Clojure is a wonderful language in its own right. I wanted to see the Clojure way. You don't have to sell me Clojure as an interop language (yet). I totally agree with Practical Clojure's choice on this point (which is placing Java interop at the end of the book). I want to hack with Clojure, not with Java (a rather safe assumption, as I bought a book about Clojure -- well, I also bought books on Java, but that is another story, isn't it?). This choice somewhat gave me the impression that Clojure couldn't stand on its own without Java (which is not true: even though the main strengths over Common Lisp would be gone, Clojure would be a nice language anyway).

What I did like

Is Programming Clojure a bad book? No, it is not. I quite liked all the code examples. To be sincere, I skipped them the first time I read the book (because some were somewhat too long to figure out without really paying attention to what is happening -- and I did not have a computer around). However, they are really useful, and perhaps the greatest strength of the book. I suppose that the examples (and especially the long Lancet example) required the Java chapter at that point. The snake example is simply lovely.

Moreover, there is plenty of information in the book. It is just difficult (for me) to find -- which is essentially my problem, not the book's.

The last chapters are very well done, in my opinion. Chapter 6 explains a rather difficult subject in a way which should be comprehensible to most people. Clojure's STM and agents are basically some of the most important features of Clojure; people could even want to switch to Clojure just for them. My only objection is about moving Clojure to the cloud (compare with the concurrency features of Erlang), but that is an entirely different story (and mine is just a general objection, not a motivated claim).

Chapter 7 is a nice introduction to macros. I think that it could be a real breakthrough for most Java programmers, who probably did not even know such things could be done. I quite like Clojure's macro system (kind of... # is so weird!). In fact, it improved my grasp of Scheme macros as well (e.g., in The Scheme Programming Language there is not a lot of emphasis on quasiquotes).

Chapter 8 is well done. I have yet to understand whether it is true that multimethods are still not much used in Clojure. As far as I can tell, they are a very useful feature (and one I used quite a lot in Common Lisp, even without using defclass). From the author's point of view, it seems that they are not widely used (and he provides some statistics). Regardless, the introduction to multimethods is very nice.

Conclusions

I liked this book. The layout is nice and the book has been thoroughly reviewed. Would I buy it again? I don't know: I also have Practical Clojure, and I'm not sure I need them both. The general opinion on Amazon is that both books are not needed. I always find different perspectives interesting; perhaps I would buy them both in any case.

To me, it seems that the book speaks more to Java programmers than to functional programmers. But this may be just my impression.

Good book, anyway.




Friday, February 11, 2011

I changed theme

Perhaps you noticed. I changed the blog theme.

The bad news is that the SyntaxHighlight javascript plugin is not loaded anymore. As a consequence, old posts do not have syntax highlighting. However, since it was getting on my nerves, I'm happy about it. If you feel that I should re-enable it, email me, leave a comment here, send me a carrier pigeon or something like that.

The new theme seems to support stuff like mail/blog/tweet buttons etc., which for some reason did not work with the other theme.

I'm not into graphically tweaking blogs (I think users plainly do not care), but I stumbled upon 5 different tech blogs with the very same theme and colors in a few days, and decided that perhaps I could change the theme.

When you feel frustrated, you are doing it the wrong way.

At the moment I'm working on a project where Clojure and Java have to interoperate tightly. I write very functionally-oriented Java code and integrate it with Clojure; Clojure code calls low-level Java code, but as the whole thing is essentially built over a Java framework, Clojure code has to be called from Java as well.

Consequently, I have essentially been using Clojure as a DSL to generate Java code. I find this kind of setup very hard, as I really have to keep an eye on both sides of the Java integration, and the whole thing is constrained into a very javish structure (in fact, it is far more similar to the way I would have coded it in Java than to the way I would have coded it in Lisp). Now, this wouldn't be an issue if it weren't that more than once I have considered writing the whole thing in Java.

The essential question is: if I have to write more code to interoperate with Java than the code I would have written in Clojure directly, is it worth it? I'm talking about getting attributes from the .state hash-map.

In fact, I feel like (:some-value @(.state this)) is a big step backwards with respect to just having a someValue field. The point is that writing Java classes in Java is more comfortable than writing them in Clojure, which makes sense. The point is that, in order to leverage Clojure's advantages, I have to build my abstractions over the Java framework. This also makes sense. I understood this looking at my Clojure code, which was too javish. Well, it was even more javish than the Java code I tend to write.

In fact, this is one of the worst pieces of code I have ever written.
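
The snippet did not survive the theme change; its shape, reconstructed from the description below, was roughly this kind of positional unpacking (field names invented):

(defn set-parameters! [this args]
  ;; hard-coded positional access into the Object[] of arguments:
  ;; adding, removing or reordering a parameter breaks every index
  (swap! (.state this) assoc
         :host     (aget args 0)
         :port     (aget args 1)
         :user     (aget args 2)
         :password (aget args 3)))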



Well, as partial justification, I have to say that I translated that crap directly from Java. That is Java in Clojure. It is essentially imperative code (which is acceptable, as after all we want to mutate state), but it totally lacks abstraction. In Java that method takes an Object[] (and really, it can't be implemented otherwise).

Changing the number or the order of the parameters I receive completely breaks everything and requires a complete rewrite of the method. That is not open for extension at all. And it lacks abstraction.

I decided to change approach: after all, what I want is a hash map. I just need to write a function taking the parameters array, the (ordered) keys and a dictionary providing default values. I also found out that sometimes I want to specify all the defaults (and in that case the keys are just redundant, as I can take them from the map, as long as it is an array-map).

This way, things are much easier. In fact the code is far from perfect: there is some redundancy (keys must be specified in two places). Part of the reason is that some objects may have keys which are both optional and without a default value (and consequently must simply not be in the dictionary).

This is essentially due to some bad design in other parts of the project and is going to be eliminated in time. Right now, the code is the following (some key names have been changed as well):
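
A sketch of the function described above (the names are illustrative, not the original ones):

(defn params->map
  "Turns the positional parameter array into a map: ks gives the
  positional order of the keys, defaults fills in the missing values."
  [args ks defaults]
  (merge defaults (zipmap ks (seq args))))

(defn params->map*
  "When all the defaults are given in an array-map, the keys are
  redundant: they can be recovered, in order, from the map itself."
  [args defaults]
  (params->map args (keys defaults) defaults))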



Essentially, as long as I manage to write Lisp in Clojure, the frustration is minimal.


Thursday, February 10, 2011

IntelliJ 10.0.2: LaClojure broken

Here on OS X, the IntelliJ 10.0.2 update seems to break the LaClojure plugin.
I am trying to investigate further.



Tuesday, February 8, 2011

rlwrap: enable Clojure readline support

In my last post I somewhat ended up complaining that the system-wide Clojure I like to use (and I explained why and when) basically lacks readline support. In Clojure, readline is mostly supported through JLine, and the brew recipe does not install JLine (which is fine).

Adding clojure-contrib support is easy: we just have to install the library (brew install clojure-contrib) and add it to the CLASSPATH. Basically I added this to my .zshenv:

export CLASSPATH=$CLASSPATH:/usr/local/Cellar/clojure-contrib/1.2.0/clojure-contrib.jar

Ok... now it is time for readline support. My first temptation was to add a brew recipe for JLine; then I would have had to modify the clojure recipe. I bet it would not have been hard. As far as I know, it is just a matter of editing the following method in the recipe (brew edit clojure):

def script
<<-EOS
#!/bin/sh
# Runs clojure.
# With no arguments, runs Clojure's REPL.

# resolve links - $0 may be a softlink
CLOJURE=$CLASSPATH:$(brew --cellar)/#{name}/#{version}/#{jar}

java -cp $CLOJURE clojure.main "$@"
EOS
  end




This is extremely easy and interesting, indeed. I am considering editing it in order to add a dependency on clojure-contrib and directly add it to the classpath. However, I did something different before opening the recipe: I read the source of the lein script.

As I pointed out, lein repl has some readline support, but it does not install JLine as a dependency. So I opened the /usr/local/bin/lein script and read it. And I discovered the wonderful tool called rlwrap, which essentially adds readline support to applications that lack it. In fact, lein can use both rlwrap and JLine, and as far as I can tell the rlwrap support seems slightly superior.

So I just installed it (brew install rlwrap) and now I can simply run

% rlwrap clj

to obtain the desired effect.

Opening the full clojure recipe, I found out that there is another possibility; the caveats section says:

  def caveats; <<-EOS.undent<br />    If you `brew install repl` then you may find this wrapper script from<br />    MacPorts useful:<br />      http://trac.macports.org/browser/trunk/dports/lang/clojure/files/clj-rlwrap.sh?format=txt<br />    EOS<br />  end


I probably did not pay enough attention to the installation messages! However, it appears that repl is able to work with rlwrap but does not provide readline support on its own:

"""If you have rlwrap(1) installed you'll automatically get the full benefits of readline: history, reverse searches, etc."""

For me rlwrap clj works correctly.


Monday, February 7, 2011

Clojure: easiest installation ever

Some time ago I wrote a post about installing Clojure on Windows. It was about the manual installation of Clojure on Windows, without using leiningen or other external tools. My idea was to write more posts about installing and setting up the environment on OS X (and perhaps Linux, though I suspect the steps would be the same), and then perhaps introduce Leiningen.

In fact, Leiningen is too simple to install to deserve an article of its own. On Windows it is a matter of downloading a bat file and running it. On Unix, substitute the bat file with a shell script and you get the idea. It is exactly the way software should be deployed: if installation instructions are not dead simple, users will follow different paths and even support becomes far more complicated.

Besides, leiningen on OS X is just a matter of:
% brew install leiningen

Why do we insist on Leiningen? After all, it is a "build tool for Clojure designed to not set your hair on fire." The next line in the description states that:
Building Clojure projects with tools designed for Java can be an exercise in frustration. With Leiningen, you describe your build with Clojure.

From the description, it seems that Leiningen is essentially a project-oriented tool. I have lots of reluctance about "project-oriented tools", especially when I'm starting to work with a new environment. I want to use the environment to write small scripts in order to become more confident and more productive, without tackling both the novelty of the environment and the problems related to a medium-sized project at the same time. Moreover, I want to write small AI games, Project Euler problems... lots of stuff which is not project-oriented.

This is the reason why I wrote the article describing how to install Clojure as a "regular interpreter" that can be used as I would use Python or SBCL. In fact, Clojure is that and even more, but I have to focus. Why Leiningen, then? Leiningen is a project-oriented tool.

The fact is that Leiningen is so simple that setting up a full project is no harder than setting up a single file. In fact, it suffices:

% lein new leiningen-examples
Created new project in: /Users/enrico/Documents/src/leiningen-examples

And I have the skeleton of the project, which declares both clojure and clojure-contrib (the latter is useful, and I always forget to install it until I need it). Lein will download the Clojure (and other jar) dependencies. Lein will run your REPL (with the relevant libraries included), bundle everything in a single jar, and so on. This is great... still, I miss a full-fledged system-wide installation.

Once again, brew comes to my help:
% brew install clojure

Some of my needs are fulfilled. Consider for example:

% cat prova.clj     
#!/usr/bin/env clj

(println (slurp "http://www.google.com"))

% chmod 755 prova.clj
% ./prova.clj       
--- google homepage

However, the REPL is not half as good as the one I get from Leiningen; e.g., readline-like support is not available by default (differently from the setup in my post on Windows). I can always add clojure-contrib with brew:

% brew install clojure-contrib

and brew suggests:

==> Caveats
For Clojure to detect the contrib libs, the following path must be in your
CLASSPATH ENV variable:

    /usr/local/Cellar/clojure-contrib/1.2.0/clojure-contrib.jar

To do this with bash, add the following to your ~/.profile file:

    export CLASSPATH=$CLASSPATH:/usr/local/Cellar/clojure-contrib/1.2.0/clojure-contrib.jar

The suggestion actually works. However, there are still issues with readline.




Saturday, February 5, 2011

Clojure: all the way to lisp

Sometimes, when learned Java developers have to face serious criticism of their favourite language (Java), a famous quote from Guy Steele is cited in response:
And you're right: we were not out to win over the Lisp programmers; we were after the C++ programmers. We managed to drag a lot of them about halfway to Lisp. Aren't you happy?
By the way, for those who are into this kind of stuff, here is the original discussion (2003), entitled "bindings and assignments"; the quote was given as an answer to McKay's claim that Java's success was due more to Sun's marketing than to Java's actual merits. There are a couple of things that immediately strike me:
  1. Steele is not even trying to defend Java. He is not saying that Java is particularly good; he limits himself to saying that, taking Lisp as the ultimate language, Java was meant to bring the blub programmers from C++ to something slightly more lispish. He is almost apologising: "we had finite resources, facing an apparently finite window of opportunity". In fact, I like his attitude. He is very honest in the discussion, even while facing some rather strong critiques; after all, he had just been told that Java's success was due not to his merits as a language designer but to the right marketing choices of his company.
  2. Java does not look akin to Lisp at all. I mean, if you look at Ruby, for example, the influence Lisp had is very easy to see (much more evident than in Python, say). Java looks very unlispish indeed. My opinion is that the least lispish thing about Java is its syntax. It is not even the static typing, or the lack of closures and functional gimmicks, which can indeed be implemented in an object oriented way. The point is that those implementations take a huge amount of code and syntactic noise to be used. Lisp was about hiding the noise behind a macro; Java brings in the noise and multiplies it tenfold.

Now, where are we going? We are going in the clojure direction. After lots of years of Java, it is apparent that the blub programmers dragged halfway to Lisp were infinitely distant from good programming, and consequently still are. They blubbed in C++, when C++ was cool. They blub in Java, now that Java is cool. Even if it is kind of a heresy, I bet they will blub in Python if Python becomes the next big thing in the enterprise (or in Ruby, or whatever). Please note that Graham was implying that languages are partially ordered (with Lisp on top), while I do not share this opinion. Moreover, there is a whole new generation of people who believe that Java is really cool and perhaps know nothing but Java. I have even lost the url of a very informative post by an American professor pointing out how dissatisfied the industry is with the very low average skills of graduates.

Those programmers use Java very badly. Java is extremely good at hiding its good features, indeed (it hides them behind a huge wall of stolid syntax). And in fact Java is not as bad as most people think: this is evident if you stumble into code from people like Bob Martin or Fowler. But this is not even the point, nor is this the place to recount the continuous struggle Java developers face against Java's purportedly greatest features (checked exceptions and static typing). There are huge libraries and frameworks which basically just try to make static typing less painful.

So what now? We have clojure. If Java dragged those programmers halfway to Lisp, Clojure should drag them all the way in. After all, Clojure is Lisp. Some might prefer Common Lisp (I do) or Scheme, but Clojure is Lisp nonetheless. They can be dragged all the way to lisp. They have their libraries, their tools. And they have Lisp. They can call Java in both directions, program in object oriented ways, program in functional ways, mix the two, and hack with a wonderful new concurrency model (I hate Java threads).

The point is that those features will simply be beyond the grasp of most people. It was pointless to bring people halfway to something they would never reach. Not that they ever could.


Wednesday, February 2, 2011

Java epic fail!

(def *boom* 2.2250738585072012e-308)
(println *boom*)

This is not clojure's fault, btw. More on the issue can be found in this article (thanks dvd, thanks C8E, thanks geekml it!). And I thought I could safely affirm that the above program was going to terminate...

The example is in clojure because it is easier to write and test. However, the problem is an algorithm buried deep in the JVM dealing with the conversion of strings to doubles: the algorithm fails to terminate on the above number. It happens both at compile time (if you place the number as a literal) and at runtime (if you feed the string to Double/parseDouble). This is common to all languages on the JVM.
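The runtime variant is a one-liner; on an unpatched JVM the following call simply never returns (do not try it on an affected VM):

;; Hangs forever on JVMs that predate the fix for this bug:
(Double/parseDouble "2.2250738585072012e-308")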

As variants of the algorithm are quite widespread, other languages may suffer from the problem as well (perhaps with different numbers). An example is PHP, which had a similar bug some time ago. However, "perfumery languages"[0] seem to be immune.

By now, the JVM bug should have been fixed. Does this make Java a perfumery language as well[0]?



Notes

[0] This is a long running joke on a mailing list; please, do not take it too seriously.


Tuesday, February 1, 2011

A rant on FP in Python compared to Clojure and Lisp

In a previous post, we started investigating closures in imperative settings. We provided a short Python example where things weren't really as we expected: the problem is that in Python you cannot rebind a variable in the enclosing scope. The problem is relatively minor.
  1. It is a problem we usually do not have in functional settings, as rebinding variables (where possible) is not a common practice. For example, in Clojure we just can't rebind a variable in a let form (a Clojure counterpart is sketched after this list); in Lisp, on the other hand, it is a legitimate action:
    (defun make-counter (val)
      (let ((counter val))
        (lambda ()
          (let ((old-counter counter))
            (setq counter (+ counter 1))
            old-counter))))
    
    CL-USER% (defparameter c (make-counter 0))
    C
    CL-USER% (funcall c)
    0
    CL-USER% (funcall c)
    1
    
    

  2. In Python 3 the nonlocal statement solved the problem. The following program runs just fine.
    import tkinter as tk
    
    class Window(tk.Frame):
        def __init__(self, root):
            tk.Frame.__init__(self, root)
            value = 0
            def inc():
                nonlocal value
                value += 1
                self.label.config(text=str(value))
            self.label = tk.Label(text=str(value))
            self.label.pack()
            self.button = tk.Button(text="Increment", command=inc)
            self.button.pack()
    
    root = tk.Tk()
    w = Window(root)
    w.pack()
    tk.mainloop()
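As promised above, here is a sketch of the Clojure counterpart to the Common Lisp counter; since a let binding cannot be rebound, the mutable slot is made explicit with an atom (this make-counter is my reconstruction, not code from the earlier post):

;; Explicit mutable state: an atom plays the role of the rebound variable.
(defn make-counter [val]
  (let [counter (atom val)]
    (fn []
      (let [old @counter]
        (swap! counter inc)  ; mutation is explicit and localized
        old))))

(def c (make-counter 0))
(c)  ; => 0
(c)  ; => 1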
    
However, this is just the tip of the iceberg. Python is not a functional language, and does not even try to be. Python is built around the idea of mutable state, while, e.g., Clojure is built with the idea that state is immutable and areas where state is expected to change have to be explicitly marked (dosync).

Python fully embraces object oriented facilities. Most "functional like" features Python has (e.g., first-class functions) are essentially byproducts of being a fully object oriented language. If everything is an object, then functions are objects too. It would take a special rule to forbid first-class functions, and since they are also very nice to use, why forbid them explicitly?

We have lambda (which is severely limited... compare with Lisp's lambda or Clojure's fn: both constructs have the full power of their respective "def(n|un)" form; in fact we may say that the def* form is just syntactic sugar over lambda/fn, as sketched below). We have the functools module, which is nice, but of course if the language is dynamic enough we can do that kind of thing ourselves... it is a library, not a language feature (as in Haskell).
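To make the "syntactic sugar" claim concrete, here is a minimal Clojure sketch (the name add is just an illustration):

;; These two definitions are essentially equivalent:
(defn add [a b] (+ a b))
(def add (fn [a b] (+ a b)))

Python's def, by contrast, is a statement, and its lambda accepts only a single expression as its body, which is the limitation referred to above.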

There are two more interesting functional facilities: list comprehensions and lazy evaluation. Both apparently were added to Python under the influence of Haskell.

List comprehensions are a very powerful construct, and Python has expanded them further with set/dict comprehensions. This is a somewhat unique feature: even Clojure (which enriched Lisp with syntactical dictionaries and sets) does not have syntactical dict/set comprehensions.

To be honest, I did not really feel something was missing when Python lacked set/dict comprehensions. I absolutely did not miss set comprehensions: since Python has lazy evaluation, I did not feel that something like set(x for x in seq) was a major drawback (cf. {x for x in seq} is probably clearer, but who cares?). So in clojure I have to write:

(set (for [x (range -5 5)] (* x x)))
#{0 1 4 9 16 25}

and if I feel that this is unbearable, I can just write:

(defmacro for-set [binds & stmts]
  `(set (for ~binds ~@stmts)))
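Assuming the macro above, usage looks like this:

(for-set [x (range -5 5)] (* x x))
#{0 1 4 9 16 25}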

As both Python and Clojure have lazy stream evaluation, in both cases there is no overhead (that is to say, it is not the case that a full list is created and then passed to set, which makes the set). Yes... perhaps in Lisp some reader macros could add the functionality "fully"... not sure it is worth it, though.

Now it is time to discuss lazy evaluation. There are different kinds of lazy evaluation. The first is the Haskell way: everything is lazy by default, unless declared strict. This strategy poses serious constraints on the language; in particular, it makes sense only if the language has no side effects. For example, we would not want the following to print nothing because of lazy evaluation:
(defn foo [x] (println x) x)
(def bar (foo 20))
That would be awkward, to say the least. On the other hand, Haskell is designed with these ideas in mind: output (and everything smelling of side effects) is forbidden to mingle with the rest of the code. Consequently, programs do not have to worry about when things are evaluated.

This strategy would be plainly impossible in imperative languages such as Python. Moreover, in distributed settings default lazy evaluation is a serious problem (well known in the Haskell community... consider Wadler's interview in Tate's Seven Languages in Seven Weeks). In Lisp it is very simple to add lazy evaluation on demand (there is an example in Land of Lisp), and in Clojure we can use delay to great effect (I like batteries included). Here Python lacks the feature: we can use some tricks, but the syntax becomes quite awkward (and nobody would really like to do that).
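A minimal sketch of on-demand laziness with delay (my example, not from Land of Lisp): the body runs only when the value is first forced, and the result is cached afterwards.

(def lazy-val (delay (println "computing...") 42))
@lazy-val  ; prints "computing...", returns 42
@lazy-val  ; cached: returns 42 without printing again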

On the other hand, Python introduced the general public to iterator manipulation. Python treats finite and infinite sequences uniformly (which was natural in Haskell and is now natural in Clojure) using a bunch of functions in itertools. Essentially, these functions were created to work naturally with lazy sequences and are a great asset in the hands of a skilled Python programmer. Clojure embraced this philosophy (and basically has everything Python has in itertools, and perhaps more). We should also remember that here we are in the familiar territory of map/reduce... anyway.
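For example, the itertools style translates directly to Clojure's lazy sequence functions, which handle infinite sequences just as happily as finite ones (the snippets are mine, for illustration):

(take 5 (map #(* % %) (iterate inc 0)))
(0 1 4 9 16)
(take 3 (filter even? (range)))
(0 2 4)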

The extremely clever thing Python did was introducing the yield statement. With a very imperative syntax, some of the power of continuations has been brought into the language; actually, we have a limited implementation of coroutines (coroutines are trivially implemented if you have continuations). As closures in Python are limited, so are coroutines. On the other hand, programmers who could never have profitably used coroutines and closures can now use their tamed (crippled?) cousins. I generally consider this a good thing... I am not a big fan of full continuations, indeed.

And what about clojure? Well, clojure has no "yield" statement. I think it is only a minor issue. Yield is a very functional construct trying (and succeeding) to look imperative behind a fake beard and nose. Moreover, most uses of yield are more about building plain generators than about creating "tamed" coroutines. Using a more functional style, along with lazy sequences, it is easy to provide nearly as intuitive variants in clojure, as sketched below.
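Here is a minimal sketch of such a variant (squares is a hypothetical example of mine): lazy-seq suspends the rest of the computation exactly where yield would, resuming only when more items are consumed.

;; A yield-free "generator": each cons cell is produced on demand.
(defn squares
  ([] (squares 0))
  ([n] (lazy-seq (cons (* n n) (squares (inc n))))))

(take 5 (squares))
(0 1 4 9 16)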