Friday, March 31, 2006

Some good news too... (YASP)

Those who do not like seeing high CPU usage stats when doing things like networking should consider buying a MacIntel. However, I think this is due more to the tremendous raw power of these machines than to changes in how MacOS X deals with networking.
I've always said that to me it is OK if CPU monitoring tools report heavy work even when the system is only downloading something from the net. Although the CPU readings are high, the system is as fast as ever, so I wasn't concerned.
Well, on a MacIntel you don't even see the high CPU usage stats.

Thursday, March 30, 2006

Killer: test 1 (passed)

This is the first test in my attempt to understand what makes my MacIntel's GUI terribly slow when certain software runs.

I wrote a small C++ program for that purpose. It performs some operations on matrices, but in a deliberately inefficient way: for every entry of the product matrix it creates two vector objects, allocates memory for them, multiplies them, stores the result, and destroys them.

Of course this is not how you would implement a matrix library, but we are trying to understand which kinds of software bugs are especially problematic on the MacIntel and not on the PPC (in my previous post I noted that running some buggy software on the Powerbook was not an issue, but made the MacIntel almost unusable).

This program does a lot of computation (it multiplies two fairly large matrices of "points" -- structures of three doubles, multiplied via the cross product). It does a lot of allocation and deallocation of small areas of memory, and I added I/O by making it log the result of every point multiplication (that is a lot of I/O: multiplying two 500x500 matrices takes 500*500*500 point multiplications).

The verdict is positive. It is running right now (it hasn't finished yet), but the computer is perfectly usable.

Now I'm going to write a program that allocates and deallocates large areas of memory, and then one that leaks a lot of memory. I do think memory leaks are the cause of the "GUI slowness". However, I still don't understand why the Powerbook (which has a third of the RAM my iMac has) had no trouble.

Here is the source of this simple program. Keep in mind that it was designed to be inefficient (don't use it, it sucks).

#include <cstdlib>
#include <iostream>
#include <ostream>
#include <fstream>

std::ofstream LOG;

struct Point{
        double x;
        double y;
        double z;
        Point() :
                x(0.0),
                y(0.0),
                z(0.0) {}
        Point(double l_x, double l_y, double l_z) :
                x(l_x),
                y(l_y),
                z(l_z) {}
        static Point random(){
                Point p(
                    rand()/(double)RAND_MAX,
                    rand()/(double)RAND_MAX,
                    rand()/(double)RAND_MAX
                );
                return p;
        }
};

std::ostream& operator<<(std::ostream& out, const Point& p){
    out << "(" << p.x << "," << p.y << "," << p.z << ")" ;
    return out;
}

Point operator*(const Point& op1, const Point& op2){
        Point res(
            op1.y * op2.z - op1.z * op2.y,
            op1.z * op2.x - op1.x * op2.z,
            op1.x * op2.y - op1.y * op2.x
        );
        LOG << res << std::endl;
        return res;
}

Point operator+(const Point& op1, const Point& op2){
        Point res(
            op1.x + op2.x,
            op1.y + op2.y,
            op1.z + op2.z
        );
        return res;
}

template<typename T, size_t size>
class Vector{
public:
        Vector(){
                mem_ = new T[size];
        }
        
        Vector(const Vector<T, size>& o){
            mem_ = new T[size];
            for(size_t i=0; i<size; ++i){
                mem_[i]=o.mem_[i];
            }
        }
        
        ~Vector(){
                delete [] mem_;
        }
        
        T& operator[](size_t i){
                return mem_[i];
        }
        
        const T& operator[](size_t i) const{
                return mem_[i];
        }
private:
        T* mem_;
    Vector<T, size>& operator=(const Vector<T, size>&);
};

template<typename T, size_t size>
T operator*(const Vector<T, size>& op1, const Vector<T, size>& op2){
        T acc; // relies on T's default constructor producing a zero element
        for(size_t i=0; i<size; ++i){
                acc = acc + op1[i]*op2[i];
        }
        return acc;
}

template<typename T, size_t size>
class Matrix{
public:
        Matrix(){
                mem_ = new T[size*size];
                for(size_t i=0; i<size; ++i){
                        for(size_t j=0; j<size; ++j){
                                get(i,j)=T::random();
                        }
                }
        }
        Matrix(const Matrix<T, size>& o){
                mem_ = new T[size*size];
                for(size_t i=0; i<size*size; ++i){
                    mem_[i] = o.mem_[i];
                }
        }
        ~Matrix(){
                delete [] mem_;
        }
        Matrix<T, size>& operator=(const Matrix<T, size>& o){
                for(size_t i=0; i<size*size; ++i){
                        mem_[i] = o.mem_[i];
                }
                return (*this);
        }
        T& get(size_t i, size_t j){
                return mem_[j*size + i];
        }
        
        const T& get(size_t i, size_t j) const{
                return mem_[j*size + i];
        }
        
        Vector<T, size> col(size_t j) const{
                Vector<T, size> v;
                for(size_t i=0; i<size; ++i){
                        v[i] = get(i,j);
                }
                return v;
        }
        
        Vector<T, size> row(size_t i) const{
                Vector<T, size> v;
                for(size_t j=0; j<size; ++j){
                        v[j] = get(i,j);
                }
                return v;
        }
private:
        T* mem_;
};

template<typename T, size_t size>
Matrix<T, size> operator*(const Matrix<T, size>& op1,
                          const Matrix<T, size>& op2)
{
        Matrix<T, size> res;
        for(size_t i=0; i<size; ++i){
                for(size_t j=0; j<size; ++j){
                        res.get(i,j) = op1.row(i) * op2.col(j);
                }
        }
        return res;
}
int main(){
        const size_t SIZE = 500;
        LOG.open("log.txt");
        Matrix<Point, SIZE> m1;
        Matrix<Point, SIZE> m2;
        Matrix<Point, SIZE> res;
        res = m1*m2;
        LOG.close();
}

Multitasking on MacIntel fails in presence of bugs

Right now (as you may have learnt from some of my previous posts) I'm working on numerical libraries. I'm developing a (hopefully) efficient version of the AKS primality-testing algorithm.

I'm not here to describe AKS, nor multitasking, nor any particular algorithm. It is sufficient to say that these are programs that make extensive use of the CPU.

On my old Powerbook G4, when I run a CPU-expensive task the GUI keeps responding. Of course the task uses a lot of CPU: opening a new application takes more time, and so do other tasks. However, applications respond as usual. The computer is not "hung".

This is one of the things I love most about MacOS: the system remains usable even under heavy stress. This is no longer true. My brand new iMac Core Duo simply becomes unusable. Applications do not respond (their reaction time can be counted in tens of seconds). This is plainly unacceptable. Of course this does not happen normally: it happens only with some (buggy) software. However, the very same program does not create problems on my Powerbook (the machine slows down but remains usable).

This makes me think there is something in the scheduler that does not work as expected and is fooled by certain bugs in the software.

I want to make clear that well-written software doesn't have this problem. I can run heavy compilations with both cores at 100% CPU and the system is as responsive as usual. It only gets stuck when something goes wrong.

The software that hung the Mac had bugs (it's pre-alpha and I'm still working on it). But while the old MacOS X on PPC responded to me relatively quickly (allowing me, for example, to kill the buggy software), the new MacOS X on Intel seems to prefer letting the buggy software finish. The point is that those bugs should not slow the system down that much (and in fact the Powerbook wasn't slowed): they were just a couple of memory leaks.

Now I'm trying to develop a program that behaves like the one I'm writing (that is, uses lots of memory, lots of computation, and lots of logging -> I/O), to see which of the three stresses the system most. Apparently it is memory.

edit: I just wrote a heavy computation/logging program. It causes no trouble at all. You can read what it does here.
Anyway, sorry for being alarmist. I'm still trying to figure out what the problem is.

Tuesday, March 28, 2006

Look, I'm Universal!

A PPC Application

ppc-app-2006-03-28-05-02.png

A Universal App

universal-app-2006-03-28-05-02.png

I have not yet found an Intel-only Mac application.

Now let's look at some Unix executables. The first one is a script.

s-unix-2006-03-28-05-02.png

This one is universal

universal-unix-2006-03-28-05-02.png

and this is Intel Only

intel-unix-2006-03-28-05-02.png

Of course you can build universal unix executables, as you've seen.

Sunday, March 26, 2006

gmp 4.2 on MacIntel

Good news: with gmp 4.2, assembly optimization works. That means you can get decent performance. For values of "decent" that are *below* those of an old Prescott and just a bit better than those of a plain Pentium M at the same clock.

The problem is that (for example) you can't run make check. This makes me think something is broken, but I can't understand what. I've also been told that "MacIntels" are not supported by gmp 4.2. So think twice before buying a MacIntel if you need to work with gmp.

And you can't use C++ either. For some reason there is an error in the generation of an assembly optimization. The answer from the developers has been "gmp-4.2 is not supported on MacIntels" (I can't really consider this a solution to the problem, but unfortunately I'm not skilled enough to fix things myself).

In fact the second core is not used at all, so this result is quite predictable. Moreover, I used shared libraries instead of static ones (for the very good reason that the guys at Apple don't ship gcc with the static versions of libgcc and crt0.o, so there is no easy way to do it).

In the end, I assume MacOS X on Intel is young and probably not as optimized as, say, FreeBSD. These are the results:

iMac 2 GHz 2 GB

***** GMPbench version 0.1 ***** 
Using default CFLAGS = "-O3 -fomit-frame-pointer -I../gmp-4.2" 
Using default CC = "gcc" 
Using default LIBS = "-lgmp -L../gmp-4.2/.libs" 
Using compilation command: gcc -O3 -fomit-frame-pointer -I../gmp-4.2 foo.c -o foo -lgmp -L../gmp-4.2/.libs 
You may want to override CC, CFLAGS, and LIBS 
Using gmp version: 4.2 
Compiling benchmarks 
Running benchmarks 
Category base 
Program multiply 
multiply 128 128 
GMPbench.base.multiply.128,128 result: 9530908 
multiply 512 512 
GMPbench.base.multiply.512,512 result: 1150785 
multiply 8192 8192 
GMPbench.base.multiply.8192,8192 result: 12500 
multiply 131072 131072 
GMPbench.base.multiply.131072,131072 result: 228 
multiply 2097152 2097152 
GMPbench.base.multiply.2097152,2097152 result: 9.62 
GMPbench.base.multiply result: 12463 
Program divide 
divide 8192 32 
GMPbench.base.divide.8192,32 result: 306090 
divide 8192 64 
GMPbench.base.divide.8192,64 result: 104119 
divide 8192 128 
GMPbench.base.divide.8192,128 result: 66800 
divide 8192 4096 
GMPbench.base.divide.8192,4096 result: 20668 
divide 8192 8064 
GMPbench.base.divide.8192,8064 result: 268859 
divide 131072 8192 
GMPbench.base.divide.131072,8192 result: 435 
divide 131072 65536 
GMPbench.base.divide.131072,65536 result: 242 
divide 8388608 4194304 
GMPbench.base.divide.8388608,4194304 result: 0.796 
GMPbench.base.divide result: 5617.3 
GMPbench.base result: 8367.1 
Category app 
Program rsa 
rsa 512 
GMPbench.app.rsa.512 result: 2755 
rsa 1024 
GMPbench.app.rsa.1024 result: 478 
rsa 2048 
GMPbench.app.rsa.2048 result: 72.6 
GMPbench.app.rsa result: 457.26 
GMPbench.app result: 457.26 
GMPbench result: 1956 

gmp on MacIntel and on G4

My iMac should be "twice as fast" as the iMac G5. OK, that's good.
This is true if you use Apple software (I suppose they ran the tests correctly) and if you benchmark with the SPEC suites. That's good, and the iMac is fast. I have never used a faster Mac (though I never used a G5 either). Applications open with no waiting, and so on. I can even play 3D games through Rosetta.

However, right now I have to use gmp. And gmp says that my brand new iMac is not even twice as fast as my "old" PB G4 1.5 GHz -- and the iMac is a desktop, competing with a laptop. I know, gmp on MacIntel uses no assembly code. In fact the iMac comes out 1.5x slower.

But I don't care how fast my CPU is if the software I have to run is not optimized for it. Today gmp 4.2 was released. I'm going to try it and see whether the assembly issue is fixed.

iMac 2 GHz 2 GB
***** GMPbench version 0.1 ***** 
Using default CFLAGS = "-O3 -fomit-frame-pointer -I/opt/local/include" 
Using default CC = "gcc" 
Using default LIBS = "-lgmp -L/opt/local/lib" 
Using compilation command: gcc -O3 -fomit-frame-pointer -I/opt/local/include foo.c -o foo -lgmp -L/opt/local/lib 
You may want to override CC, CFLAGS, and LIBS 
Using gmp version: 4.1.4 
Compiling benchmarks 
Running benchmarks 
Category base 
Program multiply 
multiply 128 128 
GMPbench.base.multiply.128,128 result: 3388942 
multiply 512 512 
GMPbench.base.multiply.512,512 result: 283065 
multiply 8192 8192 
GMPbench.base.multiply.8192,8192 result: 2753 
multiply 131072 131072 
GMPbench.base.multiply.131072,131072 result: 43.6 
multiply 2097152 2097152 
GMPbench.base.multiply.2097152,2097152 result: 1.73 
GMPbench.base.multiply result: 2883.1 
Program divide 
divide 8192 32 
GMPbench.base.divide.8192,32 result: 116928 
divide 8192 64 
GMPbench.base.divide.8192,64 result: 72789 
divide 8192 128 
GMPbench.base.divide.8192,128 result: 36886 
divide 8192 4096 
GMPbench.base.divide.8192,4096 result: 5076 
divide 8192 8064 
GMPbench.base.divide.8192,8064 result: 66084 
divide 131072 8192 
GMPbench.base.divide.131072,8192 result: 107 
divide 131072 65536 
GMPbench.base.divide.131072,65536 result: 54.0 
divide 8388608 4194304 
GMPbench.base.divide.8388608,4194304 result: 0.159 
GMPbench.base.divide result: 1770.9 
GMPbench.base result: 2259.6 
Category app 
Program rsa 
rsa 512 
GMPbench.app.rsa.512 result: 870 
rsa 1024 
GMPbench.app.rsa.1024 result: 129 
rsa 2048 
GMPbench.app.rsa.2048 result: 18.0 
GMPbench.app.rsa result: 126.41 
GMPbench.app result: 126.41 
GMPbench result: 534.46 

PowerBook G4 1.5 GHz 512 MB
... broken post ...

MacIntel not advised for scientific researchers.

I have had problems with almost all the scientific libraries I tried.
gmp builds only if you use --host=none-apple-darwin, which disables assembly optimizations (and you probably wouldn't want that). That option was correctly set for plain gmp, but not for gmp-cxx-wrappers (Gregory Wrigh has no access to a MacIntel; by now he should have fixed it).

cln is plainly broken, which means you can't use GiNaC either. I think someone should buy cln's developers an Intel Mac mini.

If you want to use OCaml, you have to use a special CVS version, since the stable release does not yet compile. The solution is here.

In fact I had lots of problems. The guys at DarwinPorts have been really nice (using DP is just the quickest way to install this kind of software), but there are *lots* of troubles. The same installation on my Powerbook G4 went just fine (it took a long time to finish, but it did finish).

The Portfile for ntl on MacIntel was broken too (it's fixed now).

YASP (Yet Another Stupid Post)

I have to admit that this
double-core-2006-03-26-10-21.png
makes me quite proud... I've never had a dual-processor machine (well, this one is dual core, but that does not change the point).
I'm looking forward to having some time to install Linux. Not because of Linux itself, but because of the twin penguins that should appear at boot. Well... they would have appeared once. Nowadays distros tend to favour another boot style...

Saturday, March 25, 2006

Have I ever said...

how much I do love C?
Picture1-2006-03-25-11-19.png

ReactOS

For those who don't know, ReactOS is a free implementation of Windows NT. Not yet complete, it is nonetheless a bootable, working system that can even run some simple "native" Windows applications.

One of the advantages of having a MacIntel is that Q (that is, qemu + a GUI) can run x86 systems, ReactOS among them, at decent speed. I downloaded the preinstalled qemu ISO and launched it. Everything worked, and it automatically configured itself to use qemu's network card.

Well... here is a screenshot.

QScreenshot1.miniatura-2006-03-25-11-11.png

And here is another one... sigh.

QScreenshot2.miniatura-2006-03-25-11-11.png

The official ReactOS site is this one.

Blue screen aside, it is a really interesting project for anyone who wants to get familiar with Windows internals.

Office? Ajax

Linspire beats Google in the race to a web-based Office.
I tried it: it works, but it is also very limited. The import of not-too-complex Word documents is well done; the editing capabilities are limited.
Moreover, the product's license is not clear at the moment. The answer to "How do I download it?" is an explanation of how to use it online [which makes sense, since it is designed to be used online, but I imagine that if it were open they would have to release the sources somewhere].
The potential is there, even if with the latest version of Firefox compiled for Intel on a MacIntel there seem to be some bugs.
One last note about the site: they seem to have made the same mistake as every beginner who uses Ajax, namely abusing it. So beware of bookmarks that do not behave as they should.
Homepage link. News item on Linux Filter.

Thursday, March 23, 2006

Install gmp with c++ with Dynamic Libraries.

I have to admit I'm a libtool noob. However, I know the gcc and g++ 4.0.1 shipped with MacOS Tiger can build C++ dynamic libraries: this is done with the -dynamiclib flag.
However, gmp configured with --enable-cxx and without disabling dynamic libraries fails to build: it passes g++ the -shared option (which works on Linux). If you compile plain gmp with no C++ support, the problem does not exist. I'm afraid it's a bug in gmp's libtool (I suppose you could fix it with autoreconf, but I did not try).
Unfortunately I need C++ support, since I have to work with ppl. One possible fix is to change every occurrence of "-shared" in the configure file to "-dynamiclib". But it appears not to work.
The same hack worked with readline (I'm afraid that's because readline is written in C, and support for C dynamic libraries seems better on MacOS X: I'd say "older", so it's more likely the developers made it work).
However, DarwinPorts manages to build it the correct way. You only have to specify somewhere that gmp-cxx-wrappers should build with --host=none-apple-darwin.

Darwin Ports vs. Fink

Well... on my PB I recently installed Fink. It ran smoothly. Fine. Here on my MacIntel I installed DarwinPorts, since Fink is alpha on MacIntel.

DP is amazing: it works with no hassle at all. Its main disadvantage is that it compiles everything (which is not really a problem, since I have a 2 GHz dual-core processor). Well, I have always been a DP fan (being a BSD fan)... and I have to say they have improved.

Of course (as soon as it becomes more mature) some may prefer to use Fink. Not having to deal with compilation issues (which are really rare on DP, indeed) can be an advantage (or at least a speed-up). Moreover, DP lacks a decent UI.

For a command-line geek like me this is not an issue (I tended to use Fink that way too). But if you are used to sparkling icons, it may not be for you.

MacIntel

Until now I almost never spoke about Intel Macs here. My feelings were twofold. The long time Mac user in me told me that using Intel processors was no good. The unix geek told me that many benefits could come.

Of course we know that IBM was not going to invest on G5 anymore. Apple had to switch. And I bought my first MacIntel. Good.

Fast is fast. The perceived speed compared with my Powerbook G4 is astonishing. Applications load instantaneously, and compilation is much faster (unfortunately I can't compare this against a G5 iMac, so the comparison is of limited value).

Rosetta is fast enough to run Neverwinter Nights smoothly. Of course the game is quite old, but it's a full 3D game; in fact I didn't expect this to work. It also means NWN wasn't G4-optimized...

Still, I miss some utilities. I'm lost without WindowShade X. Apart from that, this is a wonderful machine. On with more tests.

Friday, March 17, 2006

About web standards...

The Acid Test 2
Acid2 is a test created by the WaSP (Web Standards Project) that can be used to show whether a browser's implementation of CSS and XHTML adheres to the standards. In fact the tricky part here is CSS.
The Acid2 test is expected to render correctly on any browser that follows the W3C HTML and CSS specifications. Of course a browser that does not correctly support all of the features used in Acid2 will not render the page correctly.
Safari
acid_test_safari-2006-03-17-12-55.png
This was simply the first browser to render the Acid2 test correctly (the Safari updated for MacOS X 10.4.3). It's a bit surprising that while the test was published in April 2005 and Safari passed it in October 2005 [Konqueror passed it a month later], the "considered-most-standard" Mozilla does not. Not yet.
In fact Dave Hyatt's private builds supported Acid2 just 14 days after its publication.
Gecko: Mozilla, Firefox
acid_test_camino-2006-03-17-12-55.png
We can just say that this browser family appears not to support Acid2, and it's not clear whether they are going to support it.
Opera
Opera 9 is the first Windows browser to pass the test (10 March 2006). However, we are talking about a prerelease, not a "stable" version.

Thursday, March 16, 2006

SuSE 10

I know this ain't no piece of news... still, I installed SuSE Linux today and found it really well done and "easy" from a purely visual point of view (which means the user is less likely to panic).

Really well done. Still I'm in love with Debian.

Tuesday, March 14, 2006

Developing on Mac, Win, Linux (Database - pt. 2)

Databases

Databases on Linux range from trivial to very easy. Of course I'm talking about Postgres (my DB of choice), SQLite and, ahem, MySQL. I have no information on Oracle; I should try it one of these days. If you have to develop with MS SQL (one of the best MS products out there), of course you may want to use Windows.

MySQL

MySQL can be installed really easily on Linux/FreeBSD and friends: it's a matter of apt-getting/rpm-ing. You can also install all the bindings for different languages that way (beware: most are GPL, so if you want to develop closed-source apps, take this into account).

On Windows there is a beautifully crafted package. Double click and go.

On MacOS there is a .pkg that installs everything, even a PreferencePane to start and stop MySQL. There is also a StartupItem. Well, it works quite well (apart from suboptimal performance).

SQLite

On Linux same old story. Apt-get. Language bindings as above.

On MacOS X it's already installed; it's also the basis for CoreData. Ruby bindings are easily installed. I haven't tried the Python ones (it should just be a matter of compiling a Python package), nor Perl. I don't use PHP at all (well, not if I can avoid it).

I have not tried it directly on Windows. However, I see there are some precompiled binaries that should work just fine.

PostgreSQL

This is my favourite database. Installing it on Linux is as easy as any other DB. On MacOS I found no "pretty installer" a la MySQL: you have to configure a few permissions and users. Not difficult, but more difficult than on GNU/Linux. Again, what I said for Linux applies to FreeBSD too.

Developing on Mac, Win, Linux (Editors - pt. 1)

I kind of have to say something about myself, since it helps in understanding my position.

I chose MacOS X as my main development platform. Of course I happen to write software meant to run on GNU/Linux (or other *nix flavours) or even on Windows. Usually when this happens I'm using a cross-platform technology. I never wrote anything longer than 100 lines with the WinAPI, for example.

Before that I used GNU/Linux, FreeBSD and, even earlier, other unices (OSF/1) and MacOS "Classic". This is my background. I never got much exposure to Windows. So my perspective is unusual: most people come from Windows to other platforms; for me it is quite the contrary (except that I don't "go" to Windows, I just happen to use it from time to time).

As for programming environments, I've done (and do) lots of things. I like high-level languages such as Python or Ruby. I like Objective-C and Cocoa (of course these won't be in our comparison...). But I also do a lot of low-level C coding with the POSIX API (well, I did... right now I prefer to do it in Python), I develop some software in C++, and I'm going to graduate with a project in Prolog. I quite like XML and CSS, and I also have to use some Java and some numerical computing (both Matlab and C -- no, I don't really master Fortran).

So my skills are not particularly vertical (like someone who does all his work in XML processing, or in numerical computing). They range over many different fields (which does not mean I'm a "guru" in each of them; on the contrary, I've got much to learn).

Editors

Basically, those who work in the Linux community used to split roughly equally between Emacs and vim. There used to be some other editors (nedit), but that was the story. Recently I've seen a lot of interest in IDEs (KDevelop, Anjuta), and some "lesser" editors have reached the status of full programmer's editors: Scite and Kate, for example. Moreover, HTML/web-centric editors have also grown in popularity.

On the other side, the Windows community has always been more IDE-oriented. There were the Borland products, Microsoft Visual Studio, Watcom. The most used web environments were Dreamweaver and HomeSite (now built into DW).

Of course there were also a bunch of editors.

Linux pt. 1

If I have to work on Linux, I've got no problems of any sort. I'm averagely skilled in both vim and Emacs and I can manage. I use Aquamacs (an Emacs variant) even on MacOS: I used to prefer vim, but right now I need some things Emacs has and vi hasn't (a decent Prolog mode, for example).

Windows

About a year and a half/two years ago I had to develop an application with Twisted. For a 0-based array of reasons I used Windows as my main development environment. I installed Enthought Python (in those days there was no Python 2.4, so no problem), I put Twisted 1.3 on top of it and chose gvim. Gvim is beautifully integrated with Windows, and of course it behaves pretty much like vim (I don't like the evim variant). I did it. Still, I wasn't really using Windows: I had cloned my Linux environment on Windows. And that is what I did when I had to code some C++ with mingw, and so on.

I chose gvim not because I really wanted to, but because the other editors were either expensive or poor. Yes, I tried Scintilla too. Not particularly poor, and certainly not expensive... but in fact too simple.

Windows and MacOS

If you compare the situation with MacOS, it's tragic. MacOS has many "simple" editors that I do not really like (but some love SubEthaEdit or Smultron), but it also has BBEdit (the best web-editing environment I've seen, much better than Dreamweaver, priced at $199 -- $129 for TextWrangler users; TW is a free editor everybody can download and use). And it has TextMate: the best general programmer's editor apart from Emacs.

UltraEdit (Windows)

Recently I had to do some more work, so I tried UltraEdit, priced at $39 -- the same as TextMate. A couple of users said it is a wonderful editor, so I gave it a try. Out of the box it supported neither Python nor Ruby. Quite annoying, in fact.

I googled for the solution and found I had to copy some strings into a file. The format is awful: where TM has a lot of small bundles and BBEdit has plugins, UltraEdit has only this big flat file. OK, what matters is functionality.

But functionality is missing. For example, Python and Ruby indenting is really disappointing. TextMate indents code back and forth to match its syntax; it takes less time to try it than to read about it. If you don't have a Mac, Emacs does it.

TextMate also has lots of (easy) ways to save me from typing lots of code. Snippets are powerful, and there are commands too (snippets are short words that are expanded into full constructs, with placeholders to fill in).

I found nothing like this (nothing that simple) in UltraEdit or in Scintilla. Not that Scintilla is a bad editor: it correctly deals with a lot of languages, and its syntax colouring is clear and easily readable (I think I should make a "Scintilla theme" for TextMate one day). With Python/Ruby it lets you run the code or check its syntax directly from the editor.

And UltraEdit (in the Studio version) is a very good "tiny IDE"/"enhanced editor" for Java, for example (but now we are at $99) -- much better than BBEdit or TextMate for Java editing. It's fast and has some useful basic functions. Of course Eclipse or NetBeans do a lot more, but UltraEdit Studio takes a couple of microseconds to load.

I have not tried it with PHP and HTML (have I already said I don't use PHP?), but it appears to be really good. I just can't stand feature-bloated IDEs like Dreamweaver: they tend to keep the programmer from thinking... but that's another story.

The many windows editors

It could be a matter of taste. Another friend of mine who did a lot of coding on Windows suggested yet another text editor (I don't even remember its name -- I still remember my friend's name, of course). Probably, among all the editors out there (more or less shareware), there is one that suits my taste (apart from the Emacs/gvim variants). Still, everybody knows a good way to mess up a Windows installation is to install and try software (I know about Ghost, but it looks like I have no time to waste playing with software).

GNU/Linux pt. 2

Again... GNU/Linux is quite convenient in this sense. Emacs and vim are wonderful editors (and they are both free as in beer and free as in speech). They can be extended to do almost everything (think of the Emacs modes that make it a Java IDE, or the AUCTeX package).

They are both available for Windows and Mac, but additional packages are not as easy to install (on Debian it's a matter of apt-getting...).

About the IDEs... well, KDevelop is said to be really good. It almost surely outperforms XCode, but it is probably not as good as MS Visual Studio, even if it supports more languages (so if you need one of those...). So it depends on what you have to do... anyway, it's a really good IDE, nothing to say... but :)

Windows pt 2

I tried some more editors. Notepad++ is nice (but not really suited to Python or Ruby: in this sense Scite is much better).

I tried Komodo, and it's wonderful (and also cross-platform). The best things it does are easy debugging and IntelliSense-like completion for dynamic languages (I tried it with Python and Ruby; it should work with PHP and Perl too). Unfortunately the full version costs almost 300 bucks. If you don't develop professionally you can buy the "personal or educational" version, which at $29 is quite affordable.

Anyway, some things in Komodo look quite awkward to me, while TextMate is as easy as it is powerful. Of course, comparing Komodo to TM for Rails editing is playing dirty: TM is the editor of choice of Rails developers. And Komodo has that IntelliSense... well, I think it should be great (even if I didn't really use it, so I don't think I'm really going to miss it).

I've seen there are a lot of targeted small IDEs that are worth trying, but I'm not gonna spend all my time this way.

Next time I'm gonna talk about databases.

Wednesday, March 1, 2006

self vs. @ (Python and Ruby)

Until a few days ago, I considered myself a "faithful" Pythonist. I liked Objective-C. I liked a bunch of other languages (notably Prolog and Haskell). I quite liked C++ too. But in fact my true love was Python.

I know it is strange to talk about "love". That is of course improper. Unfortunately I have no time to find a better word to express the thing. Let's say that I liked to code in Python independently of what I was coding. Much the same way I love using MacOS independently of the task, while I may be forced to use Windows because of the task, but would not use it if it were up to me (phew... complicated period).

Today I read this (pointless) discussion about "self" in Python. There are people who:
  • Do not like to write self.foo
  • Do not like to write
    def foo(self, other):
        # code

I perfectly understand this position. In fact I do not really like having to reference self explicitly every time (even if I fully understand why Python does this). But it makes damn sense. Explicit is better than implicit.

I'm calling methods and accessing variables of an object, so I should use the conventional object.method / object.variable way. There should be only one way to do it.
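A minimal sketch of what "explicit is better than implicit" means in practice (the class and names here are invented for illustration):

```python
class Counter:
    def __init__(self):
        self.count = 0      # instance variable: always qualified with self

    def increment(self):
        self.count += 1     # explicitly the instance's attribute, not a local

    def value(self):
        return self.count

c = Counter()
c.increment()
c.increment()
print(c.value())  # prints 2
```

Every access to the instance goes through self, so there is never ambiguity between a local variable and an attribute.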

For example, Java's "optional" this sucks. At least in my opinion. It serves no purpose for methods and variables. Some use it to make clear that they're referencing instance variables, some use an m_var notation. If you are a C++ programmer you could be using var_ or (if you haven't read the standard quite recently) _var.

Of course having a clear and readable way to distinguish instance variables and methods is good. That much is clear. It makes code easier to read.

self is boring. I often forget it and get error messages (around the ninth consecutive programming hour this is not the worst thing I do, however). In the same way I also forget the Ruby @. But a spectacular error is better than a hidden bug. So... let's go on.
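That "spectacular error" is easy to demonstrate: forgetting self raises a NameError at call time instead of silently reading the wrong variable (a hypothetical sketch, names invented):

```python
class Greeter:
    def __init__(self):
        self.greeting = "hello"

    def broken(self):
        return greeting       # forgot self -> NameError, loudly, at call time

    def correct(self):
        return self.greeting  # the explicit, correct form

g = Greeter()
print(g.correct())            # prints hello
try:
    g.broken()
except NameError as e:
    print("spectacular error:", e)
```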

I don't quite like having to specify self among the formal parameters. You schiznick, don't you know it's a method? Actually, Python does not. If you take a normal function and bind it to a class, it becomes a method whose first formal parameter receives the object itself. So it's better if the function was meant that way from the beginning and had a self parameter from the start.
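You can see this directly: a plain function assigned to a class attribute becomes a method, and its first parameter receives the instance (a small sketch, all names invented for illustration):

```python
def describe(obj):
    # an ordinary function: its first parameter is just a normal argument
    return f"a thing named {obj.name}"

class Thing:
    def __init__(self, name):
        self.name = name

# bind the plain function to the class: it becomes a method
Thing.describe = describe

t = Thing("widget")
print(t.describe())  # prints: a thing named widget
```

The instance t is passed as obj automatically, exactly as self would be, which is why the parameter has to be there from the start.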

Of course this is boring, but it's necessary and has no real disadvantages (apart from some additional typing... something that with a decent editor is not a concern; Aquamacs/Emacs or TextMate strongly advised).

And so all the knots come back to the comb. Python has this boring self everywhere. And it is there and should be there. Ruby doesn't. The @ makes the code quite readable (especially with a decent editor). Of course it prevents name clashes and such. Not having to pass self to functions also makes it easier to refactor from functions to methods (OK, we know, in Ruby every function is a method, but that's not the point). At least in the case where the method uses no instance variables.

But...
But Ruby treats variables and methods differently. Instance variables need a "special" syntax; methods don't. Looking at a bare name, it's not clear whether it is a function or a method (of course Ruby does the Right Thing):

irb(main):001:0> def foo; "foo"; end
=> nil
irb(main):002:0> class Bar; def foo; "dont foo"; end
irb(main):003:1> def bar; foo; end
irb(main):004:1> end
=> nil
irb(main):005:0> b = Bar.new
=> #<Bar:0x...>
irb(main):006:0> puts b.bar
dont foo
=> nil

As I said, it does the Right Thing... I'm just talking about readability. Of course you can use self in Ruby too... but well. This is something I would have preferred solved in another way, even if right at the moment I find it quite acceptable to give up having "one way" in exchange for a bit more pragmatism.