Future Programming on Quora

Quora has introduced the idea of “spaces” : a kind of “blog” or curated collection of existing answers from different people, to help organize answers around particular themes.

I just created a Future Programming Space to gather my answers about various ideas in the future of programming languages.

I still intend to move my answers to this blog and the ThoughtStorms wiki. But in the meantime, there’ll hopefully be a good collection / conversation over there.

Expression in Programming Languages

Source: My Quora Answer :

If Clojure is so expressive, and is basically Lisp, hasn’t there been any progress in expressivity in the last 50 years?
Well, as Paul Graham put it quite well, Lisp started as a kind of maths. That’s why it doesn’t go out of date. Or not in the short term.
You should probably judge the evolution of expressivity in programming languages by comparing them to the inventions of new concepts / theories / notations in maths (or even science), not by comparison to other kinds of technological development like faster processors or bigger hard drives.
What I’d suggest is that it turns out that function application is an amazingly powerful and general way to express “tell the computer to calculate something”.
Once you have a notation for defining, composing and applying functions (which is what Lisp is), then you already have an extraordinarily powerful toolkit for expressing programs. Add in Lisp’s macros which let you deconstruct, manipulate and reconstruct the definitions of functions, programmatically, and you basically have so much expressive power that it’s hard to see how to improve on it.
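To make that concrete, here’s a minimal Clojure sketch (all names invented for illustration) of both halves : composing functions, and using a macro to manufacture a new control structure :

    (defn add-tax [price] (* price 1.2))
    (defn format-price [price] (str "£" price))

    ;; compose the two into a new function, with no ceremony
    (def priced (comp format-price add-tax))
    (priced 100)   ;; => "£120.0"

    ;; a macro deconstructs and reconstructs code before it runs : here
    ;; we build a control structure that core Clojure only ships under
    ;; another name (when-not)
    (defmacro unless [test & body]
      `(if (not ~test) (do ~@body)))

    (unless (= 1 2) (println "1 and 2 differ"))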
You can argue in the case of macros that, in fact, while the idea has been in Lisp for a while, the notation and semantics are still being actively fiddled with. We know that macros in principle are a good idea. But we’re still working on the right language to express them.
Perhaps we can also argue that there’s still room for evolving some other bits of expressivity. Eg. how to best express Communicating Sequential Processes or similar structures for concurrency etc. In Lisp all these things look like functions (or forms) because that’s the nature of Lisp. But often within the particular form we are still working out how best to express these higher level ideas.
Now, the fact that functions are more or less great for expressing computation doesn’t mean that the search for expressivity in programming languages has stopped. But it’s moved its focus elsewhere.
So there are three places where we’ve tried to go beyond function application (which Lisp serves admirably) and improve expression :

  • expressing constraints on our programs through types
  • expressing data-structures
  • expressing large-scale architecture

These are somewhat intertwined, but let’s separate them.
Types
Types are the big one. Especially in languages like Haskell and the more exotic derivatives (Idris, Agda etc.). Types don’t tell the computer to DO anything more than you can tell it to do in Lisp. But they tell it what can / can’t be done in general. Which sometimes lets the compiler infer other things. But largely stops the programmer shooting themselves in the foot. Many programmers find this a great boost to productivity as it prevents many unnecessary errors during development.
Clearly, the type declarations in languages like Haskell or Agda are powerfully expressive. But I, personally, have yet to see a notation for expressing types that I really like or find intuitive and readable. So I believe that there is scope for improving the expressivity of type declarations. Now, sure, some of that is adding power to existing notations like Hindley-Milner type systems. But I wouldn’t rule out something dramatically different in this area.
One big challenge is this : by definition, types cut across particular bits of computation. They are general / global / operating “at a distance”. One question is where to write this kind of information. Gather it together in standard “header” files? Or spread it across the code, where it’s closest to where it’s used? What are the scoping rules for types? Can you have local or “inner” types? Or are all types global? What happens when data which is typed locally leaks into a wider context?
Data Structures
Lisp’s lists are incredibly flexible, general purpose data-structures. But also very low-level / “primitive”.
I don’t really know other Lisps. But it seems to me that Clojure’s introduction of both a { } notation for dictionaries / maps and a [ ] notation for data vectors has been useful. Complex data literals can now be constructed out of these by following the EDN format. And it’s given rise to things like Hiccup for representing HTML, and equivalents for other user-interfaces or configuration data. EDN is pretty similar to the way you define data in other contemporary languages, like Javascript’s JSON etc. So it’s not that “radical”. But it’s nice to have these data structures completely integrated with the Clojure code representation.
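For example (values invented), the same handful of literals covers both free-form data and, via Hiccup, HTML :

    ;; EDN : maps, vectors and keywords nesting freely
    {:name "Clojure"
     :born 2007
     :tags ["lisp" "jvm" "functional"]}

    ;; Hiccup : HTML as plain Clojure vectors
    [:ul {:class "answers"}
     [:li [:a {:href "https://www.quora.com"} "My Quora answers"]]]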
Can we improve expressivity for this kind of data representation language?
I’m inclined to say that things like Markdown or YAML, which bring in white-space, make complex data-structures even more human readable and writable, and therefore more “expressive”, than JSON / EDN.
In most Lisps, but not Clojure, you can define reader-macros to embed DSLs of this form within programs.
So Lisps have highish expressivity in this area of declaring data. In Clojure through extending S-expressions into EDN and in other Lisps through applying reader-macros to make data DSLs.
Can we go further?
By collapsing data-structure into algebraic types, Haskell also comes up with a neat way of expressing data. With the added power of recursion and or-ed alternatives.
This leads us to imagine another line of development for expressing data structures that brings in these features. Perhaps ending up looking like regular or context-free grammars.
Of course, you can write parser combinators in any functional language. Which gives you a reasonable way to represent such grammars. But ideally you want your grammar definition language sufficiently integrated with your programming language that you can use this knowledge of data-structure everywhere, for example when pattern-matching arguments to functions.
Haskell, Clojure’s map representation, and perhaps Spec are moves in this direction.
But for real expressivity about data-structures, we’d want a full declarative / pattern-matching grammar-defining sub-language integrated with our function application language, for things like pattern matching, searching and even transformations. Think somewhere between BNF and jQuery selectors.
Shen’s “Sequent Calculus” might be giving us that. If I understand it correctly.
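Nearer to home, here’s a hedged sketch of what this direction already looks like with Spec (clojure.spec, Clojure 1.9+). All the names below are invented; the point is the declarative, grammar-ish description of a data shape that can then drive validation :

    (require '[clojure.spec.alpha :as s])

    (s/def ::host string?)
    (s/def ::port pos-int?)
    (s/def ::server (s/keys :req-un [::host ::port]))
    (s/def ::cluster (s/coll-of ::server))   ; specs compose like grammar rules

    (s/valid? ::cluster [{:host "db.example.com" :port 5432}])
    ;; => true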
A third direction to increase expressivity in defining data-structures is to go beyond custom languages, and go for custom interactive editors (think things like spreadsheet grids or drawing applications for graphics) which manipulate particular recognised data types. These increase expressivity even further, but are very domain / application specific.
Architecture
“Architecture” is everywhere. It describes how different modules relate. How different services on different machines can be tied together. It defines the components of a user-interface and how they’re wired up to call-back handlers or streams of event processors. “Config files” are architecture. Architecture is what we’re trying to capture in “dependency injection”. And class hierarchies.
We need ways to express architecture, but mainly we rely either on code (programmatically constructing UIs) or on more general data-structures. (Dreaded XML files.)
Or specific language features for specific architectural concerns (eg. the explicit “extends” keyword to describe inheritance in Java.)
OO / message passing languages like Smalltalk and Io do push you into thinking more architecturally than many FP languages do. Even “class” is an architectural term. OO languages push you towards thinking about OO design, and ideas like roles / responsibilities of various components or actors within the system.
Types are also in this story. To Haskell programmers, type-declarations are what UML diagrams are to Java programmers. They express the large-scale architecture of how all the components fit together. People skilled in Haskell and in reading type declarations can probably read a great deal of the architecture of a large system just by looking at the type declarations.
However, the problem with types, and OO classes etc. is that they are basically about … er … “types”. They’re great at expressing “this kind of function handles that kind of data”. Or “this kind of thing is like that kind of thing except different in these ways”.
But they aren’t particularly good at expressing relations between “tokens” or “concrete individuals”. For example, if you want to say “this server sits at address W.X.Y.Z and uses a database which lives at A.B.C.D:e”, then you’re back to config files, dependency injection problems and the standard resources for describing data-structures. Architecture is treated as just another kind of data-structure.
Yes, good expressivity in data-structure helps a lot. So EDN or JSON beats out XML.
But, really, it feels like there’s still scope for a lot of improvement in the way we talk about these concrete relationships in (or alongside) our programs.
OK, I’m rambling …
tl;dr : function definition / application is a great way to express computation. Lisp got that right in the 1960s, and combined with macros is about as good as it gets to express computation. All the other improvements in expressivity have been developed to express other things : types and constraints, data-structures and architecture. In these areas, we’ve already been able to improve on Lisp. And can probably do even better than we’ve done so far.

Are Languages Still Evolving Towards Lisp?

My Quora answer to the question “Is it still reasonable to say mainstream languages are generally trending towards Lisp, or is that no longer true?”

Based on my recent experiments with Haskell and Clojure.

Lisp is close to a pure mathematical description of function application and composition. As such, it offers one of the most concise, uncluttered ways to describe graphs of function application and composition; and because it’s uncluttered with other syntactic constraints it offers more opportunities to eliminate redundancy in these graphs.
Pretty much any sub-graph of function combination can be refactored out into another function or macro.
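A tiny illustration of that refactorability, as a sketch with invented names :

    ;; the sub-graph (reduce + xs) inside this definition...
    (defn mean [xs] (/ (reduce + xs) (count xs)))

    ;; ...can be lifted out and named, and nothing in the syntax fights back
    (def total (partial reduce +))
    (defn mean2 [xs] (/ (total xs) (count xs)))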
This makes it very powerful, concise and expressive. And the more that other programming languages try to streamline their ability to express function combination, the more Lisp-like they will get.
Eliminating syntactic clutter to maximize refactorability will eventually make them approximate Lisp’s "syntaxlessness" and "programmability".
In that sense, Paul Graham is right.
HOWEVER, there’s another dimension of programming languages which is completely orthogonal to this, and which Lisp doesn’t naturally touch on : the declaration of types and describing the graph of type-relations and compositions.
Types are largely used as a kind of security harness so the compiler or editor can check you aren’t making certain kinds of mistakes. And can infer certain information, allowing you to leave some of the work to them. Types can also help the compiler optimise code in many ways : including safer concurrency, allowing the code to be compiled to machine-code with less of the overhead of an expensive virtual machine etc.
Research into advanced type management happens in the ML / Haskell family of languages, and perhaps Coq etc.
Ultimately programming is about transforming input data into output data. And function application and composition are sufficient to describe that. So if you think POWER in programming is just about the ability to express data-transformation, then Lisp is probably the most flexibly expressive language to do that, and therefore still sits at the top of the hierarchy, the target to which other programming languages continue to aspire.
If you think that the job of a programming language is ALSO to support and protect you when you’re trying to describe that data-transformation, then POWER is also what is being researched in these advanced statically-typed languages. And mainstream languages will also be trying to incorporate those insights and features.

Questions for 2014 : #3 Programming on Tablets

Third in a series (#1, #2) of questions occupying my mind at the beginning of 2014. Which may (or may not) inform what I’ll be working on.
3) How can we program on tablets?
I’m now a tablet user. I became a tablet owner at the end of 2012. For six months I played around with it, trying a few Android programming exercises. But I only really became a regular tablet user half-way through the year. Firstly when I put Mind Traffic Control into a responsive design. Secondly when I bought a couple of e-books. And I only really got committed when I did OWLdroid and coupled that with btsync.
So – somewhat late to the party, I admit – I’m now a tablet enthusiast.
And so my question is, how the hell do I program on this thing?
There’s a trivial answer to that question : get an external keyboard, an appropriate editor / IDE and treat it like a normal computer with a small screen. I can do that. I’ve worked a lot on netbooks and small screens don’t freak me out. But that’s not really what I mean.
Because tablets aren’t meant to have keyboards. And a computer without a keyboard challenges one of my deepest held programming beliefs : the superiority of plain text.
Plain-text is so flexible, so expressive, so powerful, so convenient to work with, that I’ve always been highly sceptical of those who want to do away with it. But on a keyboardless computer, it’s a different matter. Plain text isn’t at all convenient without a keyboard.
Especially the text of programming languages which makes rich use of another dozen or so punctuation symbols beyond the alphabet and numerals. And where manipulation relies on cursor-keys, shift and control, deletes (both forward and backspace), page up and down, tab-complete etc.
And yet tablets are becoming ubiquitous. Increasingly they’re the target of our programming, and the tool we have with us. So how are we going to program in this new environment? With multi-touch or stylus but no keyboard?
I have yet to see anything even vaguely plausible as the revolution in programming “language” we’re going to need for this.
I don’t think it’s the “Scratch”-like or “App Inventor”-like “stick the blocks together” languages. The problem of programming on tablets shouldn’t be conflated with the problem of teaching novices to program. (Which is what most visual programming environments seem to be about.)
One issue with that kind of system (and other “flow-charts”) is that blocks need to be big enough to be easily and unambiguously manipulated with fat fingers. But to be decently usable, a programming system should be able to have a reasonable density of information on the screen, otherwise you’ll spend all your time scrolling and forgetting what you’ve seen. How do you resolve that tension?
Perhaps “data-flow” programming of the Max/MSP, PD, Quartz kind. Piping diagrams. Process Modelling packages have something to teach us about orchestrating in the large. But they are shockingly clumsy for certain fine-grained activities that are expressed easily in text. (Eg. how the hell can you talk about tree-shaped data or recursive algorithms using this kind of piping model?)
So I don’t have any information about who is doing interesting work in this area. (Aside : while writing this post, I thought I’d consult the collective wisdom on StackExchange. Needless to say, my question was immediately shot down as too vague.) But I’m now very curious about it.

Phenotropic Program Design

An answer I made on StackExchange wrt ideas for Phenotropic Programming.

A thought I had recently :
Suppose you used high-level ideas like Haskell’s Maybe Monad to wrap remote-procedure calls to other systems. You send a request to the server. But nothing comes back (server is broken). Or a Promise comes back (server is busy). And your program continues working with those None or Promised values. That’s kind of like the fault tolerance Lanier is looking for.
Maybe there are ways of encapsulating other eventualities. For example, remote calls which come back with an approximation which is increasingly refined over time by some kind of background negotiation. Ie. what comes back is something like a Promise, but not just “keep holding on and working with this and a proper value will turn up shortly” but “keep holding on and working with this and a better approximation will turn up shortly”. (And again, and again.) This would be able to hide a lot of faults from the programmer, just as networking protocols hide a lot of low-level networking failure from the programmer.
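Here’s a rough Clojure sketch of that “refining promise” idea (everything in it is invented for illustration) : callers always have a usable current best guess, while a background process keeps improving it :

    (defn refining-call
      "Return an atom holding the current best approximation.
       `approximations` is assumed to be a (possibly lazy) seq of
       successively better values arriving from the remote service."
      [initial-guess approximations]
      (let [estimate (atom initial-guess)]
        (future                              ; the background negotiation
          (doseq [better approximations]
            (reset! estimate better)))
        estimate))

    ;; (def price (refining-call 0.0 (fetch-price-estimates server)))
    ;; @price  ;; always answers right now; the answer improves over time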

Update : OTOH, I don’t know why I bother, because StackExchange is no fun at all these days. Already downvoted because it didn’t fit their narrow ideal format and someone couldn’t join the dots between what I was saying and what the question was asking. Here’s me spelling it out for them :

If I understand correctly, Phenotropic Programming is a way to program large (often multi-computer), robust systems. The problem is, all the biological metaphors make it vague and difficult to translate into practical programming terms. Here I’m suggesting that certain programming constructs that aren’t vague (ie. Monads, Promises etc.) might be a way to make some of those ideas of Lanier’s concrete and practically programmable.

Of course, I may be wrong in my answer and that’s fine, but I think the main problem here is that I just demanded a slightly more “creative” level of reading from SE, and that’s obviously a down-votable offence. :-/

Programming Language Features for Large Scale Software

My Quora Answer to the question : What characteristics of a programming language makes it capable of building very large-scale software?

The de facto thinking on this is that the language should make it easy to compartmentalize programming into well segregated components (modules / frameworks) and offer some kind of “contract” idea which can be checked at compile-time.

That’s the thinking behind, not only Java, but Modula-2, Ada, Eiffel etc.

Personally, I suspect that, in the long run, we may move away from this thinking. The largest-scale software almost certainly runs on multiple computers. Won’t be written in a single language, or written or compiled at one time. Won’t even be owned or executed by a single organization.

Instead, the largest software will be like, say, Facebook. Written, deployed on clouds and clusters, upgraded while running, with supplementary services being continually added.

The web is the largest software environment of all. And at the heart of the web is HTML. HTML is a great language for large-scale computing. It scales to billions of pages running in hundreds of millions of browsers. Its secret is NOT rigour. Or contracts. It’s fault-tolerance. You can write really bad HTML and browsers will still make a valiant effort to render it. Increasingly, web-pages collaborate (one page will embed services from multiple servers via AJAX etc.) And even these can fail without bringing down the page as a whole.

Much of the architecture of the modern web is built of queues and caches. Almost certainly we’ll see very high-level cloud-automation / configuration / scripting / data-flow languages to orchestrate these queues and caches. And Hadoop-like map-reduce. I believe we’ll see the same kind of fault-tolerance that we expect in HTML appearing in those languages.

Erlang is a language designed for orchestrating many independent processes in a critical environment. It has a standard pattern for handling many kinds of faults. The process that encounters a problem just kills itself. And sooner or later a supervisor process restarts it and it picks up from there. (Other processes start to pass messages to it.)

I’m pretty sure we’ll see more of this pattern. Nodes or entire virtual machines that are quick to kill themselves at the first sign of trouble, and supervisors that bring them back. Or dynamically re-orchestrate the dataflow around trouble-spots.
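As a toy sketch of that supervision pattern, in Clojure rather than Erlang (the worker, back-off and logging are all invented for illustration) :

    (defn supervise
      "Run `worker` (a no-arg fn). Whenever it dies, wait and restart it."
      [worker]
      (future
        (loop []
          (try
            (worker)
            (catch Exception e
              (println "worker died:" (.getMessage e))))
          (Thread/sleep 1000)   ; crude back-off before restarting
          (recur))))

    ;; (supervise #(consume-queue! connection))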

Many languages are experimenting with Functional Reactive Programming : a higher-level abstraction that makes it easy to set up implicit data-flows and event-driven processing. We’ll see more languages that approach complex processing by allowing the declaration of data-flow networks, and which simplify exception / error handling in those flows with things like Haskell’s “Maybe Monad”.
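Clojure’s some-> threading macro gives a small taste of that Maybe-style handling in a data-flow. The stage names below are invented; the point is that if any stage returns nil, the rest of the pipeline is skipped instead of blowing up :

    (some-> (fetch-user id)   ; nil if the lookup fails
            :address
            geocode
            render-map)
    ;; => nil if any stage failed, otherwise the rendered map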

Update : Another thing I’m reminded of. Jaron Lanier used to have this idea of “Phenotropic Programming” (“Why Gordian Software Has Convinced Me To Believe In The Reality Of Cats And Apples”). Which is a bit far out, but I think it’s plausible that fault-tolerant web APIs and the rest of the things I’m describing here may move us closer.

Cthulhu

My software is more or less like Cthulhu. Normally dead and at the bottom of the sea, but occasionally stirring and throwing out a languid tentacle to drive men’s minds insane. (Or at least perturb a couple of more recklessly adventurous users.)

However there’s been a bit more bubbling agitation down in R’lyeh recently. The latest weird dream returning to trouble the world is GeekWeaver, the outline-based templating language I wrote several years ago.

GeekWeaver was basically driven by two things : my interest in the OPML Editor outliner, and a need I had to create flat HTML file documentation. While the idea was strong, after the basic draft was released, it languished. 

Partly because I shifted from Windows to Linux where the OPML Editor just wasn’t such a pleasurable experience. Partly because GW’s strength is really in having a templating language when you don’t have a web server; but I moved on to doing a lot of web-server based projects where that wasn’t an issue. And partly, it got led astray – spiralling way out of control – by my desire to recreate the more sophisticated aspects of Lisp, with all kinds of closures, macros, recursion etc.

I ended up assuming that the whole enterprise had got horribly crufty and complicated and was an evolutionary dead end.

But suddenly it’s 2013, I went to have a quick look at GeekWeaver, and I really think it’s worth taking seriously again.

Here are the three reasons why GeekWeaver is very much back in 2013 :

Fargo

Most obviously, Dave Winer has also been doing a refresh of his whole outlining vision with the excellent browser-based Fargo editor. Fargo is an up-to-date, no-compromise, easy-to-use online OPML Editor. But particularly important, it uses Dropbox to sync outlines with your local file-system. That makes it practical to install GeekWeaver on your machine and compile outlines that you work on in Fargo.

I typically create a working directory on my machine with a symbolic link to the OPML file, which is in the Fargo subdirectory in Dropbox, and the fact that the editor is remote is hardly noticeable (maybe a couple of seconds lag between finishing an edit and being able to compile it).

GitHub

What did we do before GitHub? Faffed, that’s what. I tried to put GeekWeaver into a Python Egg or something, but it was complicated and full of confusing layers of directories. And you need a certain understanding of Python arcana to handle it right. In contrast, everyone uses Git and GitHub these days. Installing and playing on your machine is easier. Updates are more visible.

GeekWeaver is now on GitHub. And as you can see from the quickstart guide on that page, you can be up and running by copying and pasting 4 instructions into your Linux terminal. (Should work on Mac too.) Getting into editing outlines with Fargo (or the OPML Editor, which still works fine) is a bit more complicated, but not that hard. (See above.)

Markdown

Originally GeekWeaver was conceived as using the same UseMod-derived wiki-markup that I used in SdiDesk (and now Project ThoughtStorms for Smallest Federated Wiki). Then part of the Lisp purism got to me and I decided that such things should be implementable in the language, not hardwired, and so started removing them.

The result was that, while GeekWeaver was always better than hand-crafting HTML, it was still, basically, hand-crafting HTML, and maybe a lot less convenient than using your favourite editor with built-in snippets or auto-complete.

In 2013 I accepted the inevitable. Markdown is one of the dominant wiki-like markup languages. There’s a handy Python library for it which is a single install away. And Winer’s Fargo / Trex ecosystem already uses it.


So in the last couple of days I managed to incorporate a &&markdown mode into GeekWeaver pretty easily. There are a couple of issues to resolve, mainly because of collisions between Markdown and other bits of GeekWeaver markup, but I’m now willing to change GeekWeaver to make Markdown work. It’s obvious that, even in its half-working state, Markdown is a big win that makes it a lot easier to write bigger chunks of text in GeekWeaver. And, given that generating static documentation was GeekWeaver’s original and most-common use-case, that’s crucial.

Where Next?


  • Simplification. I’m cleaning out the cruft, throwing out the convoluted and buggy attempts to make higher-order blocks and lexical closures. (For the meantime.)
  • Throwing out some of my own idiosyncratic markup to simplify HTML forms, PHP and javascript. Instead GW is going to refocus on being a great tool for adding user-defined re-usable abstractions to a) Markdown and b) any other text file.

In recent years I’ve done other libraries for code-generation. For example, Gates of Dawn is a Python library for generating synthesizers as PureData files. (BTW : I cleaned up that code-base a bit recently, too.)

Could you generate synths from GeekWeaver? Sure you could. It wouldn’t really help, though. But I’ve learned some interesting patterns from Gates of Dawn that may find their way into GW.

Code Generation has an ambiguous reputation. It can be useful and can be more trouble than it’s worth. But if you’re inclined to think using outlining AND you believe in code-gen then GeekWeaver is aiming to become the perfect tool for you.

Java Hater

Someone on Quora asked me to answer a question on my personal history of using Java. It became a kind of autobiographical confession. 

I’ve never had a good relationship with Java.

My first OO experience was with Smalltalk. And that spoiled me for the whole C++ / Java family of strongly typed, compiled OO languages. 

Because I’d learned Smalltalk and this new-fangled OO thing when it was still relatively new (in the sense of the late 80s!) I thought I had it sussed. But actually I had very little clue. I enthusiastically grabbed the first C++ compiler I could get my hands on and proceeded to spend 10 years writing dreadful programs in C++ and then Java. I had assumed that the OOness of both these languages made them as flexible as I remembered Smalltalk to be. I thought that OO was the reason for Smalltalk’s elegance and that C++ and Java automatically had the same magic.

Instead I created bloated frameworks of dozens of classes (down to ones handling tiny data fragments that would have been much better as structs or arrays). I wrote hugely brittle inheritance hierarchies. And then would spend 3 months having to rewrite half my classes, just to be able to pass another argument through a chain of proxies, or because somewhere in the depths of objects nested inside objects inside objects I found I needed a new argument to a constructor. The problem was, I was programming for scientific research and in industry but I hadn’t really been taught how to do this stuff in C++ or Java. I had no knowledge of the emerging Pattern movement. Terms like “dependency injection” probably hadn’t even been invented. 

I was very frustrated. And the funny thing I started to notice was that when I had to write in other languages : Perl, Javascript, Visual Basic (Classic), even C, I made progress much faster. Without trying to model everything in class hierarchies I found I just got on and got the job done. Everything flowed much faster and more smoothly.

Perl’s objects looked like the ugliest kludge, and yet I used them happily on occasion. In small simulations C structs did most of what I wanted objects to do for me (and I did finally get my head around malloc, though I never really wrote big C programs). And I had no idea what the hell was going on with Javascript arrays, but I wrote some interesting, very dynamic, cross browser games in js (this is 1999) using a bunch of ideas I’d seen in Smalltalk years before (MVC, a scheduler, observer patterns etc.) and it just came out beautifully. 

It wasn’t until the 2000s that I started to find and read a lot of discussions online about programming languages, their features, strengths and weaknesses. And so I began my real education as a programmer. Before this, a lot of the concepts like static and dynamic typing were vague to me. I mean, I knew that in some languages you had to declare variables with a type and in some you didn’t. But it never really occurred to me that this actually made a big difference to what it was like to USE a language. I just thought that it was a quirk of dialect and that good programmers took these things in their stride. I assumed that OO was a kind of step-change up from mere procedural languages, but the important point was the ability to define classes and make multiple instances of them. Polymorphism was a very hazy term. I had no real intuitions about how it related to types or how to use it to keep a design flexible.

Then, in 2002 I had a play with Python. And that turned my world upside-down.
For the first time, I fell in love with a programming language. (Or maybe the first time since Smalltalk, which was more of a crush).

Python made everything explicit. Suddenly it was clear what things like static vs. dynamic typing meant. That they were deep, crucial differences. With consequences. That the paraphernalia of OO were less important than all the other stuff. That the fussy bureaucracy of Java, the one class per file, the qualified names, the boilerplate, was not an inevitable price you had to pay to write serious code, but a horribly unnecessary burden.
Most of all, Python revealed to me the contingency of Java. In the small startup where I’d been working, I had argued vehemently against rewriting our working TCL code-base in Java just because Java was OO and TCL wasn’t. I thought this was a waste of our time and unnecessary extra work. I’d lost the argument, the rewrite had taken place, and I hated now having to do web-stuff with Java. Nevertheless, I still accepted the principle that Java was the official, “grown up” way to do this stuff. Of course you needed proper OO architecture to scale to larger services, to “the enterprise”. Ultimately the flexibility and convenience of mere “scripting” languages would have to be sacrificed in favour of discipline. (I just didn’t think we or our clients needed that kind of scaling yet.) 

What Python showed me was that we weren’t obliged to choose. That you could have “proper” OO, elegant, easy to read code, classes, namespaces, etc. which let you manage larger frameworks in a disciplined manner, and yet have it in a language that was light-weight enough that you could write a three line program if that’s what you needed. Where you didn’t need an explicit compile phase. Or static typing. Or verbosity. Or qualified names. Or checked exceptions. What I realised was that Java was not the inevitable way to do things, but full of design decisions that were about disciplining rather than empowering the programmer.

And I couldn’t stomach it any further. Within a few months of discovering Python I had quit my job. Every time I opened my machine and tried to look at a page of Java I felt literally nauseous. I couldn’t stand the difference between the power and excitement I felt writing my personal Python projects, and the frustration and stupidity I felt trying to make progress in Java. My tolerance for all Java’s irritations fell to zero. Failing to concentrate, I would make hundreds of stupid errors : incompatible types, missing declarations or imports, forgetting the right arguments to send to library methods. Every time I had to recompile I would get bored and start surfing the web. My ability to move forward ground to a halt.
I was so fucking happy the day I finally stopped being a Java programmer.

Postscript : 

1) Something I realized a while after my bad experience was how important the tools are. My period in Java hell was spent trying to write it with Emacs on a small-screen laptop, without any special Java tools (except basic Java syntax colouring). I realize this is far from the ideal condition in which to write Java, and that those who are used to Eclipse or IntelliJ have a totally different experience and understanding of the language.

2) A few years later, I taught the OO course in the local university computer science department. All in Java. By that time, I’d read a couple of Pattern books. Read Kent Beck’s eXtreme Programming. Picked up some UML. And I had a much better idea what Polymorphism really means, how to use Interfaces to keep designs flexible, and why composition is better than inheritance. I tried to get the students to do a fair amount of thinking about and practising refactoring code, doing test driven development etc. It all seemed quite civilized, but I’m still happy I’m not writing Java every day. 

3) A couple of years ago I did do quite a lot of Processing. I was very impressed by how the people behind it managed to take a lot of the pain of Java away from novice programmers. I wonder how far their approach could be taken for other domains.