The Bug is Your Fault

My Quora answer : Phil Jones’s answer to What are some great truths of computer programming?

  1. The bug is your fault

  2. The bug is your fault

  3. The bug is your fault

  4. The bug is your fault

  5. The bug is your fault.

  6. No. The bug really IS your fault.

  7. The bug is your fault. Because you made an assumption that wasn’t true.

  8. The bug is your fault. And it won’t go away until you check your assumptions, find the one that isn’t true, and change it.

What do you think computers will be like in 10 years?

My Quora answer that’s pretty popular : Phil Jones’s answer to What do you think computers will be like in 10 years?
Related to my previous story of trying to use the CHIP for work.

This is a $9 CHIP (Get C.H.I.P. and C.H.I.P. Pro)

It runs Debian. A couple of weeks ago, I took one travelling with me instead of my laptop to see what it would be like to use for real work.
I successfully ran my personal “productivity” software (three Python-based wiki servers and some code written in Racket) from it. It also has a browser and Emacs, and I was doing logic programming with a miniKanren library in Python. It runs the SunVox synth and Csound too, though I wasn’t working on those on this trip.
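For a flavour of what that logic programming looks like, here’s a minimal sketch using the kanren package, one of the Python miniKanren implementations (not necessarily the exact library I had on the CHIP):

```python
# Minimal miniKanren sketch using the `kanren` package (one Python
# miniKanren implementation; assumed here purely for illustration).
from kanren import run, var, eq, membero

x = var()

# Find one value of x such that x unifies with 5.
print(run(1, x, eq(x, 5)))                       # (5,)

# Find every x that is a member of both tuples.
print(run(0, x, membero(x, (1, 2, 3)),
                membero(x, (2, 3, 4))))          # (2, 3)
```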
In 10 years’ time, that computing power will be under a dollar. And if anyone can be bothered to make it in this format, the equivalent of this Debian machine will be effectively free.
Of course most of that spectacular power will be wasted on useless stuff. But, to re-emphasize, viable computing power that you can do real work with will be “free”.
The pain is the UI: how would we attach real keyboards, decent screens, etc. when we need them?
I HOPE that people will understand this well enough that our current conception of a “personal computer” will explode into a “device swarm” of components that can be dynamically assembled into whatever configuration is convenient.
I, personally, would LIKE the main processor / storage to live somewhere that’s strongly attached to my body and hard to lose (e.g. a watch or lanyard). I’d like my “phone” to become a cheap disposable touch-screen for this personal server rather than the current repository of my data.
I bought a cheap Bluetooth keypad for about $8. It was surprisingly OK to type on, but connections with the CHIP were unreliable. In 10 years’ time, that ought to be fixed.
So, in 10 years’ time, I personally want a computer on my wrist that’s powerful enough to do all my work with (that means programming and creating music). That can be hooked up to some kind of dumb external keyboard / mouse / screen interface (today the Motorola Lapdock is the gold standard) that costs something like $20. Sure, I’ll probably want cloud resources for sharing, publishing, storage and even high-performance processing, AI and “knowledge” etc.
And, of course, I want it to run 100% free-software that puts me in control.
This is all absolutely do-able.

The End of Dynamic Languages

A comment I made over on The End of Dynamic Languages

The problem with the static / dynamic debate is that the problems / costs appear in different places.
In static languages, the compiler is a gatekeeper. Code that gets past the gatekeeper is almost certainly less buggy than code that doesn’t get past the gatekeeper. So if you count bugs in production code, you’ll find fewer in statically typed languages.
But in static languages less code makes it past the compiler in the first place. Anecdotally, I’ve abandoned more “let’s just try this to see if it’s useful” experiments in Haskell, where I fought the compiler and lost, than in Clojure, where the compiler is more lenient. (Which has the knock-on effect of my trying fewer experiments in Haskell in the first place, and writing more Clojure overall.)
Static typing ensures that certain code runs correctly 100% of the time, or not at all.
But sometimes it’s acceptable for code to run 90% of the time, and to have a secondary system compensate for the 10% when it fails. There might even be cases where 90% failure and 10% success can still be useful. But only dynamic languages give you access to this space of “half-programs” that “work ok, most of the time”. Static languages lock you out of it entirely. They oblige you to deal correctly with all the edge cases.
Now that’s very good, you’ve dealt with the edge cases. But what if there’s an edge case that turns up extremely rarely, but costs three months of programmer time to fix? In a nuclear power station, handling that is crucial. On a bog-standard commercial web-site, it’s something that can safely be put off until next year or the year after. But a static language won’t allow you that flexibility.
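To make that concrete, here’s a hypothetical Python sketch of such a “half-program” : the happy path handles the common case, and the rare edge case just gets parked for a secondary system (or a human) to deal with later. All the names here are made up for illustration.

```python
# Hypothetical "half-program": handle the 90% case now, defer the rest.
# Function and queue names are invented for illustration only.

manual_review_queue = []

def parse_order(raw):
    """Parse the common 'SKU,QUANTITY' format we see most of the time."""
    sku, qty = raw.split(",")
    return {"sku": sku.strip(), "quantity": int(qty)}

def handle_order(raw):
    try:
        return parse_order(raw)
    except ValueError:
        # The rare, expensive-to-handle edge case: don't solve it now,
        # just record it so a secondary process (or a person) can.
        manual_review_queue.append(raw)
        return None

orders = ["ABC-1, 3", "XYZ-9,1", "malformed order ref #77"]
parsed = [o for o in (handle_order(r) for r in orders) if o is not None]
print(parsed)                 # the 90% that worked
print(manual_review_queue)    # the 10% deferred until it's worth fixing
```

A static language would make me decide, up front, exactly what a malformed order is and what handling one should return; here I’m allowed to ship the useful 90% and come back for the rest when (or if) it matters.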
The costs of static and dynamic languages turn up in different places. Which is why empirical comparisons are still hard.

Conway’s Corollary

Ian Bicking’s post on Conway’s Corollary is a must-read reflection on the isomorphism between organizational structure and product structure.

What, asks Bicking, if we don’t fight this, but embrace it? Organizational structures are allegedly for our benefit. Why not allow them to shape the product? Or, when this is inappropriate, why not recognize that the two MUST be aligned, and that if the product can’t follow the organization, we should refactor the organization to reflect and support the product.

The Android Studio / Gradle Experience

I’m sure my answer / comment on What is Gradle in Android Studio? will get downvoted into oblivion, given short shrift, fairly soon. (Maybe deservedly.)

But I’ll make it here :

[quote]
At the risk of being discursive I think behind this is the question of why the Android Studio / Gradle experience is so bad.
Typical Clojure experience :
* download project with dependencies listed in project.clj.
* Leiningen gets the dependencies thanks to Clojars and Maven.
* Project compiles.
Typical Android Studio / Gradle experience :
* “Import my Eclipse project”.
* OK project imported.
* Gradle is doing its thang … wait … wait … wait … Gradle has finished.
* Compile … can’t compile because I don’t know what an X is / can’t find Y library.
I’m not sure this is Gradle’s fault exactly. But the “import from Eclipse project” seems pretty flaky. For all of Gradle’s alleged sophistication and the virtues of a build-system, Android Studio just doesn’t seem to import the build dependencies or build-process from Eclipse very well.
It doesn’t tell you when it’s failed to import a complete dependency graph. Android Studio gives no useful help or tips as to how to solve the problem. It doesn’t tell you where you can manually look in the Eclipse folders. It doesn’t tell you which library seems to be missing. Or help you search Maven etc. for them.
In 2016, things like Leiningen / Clojars, or node’s npm, or Python’s pip, or Debian’s apt (and I’m sure many similar package managers for other languages and systems) all work beautifully … missing dependencies are a thing of the past.
Except with Android. Android Studio is now the only place where I still seem to experience missing-dependency hell.
I’m inclined to say this is Google’s fault. They broke the Android ecosystem (and thousands of existing Android projects / online tutorials) when they cavalierly decided to shift from Eclipse to Android Studio / Gradle without producing a robust conversion process. People whose projects work in Eclipse aren’t adapting them to AS (presumably because it’s a pain for them). And people trying to use those projects in AS are hitting the same issues.
And anyway, if Gradle is this super-powerful build system, why am I still managing a whole lot of other dependencies in the SDK Manager? Why can’t a project that needs, say, the NDK specify this in its Gradle file so that it gets automatically installed and built against when needed? Why is the NDK special? Similarly for target platforms: why am I installing them explicitly in the IDE rather than just checking my project against them and having this all sorted for me behind the scenes?
[/quote]

Why do IDEs get criticized so much?

My Quora answer :

Simon Kinahan’s answer is good, though I think he’s over-emphasizing the snobbery aspect.

What’s definitely true is that IDEs are often not particularly optimized for the application you want to write. And often they’re optimized for the application you DON’T want to write.
This is particularly true as, like most user-facing application software, IDEs tend to carry a lot of historical cruft; because radically changing interfaces really pisses people off.
So IDEs were born in the age when people wanted to build desktop GUIs, and they maintain all the infrastructure and UI conventions for doing that, even when people want to use them to write something else : small command-line tools, web applications, mobile apps, etc.
Ideally, IDEs would be highly optimised and tuned for the application we do want to write. In practice, that usually means your IDE needs to be loaded up with a whole lot of new plugins for each new application. But because adding and taking away plugins is kind of clunky, you’re left with all the historical plugins you installed for the last application; and anyway, all the plugins are second-class citizens compared to the activities that were assumed to be standard when the IDE was originally released.

That translates into … the IDE is overloaded with options and SLOOOOW.

In 15 years, I’ve never owned a computer that was fast enough to run Eclipse without me feeling like I was trying to type through toffee. I’d like to install Android Studio … but it seems I don’t have enough of anything on my computer to run it. Not memory, not disk space, not screen resolution.

The ideal IDE would be nothing but a plain editor. And everything else would be a plugin. So that it could be radically reconfigured for each new application type. And that’s why people love Vim and Emacs, which work to that principle.

Apart from Emacs (which I have a love-hate relationship with), I think I’ve liked two IDEs in my time : the original classic VB, which was perfect for me when I wanted to write simple Windows GUI programs, and Processing, which is the perfect IDE for writing little computer-art programs. (Because that’s all it knows how to do.)

I’ve used a Python IDE which was OK. But I didn’t miss it when I moved back to a simple (tabbed) editor. I’ve used various IDEs to write C++, but they’ve always lacked the most obvious thing I’ve wanted in a C++ environment : useful help in finding and linking the libraries I’m trying to use. [rant]Despite library management being a big part of C / C++ development, most IDEs I’ve seen seem to treat finding the library you want, and configuring the compiler to include it, as some fiddly infrastructure thing that they’re embarrassed to get their hands dirty with. Why the hell don’t C++ IDEs have a big “Fix the fucking paths” button on their toolbar? Better still, why don’t they just fix the fucking paths without me having to do anything?[/rant]
tl;dr : what’s wrong with IDEs?

1) Too slow.

2) Cluttered up with too many irrelevant options. Why can’t they focus on the ones relevant to me now?

I think we’re partly to blame though. We kind of hope for one big tool that will do everything, rather than accept that we need different tools for different applications. I’m hoping that, in the future, we’ll end up with specialized development editors, perhaps delivered in the browser. So if you want to write C++ for games on Windows, you go to web-ides.com, select the “C++ for games on Windows” page, and get an editor / dev environment that’s specialized just for that.

Ward Cunningham Interview

The job was really to take C++, which was a fairly static language, and show people how to write dynamic programs in a static language. That’s what most of the patterns in that book were about. And in the process, patterns extended the life of C++ by a decade, which is not what I thought would happen. What I thought would happen is people, when they learned these patterns, would look at them and say, “Wow, these patterns are hard in C++ and they’re easy in Smalltalk. So if I want to think in terms of these patterns, I might as well use a language where they’re easily expressed.” And extend the life of Smalltalk by a decade. But the opposite happened.

I always suspected that the patterns everyone got so excited about were basically a way of overcoming static typing. Ward confirms it 🙂