Another Quora answer : Phil Jones’s answer to Why is C++ considered a bad language?

This is one of those rare occasions I disagree with Simon Kinahan; although his answer sets the scene for this one.

[Simon says that C++ isn’t a bad language. It’s the right choice if you need low-level memory control and the ability to build powerful higher-level abstractions.]

C++ is a bad language because it’s built on a flawed philosophy : that you should add power to a language by kludging it in “horizontally”, in the form of libraries, rather than “vertically” by building new Domain Specific Languages to express it.

Stroustrup is very explicit about this, rhetorically asking “why go to other languages for new features when you can add them as libraries in C++?”

Well, the answer is, adding new higher level conceptual thinking in the form of a library doesn’t really hide the old thinking from you. Or allow you to abandon it.

C++’s abstractions leak, more or less on purpose. Because you can never escape the underlying low-level thinking when you’re just using this stuff via libraries. You are stuck with the syntax and mindset of the low-level, even as you add more and more frameworks on top.

Yes, you can explicitly add garbage collection to a C++ program. But you can’t relax and stop thinking about memory management. It can’t disappear completely beneath the horizon of things you need to be aware of the way it does in Java.
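A minimal sketch of what I mean (my own example, not from the original answer) : std::shared_ptr is the library-level answer to memory management, but ownership still has to be reasoned about by hand.

```cpp
// A minimal sketch: even with the standard library's reference-counted
// smart pointers, memory management never disappears below the horizon
// the way it does under a tracing GC.
#include <iostream>
#include <memory>

struct Node {
    std::shared_ptr<Node> next;
    ~Node() { std::cout << "Node destroyed\n"; }
};

int main() {
    auto a = std::make_shared<Node>();
    auto b = std::make_shared<Node>();
    a->next = b;
    b->next = a;  // reference cycle: neither destructor ever runs
    // In Java the collector would eventually reclaim both objects.
    // Here *you* have to know that one of these links should have been
    // a std::weak_ptr. The low-level concern is still your problem.
}
```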

Yes, you can have higher-level strings that abstract away Unicode-ness etc. via a library. But you can never be sure that you won’t confront strings that are simple byte-arrays coming from another part of your large system.

Etc.
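To make the string example concrete, here’s a sketch (legacy_fetch_name is a stand-in I’ve invented for “some old C interface elsewhere in the system”, not a real API) :

```cpp
// A sketch of the string problem: the "higher-level" string type never
// quite banishes the byte-array world around it.
#include <cstring>
#include <iostream>
#include <string>

// Invented stand-in for an old C interface elsewhere in the large system.
const char* legacy_fetch_name() { return "caf\xc3\xa9"; }  // raw UTF-8 bytes

int main() {
    std::string name = legacy_fetch_name();  // the "higher-level" string
    // std::string neither knows nor cares that this is UTF-8:
    // size() counts bytes, not characters.
    std::cout << name << " : " << name.size() << " bytes\n";  // 5, not 4
    // And the raw-pointer world is always one call away.
    std::cout << std::strlen(legacy_fetch_name()) << "\n";
}
```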

C++’s ability to build high-level abstractions uncomfortably falls between two stools.

It encourages you to think you can and should be building large applications full of application logic. But it doesn’t give you the real abstracting power to focus only at that application level.

This explains the otherwise mysterious paradox : C is a good language, so how could something that is “C plus more stuff” possibly be a bad one? Well, it’s exactly the “more plussing” that’s the problem.

With C, you KNOW you should only use it to build relatively small things. It doesn’t pretend to offer you mechanisms to build big things. And so you turn to the obvious tools for scaling up : lex and yacc to build small scripting languages, the Unix pipe to orchestrate multiple small tools. C++ gives you just enough rope to hang yourself. Just enough powerful abstraction-building capacity in the language itself that you (or that guy who used to work in your company 15 years ago) thought it might be possible to reinvent half of Common Lisp’s data-manipulation capability and half an operating system’s worth of concurrent process management inside your sprawling monolithic application.

I don’t see why it shouldn’t be possible to combine low-level memory management and efficiency with high level abstraction building. But proper abstraction building requires something more like macros or similar capabilities to make a level of expression which really transcends the low level. That’s exactly what C++ lacks.
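By way of contrast, here’s my own invented illustration of what “horizontal” library power tends to look like : a toy pipeline “DSL” built from operator overloading, which gestures at a higher level but is still, visibly, C++ all the way down.

```cpp
// A rough illustration of "horizontal" power: a toy pipeline "DSL" built
// from operator overloading. It reads slightly higher-level, but the
// syntax, the templates and the error messages are still plain C++.
#include <iostream>
#include <utility>
#include <vector>

template <typename T, typename F>
auto operator|(const std::vector<T>& xs, F f) {
    std::vector<decltype(f(std::declval<T>()))> out;
    out.reserve(xs.size());
    for (const auto& x : xs) out.push_back(f(x));
    return out;
}

int main() {
    std::vector<int> xs{1, 2, 3};
    auto ys = xs | [](int x) { return x * 2; }
                 | [](int x) { return x + 1; };
    for (int y : ys) std::cout << y << '\n';  // 3 5 7
    // Nothing here transcends the base language; it only decorates it.
}
```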

My Quora answer : Phil Jones’s answer to What are some great truths of computer programming?

  1. The bug is your fault

  2. The bug is your fault

  3. The bug is your fault

  4. The bug is your fault

  5. The bug is your fault.

  6. No. The bug really IS your fault.

  7. The bug is your fault. Because you made an assumption that wasn’t true.

  8. The bug is your fault. And it won’t go away until you check your assumptions, find the one that isn’t true, and change it.

My Quora answer that’s pretty popular : Phil Jones’s answer to What do you think computers will be like in 10 years?

Related to my previous story of trying to use the CHIP for work.

This is a $9 CHIP (Get C.H.I.P. and C.H.I.P. Pro)

It runs Debian. A couple of weeks ago, I took one travelling with me instead of my laptop to see what it would be like to use for real work.

I successfully ran my personal “productivity” software (three Python-based wiki servers and some code written in Racket) from it. It also has a browser and Emacs, and I was doing logic programming with the minikanren library in Python. It runs the Sunvox synth and CSOUND too, though I wasn’t working on those on this trip.

In 10 years’ time, that computing power will be under a dollar. And if anyone can be bothered to make it in this format, the equivalent of this Debian machine will be tantamount to free.

Of course most of that spectacular power will be wasted on useless stuff. But, to re-emphasize, viable computing power that you can do real work with, will be “free”.

The pain is the UI. How would we attach real keyboards, decent screens etc. when we need them?

I HOPE that people will understand this well enough that our current conception of a “personal computer” will explode into a “device swarm” of components that can be dynamically assembled into whatever configuration is convenient.

I, personally, would LIKE the main processor / storage to live somewhere that’s strongly attached to my body and hard to lose (e.g. a watch or lanyard). I’d like my “phone” to become a cheap disposable touch-screen for this personal server rather than the current repository of my data.

I bought a cheap Bluetooth keypad for about $8. It was surprisingly OK to type on, but connections with the CHIP were unreliable. In 10 years’ time, that ought to be fixed.

So, in 10 years’ time, I personally want a computer on my wrist that’s powerful enough to do all my work with (that means programming and creating music). That can be hooked up to some kind of dumb external keyboard / mouse / screen interface (today the Motorola Lapdock is the gold standard) that costs something like $20. Sure, I’ll probably want cloud resources for sharing, publishing, storage and even high-performance processing, AI and “knowledge” etc.

And, of course, I want it to run 100% free-software that puts me in control.

This is all absolutely do-able.

A comment I made over on The End of Dynamic Languages

The problem with the static / dynamic debate is that the problems / costs appear in different places.

In static languages, the compiler is a gatekeeper. Code that gets past the gatekeeper is almost certainly less buggy than code that doesn’t get past the gatekeeper. So if you count bugs in production code, you’ll find fewer in statically typed languages.

But in static languages less code makes it past the compiler in the first place. Anecdotally, I’ve abandoned more “let’s just try this to see if it’s useful” experiments in Haskell, where I fought the compiler and lost, than in Clojure, where the compiler is more lenient. (Which has the knock-on effect of my trying fewer experiments in Haskell in the first place, and writing more Clojure overall.)

Static typing ensures that certain code runs correctly 100% of the time, or not at all.

But sometimes it’s acceptable for code to run 90% of the time, and to have a secondary system compensate for the 10% when it fails. There might even be cases where 90% failure and 10% success can still be useful. But only dynamic languages give you access to this space of “half-programs” that “work ok, most of the time”. Static languages lock you out of it entirely. They oblige you to deal correctly with all the edge cases.

Now that’s very good : you’ve dealt with the edge cases. But what if there’s an edge case that turns up extremely rarely, but costs three months of programmer time to fix? In a nuclear power station, that’s crucial. On a bog-standard commercial web-site, that’s something that can safely be put off until next year or the year after. But a static language won’t allow you that flexibility.

The costs of static and dynamic languages turn up in different places. Which is why empirical comparisons are still hard.

Ian Bicking’s post on Conway’s Corollary is a must-read thought on the isomorphism between organizational and product structures.

What, asks Bicking, if we don’t fight this, but embrace it? Organizational structures are allegedly for our benefit. Why not allow them to shape the product? Or, when this is inappropriate, why not recognize that the two MUST be aligned, and that if the product can’t follow the organization, we should refactor the organization to reflect and support the product?

I’m sure my answer / comment on What is Gradle in Android Studio? will be given short shrift and downvoted into oblivion fairly soon. (Maybe deservedly.)

But I’ll make it here :

[quote]

At the risk of being discursive, I think behind this is the question of why the Android Studio / Gradle experience is so bad.

Typical Clojure experience :

* Download project with dependencies listed in project.clj.
* Leiningen gets the dependencies thanks to Clojars and Maven.
* Project compiles.

Typical Android Studio / Gradle experience :

* “Import my Eclipse project”.
* OK project imported.
* Gradle is doing its thang … wait … wait … wait … Gradle has finished.
* Compile … can’t compile because I don’t know what an X is / can’t find Y library.

I’m not sure this is Gradle’s fault exactly. But the “import from Eclipse project” seems pretty flaky. For all of Gradle’s alleged sophistication and the virtues of a build-system, Android Studio just doesn’t seem to import the build dependencies or build-process from Eclipse very well.

It doesn’t tell you when it’s failed to import a complete dependency graph. Android Studio gives no useful help or tips as to how to solve the problem. It doesn’t tell you where you can manually look in the Eclipse folders. It doesn’t tell you which libraries seem to be missing. Or help you search Maven etc. for them.

In 2016, things like Leiningen / Clojars, or node’s npm, or Python’s pip, or Debian’s apt (and I’m sure many similar package managers for other languages and systems) all work beautifully … missing dependencies are a thing of the past.

Except with Android. Android Studio is now the only place where I still seem to experience missing-dependency hell.

I’m inclined to say this is Google’s fault. They broke the Android ecosystem (and thousands of existing Android projects / online tutorials) when they cavalierly decided to shift from Eclipse to Android Studio / Gradle without producing a robust conversion process. People whose projects work in Eclipse aren’t adapting them to AS (presumably because it’s a pain for them). And people trying to use those projects in AS are hitting the same issues.

And anyway, if Gradle is this super-powerful build system, why am I still managing a whole lot of other dependencies in the SDK Manager? Why can’t a project that needs, say, the NDK specify this in its Gradle file, so that it gets automatically installed and built against when needed? Why is the NDK special? Similarly for target platforms : why am I installing them explicitly in the IDE rather than just checking my project against them and having all this sorted for me behind the scenes?

[/quote]