That's a good set of questions for 2014. Questions that have become important more recently include:

- Imperative? Functional? Some mixture of both? Mixtures of the two tend to have syntax problems.

- Concurrency primitives. The cool kids want "async" now. Mostly to handle a huge number of slow web clients from one server process. Alternatively, there are "green threads", which Go calls "goroutines". All this stuff implies some level of CPU dispatching in compiled code.

- Concurrency locking. Really hard problem. There's the JavaScript run-to-completion approach, which reappears in other languages as "async". Ownership-based locking, as in Rust, is promising, but may lock too much. And what about atomic operations and lockless operations? A source of hard-to-find bugs (see the sketch after this list). Get this right.

- Ownership. Rust got serious about this, with the borrow checker. C++ now tries to do "move semantics", with modest success, but has trouble checking at compile time for ownership errors. Any new language that isn't garbage collected has to address this. Historically, language design ignored this problem, but that's in the past.

- Metaprogramming. Templates. Generics. Capable of creating an awful mess of unreadable code and confusing error messages. From LISP macros forward, a source of problems. Needs really good design, and there aren't many good examples to follow.

- Integration with other languages. Protocol buffers? SQL? HTML? Should the language know about those?

- GPUs. Biggest compute engines we have. How do we program them?
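To make the atomics/lockless point concrete, here's a minimal C++ sketch - my own illustration, not something any of the languages above prescribes - of the classic lost-update bug that goes away once the read-modify-write is actually atomic:

```cpp
// Two threads bump a shared counter one million times each. The plain `int`
// version races and usually loses updates; the std::atomic version does not.
#include <atomic>
#include <iostream>
#include <thread>

int main() {
    constexpr int iterations = 1'000'000;

    int plain = 0;                 // unsynchronized - undefined behavior when raced
    std::atomic<int> counter{0};   // each fetch_add is a single indivisible RMW

    auto work = [&] {
        for (int i = 0; i < iterations; ++i) {
            ++plain;                                          // load, add, store: can interleave
            counter.fetch_add(1, std::memory_order_relaxed);  // cannot be torn or lost
        }
    };

    std::thread a(work), b(work);
    a.join();
    b.join();

    std::cout << "plain:  " << plain   << " (expected " << 2 * iterations << ")\n"
              << "atomic: " << counter << "\n";
}
```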

> GPUs. Biggest compute engines we have. How do we program them?

1. Carefully. They only give you - roughly, and unless you're super lucky - around one order of magnitude improvement in raw flops and raw GB/sec of memory bandwidth over a perfectly-exploited CPU. (Not that it's easy to properly exploit a CPU, of course; it's actually quite difficult.) That means that if you cut even a few corners, you're going to lose that edge, and a (again, perfectly-programmed) CPU beats you.

2. With a programming language which has zero-cost or very-low-cost abstractions, and which models well the computations a GPU "thread" can perform. Preferably with some JITing capability, to be dynamic enough. For the first two, that trusty warthog, C++, does fairly well - especially with some help from nice libraries. (Shameless self-plug time... this kind of help: https://github.com/eyalroz/cuda-kat ; I'm thinking of submitting that to "Show HN" soon.) JITing for GPUs is maaaaasively under-explored and under-developed IMHO, and if someone is interested in collaborating on that, feel free to write me.
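To show what "low-cost abstraction" buys you on a GPU - a generic sketch only, not cuda-kat's actual API - here's a templated CUDA C++ kernel where the abstraction is resolved entirely at compile time:

```cpp
// One generic kernel, specialized per operation at compile time: the functor
// is inlined into the generated device code, so the abstraction adds no
// run-time dispatch.
#include <cstdio>
#include <cuda_runtime.h>

template <typename T>
struct Square {
    __device__ T operator()(T x) const { return x * x; }
};

template <typename T, typename Op>
__global__ void transform(const T* in, T* out, int n, Op op) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = op(in[i]);
}

int main() {
    const int n = 1 << 20;
    float *in = nullptr, *out = nullptr;
    cudaMallocManaged(&in,  n * sizeof(float));   // unified memory, to keep the sketch short
    cudaMallocManaged(&out, n * sizeof(float));
    for (int i = 0; i < n; ++i) in[i] = float(i);

    transform<<<(n + 255) / 256, 256>>>(in, out, n, Square<float>{});
    cudaDeviceSynchronize();

    printf("out[3] = %f\n", out[3]);              // prints 9.0
    cudaFree(in);
    cudaFree(out);
}
```

(Compiled with nvcc, this is ordinary C++ plus the `__global__`/`__device__` extensions; nothing about the template machinery costs anything on the device.)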

Would other languages do? If there's not enough ability to abstract, you get a mountain of specialized code; if there's too much abstraction, you start paying through the nose. Could Rust work? I don't know. Perhaps one of those "better C"s, like Zig? The fact that CPU-side libraries can't really run on the GPU kind of levels the playing field, working against languages with a large base of already-written software.

> Ownership... C++ now tries to do "move semantics", with modest success, but has trouble checking at compile time for ownership errors

Actually, C++ has, over the past decade or so, introduced several measures to address the issue of ownership and resource leakage. If you combine move semantics with library facilities (mostly smart pointers), the problem is half-solved. Static analysis is improving too, especially when you "decorate" parameters, e.g. with `owner<T>` or `not_null<T>` and such. The "C++ Core Guidelines" (https://github.com/isocpp/CppCoreGuidelines) aim to be machine-checkable whenever possible.
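For illustration - and assuming Microsoft's GSL as the implementation of the Guidelines' annotations, which isn't the only option - here's roughly what that "half-solved" combination looks like:

```cpp
// Ownership in modern C++: unique_ptr makes transfers explicit and
// compiler-enforced; the Core Guidelines annotations (here from Microsoft's
// GSL, header <gsl/pointers>) give static analyzers something to check.
#include <gsl/pointers>   // gsl::owner, gsl::not_null
#include <iostream>
#include <memory>
#include <utility>

struct Widget { int id = 42; };

// Ownership lives in the return type; callers must take it or drop it.
std::unique_ptr<Widget> make_widget() { return std::make_unique<Widget>(); }

// not_null documents (and run-time checks) that nullptr is not acceptable.
void use(gsl::not_null<const Widget*> w) { std::cout << w->id << "\n"; }

// owner<T*> is only an annotation: this raw pointer carries ownership and
// somebody must eventually delete it - analyzers can warn when nobody does.
gsl::owner<Widget*> legacy_make_widget() { return new Widget{}; }

int main() {
    auto w = make_widget();
    use(w.get());             // fine: observing pointer, known non-null
    auto w2 = std::move(w);   // ownership moved; `w` is now empty
    // use(w.get());          // use-after-move: not_null would trap on the nullptr,
                              // and this is exactly what static analysis tries to flag
    use(w2.get());

    gsl::owner<Widget*> raw = legacy_make_widget();
    delete raw;               // forget this and a Guidelines checker can complain
}
```

`owner<T*>` changes nothing at run time; it exists purely so a static analyzer (or a reader) knows which raw pointers still carry ownership in pre-smart-pointer code.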