Power is a weakness in a programming language, not a strength - some of the most interesting research languages today are not even Turing-complete. It's easy to add expressiveness to a clunky language; it's much harder to add limits to an expressive language that prevent expressing nonsense.
Some really interesting things may happen when you add arbitrarily powerful macros to a bondage-and-discipline, non-Turing-complete language.
One of them is that, each time you read a new program, you have to learn a new language. Probably no language has benefited from, and suffered from, this more than Lisp.
> One of them is that, each time you read a new program, you have to learn a new language
Which is not bad at all. You still have to do it with programs written in the same language but built on different libraries and targeting different problem domains - and in that case the general-purpose language obscures the essence of the code.
And if you're using a well-designed DSL, and you're familiar with the problem domain, it will read naturally, just like pseudocode.
A lot of domain languages need infix operators, though. And even if you understand the domain, it's hard to read and refactor code in the presence of unrestricted macros, whereas when such DSLs are implemented via the type system, the tooling can still understand the code.
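For example, here's a minimal Haskell sketch (all names illustrative, not a real library) of a DSL embedded through the type system, infix operators included. Since DSL terms are ordinary typed expressions, ill-typed programs are rejected by the host compiler:

```haskell
{-# LANGUAGE GADTs #-}
-- Hypothetical sketch: a tiny expression DSL embedded via the host
-- type system, with the infix operators many domains want.
module TinyExpr where

infixl 6 .+.
infixl 7 .*.

data Expr a where
  LitI :: Int  -> Expr Int
  LitB :: Bool -> Expr Bool
  Add  :: Expr Int -> Expr Int -> Expr Int
  Mul  :: Expr Int -> Expr Int -> Expr Int
  If   :: Expr Bool -> Expr a -> Expr a -> Expr a

-- Infix smart constructors for a more domain-natural surface syntax.
(.+.), (.*.) :: Expr Int -> Expr Int -> Expr Int
(.+.) = Add
(.*.) = Mul

-- A straightforward interpreter; the type index guarantees that,
-- e.g., If's condition is a boolean and its branches agree.
eval :: Expr a -> a
eval (LitI n)   = n
eval (LitB b)   = b
eval (Add a b)  = eval a + eval b
eval (Mul a b)  = eval a * eval b
eval (If c t e) = if eval c then eval t else eval e

ok :: Int
ok = eval (If (LitB True) (LitI 1 .+. LitI 2 .*. LitI 3) (LitI 0))

-- Rejected statically, with an ordinary type error:
--   bad = If (LitI 1) (LitI 2) (LitI 3)
```

Type-directed completion, jump-to-definition, and refactoring tools all work on this unchanged, because there's nothing here but the host language.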
And? DSLs can have any syntax you like. And any type system you want.
And DSLs done the right way, via macros, are much better at integrating with tools than any ad hoc interpreted DSL would ever be. You can easily have syntax and semantic highlighting inferred, along with auto-indentation, intellisense and all the bells and whistles. For no extra cost.
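To make that concrete, here's a minimal Haskell sketch of a macro-implemented DSL using Template Haskell. The `expr` quasiquoter and its toy sums-of-integers grammar are hypothetical, not a real library; the point is that parsing happens at compile time, inside the host toolchain:

```haskell
{-# LANGUAGE TemplateHaskell #-}
-- Hypothetical sketch: a tiny sums-of-integers DSL embedded via a
-- compile-time macro (quasiquoter). Not a real library.
module ExprQQ (expr) where

import Language.Haskell.TH
import Language.Haskell.TH.Quote

-- Parse a string like "1+2+3" at compile time and splice in the
-- resulting constant. Parse errors become ordinary compile errors.
parseExpr :: String -> Q Exp
parseExpr s =
  case traverse readInt (splitPlus s) of
    Just ns -> litE (integerL (fromIntegral (sum ns)))
    Nothing -> fail ("expr: cannot parse " ++ show s)
  where
    readInt w = case reads w of
      [(n, "")] -> Just (n :: Int)
      _         -> Nothing
    splitPlus t = case break (== '+') t of
      (w, [])       -> [w]
      (w, _ : rest) -> w : splitPlus rest

expr :: QuasiQuoter
expr = QuasiQuoter
  { quoteExp  = parseExpr
  , quotePat  = unsupported
  , quoteType = unsupported
  , quoteDec  = unsupported
  }
  where
    unsupported _ = fail "expr: expression context only"

-- Usage, in another module (requires the QuasiQuotes extension):
--   six :: Int
--   six = [expr|1+2+3|]   -- expands at compile time to the literal 6
```

Because the host compiler runs the DSL's parser, error reporting and build integration come through the normal toolchain for free.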
> And DSLs done the right way, via macros, are much better at integrating with tools than any ad hoc interpreted DSL would ever be. You can easily have syntax and semantic highlighting inferred, along with auto-indentation, intellisense and all the bells and whistles. For no extra cost.
No, you can't. If the macro is arbitrary code then no tool can offer those things - there's no way to offer intellisense if you don't know which strings are meaningful in the language, and an unconstrained macro could use anything to mean anything.
The tools could have hooks for this.
It doesn't take much imagination.
You know how GNU Bash can be extended with custom completion for any command, so that when you're, say, in the middle of a git command, it will complete on a branch name or whatever?
Similarly, we can teach a syntax highlighter, completer, or whatever else in some IDE how to work with our custom macro.
Sure - but at that point we've lost a lot of the value of having a standardized language at all. The whole point of a language standard is that multiple independent tools can be written to work with it - that your profiler and your linter and your compiler can be written independently, because they'll be written to the spec. If everyone has to customize all their tools to work with their own code, that's a lot of duplicated effort. Better to have a common standard for how you embed DSLs in the language, so that all the tools already understand how to work with them.
It is a broken approach. A much better way is to have a standard protocol (see SLIME for example, or IPython, or whatever else) and use the same tools your compiler does, instead of reimplementing all that crap over and over again from the language standard.
I expect that, before long, not many C++ tools that don't use libclang will remain.
At that point you're essentially advocating treating libclang as the standard. All the usual problems of "the implementation is the spec" apply.
A good example of such a spec would be something like https://github.com/kframework/c-semantics