What does HackerNews think of redo?
Smaller, easier, more powerful, and more reliable than make. An implementation of djb's redo.
It's not going in exactly the same direction you are, but building a mental model of it was salutary for thinking through the purpose and implementation of build tools, and hopefully it'll be similarly useful to you.
If you are looking for a Make replacement, I cannot sing enough praises for redo: https://github.com/apenwarr/redo
I wrote a blog post [2] showing how to integrate compiler dependency output for both ordinary dependencies and non-existence dependencies. The game “Liberation Circuit” [3] can be built with my redo implementation [1]; you can also output a dependency graph usable with Graphviz [4] using “redo-dot”.
There is only one other redo implementation that I would recommend, the one from Jonathan de Boyne Pollard [5], who rightly notices that compilers should output information about non-existence dependencies [6].
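To illustrate what a non-existence dependency buys you, here is a rough sketch (the file names are hypothetical; redo-ifchange and redo-ifcreate are the standard helpers every redo implementation provides): a target can record that an optional override file was absent, so creating that file later triggers a rebuild.

    # config.h.do -- sketch: use a local override if present, else a default
    if [ -e config.local.h ]; then
        redo-ifchange config.local.h      # rebuild if the override changes
        cat config.local.h > "$3"
    else
        redo-ifcreate config.local.h      # rebuild if the override ever appears
        redo-ifchange config.default.h    # rebuild if the default changes
        cat config.default.h > "$3"
    fi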
I would not recommend the redo implementation from Avery Pennarun [7], which is often referenced (and introduced me to the concept), mainly because it is not implemented well: It manages to be both larger and slower than my shell script implementation, yet the documentation says this about the sqlite dependency (classic case of premature optimization):
> I don't think we can reach the performance we want with dependency/build/lock information stored in plain text files
[1] http://news.dieweltistgarnichtso.net/bin/redo-sh.html
[2] http://news.dieweltistgarnichtso.net/posts/redo-gcc-automati...
[3] https://github.com/linleyh/liberation-circuit
[4] https://en.wikipedia.org/wiki/Graphviz
[5] http://jdebp.eu./Softwares/redo/
[6] http://jdebp.eu./FGA/introduction-to-redo.html#CompilerDefic...
I've worked in embedded software for over a decade, and all of the projects have used Make.
I have a love-hate relationship with Make. It's powerful and effective at what it does, but its syntax is bad and it lacks good data structures and some basic functions that become useful once your project reaches several hundred files and multiple outputs. In other words, it does not scale well.
Worth noting that JGC's GNU Make Standard Library (GMSL) [1] appears to be a solution for some of that, though I haven't applied it to our current project yet.
Everyone ends up adding their own half-broken hacks to work around some of Make's limitations. Most commonly, extracting header file dependencies from C files and integrating them into Make's dependency tree.
I've looked at alternative build systems. Among blank-slate candidates, tup [2] seemed the most interesting for doing native dependency extraction and leveraging Lua for its data structures and functions (though I initially rejected it due to the silliness of its front page). djb's redo [3] (implemented by apenwarr [4]) looked like another interesting concept, until you realize that punting Make's macro syntax to the shell means the tool is only doing half the job: having a good language to specify your targets and dependencies is actually most of the problem.
Oh, and while I'm around I'll reiterate my biggest gripe with Make: it has two mechanisms to keep "intermediate" files, .INTERMEDIATE and .PRECIOUS. The first does not take wildcard arguments; the second does, but it also keeps any half-generated, broken artifact if the build is interrupted, which is a great way to break your build. Please can someone better than me add wildcard support to .INTERMEDIATE.
[1] http://gmsl.sourceforge.net
[2] http://gittup.org/tup/ Also its creator, Mike Shal, now works at Mozilla on their build system
The two weak points are: (1) multiple outputs from a compilation step (e.g. y.tab.c and y.tab.h from yacc) are not properly supported, and (2) no Windows support. Other than that it's the perfect minimalistic make replacement -- what make should have been.
Additionally, there is tup [1]. With some assumptions about the build process that usually hold for non-distributed builds, it is the fastest, simplest make replacement: you just write a list of commands that build your final outputs, and by tracing the processes it figures out exactly what needs to be done next time -- nothing more, and nothing less.
[0] https://github.com/apenwarr/redo [1] http://gittup.org/tup/
The apenwarr implementation includes a full Python implementation as well as a minimal version in < 200 lines of sh. The minimal version doesn't support incremental rebuilds -- the out-of-date-ness tracking in the full Python version uses sqlite -- but it's good for understanding the redo concept. Also good for embedded contexts.
I used the full redo implementation for some data processing tasks once, with mixed results. It was a situation where I couldn't declare the dependency graph up front. With redo, each target declares its dependencies locally when it builds, and redo assembles and tracks the dynamic dependency graph. It's pretty neat, but it became difficult to reason about and debug. Could be that I never got comfortable with the new paradigm, or could be that essential tooling was missing; I'm not sure. I still think redo is promising.
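For concreteness, a data-processing target in that style might look roughly like this (a sketch; the file and script names are hypothetical):

    # report.csv.do -- sketch: dependencies are declared while the target builds
    redo-ifchange clean.py raw.csv      # record exactly what this step used
    python3 clean.py raw.csv > "$3"     # write the result to redo's temporary file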
Anyway after a decade of messing with shiny new build tools, I finally learned to stop worrying and love the bomb (make). It's weird and warty but surprisingly capable. Worth the learning investment. Oh and jgrahamc's "GNU Make Book" is great. [2]
[0] https://github.com/apenwarr/redo
So if, like any significant project I know of, your build specifies lots of wildcard ("implicit") rules, you're stuck using .PRECIOUS, in which case you can never interrupt the build without risking leaving it in a broken state.
FWIW, djb's "redo" concept [2] is based centrally around atomicity to avoid such a problem.
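Concretely, a .do script writes its result to a temporary file that redo only renames over the real target when the script exits successfully, so an interrupted build can't leave a half-written output behind. A minimal sketch using the $3 convention (file names hypothetical):

    # sorted.txt.do -- output goes to the temp file $3, never the target itself
    redo-ifchange input.txt
    sort input.txt > "$3"   # redo moves $3 into place only on success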
[1] https://www.gnu.org/software/make/manual/html_node/Special-T...
Cross-compiled kernel, bootloader, custom packages, full distro rootfs to final sdcard image without a single Makefile. I owe that man a beer.
It seems like the downside to using redo would be that there are .do files sitting throughout the source tree. It'd be nice if there were a way to consolidate several .do files into a single Redofile, similar to a Makefile or Sakefile.
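That said, implementations such as apenwarr's search parent directories for default.do (and default.<ext>.do) files, so a single dispatcher at the project root can play a Makefile-like role. A rough sketch, with hypothetical target names:

    # default.do at the tree root -- one file covering several kinds of targets
    case "$1" in
        *.o)  redo-ifchange "${1%.o}.c"
              gcc -c -o "$3" "${1%.o}.c" ;;
        prog) redo-ifchange main.o util.o
              gcc -o "$3" main.o util.o ;;
        *)    echo "don't know how to build $1" >&2
              exit 1 ;;
    esac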
Like the non-minimal redo, sake is also implemented in Python. I wonder how they compare, performance-wise.
Do you have some kind of way to verify that your makefile dependencies conform to your source dependencies? Is clang/gcc tracking sufficient for your use case? What about upgrading the compiler itself, does your makefile depend on that? If so, how?
Have you considered tup [0]? Or djb-redo [1]? Both seem infinitely better than Make if you are paranoid. tup even claims to work on Windows, although I have no idea how they do that (or what the slowdown is like). Personally, I'm in the old Unix camp of many small executables, none of which goes over 1 MB statically linked (modern "small"), so it's rarely more than 3 seconds to rebuild an executable from scratch.
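For the compiler-upgrade question specifically, redo lets you depend on the toolchain explicitly. A sketch (the command -v lookup is just one way to do it, and how well a dependency on a file outside the project tree is tracked depends on the implementation):

    # default.o.do -- sketch: rebuild objects when the compiler binary changes too
    CC=$(command -v gcc)
    redo-ifchange "$CC" "$2.c"
    "$CC" -c -o "$3" "$2.c"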
> (deterministic mode for ar)
Why do you care about ar determinism? Shouldn't it be ld determinism you are worried about?
In general, I found CMake quite usable for my needs, and quite clean. It also required less build-system code than redo. CMake fits quite nicely into a (C or C++) project that consists of many binaries and libraries which can depend on each other.
"as you can see in default.o.do, you can declare a dependency after building the program. In C, you get your best dependency information by trying to actually build, since that's how you find out which headers you need. redo is based on the following simple insight: you don't actually care what the dependencies are before you build the target; if the target doesn't exist, you obviously need to build it. Then, the build script itself can provide the dependency information however it wants; unlike in make, you don't need a special dependency syntax at all. You can even declare some of your dependencies after building, which makes C-style autodependencies much simpler."
https://github.com/apenwarr/redo
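In practice that looks roughly like the following default.o.do, a sketch along the lines of the apenwarr documentation ($1 is the target, $2 the target without its extension, $3 the temporary output file):

    # default.o.do -- build foo.o from foo.c, registering header deps afterwards
    redo-ifchange "$2.c"
    gcc -MD -MF "$2.d" -c -o "$3" "$2.c"
    # the compiler has just told us which headers it actually read; record them
    read DEPS < "$2.d"
    redo-ifchange ${DEPS#*:}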
It might be interesting to see if the two of you interpreted DJB's documentation in the same ways.