To add to all the excellent answers in this thread: unit tests are massively overrated.
It is good to know that your base64 encoding function is tested for all corner cases, but integration and behaviour tests for the external interface/API are more important than exercising an internal implementation detail.
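(A minimal sketch of the kind of unit test I mean, with a toy base64_encode standing in for the internal helper; the function and its contract are made up purely for illustration:

    #include <assert.h>
    #include <stddef.h>
    #include <string.h>

    /* Toy base64 encoder, standing in for the internal helper under test. */
    static size_t base64_encode(char *dst, const unsigned char *src, size_t len)
    {
        static const char tbl[] =
            "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
        size_t i, o = 0;

        for (i = 0; i + 2 < len; i += 3) {
            dst[o++] = tbl[src[i] >> 2];
            dst[o++] = tbl[((src[i] & 0x03) << 4) | (src[i + 1] >> 4)];
            dst[o++] = tbl[((src[i + 1] & 0x0f) << 2) | (src[i + 2] >> 6)];
            dst[o++] = tbl[src[i + 2] & 0x3f];
        }
        if (len - i == 1) {
            dst[o++] = tbl[src[i] >> 2];
            dst[o++] = tbl[(src[i] & 0x03) << 4];
            dst[o++] = '=';
            dst[o++] = '=';
        } else if (len - i == 2) {
            dst[o++] = tbl[src[i] >> 2];
            dst[o++] = tbl[((src[i] & 0x03) << 4) | (src[i + 1] >> 4)];
            dst[o++] = tbl[(src[i + 1] & 0x0f) << 2];
            dst[o++] = '=';
        }
        dst[o] = '\0';
        return o;
    }

    int main(void)
    {
        char out[16];

        /* Corner cases: empty input and both '=' padding paths (RFC 4648 vectors). */
        assert(base64_encode(out, (const unsigned char *)"", 0) == 0
               && strcmp(out, "") == 0);
        base64_encode(out, (const unsigned char *)"f", 1);
        assert(strcmp(out, "Zg==") == 0);
        base64_encode(out, (const unsigned char *)"fo", 2);
        assert(strcmp(out, "Zm8=") == 0);
        base64_encode(out, (const unsigned char *)"foo", 3);
        assert(strcmp(out, "Zm9v") == 0);
        return 0;
    }

Useful, sure, but passing it tells you nothing about whether the system as a whole behaves.)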
What is the external interface of the kernel? What is its surface area? A kernel is so central and massive the only way to test its complete contract with the user (space) is... just to run stuff and see if it breaks.
TDD has some good ideas, but for a while it had turned into a religion. While tests are great to have, an underrated integration testing system is simply having people run your software: if no one complains, either no one is using it, or the software is doing its job. Do you really need tests for the read(2) syscall when Linux is running on a billion devices and that syscall is called some 10^12 times per second globally?
Isn't SQLite a counterexample? They have far more test code than actual library code. https://www.sqlite.org/testing.html
SQLite doesn't have hardware drivers (which IIRC still account for the majority of kernel bugs), and those need actual hardware to test, as hardware mocks are at best mildly useful: real hardware can be... weird.
And unit testing is relatively useless there; what would be useful are end-to-end tests that start from the userspace API. That's a much harder task, although hopefully such tests wouldn't need to change very often.
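To sketch what "start from the userspace API" could look like: the toy test below exercises read(2) entirely through real syscalls, no mocks, checking a few contract points straight from the man page (written data round-trips, EOF returns 0, a closed descriptor fails with EBADF). A real suite would obviously cover vastly more than this.

    #include <assert.h>
    #include <errno.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[16];
        char path[] = "/tmp/read-e2e-XXXXXX";
        int fd = mkstemp(path);           /* real file, real kernel, no mocks */
        assert(fd >= 0);

        assert(write(fd, "hello", 5) == 5);
        assert(lseek(fd, 0, SEEK_SET) == 0);

        /* read(2) returns the bytes previously written... */
        assert(read(fd, buf, sizeof buf) == 5 && memcmp(buf, "hello", 5) == 0);

        /* ...then 0 at end of file... */
        assert(read(fd, buf, sizeof buf) == 0);

        /* ...and fails with EBADF on a closed descriptor. */
        assert(close(fd) == 0);
        assert(read(fd, buf, sizeof buf) == -1 && errno == EBADF);

        unlink(path);
        return 0;
    }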
There's also the Linux Test Project, which is technically third party. It's not clear to me how extensive it is, but for a project as important as Linux I think it has to be graded as "needs improvement."