Academic code typically just has to work once or a handful of times, for a small number of highly expert users, frequently just the author. Ease of update is of the essence: you'll rewrite most of it many times as your understanding of the problem changes. You can use all sorts of ugly hacks as long as you get what you're after.

If any of it ever becomes commercially released or otherwise productized, it'll need a complete rewrite to make it usable and maintainable by people other than yourself. But most of the code will never get to that point, because most of what you've done up until about a week ago is wrong and worthless, and the current, correct-until-next-week iteration is held together with duct tape.

Speed only matters on the infrequent hot paths, which is why Python is popular. The rule of thumb is that nobody cares about speed or resource consumption until it needs to run on a cluster; then you care a lot, because cluster time is metered and simulations can get huge. Fortran is still fairly popular because many math libraries are written in it, and porting them would require huge effort from a very small group of very busy people.
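A minimal sketch of the hot-path point, with made-up function names: most of the script stays as plain, easy-to-rewrite Python, and only the inner loop gets replaced, here by an algebraic shortcut rather than a compiled library, which is the cheapest version of the same move.

```python
def pairwise_energy_naive(xs):
    """Pure-Python O(n^2) inner loop: fine for a laptop-sized test run,
    painful once the cluster bill arrives."""
    total = 0.0
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            total += (xs[i] - xs[j]) ** 2
    return total

def pairwise_energy_fast(xs):
    """Same quantity in O(n) via the identity
    sum_{i<j} (x_i - x_j)^2 = n * sum(x^2) - (sum x)^2."""
    n = len(xs)
    s = sum(xs)
    s2 = sum(x * x for x in xs)
    return n * s2 - s * s

# The two agree; only the hot path changed, the rest of the script wouldn't.
xs = [float(i % 7) for i in range(500)]
assert abs(pairwise_energy_naive(xs) - pairwise_energy_fast(xs)) < 1e-6
```

In real codebases the "fast" version is more often a call into NumPy, BLAS, or a decades-old Fortran routine, but the structure of the fix is the same: leave the duct tape alone and optimize only the loop that's metered.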

Most of the coders are not software engineers and don't know or don't follow best practices; on the other hand, the popular best practices weren't designed for their use case and frequently don't fit. Versioning (of the I-don't-know-which-of-the-fifty-copies-on-my-laptop-is-the-right-one type) is a big issue, and data loss happens. Git/GitHub and friends have a steep learning curve, but so do all the various workflow systems designed for research use.
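For what it's worth, the versioning fix needs only a handful of git commands; a throwaway sketch (all paths and messages made up) of replacing the fifty-copies scheme with one repository:

```shell
cd "$(mktemp -d)"                       # throwaway directory for the demo
git init -q sim-project                 # one repo instead of fifty copies
cd sim-project
git config user.email you@example.com   # identity needed in a fresh environment
git config user.name "You"

echo "v1 of the model" > model.py
git add model.py
git commit -q -m "first working version"

echo "v2: fix the boundary condition" > model.py
git commit -q -am "rewrite after this week's insight"

git log --oneline                       # every iteration recoverable
```

That's the whole workflow for a solo researcher; the steep part of the curve (branches, merges, remotes) only matters once collaborators show up.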

We've had good luck with some academic code bases in production -- ETH Zurich puts out some great code [1,2].

[1] https://github.com/libigl/libigl

[2] https://github.com/pybind/pybind11