Solving problems by studying tomes of knowledge is the job description of wizards and witches. For some problems, large improvements towards optimality are effectively locked away in these papers. As the article points out, though, there generally isn't much benefit in the context of building CRUD apps.
Some problem areas have larger research communities than others. For example, there aren't nearly as many papers on real-time path planning in agent-mutable environments as in static environments. I assume this is because we still don't have Boston Dynamics robots in people's homes. If we could get the cost low enough, it might be more profitable to send mining robots to Mars than people, but I guess there are other applications as well.
I spent some months trying to find, understand, and implement the state-of-the-art algorithms in real-time path planning within mutable environments (Minecraft). I started with graph-search algorithms like A*[0] and their extensions. For my problem these were very slow. D* Lite[1] seemed like an improvement, but it has issues with updates near its root. Sampling-based planners came next, such as RRT[2], RRT*, and many others.
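For anyone unfamiliar with the baseline, here is a minimal A* sketch on a 4-connected grid with a Manhattan heuristic (all names are mine, not from any particular paper; a real planner would need a better world representation and cost model):

```python
import heapq

def astar(grid, start, goal):
    """Minimal A* on a 4-connected grid; grid[r][c] == 1 is an obstacle."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_heap = [(h(start), 0, start)]   # entries: (f = g + h, g, node)
    g = {start: 0}
    came_from = {}
    while open_heap:
        _, cost, cur = heapq.heappop(open_heap)
        if cur == goal:  # reconstruct the path by walking parents back to start
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        if cost > g[cur]:
            continue  # stale heap entry; a cheaper route to cur was found later
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g[cur] + 1
                if ng < g.get((nr, nc), float("inf")):
                    g[(nr, nc)] = ng
                    came_from[(nr, nc)] = cur
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None  # goal unreachable
```

The slowness I hit comes from exactly this structure: every world change invalidates the search, and replanning from scratch over a large voxel grid is expensive, which is what D* Lite tries to avoid by repairing the previous search.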
I built a virtual reality website to visualize and interact with the RRT* algorithm. I can release this if anyone is interested. I've found that many papers do a poor job of describing when their algorithms perform poorly. The best way I've found to understand an algorithm's behavior is to implement it, apply it to different problems, and visualize the execution over many problem instances. This is time consuming, but in my experience it yields the best understanding.
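To give a feel for what such a visualization is animating, here is a bare-bones 2D RRT sketch (not RRT*, which adds rewiring; the goal-bias constant and all names are my own choices):

```python
import math, random

def rrt(start, goal, is_free, bounds, step=0.5, goal_tol=0.5, iters=2000, seed=0):
    """Bare-bones RRT in 2D: grow a tree toward random samples, stop near the goal."""
    rng = random.Random(seed)
    nodes = [start]
    parent = {0: None}  # child index -> parent index
    (xmin, xmax), (ymin, ymax) = bounds
    for _ in range(iters):
        # 10% of the time, sample the goal itself to bias growth toward it
        sample = goal if rng.random() < 0.1 else \
            (rng.uniform(xmin, xmax), rng.uniform(ymin, ymax))
        # find the nearest existing tree node and step toward the sample
        i = min(range(len(nodes)), key=lambda j: math.dist(nodes[j], sample))
        nx, ny = nodes[i]
        d = math.dist((nx, ny), sample)
        if d == 0:
            continue
        new = (nx + step * (sample[0] - nx) / d, ny + step * (sample[1] - ny) / d)
        if not is_free(new):
            continue  # extension blocked by an obstacle
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) < goal_tol:
            # walk back to the root to recover the path
            path, j = [], len(nodes) - 1
            while j is not None:
                path.append(nodes[j])
                j = parent[j]
            return path[::-1]
    return None  # no path found within the iteration budget
```

Watching those random extensions fill the space makes the bug-trap failure mode below very obvious: the tree pours into the trap's mouth and wastes nearly all of its samples inside.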
Sampling-based planners have issues with formations like bug traps, which was a large problem for my use case. Moving over to Monte Carlo Tree Search (MCTS)[3] worked very well, given the nature of decision making when moving through an environment the agent can change. The way it builds great plans from random attempts at path planning is still shocking to me.
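To show the select/expand/rollout/backpropagate cycle that makes this work, here is a tiny UCT loop (the common MCTS variant) applied to a toy one-dimensional walk; the toy problem and every name here are mine, not from the survey:

```python
import math, random

def mcts_best_action(root, actions, step, is_goal, horizon=6, iters=3000, seed=1):
    """Tiny UCT sketch: select by UCB1, expand one node per simulation,
    finish with a random rollout, and backpropagate the reward."""
    rng = random.Random(seed)
    N, Q = {}, {}  # per (state, depth) node: visit count / total reward per action

    def rollout(state, depth):
        # random playout until the horizon; reward 1 if the goal is reached
        while depth < horizon:
            if is_goal(state):
                return 1.0
            state = step(state, rng.choice(actions))
            depth += 1
        return 1.0 if is_goal(state) else 0.0

    def simulate(state, depth):
        if is_goal(state):
            return 1.0
        if depth >= horizon:
            return 0.0
        key = (state, depth)
        if key not in N:  # expand: add the node, estimate it with one rollout
            N[key] = {a: 0 for a in actions}
            Q[key] = {a: 0.0 for a in actions}
            return rollout(state, depth)
        total = sum(N[key].values())

        def ucb(a):  # UCB1: average reward plus an exploration bonus
            if N[key][a] == 0:
                return float("inf")
            return Q[key][a] / N[key][a] + math.sqrt(2 * math.log(total) / N[key][a])

        a = max(actions, key=ucb)
        r = simulate(step(state, a), depth + 1)
        N[key][a] += 1  # backpropagate the result up the visited path
        Q[key][a] += r
        return r

    for _ in range(iters):
        simulate(root, 0)
    stats = N[(root, 0)]
    return max(actions, key=lambda a: stats[a])  # most-visited root action
```

On a number line where the agent starts at 0, the goal is +3, and the actions are +1/-1, `mcts_best_action(0, (1, -1), lambda s, a: s + a, lambda s: s == 3)` should settle on the +1 action: the random rollouts reach the goal far more often on that side, and the visit counts concentrate there.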
Someone has to incorporate the best aspects of these papers into novel solutions. There is an opportunity to extract value from the information differential between research and industry. For some reason many papers do not provide source code; a good open-source implementation brings these improvements to a much larger audience.
Some good resources I've found are websites like Semantic Scholar[4] and arXiv[5], along with survey papers such as the one on MCTS[3]. The latter half of this article is what gets me excited to build new things. I would encourage people to explore the vast landscape of problems to find one that interests them, then look into the research.
[0] https://en.wikipedia.org/wiki/A*_search_algorithm
[1] https://en.wikipedia.org/wiki/D*
[2] https://en.wikipedia.org/wiki/Rapidly-exploring_random_tree
[3] https://www.semanticscholar.org/paper/A-Survey-of-Monte-Carl.../c37f1baac3c8ba30250084f067167ac3837cf6fd
[4] https://www.semanticscholar.org/
[5] https://arxiv.org/
Although the method uses a variant of A* and might not be that "fancy" in academic terms, it's astonishing how much it can achieve (see demos like [1] and [2]), and it might actually be far more useful to study closely than more theoretical papers.