  Here’s the problem with IP addresses: there aren’t enough of
  them. [...] As a consequence, the Internet has allowed
  intermediate computers to rule. These are like parasites that
  have grown too large to remove without killing the host. The
  technical flaw that favored intermediate computers prefigured
  a world where middlemen business models thrive.
The handwave is the word "prefigured": how did IPv4 and NAT play any role in the dominance of Facebook, Airbnb, et cetera? This is an analogy masquerading as an argument.

  It is not fundamentally necessary to have any intermediate 
  company profiting whenever a person reads news from their 
  friends, rents an apartment from a stranger, or orders a 
  ride from a driver.
The author provides no evidence that the services of Airbnb, Uber, etc. add no value. These companies carefully designed interfaces that help us find what we need; if they added no value, we would still be using newsgroups.

And this matters because the author misses why people get value from the "middlemen" in the first place: discovery. Even with a content-centered web there will be a need to find content and people, and new discovery engines and aggregators will emerge.

It's hard to imagine how things could be different, but for many services you don't need a middleman. For a long time, banking was thought to be a necessary intermediary because you "have to" trust at least one actor to verify transactions. It turns out you don't: distributed ledgers (Bitcoin, etc.) let any peer verify transactions independently.
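To make that concrete, here is a minimal sketch in Python of the kind of check a distributed ledger relies on, assuming a Bitcoin-style hash-based proof of work. The header bytes and the target below are toy values, not the real consensus rules; the point is only that verification requires no trusted party.

```python
import hashlib

def pow_hash(header):
    """Double-SHA256 of a block header, read as a little-endian integer
    (the convention Bitcoin uses when comparing a hash to the target)."""
    digest = hashlib.sha256(hashlib.sha256(header).digest()).digest()
    return int.from_bytes(digest, "little")

def verify(header, target):
    """Any peer can run this on its own; no bank or other middleman is
    needed to confirm the work behind a block was actually done."""
    return pow_hash(header) < target

# Toy demo: grind nonces until an (artificially easy) target is met,
# then re-check the result the way an independent peer would.
target = 2 ** 240  # real Bitcoin targets are vastly harder to hit
nonce = 0
while not verify(b"prev_hash|merkle_root|nonce=%d" % nonce, target):
    nonce += 1

header = b"prev_hash|merkle_root|nonce=%d" % nonce
print("found nonce:", nonce)
print("independent check passes:", verify(header, target))
```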

When it comes to search engines, two things shake the assumption that search must be centralized:

1. You can crawl DHTs, as https://github.com/boramalper/magnetico does for BitTorrent's DHT, and this can be self-hosted.
2. We too often assume that searches must be global, but in many cases you are implicitly constraining your search space to regional or interest boundaries. As a westerner you are probably not interested in content from Vietnam or Pakistan, and when searching the web, people rarely browse past the first page of Google results. Curated lists like `awesome-*` ( https://github.com/bayandin/awesome-awesomeness ) are therefore a good starting point for a decentralized curated catalogue covering that use case: a catalogue of programming websites compiled into a file of a few hundred megabytes could pretty much cover your interest boundary, and as a file it can be shared over Dat today very easily. Want to search another interest boundary? Run the same search software on a different catalogue file (see the sketch after this list). These things are possible.
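As an illustration, here is a minimal sketch of such search software in Python. The catalogue format is hypothetical (newline-delimited JSON with `url`, `title`, and `description` fields); nothing above prescribes a format, and the same approach works for any file you can fetch over Dat.

```python
import json
import sys

def search_catalogue(path, query):
    """Scan a local newline-delimited JSON catalogue for entries that
    contain every query term. No server involved: the catalogue is just
    a file that arrived over Dat (or any other file-sharing protocol)."""
    terms = query.lower().split()
    with open(path, encoding="utf-8") as f:
        for line in f:
            entry = json.loads(line)
            haystack = (entry["title"] + " " + entry["description"]).lower()
            if all(term in haystack for term in terms):
                yield entry

if __name__ == "__main__":
    # e.g.: python search.py programming-catalogue.ndjson "static site generator"
    for hit in search_catalogue(sys.argv[1], sys.argv[2]):
        print(hit["url"], "-", hit["title"])
```

Switching interest boundaries is then just pointing the same script at a different catalogue file.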

In other words, the need for centralization is overestimated. There may be a few genuinely centralized-middleman use cases, but most can be covered very well by decentralized alternatives.