I don't think latency is actually the most important feature of edge computing. Sure, it's great that with Cloudflare Workers your code runs very close to end users and gets low latency. I think the real advantages will be:

1. Set-it-and-forget-it scalability

2. Compliance with data locality laws

#2 is going to be the absolute wave of the future. Not just in Europe, but everywhere. Every country is going to introduce laws that require its citizens' data to stay in-region or in-country. A widely distributed edge will make that easy to handle, because it can become a configuration option.
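To make "it's a configuration option" concrete, here's a purely illustrative sketch (the policy names, country sets, and function are all made up, not any platform's real API) of how a data-locality rule could be expressed as configuration rather than architecture:

```javascript
// Illustrative sketch only: a hypothetical data-locality policy an edge
// platform could expose as configuration. All names here are invented.
const localityPolicy = {
  EU: "eu-only",   // GDPR-style: data must stay in the EU
  RU: "ru-only",   // in-country storage requirement
  default: "global",
};

// Pick the storage region for a user based on their country code.
function storageRegionFor(countryCode) {
  const euCountries = new Set(["DE", "FR", "NL", "IE"]);
  if (euCountries.has(countryCode)) return localityPolicy.EU;
  if (countryCode === "RU") return localityPolicy.RU;
  return localityPolicy.default;
}

console.log(storageRegionFor("DE")); // "eu-only"
console.log(storageRegionFor("US")); // "global"
```

The point isn't the lookup itself — it's that with a widely distributed edge, the same deployed code can satisfy different jurisdictions by changing only this table.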

What exactly do you mean by #1?

I mean that the promise of edge computing is that you write some code, deploy it, and don't worry about scaling, regions, availability zones, and the like. It's just code that gets run.
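A minimal sketch of that "just code that gets run" model, in the shape of a Cloudflare Workers module handler. On the platform you'd `export default` this object; here it's a plain `const` so the sketch stays self-contained and runnable anywhere `Request`/`Response` exist (e.g. Node 18+):

```javascript
// The entire deployable unit: a fetch handler. No servers, regions, or
// autoscaling groups to configure — the platform runs this wherever the
// request lands.
const worker = {
  fetch(request) {
    const url = new URL(request.url);
    return new Response(`Hello from the edge, path: ${url.pathname}`, {
      status: 200,
      headers: { "content-type": "text/plain" },
    });
  },
};

const res = worker.fetch(new Request("https://example.com/demo"));
console.log(res.status); // 200
```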

> It's just code that gets run.

I agree. But that promise is mostly unrelated to the edge, or edge computing; with the same constraints, the code could run anywhere.

To me the promise of edge is that it could work quite well with decentralized apps (not limited to blockchain-based). The work you guys are doing with IPFS is a great start.

Addendum: when you think about it, most current apps are centralized, so it's unsurprising that they work well with the cloud (AWS/Azure/Google); edge is mostly just CDN for now. Decentralization is where those models become incompatible, and where edge can show its strength.

CDN is mostly related to "content" (i.e., data) delivery. Edge is more generic, and relates not only to "content", but also to "processing" and "networking" to reduce the response time multiple clients experience.

Decentralized apps aren't really related, though one could certainly employ edge datacenters to build them.

The point I'm making is that edge can't do much right now in terms of "processing". Most data is centralized, and you'll need to hit a centralized system to access it. Facilities that need lower latency are more likely to keep their data on-prem. As of now, edge is mostly handling CDN-ish workloads.

However, that changes with decentralized apps since they place a different set of architectural demands, and don't have centralized datasources. For instance, a search in p2p space might involve connecting to a lot of peers - latency matters. Data (often signed chains) might need to be fetched from dozens of different sources, combined and queried locally - again latency matters. Clusters of people who you talk to are often co-located, edge wins again.

Latency doesn't matter when it's a hand-countable number of simultaneous queries (as in current apps). We can even work around it with approaches like batching, as GraphQL does.

Sure, most apps still follow the client-server architecture. To properly take advantage of edge computing, one needs to rearchitect current applications; microservices are one example of how to do it.

The point of edge computing is exactly to incentivize facilities/companies not to keep data on premises, or at least to ship some of it (e.g. the non-sensitive parts) out. On-prem doesn't scale the way most companies need it to.

Latency is key for some important applications like self-driving cars and industrial automation, not really for making some queries in GraphQL.

> ... and microservices is one example of how to do it.

The fundamental problem is where the data resides. Microservices are well understood today, but running them at the edge isn't; there isn't a clear path to do that for typical apps. So most microservices being used at the edge today are doing caching/transcoding/resizing, etc.
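The caching case is the one that works precisely because it dodges the data-residency problem. A sketch (the function and origin are illustrative, not any platform's API) of the cache-aside shape those edge microservices take:

```javascript
// Cache-aside at the edge in front of a centralized origin. Only a miss
// pays the long round-trip to where the data actually resides.
function makeEdgeCache(fetchFromOrigin) {
  const cache = new Map();
  let originHits = 0;
  return {
    get(key) {
      if (cache.has(key)) return cache.get(key); // served from the edge
      originHits += 1;                           // the slow, centralized hop
      const value = fetchFromOrigin(key);
      cache.set(key, value);
      return value;
    },
    stats: () => ({ originHits }),
  };
}

const edge = makeEdgeCache((key) => `resized:${key}`);
edge.get("cat.jpg");
edge.get("cat.jpg"); // second hit never leaves the edge
console.log(edge.stats()); // { originHits: 1 }
```

This is useful, but note that all the interesting state still lives at the origin — which is the point being made above.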

> Latency is key for some important applications like self-driving cars and industrial automation

They keep compute on-vehicle or on-prem. For data services (not media delivery), latency is:

a) either supremely important to be fully local (vehicles, automation)

b) or it doesn't matter enough to justify a 3rd-party edge network. The diminishing returns in typical apps are what the article is alluding to.

> not really to make some queries in GraphQL

You're misrepresenting what I said - and it seems deliberate.

I mentioned GraphQL as one of the attempts to solve latency issues in typical apps.

For example, some apps use graphql/dataloader[1] because it can "coalesce all individual loads which occur within a single frame of execution before calling your batch function with all requested keys. This ensures no additional latency while capturing many related requests into a single batch."

So in typical apps there isn't a big benefit to putting general compute on the edge, because the network calls are chunky rather than chatty, and the data is centralized. GraphQL (along with its libs/frameworks) is one way to turn chatty into chunky.
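A toy version of the chatty-to-chunky idea behind graphql/dataloader, with the batching made explicit (the real library coalesces individual `load` calls within an execution frame automatically; this sketch takes the keys up front, and its names are illustrative):

```javascript
// One chunky batched request instead of keys.length chatty ones.
function makeBatchedLoader(batchFn) {
  let calls = 0;
  return {
    loadMany(keys) {
      calls += 1;                   // a single round-trip...
      const values = batchFn(keys); // ...fetches every requested key
      return new Map(keys.map((k, i) => [k, values[i]]));
    },
    batchCalls: () => calls,
  };
}

const loader = makeBatchedLoader((ids) => ids.map((id) => `user:${id}`));
const users = loader.loadMany([1, 2, 3]);
console.log(users.get(2));        // "user:2"
console.log(loader.batchCalls()); // 1
```

Once the per-request round-trips collapse to one, shaving a few milliseconds off that single hop by moving it to the edge buys very little — which is the diminishing-returns argument.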

[1]: https://github.com/graphql/dataloader