I so wish we could have made it to this. In any case, ZeroTier was conceived with Internet decentralization motives in mind, specifically with the goal of making edge-device connectivity easy. It can be and is used for other things, but that was the original motive.

https://www.zerotier.com/

/no relation to ZeroNet, didn't know about that when I named it.

Nice work you people are doing. I have no opinion on features and such at the moment, except to say that VPNs plus high usability plus open source is a category I like seeing expand. As far as this list goes, I think what keeps you off it is this:

"ZeroTier endpoint nodes form a peer to peer network and use a set of pre-configured nodes called root servers (currently run by us, federation is planned) as stable anchor points for near-instantaneous zero-configuration peer location and connection setup."

Still centralized. Solve that, and then you might get on the list.

We wanted to build something useful in mainstream, casual, and commercial applications, not just for geeks. For that reason we took on the following non-negotiable design requirements:

- An endpoint can join the network in <~10s. No bootstrapping time.

- Any endpoint can reach any other endpoint in the world in ~10s or less.

- No configuration at all is required. It "just works." Any knob that must be tweaked or config that must be entered is a bug.

- The endpoint must be small enough to fit in an embedded device like a thermostat, light bulb, etc. (Or at least be able to be made that small without inordinate levels of pain.)

- Performance overhead must be on par with OpenVPN, GRE/IPsec, etc.

- Must be mobile-friendly (phones, tablets, etc.).

- Must not conscript user devices into infrastructure roles without explicit opt-in.

- Very strong resistance to Sybil and DDoS attacks, at least comparable to what the current Internet BGP community achieves.

- It must be able to scale to Internet size (tens of billions of devices) without disproportionate levels of pain or discontinuities where the system suddenly "melts down."

- The design must be simple enough to fully describe in a relatively concise RFC.

- The design should be no more centralized than other common Internet systems like DNS and BGP.

The current design satisfies all those goals. It's zero-config, runs on phones with minimal battery impact, could be scaled down to embedded code and memory footprints without terribly much effort, and is no more centralized than DNS or BGP.

I'm not sure I see the intrinsic advantage of trying to be less centralized than the Internet while still using the Internet for transport. A truly decentralized new Internet would have to use radio and user-provisioned DIY links. Centralization(X) = max(Centralization(P)) over all parts P of X.

Pretty much everything popular right now in the decentralized Internet community is conclusively "out" for mobile and embedded use, outside of niche applications where the user doesn't mind their phone becoming a hand-warmer and their battery life dropping to 45 minutes. In particular, we almost certainly rule out:

- DHTs -- too much RAM, too slow, have a warmup/bootstrap time, and are hard to harden against Sybil attacks; the solutions to these problems involve root-server-like centralization anyway, so we're back where we started.

- Blockchains -- way too compute- and storage-intensive, by many orders of magnitude.

- Rumor-mill (gossip) and other noisy protocols -- way too bandwidth-intensive for mobile and small devices, and they don't scale.

- Aggressive data replication and "Raft consensus" type approaches -- too much storage and network overhead for mobile and embedded devices.

Right now our thinking revolves around making it possible to locally federate the root servers for on-site or in-personal-cloud use. But this has to be thought out very carefully so as not to negatively impact security or any of the other constraints above. We can't have people setting up Sybil roots that could be used to DoS the network.

Our other thought is to create a separate community-driven institution to hold the root infrastructure. That is fraught with the non-technical political difficulty of making sure such an institution is well governed and sustainable.

See also:

https://www.zerotier.com/misc/2011__A_Little_Centralization_...

https://en.wikipedia.org/wiki/CAP_theorem

http://adamierymenko.com/decentralization-i-want-to-believe/

https://whispersystems.org/blog/the-ecosystem-is-moving/

The latter post makes excellent points and gives us significant pause about federation and delegation. We have to be able to keep improving things and to respond to threats (e.g. DDoS) rapidly.

-- Edit: meta:

I tend to disagree philosophically with the lack of pragmatism in the Internet decentralization community. It reminds me of OSI, which had some theoretically superior ideas about networking but never actually shipped anything that worked at scale. As a result we have IP, which works well but lacks some of the theoretical benefits of more thoroughly designed systems. Things that work always win over things that don't. See also: semantic web vs. web+search, Project Xanadu vs. the WWW.

Right now the dominant paradigm online is highly centralized cloud silos where all traffic is MITMed by design. I think making it trivially easy to network endpoints with an end-to-end encrypted network that "just works" is a huge improvement and could enable a lot of other things.

Also note that ZT carries standard protocols over standard virtualized networks: IPv4, IPv6, etc. This means that it doesn't impose lock-in on systems built with it. It's just neutral transport.

Re: decentralization in general

You don't have to convince me. I'm "that guy" (probably)...

https://news.ycombinator.com/item?id=10845128

...who always points out that the banking, auditing, database, and eCommerce fields already achieved many of Bitcoin's goals with more efficiency and simpler algorithms. I particularly love your comment about how a centralized version of Bitcoin could run on a Raspberry Pi. Haha. Similar to my statements here:

https://news.ycombinator.com/item?id=11184214

Note: The tangent about "contingencies" has me describing how it boils down to politics, laws, and human cooperation in the blockchain model anyway. So why not apply that to a more efficient model?

Re: ZeroTier design

Nice constraints. I'm going to copy your comment and excellent article for now, to fully read and think through the technical specifics at another time when I have more time. For now, I think you might be overstating the problem with decentralization, but you're spot on about the crowds it attracts. ;) One thing high-assurance work taught me is that we can't do everything perfectly. Our trick was to reduce our security, integrity, or whatever problem down to some small component (e.g. a TCB or kernel) that everything else leveraged. It looks as if you reinvented the concept to a degree by minimizing centralization, since that's the "trusted" part. It might help, though, to tell (or remind) you of another thing high-assurance cemented in me: it's often easier to do untrusted computation followed by a trusted verifier, because the verifier is simpler than the computation.
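
To make that concrete, here's a toy C++ sketch of the compute-versus-verify asymmetry (nothing to do with your code; every name here is invented for illustration). The untrusted side does an expensive search; the trusted side accepts or rejects the claim with a trivially cheap check:

    #include <cstdint>
    #include <iostream>

    // Untrusted: expensive trial-division search for a nontrivial factor.
    uint64_t untrusted_find_factor(uint64_t n) {
        for (uint64_t f = 2; f * f <= n; ++f)
            if (n % f == 0) return f;
        return n; // no nontrivial factor found
    }

    // Trusted verifier: one modulo and two range checks, regardless of
    // how much work the untrusted side had to do.
    bool trusted_verify_factor(uint64_t n, uint64_t f) {
        return f > 1 && f < n && n % f == 0;
    }

    int main() {
        const uint64_t n = 600851475143ULL;
        const uint64_t f = untrusted_find_factor(n); // could run anywhere, untrusted
        std::cout << "claimed factor " << f << ", accepted: " << std::boolalpha
                  << trusted_verify_factor(n, f) << "\n";
    }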

Let's apply this principle to the trusted part of a scheme that minimizes centralization. Instead of going all-in on central or decentralized, we can use my concept to run a central model that produces traces of what went in and what came out. This is applied to as little of the scheme as possible -- maybe just registration, authentication, IP hopping, whatever. The supernodes are run by different non-profits, people, countries, and so on, according to the same rules, with their own financial support (or they drop out). Each receives updates on what the others (or a subset of the others) are doing, in the form of in/out states. Each performs fast, simple verification of that, which for some things is basically just storing it into an in-memory database with disk persistence in case someone asks for it later.

Mismatches are corrected in standard ways, automated or by people. As with banks, each organization is responsible for its own users, with mutually suspicious auditing increasing their honesty. Users get a sane default for the number of supernodes to contact, which ones, and with what thresholds. For remailer designs, I always made sure to force cooperation between at least two jurisdictions hostile to each other.
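
A rough, hypothetical sketch of what those in/out traces and the cheap cross-check between organizations might look like (all types and names invented for illustration; real digests would come from a cryptographic hash):

    #include <cstdint>
    #include <iostream>
    #include <map>
    #include <string>

    // Hypothetical trace of one operation handled by a supernode:
    // what went in, what came out, and when.
    struct Trace {
        std::string inputDigest;   // hash of the request (e.g. a registration)
        std::string outputDigest;  // hash of the recorded result
        uint64_t    timestamp;
    };

    // Each organization keeps its own log, keyed by operation ID, and
    // periodically exchanges it with peers for mutual auditing.
    using TraceLog = std::map<uint64_t, Trace>;

    // The trusted verification step: a cheap comparison of two logs. Any
    // missing entry or mismatch is flagged for automated or human correction.
    void crossAudit(const TraceLog& mine, const TraceLog& theirs) {
        for (const auto& [opId, t] : mine) {
            auto it = theirs.find(opId);
            if (it == theirs.end())
                std::cout << "op " << opId << ": missing from peer log\n";
            else if (it->second.outputDigest != t.outputDigest)
                std::cout << "op " << opId << ": result mismatch, escalate\n";
        }
    }

    int main() {
        TraceLog a{{1, {"in1", "out1", 100}}, {2, {"in2", "out2", 101}}};
        TraceLog b{{1, {"in1", "out1", 100}}, {2, {"in2", "outX", 101}}};
        crossAudit(a, b);  // flags op 2
    }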

Interestingly enough, the job that Google's F1 RDBMS is doing is much harder than what I just described: it's running AdWords with a strong-consistency model. CockroachDB is trying to clone it. Rather than using them, I'm just saying a strong-consistency DB model with checking over computation might get the benefits of both centralized and decentralized. The last benefit is that replicating and checking essentially centralized programs lets us use decades of work in reliability and security engineering on the implementations. Purely P2P and decentralized models are too new for high security, despite what their proponents wish; so many problems and solutions are still waiting to be discovered. "Tried and true beats novel or new" is my mantra for high-confidence systems.

Note: I might try out ZeroTier as well, given you seem to have open-sourced the most critical parts.

Everything in ZeroTier is open source except the web UI for my.zerotier.com and currently the Android and iOS GUIs. (The latter might change soon since we made the apps free.)

ZeroTier's root servers run exactly the same code as ordinary nodes. They're just "blessed" in something called a "world" (see world/ and node/World.hpp) and run at stable IPs with a lot of bandwidth. There are currently 12 of them divided into two six-node clusters:

https://www.zerotier.com/blog/?p=577
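
Roughly speaking, a world is just a small, signed document listing the blessed root identities and their stable physical endpoints. A simplified sketch of that idea (illustrative field names only, not the literal World.hpp layout):

    #include <cstdint>
    #include <iostream>
    #include <string>
    #include <vector>

    // Illustrative only -- this is NOT the actual layout of node/World.hpp.
    // The idea: a "world" is a small, signed document that blesses a set of
    // roots so every node agrees on the same stable anchor points.
    struct RootDef {
        std::string identity;                // root's public identity
        std::vector<std::string> endpoints;  // stable IP/port pairs
    };

    struct WorldDef {
        uint64_t id;                   // which world this is
        uint64_t timestamp;            // newer definitions supersede older ones
        std::vector<RootDef> roots;    // the "blessed" anchor nodes
        std::string updateSigningKey;  // key allowed to publish future updates
        std::string signature;         // signature over everything above
    };

    int main() {
        WorldDef w{1, 1457000000, {{"root-identity-1", {"198.51.100.1/9993"}}},
                   "update-key", "signature"};
        std::cout << "world " << w.id << " has " << w.roots.size() << " root(s)\n";
    }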

The role of the root servers is pretty minimal. They relay packets and provide P2P RENDEZVOUS services for NAT traversal. All of this is built into the ZT protocol (see node/Packet.hpp). Technically any peer can do what the roots do, but the roots exist to provide zero-conf/no-bootstrap operation and a secure, always-on "center" for the network.
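
The rendezvous part is conceptually simple. Here's a stripped-down illustration of the general idea (not the actual ZT wire protocol or code): the root remembers each peer's externally observed address and, when asked, tells two NATed peers about each other so both can send at once and punch through:

    #include <cstdint>
    #include <iostream>
    #include <map>
    #include <string>

    // Generic rendezvous illustration. The root never has to carry the
    // session once a direct path exists; it just hands each NATed peer the
    // other's externally observed address:port so both fire packets
    // simultaneously and open pinholes in their NATs.
    struct Endpoint { std::string publicAddr; uint16_t publicPort; };

    class RendezvousRoot {
        std::map<std::string, Endpoint> observed_; // peerId -> address seen by root
    public:
        // Called whenever a peer sends the root anything (keep-alive, relay, ...).
        void learn(const std::string& peerId, const Endpoint& asSeen) {
            observed_[peerId] = asSeen;
        }
        // Peer A wants to reach peer B: send both sides the other's endpoint.
        void rendezvous(const std::string& a, const std::string& b) {
            std::cout << "tell " << a << ": send to " << observed_[b].publicAddr
                      << ":" << observed_[b].publicPort << "\n";
            std::cout << "tell " << b << ": send to " << observed_[a].publicAddr
                      << ":" << observed_[a].publicPort << "\n";
        }
    };

    int main() {
        RendezvousRoot root;
        root.learn("A", {"198.51.100.7", 41641});
        root.learn("B", {"203.0.113.9", 53200});
        root.rendezvous("A", "B"); // both peers now attempt direct UDP paths
    }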

It would in theory be possible to create some kind of consensus system whereby the world could be defined by the community, but we'd want this to be extremely sybil-resistant otherwise someone could take down the whole net by electing a bunch of sham nodes.

ZeroTier is being used for Bitcoin stuff, payment networks, etc., and we do get attacked. We've had several DDOS attempts and other attempts to attack the system. So far nothing's succeeded in impacting it much.

"ZeroTier's root servers run exactly the same code as ordinary nodes. They're just "blessed" in something called a "world" (see world/ and node/World.hpp) and run at stable IPs with a lot of bandwidth. There are currently 12 of them divided into two six-node clusters:"

Looks good. I've seen similar things work in five-nines types of setups. There's potential there. Some components and the clustering might be simple enough for medium-to-high assurance. That the nodes benefit from peer review of open code is good. That they all run the same code is a claim we can't check without trusted hardware plus attestation. You also can't verify that yourself unless you have endpoint and SCM security that can survive penetration and/or subversion by high-strength attackers. That problem applies to most products, though.

I overall like it at the high level I'm reviewing it at. The only drawback I see is that it appears to have been written in C++. Is that correct? If so, people can't apply state-of-the-art tools to prove the absence of bugs in the code (e.g. Astree, SPARK), verify its properties (e.g. Liquid Types, AutoCorres), or automatically transform it to be safer (e.g. Softbound+CETS, SAFEcode, Code Pointer Integrity). What few tools are available for C++ are expensive and more limited. A rewrite will need to happen at some point to deal with that. Perhaps in Rust, since it solves dynamic allocation and concurrency problems that even Ada ignores, and it was partly validated by Dropbox's deployment of it in low-level, critical components.

"but we'd want this to be extremely sybil-resistant otherwise someone could take down the whole net by electing a bunch of sham nodes."

I could only speculate on that stuff. It's not my strong area and it's still a new-ish field. What I do know is that many systems work by (a) having simple, clear rules; (b) maintaining audit logs of at least what happens between mutually distrusting entities; (c) auditing those logs; (d) forcing financial or other corrections based on detected problems. The rules and organizations are the tricky part. From that point, it's just your code running on their servers or a datacenter of their choosing.
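
For (b) and (c), the mechanical part is straightforward. Here's a toy sketch of a tamper-evident log that a mistrusting auditor can cheaply re-check (everything here is illustrative, and std::hash is standing in for a real cryptographic hash like SHA-256):

    #include <cstddef>
    #include <functional>
    #include <iostream>
    #include <string>
    #include <vector>

    // Hypothetical tamper-evident audit log: each entry commits to the one
    // before it, so replaying the chain detects edits or deletions.
    struct Entry {
        std::string event;   // what happened between the two entities
        std::size_t prev;    // hash of the previous entry
        std::size_t self;    // hash of (event + prev)
    };

    std::size_t chainHash(const std::string& event, std::size_t prev) {
        return std::hash<std::string>{}(event + std::to_string(prev));
    }

    void append(std::vector<Entry>& log, const std::string& event) {
        std::size_t prev = log.empty() ? 0 : log.back().self;
        log.push_back({event, prev, chainHash(event, prev)});
    }

    // Step (c) above: a cheap re-check of the whole chain by any auditor.
    bool audit(const std::vector<Entry>& log) {
        std::size_t prev = 0;
        for (const auto& e : log) {
            if (e.prev != prev || e.self != chainHash(e.event, prev)) return false;
            prev = e.self;
        }
        return true;
    }

    int main() {
        std::vector<Entry> log;
        append(log, "node X registered");
        append(log, "node X authenticated");
        std::cout << "log valid: " << std::boolalpha << audit(log) << "\n";
    }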

One scheme I thought about was getting IT companies, universities, or nonprofits involved that have a long history of acting aboveboard on these things. Make sure their own security depends on it. Then you have at least one per country across a series of countries where a government can't, or is unlikely to, take it down. Start with privacy, tax, and data havens plus First World countries with the best, cheapest backbone access. That knocks out most of the really subversive stuff right off the bat. What remains is a small amount of subversion potential plus the bigger problem of politics around protocol or rule decisions.

"and we do get attacked. We've had several DDOS attempts and other attempts to attack the system. So far nothing's succeeded in impacting it much."

Glad to see you're holding up against that. Surviving those is another benefit of centralized models. It carries over to centralized-with-distributed-checking as well, if you use link encryptors and/or dedicated lines to at least the key supernodes. That's for the consensus and administrative parts, I mean.

Do you have links for the tools you mention? Some of them are hard to do web searches for.

The ones I referenced are here:

https://www.cis.upenn.edu/acg/softbound/

http://sva.cs.illinois.edu/downloads.html

http://dslab.epfl.ch/proj/cpi/

https://en.wikipedia.org/wiki/SPARK_(programming_language)

http://goto.ucsd.edu/csolve/

http://saturn.stanford.edu/pages/overviewindex.html

Replaced Astree with Saturn since most won't be able to afford Astree. Do test Softbound, SAFEcode, and CPI on various libraries and vulnerabilities to find what works and what doesn't. Academics need feedback on that stuff and help improving those tools. There's a serious performance hit for full memory safety like Softbound+CETS, but knocking out that many vulnerabilities might easily be worth some extra servers. Have fun. :)

Thanks for that. Looks like only SPARK is available on the operating system I use. That's one big problem with research software and software research: it's often disconnected from the programmer community.

BTW, what do you think of this tool?

https://github.com/TrustInSoft/tis-interpreter