I came to Gmane back when I needed to search newsgroup (NNTP) archives, then, later on, mailing lists.

Maybe I'm nostalgic but I really miss the newsgroups that focused on just the messages, and could be consumed by any client, stored offline, searched, etc.

> "Maybe I'm nostalgic but I really miss the newsgroups that focused on just the messages, and could be consumed by any client, stored offline, searched, etc."

I don't see anything nostalgic about that: there's real value in the benefits you list, and you rightfully associate them with application-layer protocols.

A few months ago I stumbled upon a good article that praised them too, over today's "HTTP for everything" approach, but I'm sadly unable to find it in my bookmarks. Anyway, more than anything, it lauded the interoperability that comes with those protocols.

I'm sending email from Thunderbird, can reply minutes later from my iPad, and can follow up on or search my archive at work from Outlook or Mail.app. Transmission merrily talks to uTorrent or Deluge or rTorrent. My Windows box reads video files from a Samba share.

What now? App interfaces are at worst totally obscure (Skype), or at best exchange readable but undocumented JSON/XML over HTTP(S). Put differently: had email been designed & implemented today, we would hardly enjoy the same interoperability. There seems to be little interest in designing new protocols (and maybe little help coming from languages/libraries too?). IRC v3 ( http://ircv3.net/ ) comes to mind, but it's pretty niche. Anyway, I'm ranting. Can anyone comment from experience on trying to do such protocol design/implementation today, and the challenges associated with it?

HTTP won out because it's the 'universal' protocol: you GET a resource, you say 'I prefer text/html but really anything is fine', the server puts bytes on the wire, it includes some metadata (headers, Content-Type), and then your user-agent interprets that Content-Type and displays the result. It's an extensibility advocate's dream. Using this and the jack-of-all-trades datatype HTML, we developed documents that link to other documents. When we were no longer okay with static pages, we hooked up programs that wrote HTML onto stdout and at the end of the day, everything just came across as a sequence of bytes. There's no formalized, official application-level logic to the HTTP state machine (although there are third-party attempts [1]).
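To make the exchange above concrete, here's a minimal sketch of that request/response dance: a GET carrying an Accept header ("I prefer text/html but really anything is fine"), and a response whose Content-Type header tells the user-agent how to interpret the bytes. The response here is a canned byte string rather than a real network read, purely for illustration.

```python
# The client's request: a method, a resource, and content-negotiation
# metadata. Everything on the wire is just lines of text plus bytes.
request = (
    b"GET /index HTTP/1.1\r\n"
    b"Host: example.com\r\n"
    b"Accept: text/html, */*;q=0.1\r\n"  # prefer HTML, accept anything
    b"\r\n"
)

# A canned server response: status line, headers, blank line, body.
response = (
    b"HTTP/1.1 200 OK\r\n"
    b"Content-Type: text/html; charset=utf-8\r\n"
    b"Content-Length: 25\r\n"
    b"\r\n"
    b"<html><p>hello</p></html>"
)

def parse_response(raw: bytes):
    """Split a raw HTTP response into (status code, headers dict, body)."""
    head, _, body = raw.partition(b"\r\n\r\n")
    lines = head.decode("ascii").split("\r\n")
    status = int(lines[0].split(" ")[1])
    headers = {}
    for line in lines[1:]:
        name, _, value = line.partition(": ")
        headers[name.lower()] = value
    return status, headers, body

status, headers, body = parse_response(response)
# The user-agent dispatches on Content-Type alone -- it needs no prior
# knowledge of what lives at /index. That's the "universal" property.
media_type = headers["content-type"].split(";")[0]
```

The point is that the client and server agree on almost nothing in advance except this framing; the metadata rides along with every response.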

Using these universal building blocks, we built applications where state transitions consist of GETs and POSTs. Eventually, when we wanted machine-structured data, we did XML-RPC, which was later codified into SOAP, before the backlash against hard-to-understand standards led to JSON being traded between server backends and client-side obfuscated, minified JavaScript state machines.
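To see the shift in one place, here's the same remote call, add(2, 3), serialized both ways: as an XML-RPC envelope (Python's standard library still ships a serializer for it) and as the bare JSON blob a modern endpoint would take instead. Both travel as the body of an HTTP POST; the JSON field names here are illustrative, since unlike XML-RPC there's no standard dictating them.

```python
import json
import xmlrpc.client

# XML-RPC: method name and params wrapped in a verbose, but
# standardized, XML envelope.
xmlrpc_body = xmlrpc.client.dumps((2, 3), methodname="add")

# JSON-over-HTTP: the same information, structure documented
# (if at all) out of band.
json_body = json.dumps({"method": "add", "params": [2, 3]})
```

The trade is legibility and tooling simplicity in exchange for the loss of a shared, machine-readable contract.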

Not enough people make new running-on-TCP or running-on-UDP protocols because new protocols are hard to design, they don't work with the one application where everyone spends 70+% of their time (the web browser), and they'll probably get blocked by a middlebox unless you use port 80 or 443 and fake being HTTP anyway. For all but very specialized use-cases, vomiting blobs of JSON (or, if you want to feel extra good, some custom binary serialization format like protobuf or Thrift or Cap'nProto or MessagePack) across HTTP endpoints is pretty okay.
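As a taste of why "hard to design" starts on day one: TCP hands you a byte stream, not messages, so before any application logic you have to invent framing, something HTTP already solved for you with Content-Length and chunked encoding. Below is one common hypothetical choice, a 4-byte big-endian length prefix; the format and helper names are illustrative assumptions, not any real protocol.

```python
import struct

def frame(payload: bytes) -> bytes:
    """Prefix a payload with its length so the receiver can split the stream."""
    return struct.pack("!I", len(payload)) + payload

def unframe(stream: bytes):
    """Yield each payload back out of a buffer of concatenated frames."""
    offset = 0
    while offset < len(stream):
        # Read the 4-byte length, then slice out exactly that many bytes.
        (length,) = struct.unpack_from("!I", stream, offset)
        offset += 4
        yield stream[offset:offset + length]
        offset += length

# Two messages survive concatenation on the "wire" and come back apart.
wire = frame(b"hello") + frame(b"world")
messages = list(unframe(wire))
```

And framing is only the first chore: versioning, negotiation, and getting past middleboxes all come after it, which is a big part of why JSON-over-HTTPS keeps winning.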

[1] https://github.com/for-GET/http-decision-diagram