What does HackerNews think of tfc?

Tinfoil Chat - Onion-routed, endpoint secure messaging system

Language: Python

#5 in React
#9 in Python
The encryption protocols available haven't been the weak link for some time; no one cracks the messages in transit, you just go for the endpoint. If you can hack the OS and exfiltrate screenshots, it doesn't matter how secure your app or network channel is.

The only approach that's given me hope has been Tin Foil Chat, which isolates the keychain from the network. Some fellow noisebridgers and I built a kind of cyberdeck prototype implementing this, but couldn't find much interest in commercializing it. Since the input stage is isolated from the output, you can't copy-paste public keys but have to enter them manually (448 bits as 56 characters), and messages/attachments can't be forwarded. The juice wasn't worth the squeeze overall. We'll see if there's a milder approach here, but I don't see any way to get around endpoint security.
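Manual key entry like this is usually paired with an encoding that has a built-in checksum, so a typo fails loudly instead of silently producing a wrong key. Here's a minimal Python sketch of that idea, a Base58-style encoding with a truncated double-SHA-256 checksum; the alphabet, checksum length, and function names are illustrative assumptions, not TFC's actual format:

```python
import hashlib

# Illustrative Base58-style alphabet (no 0, O, I, l, to avoid confusion when typing).
ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def b58check_encode(data: bytes) -> str:
    """Append a 4-byte double-SHA-256 checksum, then Base58-encode.
    (Leading zero bytes are not preserved; fine for a sketch.)"""
    checksum = hashlib.sha256(hashlib.sha256(data).digest()).digest()[:4]
    num = int.from_bytes(data + checksum, "big")
    out = ""
    while num:
        num, rem = divmod(num, 58)
        out = ALPHABET[rem] + out
    return out

def b58check_decode(encoded: str) -> bytes:
    """Decode and verify the checksum; raises ValueError on a typo."""
    num = 0
    for char in encoded:
        num = num * 58 + ALPHABET.index(char)
    raw = num.to_bytes((num.bit_length() + 7) // 8, "big")
    data, checksum = raw[:-4], raw[-4:]
    if hashlib.sha256(hashlib.sha256(data).digest()).digest()[:4] != checksum:
        raise ValueError("checksum mismatch -- probably a typo")
    return data
```

With a 4-byte checksum, a mistyped character is rejected with all but a roughly one-in-four-billion chance, which is the property you want when a human is the transport layer.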

https://github.com/maqp/tfc

>No e2ee app has compromised device part of their threat model.

Oh really, here's one I made earlier https://github.com/maqp/tfc :-)

>The whole OS can.

So how are you backdooring a bash script that comments out lines of code from Linux source before compiling it?

You lying to policy makers with an "it can be done" mindset sounds like a stupid con that burns a lot of money and time in the process.

There is software that lives up to these claims: Tinfoil Chat. The article is correct about the necessary trade-offs: due to the peer-to-peer transport (Onion Hidden Service to Onion Hidden Service), both ends of the conversation have to be online -- though it at least spools messages while waiting for the recipient to appear.

For the hole punching and signaling that has to be done by a third party, well, the third party is Tor.

TFC then goes on to break the encryption and decryption machines out from the network, passing messages over optocouplers to prevent your keys from getting exfiltrated. Qubes qrexec could similarly isolate the components.

https://github.com/maqp/tfc

> If you want maximum security use an air gapped computer. But that won't let you send messages on the go.

You can, with some inconvenience, use optical diodes to transmit data from a trusted input device to an untrusted network device for transport over Tor, and then push the received messages over a second diode to a display device that decrypts the messages. That way, even if you receive an exploit/malware, there is no physical connection that allows unencrypted data to be exfiltrated.
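The data flow described above can be modeled as a pipeline where every hop can only push data downstream. A toy Python sketch, assuming queues as stand-ins for the optical diodes and XOR as a stand-in for a real cipher (all names are illustrative):

```python
import queue
import secrets

# One-way links: each hop can only put() downstream, never read back.
# In hardware these would be the optical diodes.
diode_tx = queue.Queue()  # trusted input machine -> networked machine
diode_rx = queue.Queue()  # networked machine -> trusted display machine

# Pre-shared key lives only on the trusted machines, never on the networker.
key = secrets.token_bytes(64)

def input_machine(plaintext: bytes) -> None:
    """Encrypts and pushes ciphertext through the first diode."""
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))  # toy XOR cipher
    diode_tx.put(ciphertext)

def networked_machine() -> None:
    """Relays ciphertext (in reality, over a Tor Onion Service).
    It holds no key, and the one-way links give it no path back
    into the machines that do."""
    diode_rx.put(diode_tx.get())

def display_machine() -> bytes:
    """Receives ciphertext through the second diode and decrypts."""
    ciphertext = diode_rx.get()
    return bytes(c ^ k for c, k in zip(ciphertext, key))
```

The point the sketch makes: the networked hop in the middle only ever handles ciphertext, and because its links are one-way, compromising it gives an attacker no channel back into the machines that hold the keys.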

https://github.com/maqp/tfc

Briar is one of the most important secure messaging projects currently. Not only does it remove the need to trust the vendor with content (as with all E2EE messaging apps), you also get to keep the metadata about your communication to yourself, as data transits from one Tor Onion Service to another.

The downside is, of course, that you need to keep the endpoint powered on when you want to be reachable, so it will increase the battery drain on your phone.

Note: There's also a desktop client if that's easier to keep online https://briarproject.org/download-briar-desktop/

One extremely important thing Briar is doing is using P2P as a means to host alternative social interaction formats, like forums and blogs. Similar to Signal/WhatsApp stories (which are somewhat similar to microblogs/FB walls), it's a way to indirectly share information. You could pretty much emulate any social media platform on top of an E2EE protocol with ~zero infrastructure cost and without having to worry about data mining. I'd argue what Briar's innovating on here is one of the most important aspects of what's left for secure messaging.

Finally, a small caveat: Briar will share your Bluetooth MAC address with all peers so it can automatically use that when you're in close proximity with a peer. Thus, sharing your Briar ID publicly is not a good idea for two reasons:

1) Major global adversaries may have access to the leaking Bluetooth MAC (e.g. if Google aggregates it), which can deanonymize your account. This also allows a slightly technical person to confirm the identity of a Briar account if they suspect it's you (a bit wonky threat model, but still).

2) It ties everything you do across your accounts on the same device together, so there's strong linkability even if you rotate the identity key by reinstalling the app.

Briar is pretty clear about this in its FAQ, but it's still not very well known, although it definitely should be.

---

That being said, if you want similar Onion Service based communication with no such linkability, there's https://cwtch.im/ which is a fantastic project.

There's also https://www.ricochetrefresh.net/

Both are spiritual successors to John Brooks' `Ricochet` application, which pioneered Onion Service based instant messaging in 2014.

You can also chat and share files (among other things) with https://onionshare.org/

(And finally, you can get remote exfiltration security for keys/plaintexts with TFC https://github.com/maqp/tfc (my personal work), at the cost of losing some features, like message forwarding, that the architecture prevents.)

Most secure/private chat application I know of is tinfoil chat: https://github.com/maqp/tfc
There are indeed. It's not just spies. My work wrt endpoint secure comms is FOSS and free for anyone to use https://github.com/maqp/tfc (the HW naturally costs a bit, but it's free in other respects).
That's defeatist thinking. Just because some agencies of major governments can break into many devices doesn't necessarily mean they do.

And there are other threats you'll want to defend from as well, including governments and agencies with smaller budget.

Anyway, if endpoint security is part of your threat model, you'll be pleased to know I've spent the past decade looking into how to address the problem https://github.com/maqp/tfc

>You seem to have a chip on your shoulder regarding telegram.

You need to understand I don't see messaging apps as living things. I see muscles, bones, nerves and veins. I have no grudge toward any app. That would be silly. My problem is with dangerous implementations of cryptography in general. It's completely agnostic of vendor. I've criticized a myriad of apps in my lifetime, from Palringo to Foocrypt to DataGateKeeper to Telegram to TimeAI (lol): bullshit crypto has many forms. I've even criticized apps I now find more or less good, such as Threema and Element (during the Riot times).

> all cryptography is “untested” or “unused” at some point

That's not the problem. Telegram's cloud encryption doesn't become "tested" at some point. It's fundamentally broken, because by definition the decryption key is with Telegram (the service provider), and NOT your peer.

"Advertising as a secure messenger (or: the most secure) as it is stupid to say “most” and the security is untested."

Yeah, I tend to agree. If you want to take a look at how far the security design rabbit hole in secure messaging goes, my research might be of interest https://github.com/maqp/tfc

>being of russian origin

No, please don't take my comments to infer anything from something being of Russian origin. There isn't a "Russians are bad" aspect to my criticism. There is Durov's military training, there are connections to the government from the VKontakte days, and there are technical deficiencies that are indistinguishable from state-sponsored honeypots. I'm not as interested in WHO Durov might give keys to access the servers. I'm interested in why it's dangerous that it can happen in the first place: the lack of ubiquitous E2EE.

>telegram marketing is so slick

Exactly, they have a fantastic social media team that's expert at handling criticism with snide remarks, memes, and pop culture references. They really get people. And I find that terrifying. The platform's an orgy of fun, and all I see, looking at the veins and bones, is another Facebook. Hundreds of millions of people living 90% of their social life through an app they think stands for them, without understanding they're feeding another monster.

Durov should know better what is ethical to build, but he doesn't care. Even if he didn't collect data for the purpose of using it against people, it's his responsibility to know he's not all-powerful, his servers are not hack-proof, and the data that sits there is a tremendous liability.

Moxie might not be able to pull off the UX, but at least his heart is in the right place, and he's come the closest wrt a design that's secure by default. But the most astonishing thing was the v2 group design, which required pushing the boundaries of cryptography as a field. That was an incredible achievement. It seems all we can hope now is that Signal's features will one day match a large enough portion of Telegram's footguns. Lack of usernames, markdown mode, and replying with stickers are the main gripes for me ATM. Or perhaps Threema, Wire, or Element will catch up with and surpass Signal. Time will tell.

Thanks to everyone, I have realized I need to be more specific.

I'm setting up my personal website, at myname.url , and I want to have a way of letting people get in touch with me privately and securely.

Right now I have me@URL alongside me@gmail and me@hey because sometimes mails just don't arrive correctly, or go to spam, etc. Depending on the sender to send again to another address is not satisfying to me.

I'd prefer not to have to stand up and maintain my own server. I can get a PO box, and I can get a synthetic phone number, but I really crave something I can truly trust. Something encrypted, deniable, and with all sorts of nice qualities.

From a security aesthetics POV, I love projects like https://github.com/maqp/tfc . What other projects like this are there?

Do you know of some interesting research in this area besides maqp's https://github.com/maqp/tfc?
The thing is, for the Horcrux to have any effect at all you need at least three devices per endpoint. And at that point, using a split-TCB architecture is much better, as under some assumptions it's provably secure against remote key exfiltration. See my work on TFC here: https://github.com/maqp/tfc
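For context, the Horcrux-style idea of spreading a key across several devices can be sketched as an n-of-n XOR split, where every share is needed to rebuild the key. This is a toy Python version under my own assumptions (real schemes such as Shamir's secret sharing also allow k-of-n thresholds):

```python
import secrets

def split_key(key: bytes, n: int = 3) -> list:
    """n-of-n XOR split: every share is needed to rebuild the key."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    last = key
    for share in shares:
        # XOR the key with each random share; the final share is the residue.
        last = bytes(a ^ b for a, b in zip(last, share))
    return shares + [last]

def combine_key(shares: list) -> bytes:
    """XOR all shares back together to recover the key."""
    key = bytes(len(shares[0]))
    for share in shares:
        key = bytes(a ^ b for a, b in zip(key, share))
    return key
```

Any subset of fewer than n shares is statistically independent of the key, which is the scheme's whole selling point -- but, as noted above, an attacker who can exfiltrate from all the devices holding shares still wins, which is what the split-TCB design addresses instead.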
"- if you receive a stream of unencrypted postcards from Grandma on vacation"

That's such a bullshit excuse. Everything goes with an outer layer of encryption these days; what matters is whether Telegram will offer to lock themselves out of the messages, to which the answer is no by default. If you want to chat on desktop or create a group, the answer is no whether you like it or not.

So again, the niche use case of "it's probably nothing sensitive, so you might as well send it in the clear, because that says you're not a dissident" is not even valid. There's almost always an outer layer of encryption.

"The availability of metadata, who can access that metadata etc etc plays a role."

Indeed. All the more reason to avoid Telegram, which by default stores all that metadata.

"someone always have to pull the E2E: Good, anything else: Bad."

No, the point is we'll never even get to the debate on reducing metadata as long as we need to play whack-a-mole with shit apps like Telegram that don't E2EE by default, let alone provide any kind of metadata protection, not even sealed sender like Signal does.

As the author of a messaging system[1] that provides both E2EE by default for everything and metadata protection (more than any other app out there), plus advanced protections like endpoint security, I don't really like you putting me into some box of caring only about E2EE. All I can say to you is: first things first.

[1] https://github.com/maqp/tfc

"I think it's important to distinguish mass surveillance from targeted surveillance. They present very different threat models."

Targeted attacks against centralized points that enable wide-scale surveillance are mass surveillance. Imagine the NSA claiming "a fiber optic splitter at the bottom of the ocean is a targeted attack against one device (a repeater), or one-inch segment of glass wire; it's not mass surveillance".

It's vital that we define targeted surveillance as something where the target is a single entity. Hacking Moxie's phone is targeted surveillance. Hacking Signal server is not. Hacking every visitor of a CP site is mass surveillance https://www.eff.org/deeplinks/2016/09/playpen-story-fbis-unp...

"You might want to look at their in-progress P2P work."

It will be nice to have, sure, but I think P2P should work exclusively via Tor if you want to hide metadata. Wrt that, you might find my work interesting https://github.com/maqp/tfc

"Briar fails unless you only talk to people using smartphones."

A picture is worth a thousand words

https://twitter.com/Amlk_B/status/1286642831239647232/photo/...

"MoxieTalk fails because it exposes people to mass surveillance."

Jabs like these aren't really appreciated. Extraordinary claims require extraordinary evidence.

Right.

The Tinfoil Chat setup uses optocouplers to enforce one-way data transmission.[0] And one can use inexpensive CD-Rs and micro SD cards for single-use data transfer. But transferring anything but plain text is dangerous.
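One way to act on the "plain text only" rule is to whitelist what the trusted side will accept: keep only printable ASCII and drop everything else before display, so control codes and binary blobs never reach the secure machine intact. A minimal Python sketch (the exact character whitelist here is my own illustrative choice, not any project's actual filter):

```python
import string

# Printable ASCII minus vertical tab and form feed; an illustrative whitelist.
SAFE_CHARS = set(string.printable) - {"\x0b", "\x0c"}

def sanitize(data: bytes) -> str:
    """Keep only plain printable text; drops control codes (including
    terminal escape bytes) and anything that isn't ASCII."""
    text = data.decode("ascii", errors="replace")  # non-ASCII -> U+FFFD, dropped below
    return "".join(ch for ch in text if ch in SAFE_CHARS)
```

Note this is filtering, not parsing: a hostile sender can still put misleading printable text in a message, but they can no longer smuggle in terminal escape sequences or raw binary for the display side to choke on.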

0) https://github.com/maqp/tfc

I apologize for the delay, I don't come here all the time.

"E2EE's security is based on the use encryption and secret keys, which must remain secret -- there are a variety of ways that secret keys could potentially be compromised,"

The argument cannot be "anything can be compromised with sufficient resources, _therefore it does not matter if you don't follow best practices within the limitations of the architecture_".

"Most of these attacks are out-of-band."

Agreed, but you also can't argue "the rest of the software stack isn't perfect, therefore I don't have to follow best practices". If E2EE fails because of a vulnerability in the OS, you aren't responsible. If it fails because of a vulnerability in your app, then you are. In the same way, a messaging app vendor is responsible for not using an E2EE protocol, which would eliminate a huge gaping hole: e.g. the entire crypto community has criticized Telegram cloud chats, which leave plaintext copies of messages on the server. Not knowing this is the same as not doing your job.

It's also the case that you can design your software around a more secure architecture to protect against remote key exfiltration; see my work on TFC for example: https://github.com/maqp/tfc

"All I know is that if I ever created an app and ran a company and acted with the best intentions for my users"

The question is, with E2EE becoming almost ubiquitous, would you really think not implementing a modern encryption protocol is the same as acting with the best intentions?

"That is why I give ToTok the benefit of the doubt"

I can see where you're coming from, but having worked with secure messaging for so long, to me it's the same as a surgeon not verifying they were using industry-standard materials for the medical screws they use. Were they really acting in the best interest of the client? You can't excuse not verifying (an industry best practice) just because you're "trying your best to minimize other complications".

You're working with sensitive data. You have industry checklists available. If you choose this business, there will be obligations. You don't get the benefit of the doubt as a participation award when it's effin obvious you did not use the checklist.

Dealing with endpoint security is a really tough problem, but I have a pet project that pushes the price per endpoint just below $500 https://github.com/maqp/tfc
It'd be cool to have an integrated mobile version of Tinfoil Chat.[0] Basically, three computers routed via Tor, with one that's effectively isolated using optocouplers.

You'd probably want at least that one to be EMF shielded, as well. And you could generalize for web browsing etc. Maybe also use ideas from Qubes Air.

0) https://github.com/maqp/tfc

I had thought of using multiple SoCs for compartmentalization, instead of VMs. Rather like a hardware version of Qubes. I was very inspired by Tinfoil Chat.[0] Using optoisolators, one could have a secure device with strictly text-only communication with other components.

But it's over my head. I lack knowledge and equipment for measuring RF leakage among SoCs. So it goes. Maybe the Qubes team will do it :)

0) https://github.com/maqp/tfc