Dumb question from someone who doesn't know too much about LLMs yet: how can you trust the other computers? Will I end up with a bunch of swear words coming back from other nodes that are playing a prank?
I'm not entirely sure how the approach they're using works [0], but I study federated learning, and one of the highly cited survey papers has several chapters (5 and 6 in particular) addressing potential attacks, failure modes, and bias [1].
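One family of defenses those chapters discuss is robust aggregation: instead of averaging client updates (which a single malicious node can skew arbitrarily), the server combines them with a statistic that tolerates outliers. A toy sketch, not the method from the linked paper, using a coordinate-wise median over made-up two-dimensional updates:

```python
from statistics import median

# Hypothetical example: three honest clients send updates near the
# "true" value [1.0, 1.0]; one prankster sends garbage.
honest_updates = [[0.9, 1.1], [1.0, 1.0], [1.1, 0.9]]
malicious_update = [[100.0, -100.0]]
updates = honest_updates + malicious_update

def aggregate_mean(updates):
    # Plain averaging: one bad client shifts the result arbitrarily far.
    return [sum(coord) / len(coord) for coord in zip(*updates)]

def aggregate_median(updates):
    # Coordinate-wise median: robust as long as honest clients are a majority.
    return [median(coord) for coord in zip(*updates)]

print(aggregate_mean(updates))    # badly skewed by the single bad client
print(aggregate_median(updates))  # stays close to [1.0, 1.0]
```

The mean lands far from [1.0, 1.0] while the median stays near it, which is the basic intuition behind median- and trimmed-mean-style aggregation rules.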