So, where do I get premade versions of this that include all the words of the 28 largest languages? This is one of the most valuable properties of word2vec and co.: prebuilt versions for many languages, with every word of the dictionary in them.

Once you have that, we can talk about actually replacing word2vec and similar solutions.

Look, let's talk about replacing word2vec, because it's old and not that good on its own. Everyone making word vectors, including this article, compares them to word2vec as a baseline because beating word2vec on evaluations is easy: it's from 2013, and machine learning moves fast these days.

You can replace pre-trained word2vec in 12 languages (with aligned vectors!) with ConceptNet Numberbatch [1]. You can be confident it's better from the SemEval 2017 results, where it came out on top in 4 out of 5 languages and 15 out of 15 aligned language pairs [2]. (You won't find word2vec in that evaluation, because it would have done poorly.)
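Numberbatch is distributed in the standard word2vec text format, so swapping it in is mostly mechanical. Here's a minimal sketch using gensim; the filename is a placeholder for whichever release you download from [1], and note that in the multilingual files the terms are ConceptNet URIs like /c/en/coffee rather than bare words:

    # A minimal sketch, assuming gensim and a multilingual Numberbatch
    # release; the filename is a placeholder for whichever version you
    # download from the repo in [1].
    from gensim.models import KeyedVectors

    # Numberbatch ships in the standard word2vec text format, so the
    # usual loader works; gensim reads the gzipped file directly.
    vectors = KeyedVectors.load_word2vec_format(
        "numberbatch-19.08.txt.gz", binary=False
    )

    # In the multilingual release, terms are ConceptNet URIs with a
    # language prefix, not bare words.
    print(vectors.most_similar("/c/en/coffee", topn=5))
    print(vectors.most_similar("/c/fr/café", topn=5))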

If you want to bring your own corpus, at least update your training method to something like fastText [3], though I'd still recommend looking at how to improve the result with ConceptNet, because distributional semantics alone will not get you to the state of the art.
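As a rough sketch of what "bring your own corpus" looks like with gensim's fastText implementation (the corpus path and hyperparameters here are illustrative assumptions, not tuned recommendations; the official fasttext package works similarly):

    # A minimal sketch of training fastText-style vectors on your own
    # corpus with gensim.
    from gensim.models import FastText

    # Placeholder corpus: one pre-tokenized sentence per line.
    sentences = [line.split() for line in open("corpus.txt", encoding="utf-8")]

    model = FastText(
        sentences,
        vector_size=300,  # matches Numberbatch's dimensionality
        window=5,
        min_count=5,
        sg=1,             # skip-gram, as in the fastText paper
        epochs=5,
    )

    # Subword information means even out-of-vocabulary words get a
    # vector, which plain word2vec can't do.
    print(model.wv.most_similar("coffee", topn=5))
    print(model.wv["misspeling"][:5])  # OOV word, still gets a vector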

Also: which pre-built word2vec model are you using that actually contains valid word associations in many languages? Something trained on just Wikipedia? Al-Rfou's Polyglot? Have you ever actually tested it?

[1] https://github.com/commonsense/conceptnet-numberbatch

[2] http://nlp.arizona.edu/SemEval-2017/pdf/SemEval002.pdf

[3] https://fasttext.cc/