Unsupervised machine translation works by matching the distributions of word embeddings trained on the two corpora you want to translate between. If the corpora are large enough, their distributions can be estimated robustly, and if they have sufficient overlap in the topics they cover, it's likely that words with similar distributions have similar meanings.
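As a toy sketch of the alignment step: given embeddings for two vocabularies, you learn a map that brings the two point clouds into register, then pair words by nearest neighbour. Real unsupervised systems (MUSE-style adversarial training, for instance) learn the map without any known word pairs; the Procrustes fit below cheats by using the full known correspondence, purely to illustrate the geometry.

```python
import numpy as np

rng = np.random.default_rng(0)

d, n = 10, 50
X = rng.normal(size=(n, d))                   # "source language" embeddings
R, _ = np.linalg.qr(rng.normal(size=(d, d)))  # hidden rotation between the spaces
Y = X @ R                                     # "target language" embeddings

# Orthogonal Procrustes: W = argmin ||XW - Y||_F subject to W^T W = I,
# solved in closed form from the SVD of X^T Y.
U, _, Vt = np.linalg.svd(X.T @ Y)
W = U @ Vt

# With the spaces aligned, each source word's nearest target neighbour
# recovers the correct pairing on this noiseless toy data.
aligned = X @ W
dists = np.linalg.norm(aligned[:, None, :] - Y[None, :, :], axis=-1)
matches = dists.argmin(axis=1)
print((matches == np.arange(n)).mean())  # → 1.0
```

The hard part in the genuinely unsupervised setting is finding W with no correspondence at all, which is exactly where large corpora and overlapping topic distributions matter.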
So if there were a large number of undeciphered Linear A inscriptions on a guessable range of topics, unsupervised machine translation might be worth a try.
Unfortunately, there aren't that many Linear A inscriptions, and for those whose kind of content is known, the distribution matching has already been carried out by hand. E.g. from the article: "the word AB81-02, or KU-RO if transliterated using Linear B sound-values, is one of the few words whose meaning we do know: it appears at the end of lists next to the sum of all the listed numerals, and so clearly means ‘total’. But we still don’t actually know how to pronounce this word, or what part of speech it is, and we can’t identify it with any similar words in any known languages."