The first CAT revolution came in the mid-1990s with Trados and the invention of translation memory, which defined the basic paradigm of Computer-Assisted Translation for the next thirty years.

The second revolution, in the 2010s, was cloud-collaborative CAT. MemoQ and Memsource (since rebranded as Phrase) moved translation projects from the desktop to the cloud and solved multi-user collaboration. But the product core didn't change: it was still memory + terminology + segmentation.

The third revolution, happening now, is not about plugging AI into a CAT tool. It is about using LLMs to redesign every atomic step of translation. Segmentation is no longer just periods and breakpoints; it is semantic. TM recall is no longer character-level fuzzy matching; it is vector search over embeddings. Term injection is no longer naive find-and-replace; it is the model respecting style and context at generation time.
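To make the vector-search point concrete, here is a minimal sketch of embedding-based TM recall, assuming the sentence-transformers library. The model name, the sample TM entries, and the 0.6 threshold are illustrative assumptions, not recommendations.

```python
from sentence_transformers import SentenceTransformer

# Any sentence-embedding model works here; this one is small and fast.
model = SentenceTransformer("all-MiniLM-L6-v2")

# Translation memory: (source segment, stored translation) pairs.
tm = [
    ("Click Save to keep your changes.",
     "Cliquez sur Enregistrer pour conserver vos modifications."),
    ("The battery lasts up to ten hours.",
     "La batterie dure jusqu'à dix heures."),
]

# Embed every TM source segment once; unit-normalized vectors make
# the dot product below equal to cosine similarity.
tm_vectors = model.encode([src for src, _ in tm], normalize_embeddings=True)

def recall(query: str, threshold: float = 0.6):
    """Return TM entries whose meaning matches, even if the wording differs."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = tm_vectors @ q
    return [(tm[i], float(s)) for i, s in enumerate(scores) if s >= threshold]

# A character-level fuzzy match would score this rewording poorly;
# embedding similarity still retrieves the right TM entry.
print(recall("Press Save so that your edits are kept."))
```

The same index-and-search loop scales to real TM sizes by swapping the brute-force dot product for an approximate-nearest-neighbor index.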

The hardest problem translators used to face, locking the AI's output into a consistent style, becomes something you can actually engineer inside an LLM-native architecture. A sufficiently detailed style guide, combined with a project-specific glossary and reference samples, can keep LLM output within a deliverable range, as in the sketch below.
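As a minimal sketch of how those three assets can be engineered into the generation step, the snippet below assembles them into a single prompt. The prompt layout, field names, and example content are all hypothetical; any chat-style LLM API can consume the result.

```python
# Illustrative project assets: a style guide, a glossary, and reference
# samples. Contents are invented for the example.
STYLE_GUIDE = "Formal register. Short sentences. Address the reader as 'vous'."
GLOSSARY = {"dashboard": "tableau de bord", "sign in": "se connecter"}
REFERENCE_SAMPLES = [
    ("Sign in to view your dashboard.",
     "Connectez-vous pour consulter votre tableau de bord."),
]

def build_prompt(source_segment: str) -> str:
    """Fold style guide, glossary, and reference samples into one prompt,
    so the model respects them at generation time rather than via
    find-and-replace afterwards."""
    glossary_lines = "\n".join(f"- {en} -> {fr}" for en, fr in GLOSSARY.items())
    samples = "\n".join(f"EN: {en}\nFR: {fr}" for en, fr in REFERENCE_SAMPLES)
    return (
        "Translate the segment into French.\n\n"
        f"Style guide:\n{STYLE_GUIDE}\n\n"
        f"Glossary (always use these renderings):\n{glossary_lines}\n\n"
        f"Reference translations (match their tone):\n{samples}\n\n"
        f"Segment:\n{source_segment}\n"
    )

print(build_prompt("Sign in to change your settings."))
```

Because the glossary is injected as an instruction rather than substituted into the output, the model can inflect terms to fit the surrounding sentence instead of pasting them in verbatim.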

The winners of this cycle won't necessarily be the incumbent CAT vendors. They have customers and data, yes, but the cost of replacing their core architecture is high. For new entrants, this is a once-in-thirty-years reshuffling window.