We present empirical methods and tools that can make the evaluation of translated texts, treated as parallel translation corpora, more objective and efficient, whether the translations are produced by professional translators or by a machine translation service. The proposed methods and tools are based on an empirical analysis of information processing in translated texts and on the utilitarian role of machine translation and its methods, and they can be implemented in a tool-based translation evaluation apparatus in a professional context. The core of these methods (which can be applied automatically or manually) relies on comparing two parameters that are measurable in most natural languages: the length of segments in characters and the number of lexical words they contain. Our recent work (Poirier, 2017; Poirier, 2021) has shown that these parameters are strongly positively correlated in translation (above 0.9 as a rule, and most often above 0.95): the more characters or lexical words a source segment contains, the more characters or lexical words its translation contains. Measuring the lexical words and the information volume of translations makes it possible to distinguish heteromorphic translations (conveying more or less information than the source) from isomorphic segments (conveying the same information content). The manual and partially automatic analysis of heteromorphic segments opens up new empirical horizons in the professional evaluation of translations, as well as in the contrastive study of discourse and of textual translation techniques (e.g., in phraseology, textometry and stylistics). MT can be integrated at various stages before and after the evaluation of translated texts. Upstream, it can serve as a point of comparison for identifying mandatory or optional informational gaps (cultural or personal bias in the target language-culture) in professional translation.
Downstream, it can supply the revised translation or the revision process with counterexamples that justify a contrario the suitability or conventionality of divergent formulations.