OPUS machine translation models trained with Marian-NMT

We (OPUS / University of Helsinki) curate a large collection of highly multilingual parallel corpora and use them to train machine translation models, currently mostly with Marian-NMT. See https://opus.nlpl.eu/ and https://github.com/Helsinki-NLP/Opus-MT. Would our models be a good fit for integration with Sotabench, given that our emphasis is more on data than on ML engineering? I imagine the OPUS corpora could definitely make good benchmarks, especially for smaller languages.