1. PRIMERA is a pre-trained model for multi-document representation with a focus on summarization.
2. It uses an efficient encoder-decoder transformer (the Longformer Encoder-Decoder) to simplify the processing of concatenated input documents.
3. Experiments on 6 multi-document summarization datasets from 3 different domains show that PRIMERA outperforms current state-of-the-art models in most settings by large margins.
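Point 2 above hinges on feeding the model a single sequence built from several documents. A minimal sketch of that concatenation step is shown below; PRIMERA does use a special `<doc-sep>` token between documents, but the function name, per-document truncation strategy, and length budget here are illustrative assumptions, not PRIMERA's actual preprocessing code.

```python
# Sketch: joining multiple source documents into one input sequence
# separated by <doc-sep>, as in PRIMERA's concatenated-input setup.
# The even per-document truncation is a simplifying assumption.

DOC_SEP = "<doc-sep>"

def concat_documents(docs, max_tokens=4096):
    """Concatenate documents with <doc-sep>, truncating each document
    evenly so the combined input stays within a length budget."""
    budget_per_doc = max_tokens // max(len(docs), 1)
    truncated = []
    for doc in docs:
        words = doc.split()
        truncated.append(" ".join(words[:budget_per_doc]))
    return f" {DOC_SEP} ".join(truncated)

if __name__ == "__main__":
    docs = ["First article about the event.",
            "Second article with more detail."]
    print(concat_documents(docs))
```

The resulting string would then be tokenized and passed to the encoder-decoder model, whose efficient (sparse) attention makes the long concatenated input tractable.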
The article appears trustworthy and reliable: it describes the PRIMERA pre-trained model in detail and reports its performance across multiple datasets. It presents both the advantages and limitations of the model without evident bias, and it supports its claims with experiments on six datasets from three different domains, demonstrating the model's effectiveness at summarizing multi-document inputs. The inclusion of a link to the code and pre-trained models used in the experiments further adds to its credibility. Overall, the article can be considered reliable due to its comprehensive coverage of the topic and its avoidance of unsupported claims.