1. A method is introduced to improve the structural understanding abilities of language models.
2. This approach pretrains language models on a collection of task-agnostic corpora to generate structures from text (see the sketch after this list).
3. The pretraining enables zero-shot transfer of the structural knowledge the models acquire, and the approach achieves state-of-the-art performance on 21 of the 28 datasets evaluated.
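A minimal sketch of what such text-to-structure pretraining can look like, assuming a T5-style encoder-decoder and a "(head; relation; tail)" triple serialization; both the backbone and the serialization format are illustrative assumptions here, not necessarily the article's exact setup:

```python
# Sketch: structure generation cast as sequence-to-sequence learning.
# Assumptions (not from the article): a T5 backbone and a simple
# "(head; relation; tail)" linearization of the target structure.
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# One (text, linearized structure) training pair: the model learns to
# emit the structure as a plain token sequence conditioned on the text.
text = "Barack Obama was born in Honolulu."
structure = "(Barack Obama; place of birth; Honolulu)"

inputs = tokenizer(text, return_tensors="pt")
labels = tokenizer(structure, return_tensors="pt").input_ids

# Standard seq2seq cross-entropy loss over the linearized structure.
loss = model(**inputs, labels=labels).loss
loss.backward()
```

Because the supervision lives entirely in the serialized output sequence rather than in any task-specific head, the same objective can be applied across many corpora and tasks, which is what makes the pretraining task-agnostic and the learned knowledge transferable zero-shot.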
The article is generally reliable and trustworthy: it supports its claims with experiments on 28 datasets spanning 10 structure prediction tasks, and the authors describe their approach and results in enough detail to understand and evaluate the findings. The article does not appear biased or one-sided; it makes no unsupported claims, contains no promotional content, and notes limitations where relevant. Overall, it can be considered reliable and trustworthy.