1. Self-supervised learning is an alternative to supervised learning that has been gaining attention for its data efficiency and generalization ability.
2. This survey examines self-supervised methods for representation learning in computer vision, natural language processing, and graph learning.
3. Existing empirical methods are grouped into three main categories by objective: generative, contrastive, and generative-contrastive (adversarial).
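To make the "contrastive" category concrete, the sketch below implements an InfoNCE-style objective of the kind commonly used in contrastive self-supervised learning (e.g. SimCLR). This is an illustrative example, not code from the surveyed article; the function name, batch size, and temperature value are assumptions.

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """InfoNCE-style contrastive loss (illustrative sketch).

    Each anchor embedding should match its own positive view,
    with the other positives in the batch serving as negatives.
    """
    # L2-normalize so the dot product is cosine similarity
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    # Pairwise similarity matrix, scaled by temperature
    logits = a @ p.T / temperature
    # Cross-entropy with the diagonal (matching pair) as the target class
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
z = rng.normal(size=(4, 8))
# Positives that are near-copies of the anchors give a low loss;
# unrelated positives give a loss near log(batch_size)
loss_matched = info_nce(z, z + 0.01 * rng.normal(size=(4, 8)))
loss_random = info_nce(z, rng.normal(size=(4, 8)))
```

The key design point is that no labels are needed: the "positive pair" supervision comes from two views of the same example, which is what makes the objective self-supervised.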
The article is generally trustworthy and reliable. It provides a comprehensive overview of self-supervised representation learning across computer vision, natural language processing, and graph learning; analyzes existing empirical methods in detail, grouping them into three categories by objective (generative, contrastive, and generative-contrastive/adversarial); and collects related theoretical analyses that offer deeper insight into why self-supervised learning works.
The article shows no apparent bias or one-sided reporting: it presents the supervised and self-supervised learning paradigms even-handedly, and its claims are supported by evidence from related research papers. No significant points of consideration or counterarguments appear to be missing, as the article covers the relevant topics in self-supervised learning, and it contains no promotional content or partiality. Finally, it notes possible risks associated with self-supervised learning, which further strengthens its credibility.