1. Gaussian processes are ubiquitous in nature and engineering, and a class of neural networks in the infinite-width limit has priors that correspond to Gaussian processes.
2. This article perturbatively extends this correspondence to finite-width neural networks, yielding non-Gaussian processes as priors.
3. The methodology developed tracks the flow of preactivation distributions by progressively integrating out random variables from lower to higher layers, and it enables Bayesian inference with weakly non-Gaussian priors (a numerical sketch of the finite-width effect follows this summary).
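To make the summary's central claim concrete, here is a minimal numerical sketch (my own illustration, not code from the article). It estimates the excess kurtosis, i.e., the connected four-point correlator normalized by the squared two-point correlator, of last-layer preactivations over an ensemble of randomly initialized tanh networks. In the infinite-width limit this quantity vanishes, reflecting the Gaussian-process prior of point 1; at finite width it is nonzero and shrinks as the width grows, which is the leading correction that points 2 and 3 treat perturbatively. The function names, hyperparameters (depth 3, tanh activation, 1/fan-in weight variance), and widths are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def preactivation_samples(width, depth, n_nets, x=1.0, c_w=1.0):
    """Sample final-layer preactivations of `n_nets` independent random
    tanh networks, all evaluated on the same fixed scalar input `x`.
    Units within a layer share the same marginal distribution, so they
    are pooled to sharpen the moment estimates."""
    a = np.full((n_nets, 1), x)  # activations; fan-in starts at 1
    for _ in range(depth):
        fan_in = a.shape[1]
        # Weights drawn i.i.d. as W_ij ~ N(0, c_w / fan_in) for each network.
        w = rng.normal(0.0, np.sqrt(c_w / fan_in), size=(n_nets, width, fan_in))
        z = np.einsum("nij,nj->ni", w, a)  # preactivations, shape (n_nets, width)
        a = np.tanh(z)
    return z.ravel()

def excess_kurtosis(s):
    """Connected four-point correlator divided by the squared two-point
    correlator; identically zero for a Gaussian distribution."""
    s = s - s.mean()
    return (s ** 4).mean() / (s ** 2).mean() ** 2 - 3.0

for width in (2, 8, 32):
    z = preactivation_samples(width=width, depth=3, n_nets=20_000)
    print(f"width={width:3d}  excess kurtosis ~ {excess_kurtosis(z):+.4f}")
```

Running this should show the excess kurtosis decaying toward zero as the width grows, consistent with the Gaussian-process limit; the machinery in the article computes such finite-width corrections analytically, layer by layer, rather than by sampling.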
The article is written by Sho Yaida of Facebook AI Research, an affiliation worth noting as a possible source of bias. The exposition is clear and the overview of the topic comprehensive, but several claims are presented without supporting evidence or sources. There is also no discussion of the potential pitfalls of working with non-Gaussian processes, of counterarguments to the approach, or of considerations the article leaves unexplored. All in all, while the article provides an interesting overview of non-Gaussian processes and neural networks at finite widths, it would benefit from stronger evidence for its claims and a more balanced treatment of the approach's limitations.