1. The chapter discusses stochastic back-propagation for training neural networks, an instance of the more general technique of stochastic gradient descent (SGD).
2. It provides background material and explains why SGD is an effective learning algorithm when the training set is large (a short sketch of the update follows this list).
3. The chapter appears in the second, "Reloaded" edition of Neural Networks: Tricks of the Trade (Springer).
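For concreteness, here is a minimal sketch of the basic SGD update on a toy linear least-squares problem. The model, the learning rate, and the function name sgd_step are illustrative choices of this review, not details taken from the chapter; the point is only that each update uses a single example rather than the full training set.

```python
import numpy as np

def sgd_step(w, x, y, lr):
    """One SGD step on squared loss for a toy linear model (illustrative)."""
    pred = w @ x
    grad = 2 * (pred - y) * x   # gradient of (w.x - y)^2 with respect to w
    return w - lr * grad

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
w = np.zeros(2)
for t in range(10_000):
    x = rng.normal(size=2)                 # draw one example at a time
    y = w_true @ x + 0.01 * rng.normal()   # noisy target
    w = sgd_step(w, x, y, lr=0.01)
print(w)  # approaches w_true without ever computing a full-batch gradient
```

Because each step costs the same regardless of training-set size, this per-example update is what makes SGD attractive on large datasets.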
The chapter is written by Léon Bottou, a well-known researcher in the field of neural networks, and was published by Springer in 2012 as part of the second, "Reloaded" edition of Neural Networks: Tricks of the Trade. It offers practical guidance on stochastic gradient descent and its application to training neural networks.
The chapter does not appear biased or one-sided; it presents competing considerations fairly and objectively, supports its claims with evidence, and explores counterarguments where appropriate. There are no promotional elements or partiality present either.
The only potential issue is that the chapter does not discuss the risks of using SGD to train neural networks, such as overfitting or underfitting caused by poor hyperparameter settings or a lack of regularization. Readers should keep this in mind when applying SGD.
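As an illustration of the kind of safeguard this review has in mind, below is a minimal sketch of an SGD step with an L2 penalty (weight decay) folded into the gradient. The toy squared loss, the function name sgd_step_l2, and the weight_decay value are assumptions of this sketch, not details from the chapter.

```python
import numpy as np

def sgd_step_l2(w, x, y, lr, weight_decay=1e-4):
    """One SGD step on squared loss with an L2 penalty (weight decay).
    The weight_decay value is a hypothetical setting, chosen for illustration."""
    pred = w @ x
    # gradient of (w.x - y)^2 + weight_decay * ||w||^2
    grad = 2 * (pred - y) * x + 2 * weight_decay * w
    return w - lr * grad

# The penalty term shrinks w slightly at every step, which discourages
# overfitting at the cost of some bias in the learned weights.
w = sgd_step_l2(np.array([2.0, -1.0]), np.array([0.5, 0.3]), 1.0, lr=0.01)
```

In practice the penalty strength is itself a hyperparameter, which is exactly why the missing discussion of hyperparameter settings and regularization is worth noting.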