1. This article investigates a family of poisoning attacks against Support Vector Machines (SVMs).
2. These attacks inject specially crafted training data that increases the SVM's test error.
3. The proposed attack uses a gradient ascent strategy in which the gradient is computed based on properties of the SVM's optimal solution.
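The attack strategy summarized above can be illustrated with a minimal sketch. The paper derives an analytic gradient from properties of the SVM's optimal solution; the toy version below instead uses a central finite-difference gradient and the validation hinge loss as a smoother stand-in for the validation error, retraining a linear SVM with the candidate poisoning point appended at each step. All data, names, and parameter values here are illustrative assumptions, not the authors' setup.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy two-class problem: two Gaussian blobs for training and validation.
def blobs(n):
    X = np.vstack([rng.normal([-1.5, 0.0], 1.0, (n, 2)),
                   rng.normal([+1.5, 0.0], 1.0, (n, 2))])
    y = np.array([-1] * n + [+1] * n)
    return X, y

X_tr, y_tr = blobs(20)
X_val, y_val = blobs(50)

def attack_loss(xc, yc=-1):
    """Validation hinge loss after retraining with the candidate
    poisoning point (xc, yc) appended to the training set.
    (A smooth proxy for the validation error the paper maximizes.)"""
    clf = SVC(kernel="linear", C=1.0)
    clf.fit(np.vstack([X_tr, xc]), np.append(y_tr, yc))
    margins = y_val * clf.decision_function(X_val)
    return np.mean(np.maximum(0.0, 1.0 - margins))

# Gradient ascent on the attack objective, with a finite-difference
# gradient standing in for the paper's analytic one.
xc = np.zeros(2)
init_loss = attack_loss(xc)
best_loss, best_xc = init_loss, xc.copy()
eps, step = 0.25, 0.5
for _ in range(15):
    g = np.array([(attack_loss(xc + eps * e) - attack_loss(xc - eps * e)) / (2 * eps)
                  for e in np.eye(2)])
    norm = np.linalg.norm(g)
    if norm < 1e-12:
        break  # flat region: no useful ascent direction
    xc = xc + step * g / norm
    loss = attack_loss(xc)
    if loss > best_loss:
        best_loss, best_xc = loss, xc.copy()
```

Because the loop tracks the best point seen, `best_loss` can only improve on the unpoisoned starting value; the kernelized form described in the article would replace the linear kernel while still moving the attack point in input space.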
The article is generally trustworthy and reliable, offering an in-depth analysis of this family of poisoning attacks. The authors support their claims with evidence: they demonstrate that their gradient ascent procedure reliably finds good local maxima of the non-convex validation error surface, significantly increasing the classifier's test error. They also explain the proposed attack in detail and show how it can be kernelized, so that attack points can be constructed in the input space even for non-linear kernels.
The article does not appear to contain bias, one-sided reporting, or promotional content, and it avoids unsupported claims or obvious gaps in its analysis. The authors are also explicit about the security implications of their work, noting that such poisoning can substantially increase a deployed SVM's test error.
In conclusion, this article is a reliable, well-supported study of poisoning attacks against SVMs, presented without apparent bias or one-sided reporting.