Matteo Bregonzio

Data poisoning attacks

Data poisoning is an increasingly important security concern for Machine Learning (ML) systems. As machine learning models become more prevalent in our lives, they also become more attractive targets for malicious attacks. Data poisoning attacks are among the most insidious and difficult-to-detect threats to ML models.

Data poisoning is a type of adversarial attack in which a cybercriminal injects malicious samples into the data used to train a machine learning model. These attacks can be used to manipulate the results of a machine learning system, or to redirect the system’s resources away from its intended purpose.
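As a rough illustration of the idea, the sketch below shows one of the simplest forms of poisoning: label flipping, where an attacker who can tamper with the training set flips the labels of a fraction of the samples. The dataset, model, and poisoning rate are arbitrary choices made for this example, not a reference to any specific real-world attack.

```python
# Minimal sketch of a label-flipping poisoning attack (illustrative only).
# Assumes scikit-learn and NumPy; the synthetic dataset stands in for a
# real training set an attacker could tamper with.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic binary classification data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Baseline: model trained on clean data.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("clean accuracy:   ", clean_model.score(X_test, y_test))

# Attacker flips the labels of 20% of the training samples.
y_poisoned = y_train.copy()
n_poison = int(0.2 * len(y_poisoned))
idx = rng.choice(len(y_poisoned), size=n_poison, replace=False)
y_poisoned[idx] = 1 - y_poisoned[idx]

# Same features, same model -- only the corrupted labels differ.
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```

Comparing the two accuracy scores shows how even unsophisticated tampering with training labels degrades the resulting model, which is what makes this class of attack hard to spot once the poisoned data has been absorbed into the pipeline.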
