NIST’s National Cybersecurity Center of Excellence (NCCoE) has released a draft report on machine learning (ML) for public comment: A Taxonomy and Terminology of Adversarial Machine Learning.
Machine learning systems are vulnerable to cyberattacks that could allow attackers to evade security controls or trigger data leaks, scientists at the National Institute of Standards and Technology warned.
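The "data leaks" in that warning include privacy attacks aimed at the model itself. Below is a minimal sketch, under made-up assumptions (a toy logistic-regression model overfit to a tiny, synthetic training set), of a threshold membership-inference attack: the attacker guesses that a record was in the training data whenever the model's loss on it is unusually low. It is an illustration of the idea, not any method from the NIST report.

```python
# Threshold membership-inference sketch against an overfit toy model.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Tiny, high-dimensional synthetic dataset: 30 training records ("members")
# and 30 held-out records ("non-members"). Labels are random.
d = 50
X_members = rng.normal(size=(30, d))
y_members = rng.integers(0, 2, size=30)
X_outside = rng.normal(size=(30, d))
y_outside = rng.integers(0, 2, size=30)

# Overfit a logistic-regression model to the members with plain gradient descent.
w = np.zeros(d)
for _ in range(2000):
    p = sigmoid(X_members @ w)
    w -= 0.5 * X_members.T @ (p - y_members) / len(y_members)

def per_example_loss(X, y):
    p = np.clip(sigmoid(X @ w), 1e-9, 1 - 1e-9)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

loss_members = per_example_loss(X_members, y_members)
loss_outside = per_example_loss(X_outside, y_outside)

# Attacker's rule: "member" if the loss falls below a threshold (picked here
# purely for illustration); members of an overfit model tend to have near-zero loss.
threshold = 0.1
tpr = float((loss_members < threshold).mean())   # members correctly flagged
fpr = float((loss_outside < threshold).mean())   # non-members wrongly flagged
print(f"mean loss on training records: {loss_members.mean():.4f}")
print(f"mean loss on unseen records:   {loss_outside.mean():.4f}")
print(f"attack true-positive rate {tpr:.2f}, false-positive rate {fpr:.2f}")
```

The gap between the two mean losses is what leaks membership information; the larger the overfitting, the easier the attacker's job.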
The National Institute of Standards and Technology (NIST) has released updated guidance on adversarial machine learning.
An AI system can malfunction if an adversary finds a way to confuse its decision making. In this example, errant markings on the road mislead a driverless car, potentially making it veer into oncoming traffic.
To human observers, the following two images appear identical. But researchers at Google showed in 2015 that a popular image-classification model labeled the left image as “panda” and the right one as “gibbon,” even though the two differ only by a tiny, carefully crafted perturbation.
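A minimal sketch of the technique behind that demonstration follows: the fast gradient sign method (FGSM) described by Goodfellow et al. (2015). The “image” and classifier here are made up for illustration (a 1,000-feature input and a toy logistic-regression model), not the ImageNet network used in the original work.

```python
# FGSM sketch: a small per-feature nudge along the sign of the input
# gradient flips a confident prediction.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_features = 1000
x = rng.normal(size=n_features)   # clean input
w = rng.normal(size=n_features)   # hypothetical trained weights
b = 0.0

def predict(v):
    """Model's probability that the input belongs to class 1."""
    return sigmoid(w @ v + b)

# Take the model's own prediction on the clean input as the label the
# attacker wants to overturn (an untargeted attack needs no ground truth).
p = predict(x)
y = float(p > 0.5)

# Gradient of the cross-entropy loss with respect to the input.
# For logistic regression, d(loss)/dx = (p - y) * w.
grad_x = (p - y) * w

# FGSM step: move every feature by eps in the direction that increases the loss.
eps = 0.2
x_adv = x + eps * np.sign(grad_x)

print(f"clean prediction       p(class 1) = {predict(x):.4f}")
print(f"adversarial prediction p(class 1) = {predict(x_adv):.4f}")
print(f"largest per-feature change        = {np.max(np.abs(x_adv - x)):.2f}")
```

Although no single feature changes by more than 0.2, the perturbation is aligned with the model's gradient, so its effect on the decision accumulates across all features and the prediction flips.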
These vulnerabilities open the door to deception, giving malicious actors the opportunity to interfere with a machine learning system's computations or decisions.
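One way such interference can happen is data poisoning, where an attacker slips mislabeled records into the training set so the trained model misclassifies clean inputs. The sketch below uses made-up assumptions throughout (a toy 2-D dataset and a nearest-centroid classifier); it illustrates the general idea, not any specific system discussed by NIST.

```python
# Data-poisoning sketch: injected mislabeled points drag a class centroid
# into the other class's territory and degrade test accuracy.
import numpy as np

rng = np.random.default_rng(1)

def make_data(n_per_class):
    """Two Gaussian clusters: class 0 around (-1, -1), class 1 around (+1, +1)."""
    x0 = rng.normal(loc=(-1.0, -1.0), size=(n_per_class, 2))
    x1 = rng.normal(loc=(+1.0, +1.0), size=(n_per_class, 2))
    return np.vstack([x0, x1]), np.array([0] * n_per_class + [1] * n_per_class)

def train(X, y):
    """Nearest-centroid 'model': just the per-class means."""
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def accuracy(centroids, X, y):
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return float((dists.argmin(axis=1) == y).mean())

X_train, y_train = make_data(200)
X_test, y_test = make_data(200)

clean_model = train(X_train, y_train)

# Attacker injects 50 points far from both clusters, falsely labeled class 1.
# They pull the class-1 centroid toward class-0 territory, so many clean
# class-0 inputs are now classified as class 1.
X_poison = rng.normal(loc=(-8.0, -8.0), scale=0.1, size=(50, 2))
X_bad = np.vstack([X_train, X_poison])
y_bad = np.concatenate([y_train, np.ones(50, dtype=int)])

poisoned_model = train(X_bad, y_bad)

print(f"test accuracy, clean training set:    {accuracy(clean_model, X_test, y_test):.3f}")
print(f"test accuracy, poisoned training set: {accuracy(poisoned_model, X_test, y_test):.3f}")
```

The model itself is never touched; corrupting roughly a tenth of the training data is enough to shift its decision boundary and noticeably lower accuracy on clean inputs.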
The fields of machine learning (ML) and artificial intelligence (AI) have seen rapid development in recent years. ML, a branch of AI and computer science, is the process through which computers learn patterns from data without being explicitly programmed for each task.
The National Institute of Standards and Technology is promoting an experimentation testbed to address the changing cybersecurity landscape and the increasing threats targeting machine learning algorithms.