Navigating the Tangled Web of AI Cybersecurity Vulnerabilities

Defending Against AI Security Attacks

Insights from the National Institute of Standards and Technology (NIST)

Tobias Faiss
3 min read · Jan 17, 2024


As our technological society becomes more sophisticated, artificial intelligence (AI) has seeped into almost every crevice, fulfilling tasks that range from assisting in medical diagnoses to steering autonomous vehicles. Yet, these wonders of innovation are not without their risks, and that’s precisely the focus of a recent publication from the National Institute of Standards and Technology (NIST).

The document, ‘Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations (NIST.AI.100-2)’, dissects the vulnerabilities of AI and machine learning (ML) systems, highlighting their susceptibility to purposeful manipulation that causes them to malfunction. Such attacks can strike during an AI’s learning phase, where corrupting the training data is termed poisoning, or after deployment, where crafted inputs that fool a trained model are termed evasion.
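To make the poisoning idea concrete, here is a minimal toy sketch (not taken from the NIST document): an attacker who can flip a fraction of the training labels degrades a simple classifier trained on the corrupted data. The dataset, model, and 30% flip rate are illustrative assumptions.

```python
# Toy sketch of a label-flipping poisoning attack (illustrative only;
# the NIST taxonomy covers far broader classes of attacks).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline model trained on clean data.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The attacker flips the labels of 30% of the training set ("poisoning").
rng = np.random.default_rng(0)
idx = rng.choice(len(y_train), size=int(0.3 * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[idx] = 1 - y_poisoned[idx]

poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

# Test accuracy typically drops for the poisoned model; the exact gap
# depends on the data and the flip rate.
print(f"clean accuracy:    {clean.score(X_test, y_test):.3f}")
print(f"poisoned accuracy: {poisoned.score(X_test, y_test):.3f}")
```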

As with many creations, AI’s weaknesses can often be traced back to its origins: the data an AI system learns from is not always reliable. Untrustworthy or tampered source data can corrupt a model during training and degrade its performance later on.
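One basic hygiene step that follows from this is verifying where training data comes from before using it. Below is a hedged sketch, assuming a hypothetical dataset file and a pinned SHA-256 digest obtained from a trusted source; it simply refuses to proceed if the data on disk no longer matches that digest.

```python
# Sketch: verify dataset provenance with a cryptographic hash before
# training. EXPECTED_SHA256 and the file path are hypothetical.
import hashlib
from pathlib import Path

EXPECTED_SHA256 = "0123abcd..."  # hypothetical pinned digest

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

dataset = Path("training_data.csv")  # hypothetical path
if sha256_of(dataset) != EXPECTED_SHA256:
    raise RuntimeError("Training data does not match its pinned digest; "
                       "refusing to train on possibly tampered data.")
```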

To tackle these issues, the publication’s authors survey possible mitigation strategies alongside a broad taxonomy of attack techniques relevant to various AI systems. They acknowledge, however, that no current defense offers robust assurances of full mitigation against these risks, making it crucial for industry, academia, and the public to keep exploring and developing more reliable defenses.
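As one concrete, if limited, example of the kind of mitigation such a taxonomy discusses, here is a sketch of a simple data-sanitization heuristic: drop training points whose label disagrees with most of their nearest neighbors. The function name and parameters are illustrative assumptions, and, as NIST cautions, no such heuristic comes with robust guarantees.

```python
# Sketch of a k-nearest-neighbor sanitization heuristic for binary labels.
# Points whose label contradicts the local majority are filtered out
# before training. Illustrative only; offers no formal guarantees.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_sanitize(X: np.ndarray, y: np.ndarray, k: int = 5) -> np.ndarray:
    """Return a boolean mask of points whose label matches the local majority."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)           # idx[:, 0] is each point itself
    neighbor_labels = y[idx[:, 1:]]     # labels of the k true neighbors
    majority = (neighbor_labels.mean(axis=1) >= 0.5).astype(y.dtype)
    return majority == y

# Usage, continuing the poisoning sketch above:
# mask = knn_sanitize(X_train, y_poisoned)
# cleaned = LogisticRegression(max_iter=1000).fit(X_train[mask], y_poisoned[mask])
```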

Although the technology community is striving towards creating trustworthy AI, we must accept the reality of our circumstances — AI systems, like all innovations, come with their sets of challenges. Adversarial manipulation is a major stumbling block in the quest for AI perfection, but one that cannot and should not dampen the spirit of advancement.

NIST’s AI Risk Management Framework stands as an anchor, guiding the integration of security principles during the design and development phases. Their recently published document could serve as a roadmap for organizations aiming to shield their AI from vulnerabilities, or even as a call to arms, emphasizing the importance of vigilance and preparedness.

The age of AI offers opportunities for tremendous progress and innovation, but as leaders in business and technology, we must temper our enthusiasm with prudence and security foresight. NIST’s publication serves as a stern reminder that the pathway to success is often paved with challenges and offers valuable insights into meeting these trials head-on.

Let’s keep innovating, but more importantly, let’s keep our AI secure and resilient against the challenges of the digital era.

The cybersecurity chess match continues.

If you want to learn more about AI security, consider the training Managing AI Risks with the NIST AI RMF.

About Tobias Faiss

Tobias is a Senior Engineering Manager focusing on applied leadership, analytics, and cyber resilience. He has a track record of 18+ years managing software projects, services, and teams in the United States, EMEA, and Asia-Pacific, and he currently leads several multinational teams in Germany, India, Singapore, and Vietnam. He is also the founder of the delta2 edventures platform, whose mission is to educate students and IT professionals transitioning into IT management roles.

Tobias’ latest book is ‘The Art of IT-Management: How to Successfully Lead Your Company Into the Digital Future’. You can also reach him via his personal website, tobiasfaiss.com.
