Rana / Chicone | Triangulating AI Security | Book | 978-1-394-36848-8 | www.sack.de

Book, English, 496 pages

Rana / Chicone

Triangulating AI Security


1st edition, 2025
ISBN: 978-1-394-36848-8
Publisher: Wiley



Up-to-date reference enabling readers to address the full spectrum of AI security challenges while maintaining model utility

Generative AI Security: Defense, Threats, and Vulnerabilities delivers a technical framework for securing generative AI systems, building on established standards while focusing specifically on emerging threats to large language models and other generative AI systems. Moving beyond a simple dual-use framing of AI, this book provides detailed technical analysis of three critical dimensions: implementing AI-powered security tools, defending against AI-enhanced attacks, and protecting AI systems from compromise through attacks such as prompt injection, model poisoning, and data extraction.

The book provides concrete technical implementations supported by real-world case studies of actual AI system compromises. It examines documented cases such as the DeepSeek breaches, Llama vulnerabilities, and Google's CaMeL security defenses to demonstrate attack methodologies and defense strategies, while emphasizing foundational security principles that remain relevant despite technological shifts. Each chapter progresses from theoretical foundations to practical applications.

The book also includes an implementation guide and hands-on exercises focusing on specific vulnerabilities in generative AI architectures, security control implementation, and compliance frameworks.

Generative AI Security: Defense, Threats, and Vulnerabilities discusses topics including:

- Machine learning fundamentals, including supervised, unsupervised, and reinforcement learning, as well as feature engineering and selection
- Intelligent Security Information and Event Management (SIEM), covering AI-enhanced log analysis, predictive vulnerability assessment, and automated patch generation
- Deepfakes and synthetic media, covering image and video manipulation, voice cloning, audio deepfakes, and AI's broader impact on information integrity
- Security attacks on generative AI, including jailbreaking, adversarial, backdoor, and data poisoning attacks
- Privacy-preserving AI techniques, including federated learning and homomorphic encryption

Generative AI Security: Defense, Threats, and Vulnerabilities is an essential resource for cybersecurity professionals and architects, engineers, IT professionals, and organization leaders seeking integrated strategies that address the full spectrum of generative AI security challenges while maintaining model utility.


Further Information & Material


Chapter 1:

Abstract

1.1 What Is Generative AI?

1.2 The Evolution of AI in Cybersecurity

1.3 Overview of GAI in Security

1.4 Current Landscape of Generative AI Applications

1.5 A Triangular Approach

Chapter 1 Summary

Hypothetical Case Study: The Triangular Approach to AI Security

References

Chapter 2: Understanding Generative AI Technologies

Abstract

2.1 ML Fundamentals

2.2 Deep Learning and Neural Networks

2.3 Generative Models

2.4 NLP in Generative AI

2.5 Computer Vision in Generative AI

Conclusion

Chapter 2 Summary

Case Study

References

Chapter 3: Generative AI as a Security Tool

Abstract

3.1 AI-Powered Threat Detection and Response

3.2 Automated Vulnerability Discovery and Patching

3.3 Intelligent SIEMs

3.4 AI in Malware Analysis and Classification

3.5 Generative AI in Red Teaming

3.6 J-Curve for Productivity in AI-Driven Security

3.7 Regulatory Technology (RegTech)

3.8 AI for Emotional Intelligence (EQ) in Cybersecurity

Chapter 3 Summary

Case Study: GAI as a Tool

References

Chapter 4: Weaponized Generative AI

Abstract

4.1 Deepfakes and Synthetic Media

4.2 AI-Powered Social Engineering

4.3 Automated Hacking and Exploit Generation

4.4 Privacy Concerns

4.5 Weaponization of AI: Attack Vectors

4.6 Defensive Strategies Against Weaponized Generative AI

Chapter 4 Summary

Case Study 1: Weaponized AI in a Small-Sized Organization

Case Study 2: Weaponized AI in a Large Organization

References

Chapter 5: Generative AI Systems as a Target of Cyber Threats

Abstract

5.1 Security Attacks on Generative AI

5.2 Privacy Attacks on Generative AI

5.3 Attacks on Availability

5.4 Physical Vulnerabilities

5.5 Model Extraction and Intellectual Property Risks

5.6 Model Poisoning and Supply Chain Risks

5.7 Open-Source GAI Models

5.8 Application-Specific Risks

5.9 Challenges in Mitigating Generative AI Risks

Chapter 5 Summary

Case Study 1: Small Organization - Securing Customer Chatbot Systems

Case Study 2: Medium-Sized Organization - Defending Against Model Extraction

Case Study 3: Large Organization - Addressing Data Poisoning in AI Training Pipelines

References

Chapter 6: Defending Against Generative AI Threats

Abstract

6.1 Deepfake Detection Techniques

6.2 Adversarial Training and Robustness

6.3 Secure AI Development Practices

6.4 AI Model Security and Protection

6.5 Privacy-Preserving AI Techniques

6.6 Proactive Threat Intelligence and AI Incident Response

6.7 MLSecOps/SecMLOps for Secure AI Development

Chapter 6 Summary

Case Study: Comprehensive Defense Against Generative AI Threats in a Multinational Organization

References

Chapter 7: Ethical and Regulatory Considerations

Abstract

7.1 Ethical Challenges in AI Security

7.2 AI Governance Frameworks

7.3 Current and Emerging AI Regulations

7.4 Responsible AI Development and Deployment

7.5 Balancing Innovation and Security

Chapter 7 Summary

Case Study: Navigating Ethical and Regulatory Challenges in AI Security for a Financial Institution

References

Chapter 8: Future Trends in Generative AI Security

Abstract

8.1 Quantum Computing and AI Security

8.2 Human Collaboration in Cybersecurity

8.3 Advancements in XAI

8.4 The Role of Generative AI in Zero Trust

8.5 Micromodels

8.6 AI and Blockchain

8.7 Artificial General Intelligence (AGI)

8.8 Digital Twins

8.9 Agentic AI

8.10 Multimodal Models

8.11 Robotics

Chapter 8 Summary

Case Study: Applying the Triangular Framework to Generative AI Security

References

Chapter 9: Implementing Generative AI Security in Organizations

Abstract

9.1 Assessing Organizational Readiness

9.2 Developing an AI Security Strategy

9.3 Shadow AI

9.4 Building and Training AI Security Teams

9.5 Policy Recommendations for AI and Generative AI Implementation: A Triangular Approach

Chapter 9 Summary

Case Study: Implementing Generative AI Security in Organizations – A Triangular Path Forward

References

Chapter 10: Future Outlook on AI and Cybersecurity

Abstract

10.1 The Evolving Role of Security Professionals

10.2 AI-Driven Incident Response and Recovery

10.3 GAI Security Triad Framework (GSTF)

10.4 Preparing for Future Challenges

10.5 Responsible AI Security

Chapter 10 Summary

Practice Quiz: AI Security Triangular Framework

References

Index


Shaila Rana, PhD, is a professor of cybersecurity and co-founder of the ACT Research Institute, a cybersecurity, AI, and technology think tank. She chairs the IEEE Standards Association initiative on Zero Trust Cybersecurity for Health Technology, Tools, Services, and Devices.

Rhonda Chicone, PhD, is a retired professor and the co-founder of the ACT Research Institute. A former CSO, CTO, and Director of Software Development, she brings decades of experience in software product development and cybersecurity.


