
Protect AI Workloads with Model Armor

Managed by Google Cloud Partners
4 activities
Last updated 22 days ago

This learning path is part of the Lead the Secure Innovation with Google Cloud Priority Play. It focuses on securing Generative AI applications by implementing robust safety filters and prompt sanitization. You will start with the fundamentals of AI security risks before moving into hands-on configuration of Model Armor templates and floor settings to establish baseline security controls. Through specialized labs, you'll learn how to sanitize user prompts and model responses to prevent data leakage and injection attacks. The path also covers critical operational skills, including setting up audit logging and using monitoring dashboards to track security telemetry, ensuring your AI workflows remain compliant and protected across diverse deployment scenarios.
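To illustrate the idea behind prompt sanitization covered in the labs, here is a minimal, purely illustrative sketch of pattern-based prompt screening. Note that Model Armor itself is a managed Google Cloud service configured through templates and floor settings, not application code like this; the patterns and function below are hypothetical examples of the concept, not the service's API.

```python
import re

# Hypothetical injection patterns for illustration only; a real service
# like Model Armor uses managed, continuously updated detection, not a
# hand-written list.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"reveal (your )?system prompt",
]

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt matches a known injection pattern."""
    return any(
        re.search(pattern, prompt, re.IGNORECASE)
        for pattern in INJECTION_PATTERNS
    )

# Flagged prompts can then be blocked or logged before reaching the model.
```

In practice, the labs in this path configure equivalent screening declaratively via Model Armor templates, which also cover model responses, so both directions of the conversation are sanitized.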

You can continue your learning journey in the Leading the Secure Innovation with Google Cloud curriculum by completing the other learning paths. Designing Secure Landing Zones and Architecture is available now; the remaining learning paths are coming soon.