Gianluca Mongiello
Member since 2025
Silver League
1389 points
AI Agents represent a major shift beyond traditional large language models (LLMs): instead of simply generating text-based solutions, they can also act autonomously to execute them. This course introduces the fundamentals of AI Agents, how they differ from LLM APIs, and where they add value in the real world. Based on Google’s agents whitepaper, it provides the theoretical foundation needed before writing your first lines of agent code—ideal for developers, architects, and technical decision-makers who want to understand AI systems through the lens of autonomous, goal-directed behavior (and not just text generation). Join the community forum for questions and discussions.
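The agent-vs-LLM distinction above can be sketched in a few lines of Python. This is a toy illustration only, assuming made-up names (`fake_llm`, `get_weather`, `run_agent`) — not a real model or agent API: a plain LLM call stops at text, while an agent loop executes the action the model proposes.

```python
def fake_llm(prompt: str) -> str:
    """Stand-in for a text-only LLM: it can only *describe* an action."""
    if "weather" in prompt:
        return "CALL get_weather(city='Rome')"
    return "DONE: no tool needed"

def get_weather(city: str) -> str:
    """Illustrative tool the agent runtime can actually execute."""
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def run_agent(goal: str) -> str:
    """Minimal agent loop: the model proposes a tool call, the runtime runs it."""
    reply = fake_llm(goal)
    if reply.startswith("CALL get_weather"):
        # Executing the proposed call is the step a bare LLM API never takes.
        return TOOLS["get_weather"](city="Rome")
    return reply

# The plain call returns a textual plan; the agent acts on it.
print(fake_llm("What's the weather in Rome?"))
print(run_agent("What's the weather in Rome?"))
```

In a real agent the hardcoded tool dispatch would be replaced by structured tool-calling from a hosted model, but the control flow — generate, act, observe — is the same.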
Welcome to the Cloud TPUs course. We'll explore the advantages and disadvantages of TPUs in various scenarios and compare different TPU accelerators to help you choose the right fit. You'll learn strategies to maximize performance and efficiency for your AI models and understand the significance of GPU/TPU interoperability for flexible machine learning workflows. Through engaging content and practical demos, we'll guide you step-by-step in leveraging TPUs effectively.
Curious about the powerful hardware behind AI? This module breaks down performance-optimized AI computers, showing you why they're so important. We'll explore how CPUs, GPUs, and TPUs make AI tasks super fast, what makes each one unique, and how AI software gets the most out of them. By the end, you'll know exactly how to pick the right GPU for your AI projects, helping you make smart choices for your AI workloads.
Ready to get started with AI Hypercomputers? This course makes it easy! We'll cover the basics of what they are and how they support AI workloads. You'll learn about the different components inside a hypercomputer, like GPUs, TPUs, and CPUs, and discover how to pick the right deployment approach for your needs.
This course equips you with the knowledge and tools to understand the unique challenges MLOps teams face when deploying and managing Generative AI models, and explores how Vertex AI helps AI teams streamline MLOps processes and succeed in Generative AI projects.
This is an introductory-level microlearning course aimed at explaining what responsible AI is, why it's important, and how Google implements responsible AI in their products. It also introduces Google's 3 AI principles.
This is an introductory-level microlearning course that explores what large language models (LLMs) are, the use cases where they can be applied, and how you can use prompt tuning to enhance LLM performance. It also covers Google tools to help you develop your own Gen AI apps.
This is an introductory-level microlearning course aimed at explaining what Generative AI is, how it is used, and how it differs from traditional machine learning methods. It also covers Google tools to help you develop your own Gen AI apps.