In this course, you’ll learn to use the Google Agent Development Kit to build complex, multi-agent systems. You will build agents equipped with tools, and connect them with parent-child relationships and flows to define how they interact. You’ll run your agents locally and deploy them to Vertex AI Agent Engine to run as a managed agentic flow, with infrastructure decisions and resource scaling handled by Agent Engine. Please note these labs are based on a pre-release version of this product. The labs may lag slightly behind the product as we provide maintenance updates.
NotebookLM is an AI-powered collaborator that helps you do your best thinking. After uploading your documents, NotebookLM becomes an instant expert in those sources so you can read, take notes, and collaborate with it to refine and organize your ideas. NotebookLM Pro gives you everything already included with NotebookLM, as well as higher utilization limits, access to premium features, and additional sharing options and analytics.
Unite Google’s expertise in search and AI with Gemini Enterprise, a powerful tool designed to help employees find specific information from document storage, email, chats, ticketing systems, and other data sources, all from a single search bar. The Gemini Enterprise assistant can also help brainstorm, research, outline documents, and take actions like inviting coworkers to a calendar event to accelerate knowledge work and collaboration of all kinds. (Please note that Gemini Enterprise was previously named Google Agentspace; there may be references to the previous product name in this course.)
Complete the intermediate Explore Generative AI with the Gemini API in Vertex AI skill badge to demonstrate skills in text generation, image and video analysis for enhanced content creation, and applying function calling techniques within the Gemini API. Discover how to leverage sophisticated Gemini techniques, explore multimodal content generation, and expand the capabilities of your AI-powered projects.
(This course was previously named Multimodal Prompt Engineering with Gemini and PaLM.) This course teaches how to use Vertex AI Studio, a Google Cloud console tool for rapidly prototyping and testing generative AI models. You learn to test sample prompts, design your own prompts, and customize foundation models to handle tasks that meet your application's needs. Whether you are looking for text, chat, code, image, or speech generative experiences, Vertex AI Studio offers you an interface to work with and APIs to integrate into your production application.
This course on Integrate Vertex AI Search and Conversation into Voice and Chat Apps is composed of a set of labs that give you hands-on experience interacting with new generative AI technologies. You will learn how to create end-to-end search and conversational experiences by following examples. These technologies complement predefined intent-based chat experiences created in Dialogflow with LLM-based, generative answers that can be grounded in your own data. They also allow you to provide enterprise-grade search experiences for internal and external websites to search documents, structured data, and public websites.
In this course, you'll use text embeddings for tasks like classification, outlier detection, text clustering and semantic search. You'll combine semantic search with the text generation capabilities of an LLM to build Retrieval Augmented Generation (RAG) solutions, such as for question-answering systems, using Google Cloud's Vertex AI and Google Cloud databases.
This course explores Google Cloud technologies to create and generate embeddings. Embeddings are numerical representations of text, images, video and audio, and play a pivotal role in many tasks that involve the identification of similar items, like Google searches, online shopping recommendations, and personalized music suggestions. Specifically, you’ll use embeddings for tasks like classification, outlier detection, clustering and semantic search. You’ll combine semantic search with the text generation capabilities of an LLM to build Retrieval Augmented Generation (RAG) systems and question-answering solutions, on your own proprietary data using Google Cloud’s Vertex AI.
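The retrieval step described above can be sketched library-agnostically: embed documents and a query as vectors, rank documents by cosine similarity, and pass the top match to an LLM as grounding context. This is a minimal sketch with toy hand-made vectors; in the course itself, the embeddings would come from Vertex AI's text-embedding models.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query_vec, doc_vecs, k=1):
    # Rank documents by similarity to the query; return the top-k indices.
    scores = [cosine_similarity(query_vec, d) for d in doc_vecs]
    return sorted(range(len(doc_vecs)), key=lambda i: scores[i], reverse=True)[:k]

# Toy 3-dimensional "embeddings" standing in for real model output.
docs = [np.array([0.9, 0.1, 0.0]),   # document about topic A
        np.array([0.0, 0.2, 0.9])]   # document about topic B
query = np.array([1.0, 0.0, 0.1])    # query close to topic A

top = retrieve(query, docs, k=1)
# The top document's text would then be prepended to the LLM prompt as
# grounding context -- the "retrieval" half of Retrieval Augmented Generation.
print(top)  # [0]
```

The same cosine-ranking idea underlies the classification, clustering, and outlier-detection tasks mentioned above: all of them compare items by the distance between their embedding vectors.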
In this course, you'll explore AI-powered search technologies, tools, and applications. Learn about semantic search using vector embeddings, hybrid search combining semantic and keyword approaches, and retrieval-augmented generation (RAG), which minimizes AI hallucinations by grounding the AI agent. Get hands-on experience with Vertex AI Vector Search to build your intelligent search engine.
This course will help ML Engineers, Developers, and Data Scientists implement Large Language Models for Generative AI use cases with Vertex AI. The first two modules of this course contain links to videos and prerequisite course materials that will build your knowledge foundation in Generative AI. Please do not skip these modules. The advanced modules in this course assume you have completed these earlier modules.
This course teaches you how to create an image captioning model by using deep learning. During the course, you learn about the different components of an image captioning model, such as the encoder and decoder, and how to train and evaluate your model. Learners who complete this course will be able to create their own image captioning models and use them to generate captions for images.
This course introduces diffusion models, a family of machine learning models that have shown promise in the image generation space. Diffusion models draw inspiration from physics, specifically thermodynamics. Over the last few years, diffusion models have gained popularity in both research and industry. Many of the state-of-the-art image generation models and tools on Google Cloud are powered by diffusion models. This course introduces the theory behind diffusion models and explains how to train and deploy them on Vertex AI.
This course introduces the Transformer architecture and the Bidirectional Encoder Representations from Transformers (BERT) model. You learn about the main components of the Transformer architecture, such as the self-attention mechanism, and how it is used to build the BERT model. You also learn about the different tasks that BERT can be used for, such as classification, question answering, and natural language inference. The course is estimated to take 45 minutes.
This course gives you a synopsis of the encoder-decoder architecture, a powerful and prevalent machine learning architecture for sequence-to-sequence tasks such as machine translation, text summarization, and question answering. You learn about the main components of the encoder-decoder architecture and how to train and serve these models. In the corresponding lab walkthrough, you'll code a simple implementation of the encoder-decoder architecture in TensorFlow to generate poetry from scratch.
This course introduces the attention mechanism, a powerful technique that allows neural networks to focus on specific parts of an input sequence. You learn how the attention mechanism works and how it can be used to improve the performance of a variety of machine learning tasks, such as machine translation, text summarization, and question answering.
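The core computation behind this mechanism is scaled dot-product attention: each query scores every input position, the scores are normalized with a softmax, and the output is the resulting weighted sum of values. The following is a minimal NumPy sketch of that formula, not the course's own code.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
    # Numerically stable softmax over the key axis.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights      # weighted sum of values, plus the weights

# One query attending over three input positions (d_k = 2).
Q = np.array([[1.0, 0.0]])
K = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])
V = np.array([[10.0, 0.0], [0.0, 10.0], [5.0, 5.0]])
out, w = scaled_dot_product_attention(Q, K, V)
# The first key matches the query best, so the first value dominates `out`.
```

The softmax weights are what let the network "focus": positions whose keys align with the query contribute more of their value vectors to the output.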
Earn a skill badge by passing the final quiz to demonstrate your understanding of foundational concepts in generative AI. A skill badge is a digital badge issued by Google Cloud in recognition of your knowledge of Google Cloud products and services. Share your skill badge by making your profile public and adding it to your social media profile.
This course explores Gemini in BigQuery, a suite of AI-driven features that assist the data-to-AI workflow. These features include data exploration and preparation, code generation and troubleshooting, and workflow discovery and visualization. Through conceptual explanations, a practical use case, and hands-on labs, the course empowers data practitioners to boost their productivity and expedite the development pipeline.
Text Prompt Engineering Techniques introduces different strategic approaches and techniques to consider when writing prompts for text-based generative AI tasks.
This course introduces Vertex AI Studio, a tool for interacting with generative AI models, prototyping business ideas, and launching them into real-world applications. Through real-world use cases, engaging lessons, and hands-on labs, you'll explore the lifecycle from initial prompt to final product and learn how to leverage Vertex AI Studio for multimodal Gemini applications, prompt design, prompt engineering, and model tuning. The goal of this course is to enable you to harness generative AI in your projects with Vertex AI Studio.
As the use of enterprise AI and ML continues to grow, so does the importance of building it responsibly. The challenge is that talking about responsible AI can be much easier than putting it into practice. If you're interested in learning how to operationalize responsible AI in your organization, this course is for you. It takes a deep dive into how Google Cloud applies its approach to responsible AI, providing you with a comprehensive framework for building your own responsible AI strategy.
This is an introductory-level micro-learning course aimed at explaining what responsible AI is, why it's important, and how Google implements responsible AI in its products. It also introduces Google's 7 AI Principles.
A Business Leader in Generative AI can articulate the capabilities of core cloud Generative AI products and services and understand how they benefit organizations. This course provides an overview of the types of opportunities and challenges that companies often encounter in their digital transformation journey and how they can leverage Google Cloud's generative AI products to overcome these challenges.
In the last installment of the Dataflow course series, we will introduce the components of the Dataflow operational model. We will examine tools and techniques for troubleshooting and optimizing pipeline performance. We will then review testing, deployment, and reliability best practices for Dataflow pipelines. We will conclude with a review of Templates, which make it easy to scale Dataflow pipelines to organizations with hundreds of users. These lessons will help ensure that your data platform is stable and resilient to unanticipated circumstances.
Complete the introductory Build a Data Mesh with Dataplex skill badge to demonstrate skills in the following: building a data mesh with Dataplex to facilitate data security, governance, and discovery on Google Cloud. You practice and test your skills in tagging assets, assigning IAM roles, and assessing data quality in Dataplex.
In this intermediate course, you will learn to design, build, and optimize robust batch data pipelines on Google Cloud. Moving beyond fundamental data handling, you will explore large-scale data transformations and efficient workflow orchestration, essential for timely business intelligence and critical reporting. Get hands-on practice using Dataflow for Apache Beam and Serverless for Apache Spark (Dataproc Serverless) for implementation, and tackle crucial considerations for data quality, monitoring, and alerting to ensure pipeline reliability and operational excellence. A basic knowledge of data warehousing, ETL/ELT, SQL, Python, and Google Cloud concepts is recommended.
This course helps learners create a study plan for the PDE (Professional Data Engineer) certification exam. Learners explore the breadth and scope of the domains covered in the exam. Learners assess their exam readiness and create their individual study plan.
In this second installment of the Dataflow course series, we dive deeper into developing pipelines using the Beam SDK. We start with a review of Apache Beam concepts. Next, we discuss processing streaming data using windows, watermarks, and triggers. We then cover options for sources and sinks in your pipelines, schemas to express your structured data, and how to do stateful transformations using the State and Timer APIs. We move on to reviewing best practices that help maximize your pipeline performance. Towards the end of the course, we introduce SQL and DataFrames to represent your business logic in Beam, and show how to iteratively develop pipelines using Beam notebooks.
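As a conceptual sketch of the windowing idea mentioned above (plain Python, not the Beam SDK itself): fixed windowing assigns each timestamped element to a non-overlapping time bucket. In Beam this is expressed with `beam.WindowInto(beam.window.FixedWindows(...))`, and watermarks and triggers then decide when each window's results are emitted.

```python
from collections import defaultdict

def assign_fixed_windows(events, window_size_s):
    # Group (timestamp, value) events into non-overlapping fixed windows,
    # keyed by each window's start time. This mirrors the semantics Beam
    # applies with WindowInto(FixedWindows(window_size_s)).
    windows = defaultdict(list)
    for ts, value in events:
        window_start = (ts // window_size_s) * window_size_s
        windows[window_start].append(value)
    return dict(windows)

# Events arriving at seconds 3, 12, 14, and 27, bucketed into 10s windows.
events = [(3, "a"), (12, "b"), (14, "c"), (27, "d")]
print(assign_fixed_windows(events, 10))
# {0: ['a'], 10: ['b', 'c'], 20: ['d']}
```

In a real streaming pipeline the elements arrive unbounded and possibly out of order, which is exactly why Beam pairs windowing with watermarks (an estimate of event-time completeness) and triggers (when to materialize a window's contents).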
This course is part 1 of a 3-course series on Serverless Data Processing with Dataflow. In this first course, we start with a refresher of what Apache Beam is and its relationship with Dataflow. Next, we talk about the Apache Beam vision and the benefits of the Beam Portability framework. The Beam Portability framework achieves the vision that a developer can use their favorite programming language with their preferred execution backend. We then show you how Dataflow allows you to separate compute and storage while saving money, and how identity, access, and management tools interact with your Dataflow pipelines. Lastly, we look at how to implement the right security model for your use case on Dataflow.
This workload aims to upskill Google Cloud Partners on what Google Cloud VMware Engine is, its design considerations, and basic implementation. It also covers interoperability with on-premises vSphere and real-world use cases for deployment. Finally, it covers how VMware Engine compares to other cloud providers' VMware-as-a-Service offerings and future plans for VMware Engine.
This course introduces participants to MLOps tools and best practices for deploying, evaluating, monitoring and operating production ML systems on Google Cloud. MLOps is a discipline focused on the deployment, testing, monitoring, and automation of ML systems in production. Learners will get hands-on practice using Vertex AI Feature Store's streaming ingestion at the SDK layer.
This course introduces participants to MLOps tools and best practices for deploying, evaluating, monitoring and operating production ML systems on Google Cloud. MLOps is a discipline focused on the deployment, testing, monitoring, and automation of ML systems in production. Machine Learning Engineering professionals use tools for continuous improvement and evaluation of deployed models. They work with (or can be) Data Scientists, who develop models, to enable velocity and rigor in deploying the best performing models.
This course helps developers customize Chronicle and augment its abilities with third party integrations.
This course helps you understand how to use Chronicle to properly handle security incidents.
This course will familiarize you with the core functionality of Chronicle, including the user interface, connections, and settings.
Get hands-on experience applying and building rules for Chronicle. You learn what YARA-L is and how to customize & create event rules.
Learn the technical aspects you need to know about Chronicle and how it can help you detect and action threats.
Earn a DRI badge by completing the Infra Foundations - Implementing Least Privilege for Service Accounts quest, where you demonstrate your ability to manage service accounts, assign IAM roles, set up and use impersonation, and implement logging sinks that target GCS buckets. When you complete this activity, you can earn the badge displayed above! View all the badges you have earned by visiting your profile page.
While the traditional approaches of using data lakes and data warehouses can be effective, they have shortcomings, particularly in large enterprise environments. This course introduces the concept of a data lakehouse and the Google Cloud products used to create one. A lakehouse architecture uses open-standard data sources and combines the best features of data lakes and data warehouses, which addresses many of their shortcomings.
This course introduces the Google Cloud big data and machine learning products and services that support the data-to-AI lifecycle. It explores the processes, challenges, and benefits of building a big data pipeline and machine learning models with Vertex AI on Google Cloud.
Complete the intermediate Build Infrastructure with Terraform on Google Cloud skill badge to demonstrate skills in the following: Infrastructure as Code (IaC) principles using Terraform, provisioning and managing Google Cloud resources with Terraform configurations, effective state management (local and remote), and modularizing Terraform code for reusability and organization.
This course equips students to build highly reliable and efficient solutions on Google Cloud using proven design patterns. It is a continuation of the Architecting with Google Compute Engine or Architecting with Google Kubernetes Engine courses and assumes hands-on experience with the technologies covered in either of those courses. Through a combination of presentations, design activities, and hands-on labs, participants learn to define and balance business and technical requirements to design Google Cloud deployments that are highly reliable, highly available, secure, and cost-effective.
Earn a skill badge by completing the Develop your Google Cloud Network course. You will learn multiple ways to deploy and monitor applications, including exploring IAM roles and adding/removing project access, creating VPC networks, deploying and monitoring Compute Engine virtual machines, writing SQL queries, and deploying applications with Kubernetes using various deployment approaches.
Welcome to the Getting Started with Google Kubernetes Engine course. If you're interested in Kubernetes, a software layer that sits between your applications and your hardware infrastructure, then you’re in the right place! Google Kubernetes Engine brings you Kubernetes as a managed service on Google Cloud. The goal of this course is to introduce the basics of Google Kubernetes Engine, or GKE, as it’s commonly referred to, and how to get applications containerized and running in Google Cloud. The course starts with a basic introduction to Google Cloud, and is then followed by an overview of containers and Kubernetes, Kubernetes architecture, and Kubernetes operations.
Complete the introductory Implement Load Balancing on Compute Engine skill badge to demonstrate skills in the following: creating and deploying virtual machines in Compute Engine, and configuring network and application load balancers.
Earn a skill badge by completing the Set Up an App Dev Environment on Google Cloud course, where you learn to build and connect storage-centric cloud infrastructure using the basic capabilities of technologies such as Cloud Storage, Identity and Access Management, Cloud Functions, and Pub/Sub.
This accelerated on-demand course introduces participants to the comprehensive and flexible infrastructure and platform services provided by Google Cloud. Through a combination of video lectures, demos, and hands-on labs, participants explore and deploy solution elements, including securely interconnecting networks, load balancing, autoscaling, infrastructure automation and managed services.
This accelerated on-demand course introduces participants to the comprehensive and flexible infrastructure and platform services provided by Google Cloud with a focus on Compute Engine. Through a combination of video lectures, demos, and hands-on labs, participants explore and deploy solution elements, including infrastructure components such as networks, systems and applications services. This course also covers deploying practical solutions including customer-supplied encryption keys, security and access management, quotas and billing, and resource monitoring.
This accelerated on-demand course introduces participants to the comprehensive and flexible infrastructure and platform services provided by Google Cloud with a focus on Compute Engine. Through a combination of video lectures, demos, and hands-on labs, participants explore and deploy solution elements, including infrastructure components such as networks, virtual machines, and application services. You will learn how to use Google Cloud through the console and Cloud Shell. You'll also learn about the role of a cloud architect, approaches to infrastructure design, and virtual networking configuration with Virtual Private Cloud (VPC), Projects, Networks, Subnetworks, IP addresses, Routes, and Firewall rules.
Google Cloud Fundamentals: Core Infrastructure introduces important concepts and terminology for working with Google Cloud. Through videos and hands-on labs, this course presents and compares many of Google Cloud's computing and storage services, along with important resource and policy management tools.
Welcome Gamers! Today's game is all about experimenting with BigQuery for Machine Learning! Use real-life case studies to learn various concepts of BQML and have fun. Take labs to earn points. The faster you complete the lab objectives, the higher your score.