Jordan Victor Scher
Member since 2024
Diamond League
41710 points
Generative AI applications can create new user experiences that were nearly impossible before the invention of large language models (LLMs). As an application developer, how can you use generative AI to build engaging, powerful apps on Google Cloud? In this course, you'll learn about generative AI applications and how you can use prompt design and retrieval-augmented generation (RAG) to build powerful applications using LLMs. You'll learn about a production-ready architecture that can be used for generative AI applications, and you'll build an LLM- and RAG-based chat application.
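The core RAG pattern the course describes can be sketched in a few lines: retrieve the documents most relevant to a query, then ground the LLM prompt in them. This is a minimal toy illustration, not the course's architecture; the hard-coded embeddings and all names are hypothetical stand-ins for a real embedding model and vector store.

```python
import math

# Toy corpus with hypothetical pre-computed embeddings. A real application
# would embed documents with a model and store vectors in a vector database.
DOCS = [
    ("Refunds are processed within 5 business days.", [0.9, 0.1, 0.0]),
    ("Our support line is open 9am-5pm on weekdays.", [0.1, 0.8, 0.2]),
]

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def build_prompt(query: str, query_embedding, k: int = 1) -> str:
    """Retrieve the top-k most similar docs and ground the prompt in them."""
    ranked = sorted(DOCS, key=lambda d: cosine(d[1], query_embedding), reverse=True)
    context = "\n".join(text for text, _ in ranked[:k])
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

The resulting string would then be sent to an LLM; grounding the prompt in retrieved context is what lets the model answer from your data rather than from its training alone.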
This course covers privacy and security in AI. Throughout the course, you will learn practical methods and tools that help you adopt recommended AI privacy and security practices using Google Cloud products and open-source tools.
This course equips machine learning practitioners with the essential tools, techniques, and best practices for evaluating both generative and predictive AI models. Model evaluation is a critical discipline for ensuring that ML systems deliver reliable, accurate, and high-performing results in production. Participants will gain a deep understanding of various evaluation metrics, methodologies, and their appropriate application across different model types and tasks. The course will emphasize the unique challenges posed by generative AI models and provide strategies for tackling them effectively. By leveraging Google Cloud's Vertex AI platform, participants will learn how to implement robust evaluation processes for model selection, optimization, and continuous monitoring.
This course presents the fundamentals of AI interpretability and transparency, and discusses why transparency in AI systems matters for developers and engineers. Throughout the course, you will learn practical methods and tools that help bring interpretability and transparency to data and AI models.
This course introduces the concept of responsible AI and AI principles. It covers practical techniques for identifying fairness and bias, and for mitigating bias in AI/ML applications. Throughout the course, you will learn practical methods and tools that help you adopt responsible AI best practices using Google Cloud products and open-source tools.
This is an introductory-level microlearning course that explains what generative AI is, how it is used, and how it differs from traditional machine learning methods. It also covers Google tools to help you develop your own generative AI applications.
This introductory-level microlearning course explores what large language models (LLMs) are, the use cases where they can be applied, and how you can use prompt tuning to improve LLM performance. It also covers the Google tools that can help you develop your own generative AI applications.
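One common prompt-design technique covered in material like this is few-shot prompting: showing the model a handful of input/output examples before the real query. A minimal sketch (the function name, task text, and examples are all illustrative, not from the course):

```python
def few_shot_prompt(task: str, examples: list[tuple[str, str]], query: str) -> str:
    """Build a few-shot prompt: task instruction, worked examples, then the query."""
    blocks = [task]
    for inp, out in examples:
        blocks.append(f"Input: {inp}\nOutput: {out}")
    # Leave the final Output: empty so the model completes it.
    blocks.append(f"Input: {query}\nOutput:")
    return "\n\n".join(blocks)
```

Adding even two or three well-chosen examples often steers an LLM's format and behavior far more reliably than instructions alone.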
This course provides the knowledge and tools that MLOps teams need to overcome the challenges of deploying and managing generative AI models. It also shows how Vertex AI helps AI teams streamline their MLOps processes and succeed in generative AI projects.
This course is an introduction to Vertex AI Notebooks, which are Jupyter notebook-based environments that provide a unified platform for the entire machine learning workflow, from data preparation to model deployment and monitoring. The course covers the following topics: (1) The different types of Vertex AI Notebooks and their features and (2) How to create and manage Vertex AI Notebooks.
This course introduces Google Cloud's AI and machine learning (ML) capabilities, with a focus on developing both generative and predictive AI projects. It explores the various technologies, products, and tools available throughout the data-to-AI lifecycle, empowering data scientists, AI developers, and ML engineers to enhance their expertise through interactive exercises.
Learn how to use NotebookLM to create a personalized study guide for the Professional Machine Learning Engineer (PMLE) certification exam. You'll review NotebookLM features, create a notebook, and use the study guide to practice for the exam.
As the enterprise use of AI and machine learning continues to grow, so does the importance of building it responsibly. The challenge is that talking about responsible AI can be much easier than putting it into practice. If you're interested in learning how to operationalize responsible AI in your organization, this course is for you. It takes a deep dive into how Google Cloud applies its approach to responsible AI, giving you a comprehensive framework for building your own responsible AI strategy.
Complete the introductory Prepare Data for ML APIs on Google Cloud skill badge to demonstrate skills in the following: cleaning data with Dataprep by Trifacta, running data pipelines in Dataflow, creating clusters and running Apache Spark jobs in Dataproc, and calling machine learning APIs, including the Cloud Natural Language API, Google Cloud Speech-to-Text API, and Video Intelligence API.
Complete the intermediate Engineer Data for Predictive Modeling with BigQuery ML skill badge to demonstrate skills in the following: building data transformation pipelines to BigQuery using Dataprep by Trifacta; using Cloud Storage, Dataflow, and BigQuery to build extract, transform, and load (ETL) workflows; and building machine learning models using BigQuery ML.
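The extract, transform, and load (ETL) shape mentioned above can be sketched locally; here `csv` and `sqlite3` are toy stand-ins for Cloud Storage, Dataflow, and BigQuery, and all names and sample data are hypothetical:

```python
import csv
import io
import sqlite3

# Hypothetical "extract" stage output: raw CSV pulled from object storage.
RAW = "name,amount\nalice,10\nbob,not_a_number\ncarol,5\n"

def etl(raw_csv: str, conn: sqlite3.Connection) -> int:
    """Transform raw CSV rows (type coercion, dropping bad records)
    and load them into a warehouse table. Returns rows loaded."""
    conn.execute("CREATE TABLE IF NOT EXISTS sales (name TEXT, amount REAL)")
    rows = []
    for rec in csv.DictReader(io.StringIO(raw_csv)):
        try:
            rows.append((rec["name"], float(rec["amount"])))  # coerce types
        except ValueError:
            continue  # drop malformed records rather than fail the load
    conn.executemany("INSERT INTO sales VALUES (?, ?)", rows)
    conn.commit()
    return len(rows)
```

The same three stages map onto the cloud workflow: extract from Cloud Storage, transform in Dataflow, load into BigQuery, after which BigQuery ML can train models directly over the loaded table.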
Complete the intermediate Build a Data Warehouse with BigQuery skill badge course to demonstrate skills in the following: joining data to create new tables, troubleshooting joins, appending data with unions, creating date-partitioned tables, and working with JSON, arrays, and structs in BigQuery.
Complete the introductory Build a Data Mesh with Dataplex skill badge to demonstrate skills in the following: building a data mesh with Dataplex to facilitate data security, governance, and discovery on Google Cloud. You practice and test your skills in tagging assets, assigning IAM roles, and assessing data quality in Dataplex.
This course helps learners create a study plan for the Professional Data Engineer (PDE) certification exam. Learners explore the breadth and scope of the domains covered in the exam, assess their exam readiness, and create an individual study plan.
In the last installment of the Dataflow course series, we will introduce the components of the Dataflow operational model. We will examine tools and techniques for troubleshooting and optimizing pipeline performance. We will then review testing, deployment, and reliability best practices for Dataflow pipelines. We will conclude with a review of Templates, which make it easy to scale Dataflow pipelines to organizations with hundreds of users. These lessons will help ensure that your data platform is stable and resilient to unanticipated circumstances.
In this second installment of the Dataflow course series, we are going to dive deeper into developing pipelines using the Beam SDK. We start with a review of Apache Beam concepts. Next, we discuss processing streaming data using windows, watermarks, and triggers. We then cover options for sources and sinks in your pipelines, schemas to express your structured data, and how to do stateful transformations using the State and Timer APIs. We move on to reviewing best practices that help maximize your pipeline performance. Towards the end of the course, we introduce SQL and DataFrames as ways to represent your business logic in Beam, and show how to iteratively develop pipelines using Beam notebooks.
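The fixed-windowing idea behind Beam's streaming model can be sketched in plain Python. This is a toy simulation of assigning events to fixed windows by timestamp, not the Beam SDK itself; the window size and all names are illustrative:

```python
from collections import defaultdict
from datetime import datetime, timezone

WINDOW_SECONDS = 60  # hypothetical fixed-window size

def window_start(event_time: datetime, size_s: int = WINDOW_SECONDS) -> datetime:
    """Map an event timestamp to the start of its fixed window."""
    epoch = int(event_time.timestamp())
    return datetime.fromtimestamp(epoch - epoch % size_s, tz=timezone.utc)

def group_into_windows(events):
    """Group (timestamp, value) pairs into {window_start: [values]}."""
    windows = defaultdict(list)
    for ts, value in events:
        windows[window_start(ts)].append(value)
    return dict(windows)
```

In Beam proper, watermarks track how far event time has progressed so the runner knows when a window is complete, and triggers control when (possibly early or late) each window's results are emitted; this sketch covers only the window-assignment step.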
This course is part 1 of a 3-course series on Serverless Data Processing with Dataflow. In this first course, we start with a refresher of what Apache Beam is and its relationship with Dataflow. Next, we talk about the Apache Beam vision and the benefits of the Beam Portability framework. The Beam Portability framework achieves the vision that a developer can use their favorite programming language with their preferred execution backend. We then show you how Dataflow allows you to separate compute and storage while saving money, and how identity and access management tools interact with your Dataflow pipelines. Lastly, we look at how to implement the right security model for your use case on Dataflow.
Incorporating machine learning into data pipelines increases the ability to extract insights from data. This course covers ways machine learning can be included in data pipelines on Google Cloud: AutoML for little to no customization, and Notebooks and BigQuery machine learning (BigQuery ML) for more tailored machine learning capabilities. It also covers how to productionize machine learning solutions by using Vertex AI.
In this course, you will get hands-on experience working through real-world challenges faced when building streaming data pipelines. The primary focus is on managing continuous, unbounded data with Google Cloud products.
In this intermediate course, you will learn to design, build, and optimize robust batch data pipelines on Google Cloud. Moving beyond fundamental data handling, you will explore large-scale data transformations and efficient workflow orchestration, essential for timely business intelligence and critical reporting. Get hands-on practice using Dataflow for Apache Beam and Serverless for Apache Spark (Dataproc Serverless) for implementation, and tackle crucial considerations for data quality, monitoring, and alerting to ensure pipeline reliability and operational excellence. A basic knowledge of data warehousing, ETL/ELT, SQL, Python, and Google Cloud concepts is recommended.
While the traditional approaches of using data lakes and data warehouses can be effective, they have shortcomings, particularly in large enterprise environments. This course introduces the concept of a data lakehouse and the Google Cloud products used to create one. A lakehouse architecture uses open-standard data sources and combines the best features of data lakes and data warehouses, which addresses many of their shortcomings.