Jordan Victor Scher

Member since 2024

Diamond League

41710 points
Build Generative AI Applications on Google Cloud Earned Mar 11, 2025 EDT
Responsible AI for Developers: Privacy & Safety Earned Mar 11, 2025 EDT
Machine Learning Operations (MLOps) with Vertex AI: Model Evaluation Earned Mar 11, 2025 EDT
Responsible AI for Developers: Interpretability & Transparency Earned Feb 28, 2025 EST
Responsible AI for Developers: Fairness & Bias Earned Feb 28, 2025 EST
Introduction to Generative AI Earned Feb 21, 2025 EST
Introduction to Large Language Models Earned Feb 21, 2025 EST
Machine Learning Operations (MLOps) for Generative AI Earned Feb 21, 2025 EST
Working with Notebooks in Vertex AI Earned Feb 21, 2025 EST
Introduction to AI and Machine Learning on Google Cloud Earned Feb 20, 2025 EST
Build a Certification Study Guide: PMLE Earned Feb 13, 2025 EST
Responsible AI: Applying AI Principles with Google Cloud Earned Jan 14, 2025 EST
Prepare Data for ML APIs on Google Cloud Earned Oct 11, 2024 EDT
Engineer Data for Predictive Modeling with BigQuery ML Earned Oct 10, 2024 EDT
Build a Data Warehouse with BigQuery Earned Oct 10, 2024 EDT
Build a Data Mesh with Dataplex Earned Oct 5, 2024 EDT
Preparing for your Professional Data Engineer Journey Earned Oct 2, 2024 EDT
Serverless Data Processing with Dataflow: Operations Earned Sep 29, 2024 EDT
Serverless Data Processing with Dataflow: Develop Pipelines Earned Sep 29, 2024 EDT
Serverless Data Processing with Dataflow: Foundations Earned Sep 8, 2024 EDT
Smart Analytics, Machine Learning, and AI on Google Cloud Earned Sep 7, 2024 EDT
Build Streaming Data Pipelines on Google Cloud Earned Sep 3, 2024 EDT
Build Batch Data Pipelines on Google Cloud Earned Aug 31, 2024 EDT
Build Data Lakes and Data Warehouses on Google Cloud Earned Aug 25, 2024 EDT

Since the advent of large language models (LLMs), generative AI applications have enabled user experiences that were previously all but impossible. As an application developer, how do you use generative AI on Google Cloud to build compelling, interactive applications? This course introduces generative AI applications and shows how to build powerful LLM-based applications using prompt design and retrieval-augmented generation (RAG). It also covers production-ready architectures for generative AI applications. You will build a conversational application that uses an LLM with RAG.
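
The RAG pattern the course describes can be sketched in a few lines: retrieve the documents most relevant to a question, then ground the LLM prompt in them. This is a minimal illustration only; the corpus, the word-overlap scoring (a stand-in for a real vector search), and the prompt template are all hypothetical, not a Google Cloud API.

```python
# Minimal sketch of retrieval-augmented generation (RAG): retrieve the most
# relevant documents for a question, then ground the LLM prompt in them.
# The corpus, scoring, and prompt template are illustrative, not a real API.

def retrieve(question, corpus, k=2):
    """Rank documents by word overlap with the question (stand-in for vector search)."""
    q_words = set(question.lower().split())
    scored = sorted(corpus,
                    key=lambda doc: len(q_words & set(doc.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(question, context_docs):
    """Assemble a grounded prompt: instructions, retrieved context, then the question."""
    context = "\n".join(f"- {doc}" for doc in context_docs)
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {question}\n"
    )

corpus = [
    "Vertex AI is Google Cloud's managed machine learning platform.",
    "BigQuery is a serverless data warehouse.",
    "Dataflow runs Apache Beam pipelines.",
]
question = "What is Vertex AI?"
prompt = build_prompt(question, retrieve(question, corpus))
print(prompt)
```

In a production system the retriever would query a vector database and the assembled prompt would be sent to an LLM; the structure, retrieve then ground then generate, stays the same.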

This course covers two important topics: AI privacy and AI safety. It introduces practical methods and tools for implementing recommended AI privacy and safety practices using Google Cloud products and open-source tools.

This course introduces ML practitioners to the foundational tools, techniques, and best practices for evaluating generative and predictive AI models. Model evaluation is a critical area of machine learning: it ensures these systems deliver reliable, accurate, high-performing results in production. Learners gain a deep understanding of a variety of evaluation metrics and methodologies and how they apply across different model types and tasks. The course also highlights the unique challenges posed by generative AI models and offers effective strategies for addressing them. Using Google Cloud's Vertex AI platform, learners see how to implement robust evaluation processes for model selection, optimization, and continuous monitoring.
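
As a flavor of the kind of evaluation metrics the course covers, precision, recall, and F1 for a binary classifier can be computed directly from labels and predictions. The labels below are toy data, not output from any real model.

```python
# Sketch of basic classification-evaluation metrics (precision, recall, F1)
# computed from scratch; the labels here are toy data for illustration.

def precision_recall_f1(y_true, y_pred, positive=1):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 1, 1]
p, r, f1 = precision_recall_f1(y_true, y_pred)
print(f"precision={p:.2f} recall={r:.2f} f1={f1:.2f}")
```

Which metric matters depends on the task: precision penalizes false alarms, recall penalizes misses, and F1 balances the two, which is why evaluation always starts from the model's intended use.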

This course explains the concepts of AI explainability and transparency and why AI transparency matters to developers and engineers. It also introduces practical methods and tools that help make data and AI models transparent and explainable.

This course introduces the concepts of responsible AI and AI principles, along with practical techniques for identifying fairness issues and bias and for mitigating bias in AI/ML practice. It also explores practical methods and tools for implementing responsible AI best practices using Google Cloud products and open-source tools.

This introductory microlearning course explains what generative AI is, how it is used, and how it differs from traditional machine learning methods. It also covers Google tools that help you develop your own generative AI applications.

This introductory microlearning course explores what large language models (LLMs) are, their use cases, and how prompt tuning can improve LLM performance. It also covers Google tools that help you develop your own generative AI applications.
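
One common prompt-tuning technique the course alludes to is moving from a zero-shot prompt to a few-shot prompt that includes worked examples. The sketch below assembles such a prompt; the task wording and examples are illustrative only, not from any specific Google tool.

```python
# Sketch of few-shot prompt design: prepend worked input/output examples to
# the task instruction before the actual query. Text here is illustrative.

def few_shot_prompt(task, examples, query):
    """Build a few-shot prompt: task instruction, examples, then the new input."""
    lines = [task]
    for inp, out in examples:
        lines.append(f"Input: {inp}")
        lines.append(f"Output: {out}")
    lines.append(f"Input: {query}")
    lines.append("Output:")  # the LLM completes from here
    return "\n".join(lines)

prompt = few_shot_prompt(
    "Classify the sentiment of each review as positive or negative.",
    [("Great product, works perfectly.", "positive"),
     ("Broke after two days.", "negative")],
    "Fast shipping and easy setup.",
)
print(prompt)
```

Even two or three well-chosen examples often steer a model's output format and accuracy noticeably, which is why few-shot prompting is usually the first tuning step to try.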

This course equips you with the knowledge and tools to explore the unique challenges MLOps teams face when deploying and managing generative AI models, and shows how Vertex AI helps AI teams streamline their MLOps processes and deliver high-impact generative AI projects.

This course is an introduction to Vertex AI Notebooks, which are Jupyter notebook-based environments that provide a unified platform for the entire machine learning workflow, from data preparation to model deployment and monitoring. The course covers the following topics: (1) The different types of Vertex AI Notebooks and their features and (2) How to create and manage Vertex AI Notebooks.

This course introduces Google Cloud's AI and machine learning (ML) capabilities, with a focus on developing generative and predictive AI projects. It explores the technologies, products, and tools available across the entire data-to-AI lifecycle, and includes interactive exercises to help data scientists, AI developers, and ML engineers sharpen their skills.

Learn how to use NotebookLM to create a personalized study guide for the Professional Machine Learning Engineer (PMLE) certification exam. You'll review NotebookLM features, create a notebook, and use the study guide to practice for the certification exam.

As enterprises continue to expand their use of artificial intelligence and machine learning, developing the technology responsibly becomes ever more important. For many organizations, talking about responsible AI is easy; putting it into practice is the real challenge. If you want to learn how to operationalize responsible AI in your organization, this course can help. It shares Google Cloud's current strategies, best practices, and lessons learned to help your organization build a solid foundation for practicing responsible AI.

Earn the introductory Prepare Data for ML APIs on Google Cloud skill badge to demonstrate the following skills: cleaning data with Dataprep by Trifacta, running data pipelines in Dataflow, creating clusters and running Apache Spark jobs in Dataproc, and calling ML APIs, including the Cloud Natural Language API, Google Cloud Speech-to-Text API, and Video Intelligence API.

Earn the intermediate Engineer Data for Predictive Modeling with BigQuery ML skill badge to demonstrate the following knowledge and skills: building data transformation pipelines to BigQuery with Dataprep by Trifacta, building extract, transform, and load (ETL) workloads with Cloud Storage, Dataflow, and BigQuery, and building machine learning models with BigQuery ML.

Earn the intermediate Build a Data Warehouse with BigQuery skill badge to demonstrate the following skills: joining data to create new tables, troubleshooting joins, appending data with unions, creating date-partitioned tables, and working with JSON, arrays, and structs in BigQuery.

Earn the introductory Build a Data Mesh with Dataplex skill badge to demonstrate your ability to build a data mesh with Dataplex to help secure, govern, and facilitate discovery of data on Google Cloud. You will practice and test skills including tagging assets, assigning IAM roles, and assessing data quality in Dataplex.

This course helps learners create a study plan for the Professional Data Engineer (PDE) certification exam. Learners explore the breadth and scope of the domains covered in the exam, assess their exam readiness, and create an individual study plan.

In the last installment of the Dataflow course series, we introduce the components of the Dataflow operational model. We examine tools and techniques for troubleshooting and optimizing pipeline performance, then review testing, deployment, and reliability best practices for Dataflow pipelines. We conclude with a review of Templates, which make it easy to scale Dataflow pipelines to organizations with hundreds of users. These lessons will help ensure that your data platform is stable and resilient to unanticipated circumstances.

In this second installment of the Dataflow course series, we dive deeper into developing pipelines using the Beam SDK. We start with a review of Apache Beam concepts. Next, we discuss processing streaming data using windows, watermarks, and triggers. We then cover options for sources and sinks in your pipelines, schemas to express your structured data, and how to do stateful transformations using the State and Timer APIs. We move on to reviewing best practices that help maximize your pipeline performance. Toward the end of the course, we introduce SQL and DataFrames to represent your business logic in Beam, and show how to iteratively develop pipelines using Beam notebooks.
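
The fixed-window idea behind the windows/watermarks/triggers material can be shown in plain Python: each timestamped element is assigned to the window containing its timestamp, and results are aggregated per window. This is a conceptual sketch of what a fixed-window transform does, not the Beam SDK itself; the event data is invented.

```python
# Pure-Python sketch of fixed (tumbling) windows: assign each timestamped
# element to the window containing its timestamp, then aggregate per window.
# Conceptual illustration only -- not the Apache Beam SDK.

from collections import defaultdict

def window_start(timestamp, size):
    """Start of the fixed window of the given size that contains timestamp."""
    return timestamp - (timestamp % size)

def count_per_window(events, size):
    """events: iterable of (timestamp, value) pairs; returns {window_start: count}."""
    counts = defaultdict(int)
    for ts, _value in events:
        counts[window_start(ts, size)] += 1
    return dict(counts)

events = [(1, "a"), (4, "b"), (12, "c"), (13, "d"), (27, "e")]
counts = count_per_window(events, size=10)
print(counts)
```

Real streaming engines add what this sketch omits: watermarks to decide when a window's input is believed complete, and triggers to control when (and how often) a window's result is emitted.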

This course is part 1 of a 3-course series on Serverless Data Processing with Dataflow. In this first course, we start with a refresher of what Apache Beam is and its relationship with Dataflow. Next, we talk about the Apache Beam vision and the benefits of the Beam Portability framework. The Beam Portability framework achieves the vision that a developer can use their favorite programming language with their preferred execution backend. We then show you how Dataflow allows you to separate compute and storage while saving money, and how identity, access, and management tools interact with your Dataflow pipelines. Lastly, we look at how to implement the right security model for your use case on Dataflow.

Incorporating machine learning into data pipelines increases the ability to extract insights from data. This course covers ways machine learning can be included in data pipelines on Google Cloud. For little to no customization, this course covers AutoML. For more tailored machine learning capabilities, this course introduces Notebooks and BigQuery machine learning (BigQuery ML). Also, this course covers how to productionalize machine learning solutions by using Vertex AI.

In this course, you will get hands-on experience working through real-world challenges faced when building streaming data pipelines. The primary focus is on managing continuous, unbounded data with Google Cloud products.

In this intermediate course, you will learn to design, build, and optimize robust batch data pipelines on Google Cloud. Moving beyond fundamental data handling, you will explore large-scale data transformations and efficient workflow orchestration, essential for timely business intelligence and critical reporting. Get hands-on practice using Dataflow for Apache Beam and Serverless for Apache Spark (Dataproc Serverless) for implementation, and tackle crucial considerations for data quality, monitoring, and alerting to ensure pipeline reliability and operational excellence. A basic knowledge of data warehousing, ETL/ELT, SQL, Python, and Google Cloud concepts is recommended.
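The data-quality and monitoring concern mentioned above often takes the shape of a validate-then-route step inside a batch transform: bad rows are quarantined to a dead-letter sink instead of silently corrupting the load. The sketch below illustrates that pattern in plain Python; the field names and validation rules are hypothetical.

```python
# Sketch of a batch transform with a data-quality check: valid rows are
# transformed and loaded, invalid rows go to a dead-letter list for
# monitoring and alerting. Field names and rules are hypothetical.

def transform_batch(rows):
    """Validate and transform rows; return (clean_rows, dead_letter_rows)."""
    clean, dead_letter = [], []
    for row in rows:
        if not row.get("order_id") or row.get("amount", -1) < 0:
            dead_letter.append(row)  # quarantine for later inspection
            continue
        clean.append({"order_id": row["order_id"],
                      "amount": round(row["amount"], 2)})
    return clean, dead_letter

rows = [
    {"order_id": "A1", "amount": 10.456},
    {"order_id": "", "amount": 5.0},    # missing key -> dead letter
    {"order_id": "A3", "amount": -2.0}, # negative amount -> dead letter
]
clean, bad = transform_batch(rows)
print(len(clean), "clean,", len(bad), "dead-lettered")
```

In a Dataflow or Spark pipeline the same pattern appears as a branching transform with a dead-letter output, and the size of the dead-letter branch is the natural metric to alert on.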

While the traditional approaches of using data lakes and data warehouses can be effective, they have shortcomings, particularly in large enterprise environments. This course introduces the concept of a data lakehouse and the Google Cloud products used to create one. A lakehouse architecture uses open-standard data sources and combines the best features of data lakes and data warehouses, which addresses many of their shortcomings.
