
Vishnu Radhakrishnan

Member since 2023

Diamond League

20650 points
Google Cloud Computing Foundations: Cloud Computing Fundamentals Earned Mar 11, 2026 EDT
Introduction to Generative AI Earned Mar 9, 2026 EDT
Work with Gemini Models in BigQuery Earned Jan 31, 2026 EST
Boost Productivity with Gemini in BigQuery Earned Jan 19, 2026 EST
Build a Data Mesh with Dataplex Earned Jan 13, 2026 EST
Engineer Data for Predictive Modeling with BigQuery ML Earned Jan 13, 2026 EST
Implement Cloud Load Balancing on Compute Engine Earned Jan 13, 2026 EST
Build a Data Warehouse with BigQuery Earned Jan 11, 2026 EST
Serverless Data Processing with Dataflow: Develop Pipelines Earned Jan 11, 2026 EST
Introduction to Data Engineering on Google Cloud Earned Mar 25, 2025 EDT
Smart Analytics, Machine Learning, and AI on Google Cloud Earned Nov 14, 2024 EST
Serverless Data Processing with Dataflow: Operations Earned Nov 12, 2024 EST
Serverless Data Processing with Dataflow: Foundations Earned Nov 11, 2024 EST
Build Streaming Data Pipelines on Google Cloud Earned Nov 4, 2024 EST
Build Batch Data Pipelines on Google Cloud Earned Oct 30, 2024 EDT
Build Data Lakes and Data Warehouses on Google Cloud Earned Oct 1, 2024 EDT
Preparing for your Professional Data Engineer Journey Earned Jul 16, 2024 EDT

The Google Cloud Computing Foundations courses are for learners with little to no background or experience in cloud computing. They explain the fundamentals of cloud computing, core concepts of big data and machine learning, and where Google Cloud fits in. After completing the series, learners will be able to articulate these concepts and demonstrate hands-on skills. Learners should take the courses in this order:

1. Google Cloud Computing Foundations: Cloud Computing Fundamentals
2. Google Cloud Computing Foundations: Infrastructure in Google Cloud
3. Google Cloud Computing Foundations: Networking & Security in Google Cloud
4. Google Cloud Computing Foundations: Data, ML, and AI in Google Cloud

The first course gives an overview of cloud computing, ways to use Google Cloud, and the different compute options.

This introductory microlearning course explains what generative AI is, how it is used, and how it differs from traditional machine learning methods. It also covers the Google tools that help you develop your own generative AI applications.

This course demonstrates how to use AI/machine learning models in BigQuery to perform generative AI tasks. Through a use case involving customer relationship management, you will learn the workflow for solving business problems with Gemini models. To make the material easy to follow, the course also provides step-by-step coding solutions using SQL queries and Python notebooks.
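As a sketch of the SQL side of that workflow, the snippet below composes a BigQuery ML `ML.GENERATE_TEXT` query that asks a Gemini-backed remote model to summarize CRM text. The project, dataset, model, table, and column names are hypothetical placeholders, and the exact `ML.GENERATE_TEXT` options should be checked against the current BigQuery ML documentation.

```python
# Sketch: composing an ML.GENERATE_TEXT query as a plain SQL string.
# All identifiers below are invented placeholders for illustration.

def build_generate_text_query(model: str, table: str, prompt_col: str,
                              temperature: float = 0.2) -> str:
    """Return a BigQuery SQL string that invokes ML.GENERATE_TEXT."""
    return f"""
SELECT ml_generate_text_llm_result AS summary
FROM ML.GENERATE_TEXT(
  MODEL `{model}`,
  (SELECT {prompt_col} AS prompt FROM `{table}`),
  STRUCT({temperature} AS temperature, TRUE AS flatten_json_output))
""".strip()

sql = build_generate_text_query(
    "my_project.crm.gemini_model",   # hypothetical remote model
    "my_project.crm.complaints",     # hypothetical source table
    "complaint_text")
print(sql)
```

In practice you would submit this string through a BigQuery client or the console query editor; building it as a function keeps the model and table names configurable.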

This course introduces Gemini in BigQuery, a suite of AI-assisted features that support the data-to-AI workflow, including data exploration and preparation, code generation and troubleshooting, and workflow discovery and visualization. Through conceptual explanations, a practical use case, and hands-on labs, the course helps data practitioners boost productivity and accelerate pipeline development.

Complete the introductory Build a Data Mesh with Dataplex skill badge course to demonstrate skills in the following: building a data mesh with Dataplex to keep data secure on Google Cloud and to support data governance and discovery. You will practice and test your skills in tagging assets in Dataplex, assigning IAM roles, and assessing data quality.

Complete the intermediate Engineer Data for Predictive Modeling with BigQuery ML skill badge course to demonstrate knowledge and skills in the following: building data transformation pipelines to BigQuery using Dataprep by Trifacta; building extract, transform, and load (ETL) workloads with Cloud Storage, Dataflow, and BigQuery; and building machine learning models with BigQuery ML.
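To make the BigQuery ML part concrete, here is a minimal sketch that assembles a `CREATE MODEL` statement for a linear regression as a plain SQL string. The dataset, table, and column names are invented for illustration; the `OPTIONS` clause follows the documented BigQuery ML syntax.

```python
# Sketch: a BigQuery ML CREATE MODEL statement built as a SQL string.
# Identifiers below are hypothetical placeholders.

def build_create_model_sql(model: str, label: str, source: str) -> str:
    """Return a BigQuery ML CREATE MODEL statement for linear regression."""
    return f"""
CREATE OR REPLACE MODEL `{model}`
OPTIONS (model_type = 'linear_reg', input_label_cols = ['{label}']) AS
SELECT * FROM `{source}`
""".strip()

sql = build_create_model_sql(
    "taxi.fare_model",      # hypothetical model name
    "fare_amount",          # label column to predict
    "taxi.training_data")   # hypothetical training table
print(sql)
```

Running such a statement in BigQuery trains the model in place; predictions are then served with `ML.PREDICT` over the same model.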

Complete the introductory Implement Cloud Load Balancing on Compute Engine skill badge course to demonstrate skills in the following: creating and deploying virtual machines on Compute Engine, and configuring network and application load balancers.
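The core idea a load balancer implements can be shown in miniature: rotate incoming requests across a pool of backends. Real Cloud Load Balancing does far more (health checks, proximity-based routing, autoscaling), so this pure-Python round-robin sketch, with invented VM names, illustrates only the distribution concept.

```python
# Minimal round-robin sketch of the distribution idea behind a load
# balancer. Backend names are invented; no health checking is modeled.
from itertools import cycle

class RoundRobinBalancer:
    def __init__(self, backends):
        self._cycle = cycle(backends)

    def pick(self):
        """Return the next backend in rotation."""
        return next(self._cycle)

lb = RoundRobinBalancer(["vm-a", "vm-b", "vm-c"])
picks = [lb.pick() for _ in range(6)]
print(picks)  # → ['vm-a', 'vm-b', 'vm-c', 'vm-a', 'vm-b', 'vm-c']
```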

Complete the intermediate Build a Data Warehouse with BigQuery skill badge course to demonstrate skills in the following: joining data to create new tables, troubleshooting joins, appending data with unions, creating date-partitioned tables, and working with JSON, arrays, and structs in BigQuery.
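To pin down the join/union vocabulary outside BigQuery, the sketch below runs the same logic over plain Python rows: an inner join enriching orders with customer names, then a `UNION ALL`-style append of a second batch. The data is invented for illustration; in BigQuery these would be `JOIN` and `UNION ALL` clauses in SQL.

```python
# Join and union in miniature over plain Python rows (invented data).

orders = [{"order_id": 1, "cust_id": 10}, {"order_id": 2, "cust_id": 11}]
customers = {10: "Ada", 11: "Grace"}

# Inner join: attach each order's customer name (SQL: JOIN ... ON cust_id).
joined = [{**o, "name": customers[o["cust_id"]]} for o in orders]

# Union: append rows from a second batch (SQL: UNION ALL).
more_orders = [{"order_id": 3, "cust_id": 10}]
all_orders = orders + more_orders

print(len(joined), len(all_orders))  # → 2 3
```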

In this second installment of the Dataflow course series, we dive deeper into developing pipelines with the Beam SDK. We start with a review of Apache Beam concepts. Next, we discuss processing streaming data using windows, watermarks, and triggers. We then cover options for sources and sinks in your pipelines, schemas to express your structured data, and how to do stateful transformations using the State and Timer APIs. We move on to best practices that help maximize pipeline performance. Toward the end of the course, we introduce SQL and DataFrames to represent your business logic in Beam, and show how to iteratively develop pipelines using Beam notebooks.
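The windowing concept above can be shown without Beam at all: fixed windowing simply buckets each event-time timestamp into the window interval containing it, which is the same grouping Beam's `FixedWindows` performs. This pure-Python sketch uses invented timestamps and 60-second windows.

```python
# Event-time fixed windowing in miniature: bucket timestamped events
# into 60-second windows. Pure Python, no Beam dependency; the events
# are invented for illustration.
from collections import defaultdict

def fixed_window_start(ts: int, size: int = 60) -> int:
    """Return the start of the fixed window containing timestamp ts."""
    return ts - (ts % size)

events = [(5, "a"), (59, "b"), (60, "c"), (125, "d")]
windows = defaultdict(list)
for ts, value in events:
    windows[fixed_window_start(ts)].append(value)

print(dict(windows))  # → {0: ['a', 'b'], 60: ['c'], 120: ['d']}
```

Watermarks and triggers then govern *when* each window's accumulated contents are emitted, which is the part a streaming engine adds on top of this grouping.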

In this course, you learn about data engineering on Google Cloud, the roles and responsibilities of data engineers, and how those map to the services Google Cloud offers. You also learn about the many ways to address data engineering challenges.

Incorporating machine learning into data pipelines increases the ability to extract insights from data. This course covers ways machine learning can be included in data pipelines on Google Cloud. For little to no customization, the course covers AutoML. For more tailored machine learning capabilities, it introduces Notebooks and BigQuery machine learning (BigQuery ML). It also covers how to productionize machine learning solutions by using Vertex AI.

In the last installment of the Dataflow course series, we introduce the components of the Dataflow operational model. We examine tools and techniques for troubleshooting and optimizing pipeline performance. We then review testing, deployment, and reliability best practices for Dataflow pipelines. We conclude with a review of Templates, which make it easy to scale Dataflow pipelines to organizations with hundreds of users. These lessons help ensure that your data platform is stable and resilient to unanticipated circumstances.

This course is part 1 of a 3-course series on Serverless Data Processing with Dataflow. In this first course, we start with a refresher of what Apache Beam is and its relationship with Dataflow. Next, we talk about the Apache Beam vision and the benefits of the Beam Portability framework. The Beam Portability framework achieves the vision that a developer can use their favorite programming language with their preferred execution backend. We then show you how Dataflow allows you to separate compute and storage while saving money, and how identity, access, and management tools interact with your Dataflow pipelines. Lastly, we look at how to implement the right security model for your use case on Dataflow.

In this course you will get hands-on experience working through real-world challenges faced when building streaming data pipelines. The primary focus is on managing continuous, unbounded data with Google Cloud products.

In this intermediate course, you will learn to design, build, and optimize robust batch data pipelines on Google Cloud. Moving beyond fundamental data handling, you will explore large-scale data transformations and efficient workflow orchestration, essential for timely business intelligence and critical reporting. Get hands-on practice using Dataflow for Apache Beam and Serverless for Apache Spark (Dataproc Serverless) for implementation, and tackle crucial considerations for data quality, monitoring, and alerting to ensure pipeline reliability and operational excellence. A basic knowledge of data warehousing, ETL/ELT, SQL, Python, and Google Cloud concepts is recommended.
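The transform-then-quality-check pattern this course emphasizes can be sketched in miniature: a batch of records flows through cleaning, validation, and aggregation stages, with the validation stage acting as a data-quality gate that counts rejected rows (a number you would surface to monitoring and alerting). The stage names and records are invented for illustration.

```python
# Batch pipeline stages in miniature: clean -> validate -> aggregate.
# Records below are invented; a real pipeline would run these stages
# in Beam/Dataflow or Spark at scale.
from collections import Counter

def clean(rows):
    """Normalize the country field (trim whitespace, uppercase)."""
    return [{**r, "country": r["country"].strip().upper()} for r in rows]

def validate(rows):
    """Data-quality gate: drop rows with missing amounts, count rejects."""
    good = [r for r in rows if r.get("amount") is not None]
    return good, len(rows) - len(good)

def aggregate(rows):
    """Sum amounts per country."""
    totals = Counter()
    for r in rows:
        totals[r["country"]] += r["amount"]
    return dict(totals)

batch = [{"country": " us ", "amount": 5},
         {"country": "DE", "amount": None},
         {"country": "us", "amount": 7}]
good, rejected = validate(clean(batch))
print(aggregate(good), rejected)  # → {'US': 12} 1
```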

While the traditional approaches of using data lakes and data warehouses can be effective, they have shortcomings, particularly in large enterprise environments. This course introduces the concept of a data lakehouse and the Google Cloud products used to create one. A lakehouse architecture uses open-standard data sources and combines the best features of data lakes and data warehouses, which addresses many of their shortcomings.

This course helps learners create a study plan for the PDE (Professional Data Engineer) certification exam. Learners explore the breadth and scope of the domains covered in the exam. Learners assess their exam readiness and create their individual study plan.
