In this second installment of the Dataflow course series, we dive deeper into developing pipelines using the Beam SDK. We start with a review of Apache Beam concepts. Next, we discuss processing streaming data using windows, watermarks, and triggers. We then cover options for sources and sinks in your pipelines, schemas to express your structured data, and how to perform stateful transformations using the State and Timer APIs. We move on to reviewing best practices that help maximize your pipeline performance. Toward the end of the course, we introduce SQL and DataFrames as ways to represent your business logic in Beam, and show how to develop pipelines iteratively using Beam notebooks.
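As a taste of the streaming material, the kind of windowed, triggered aggregation the course covers can be expressed in the Beam Python SDK roughly as follows. This is an illustrative sketch, not an excerpt from the course labs; the sample data and transform labels are placeholders.

```python
# A minimal sketch of a windowed, triggered aggregation with Apache Beam.
# The input elements and labels are hypothetical, for illustration only.
import apache_beam as beam
from apache_beam.transforms import window, trigger

with beam.Pipeline() as p:
    (p
     | "Create" >> beam.Create([("user", 1), ("user", 2), ("user", 3)])
     | "Window" >> beam.WindowInto(
         window.FixedWindows(60),  # 60-second fixed windows
         # Fire early every 30s of processing time, then at the watermark.
         trigger=trigger.AfterWatermark(
             early=trigger.AfterProcessingTime(30)),
         accumulation_mode=trigger.AccumulationMode.DISCARDING)
     | "SumPerKey" >> beam.CombinePerKey(sum)  # aggregate within each window
     | "Print" >> beam.Map(print))
```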