Merve Çağlarer
Member since: 2022
Diamond League
47,815 points
Complete the introductory Prepare Data for ML APIs on Google Cloud skill badge to demonstrate skills in the following: cleaning data with Dataprep by Trifacta, running data pipelines in Dataflow, creating clusters and running Apache Spark jobs in Dataproc, and calling ML APIs, including the Cloud Natural Language API, Google Cloud Speech-to-Text API, and Video Intelligence API.
Complete the introductory Build a Data Mesh with Dataplex skill badge course to demonstrate your ability to build a data mesh with Dataplex and leverage data security, governance, and discovery on Google Cloud. You will practice and test your skills in tagging assets, assigning IAM roles, and assessing data quality in Dataplex.
This 1-week, accelerated on-demand course builds upon Google Cloud Platform Big Data and Machine Learning Fundamentals. Through a combination of video lectures, demonstrations, and hands-on labs, you'll learn to build streaming data pipelines using Google Cloud Pub/Sub and Dataflow to enable real-time decision making. You will also learn how to build dashboards to render tailored output for various stakeholder audiences.
In this second installment of the Dataflow course series, we dive deeper into developing pipelines using the Beam SDK. We start with a review of Apache Beam concepts. Next, we discuss processing streaming data using windows, watermarks, and triggers. We then cover options for sources and sinks in your pipelines, schemas to express your structured data, and how to do stateful transformations using the State and Timer APIs. We move on to reviewing best practices that help maximize your pipeline performance. Toward the end of the course, we introduce SQL and DataFrames for representing your business logic in Beam, and show how to iteratively develop pipelines using Beam notebooks.
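As a rough illustration of the fixed-window concept mentioned above, here is a plain-Python sketch (not the Beam SDK itself; the event timestamps and window size are invented for illustration) of how a stream of timestamped events is partitioned into non-overlapping windows:

```python
from collections import defaultdict

def assign_fixed_windows(events, window_size):
    """Group (timestamp, value) events into fixed, non-overlapping windows.

    Each event lands in the window [start, start + window_size) where
    start = timestamp - (timestamp % window_size), mirroring how fixed
    windows partition an unbounded stream.
    """
    windows = defaultdict(list)
    for timestamp, value in events:
        window_start = timestamp - (timestamp % window_size)
        windows[window_start].append(value)
    return dict(windows)

# Three events spread over two 60-second windows.
events = [(5, "a"), (42, "b"), (61, "c")]
print(assign_fixed_windows(events, 60))  # {0: ['a', 'b'], 60: ['c']}
```

In a real Beam pipeline the runner tracks watermarks to decide when a window is complete and triggers to decide when to emit results; this sketch covers only the window-assignment step.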
In the last installment of the Dataflow course series, we will introduce the components of the Dataflow operational model. We will examine tools and techniques for troubleshooting and optimizing pipeline performance. We will then review testing, deployment, and reliability best practices for Dataflow pipelines. We will conclude with a review of Templates, which make it easy to scale Dataflow pipelines to organizations with hundreds of users. These lessons will help ensure that your data platform is stable and resilient to unanticipated circumstances.
This course helps learners create a study plan for the PDE (Professional Data Engineer) certification exam. Learners explore the breadth and scope of the domains covered in the exam, assess their exam readiness, and create an individual study plan.
This course is part 1 of a 3-course series on Serverless Data Processing with Dataflow. In this first course, we start with a refresher of what Apache Beam is and its relationship with Dataflow. Next, we talk about the Apache Beam vision and the benefits of the Beam Portability framework. The Beam Portability framework achieves the vision that a developer can use their favorite programming language with their preferred execution backend. We then show you how Dataflow allows you to separate compute and storage while saving money, and how identity, access, and management tools interact with your Dataflow pipelines. Lastly, we look at how to implement the right security model for your use case on Dataflow.
Integrating machine learning into data pipelines helps you derive more insights from your data. This course explores how to incorporate machine learning into data pipelines on Google Cloud. It covers AutoML for cases requiring little to no customization, and introduces Notebooks and BigQuery Machine Learning (BigQuery ML) for cases that call for custom machine learning capabilities. It also covers how to productionize machine learning solutions with Vertex AI.
This course introduces the Google Cloud big data and machine learning products and services that support the data-to-AI lifecycle. It explores the processes, challenges, and benefits of building big data pipelines and machine learning models with Vertex AI on Google Cloud.
This skill badge course aims to unlock the power of data visualization and business intelligence reporting with Looker, and offers hands-on experience through labs.
This is the second course in the Data to Insights course series. Here we will cover how to ingest new external datasets into BigQuery and visualize them with Looker Studio. We will also cover intermediate SQL concepts like multi-table JOINs and UNIONs, which will allow you to analyze data across multiple data sources. Note: Even if you have a background in SQL, there are BigQuery specifics (like handling query cache and table wildcards) that may be new to you. After completing this course, enroll in the Achieving Advanced Insights with BigQuery course.
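To give a flavor of the multi-table JOINs and UNIONs the course covers, here is a minimal sketch using Python's built-in sqlite3 as a stand-in engine (the table names and rows are invented; BigQuery's Standard SQL expresses the same constructs, though it requires spelling out UNION ALL or UNION DISTINCT):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (order_id INTEGER, customer_id INTEGER);
    CREATE TABLE customers (customer_id INTEGER, name TEXT);
    INSERT INTO orders VALUES (1, 10), (2, 20);
    INSERT INTO customers VALUES (10, 'Ada'), (20, 'Grace');
""")

# Multi-table JOIN: enrich each order with the customer's name.
joined = conn.execute("""
    SELECT o.order_id, c.name
    FROM orders AS o
    JOIN customers AS c ON o.customer_id = c.customer_id
    ORDER BY o.order_id
""").fetchall()
print(joined)  # [(1, 'Ada'), (2, 'Grace')]

# UNION ALL: stack rows from two queries with matching column shapes.
ids = conn.execute("""
    SELECT customer_id FROM orders
    UNION ALL
    SELECT customer_id FROM customers
""").fetchall()
print(len(ids))  # 4
```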
The third course in this course series is Achieving Advanced Insights with BigQuery. Here we will build on your growing knowledge of SQL as we dive into advanced functions and how to break apart a complex query into manageable steps. We will cover the internal architecture of BigQuery (column-based sharded storage) and advanced SQL topics like nested and repeated fields through the use of Arrays and Structs. Lastly, we will dive into optimizing your queries for performance and how you can secure your data through authorized views. After completing this course, enroll in the Applying Machine Learning to your Data with Google Cloud course.
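The nested and repeated fields mentioned above can be pictured as rows that carry arrays of structs. As a loose Python analogy (the schema below is invented for illustration), flattening such a row into one output row per array element is what BigQuery's UNNEST does:

```python
# A BigQuery-style row with a repeated STRUCT field, modeled as a
# dict holding a list of dicts.
order = {
    "order_id": 1,
    "items": [  # ARRAY<STRUCT<sku STRING, qty INT64>>
        {"sku": "A-1", "qty": 2},
        {"sku": "B-7", "qty": 1},
    ],
}

def unnest(row, repeated_field):
    """Emit one flat row per array element, like CROSS JOIN UNNEST(items)."""
    parent = {k: v for k, v in row.items() if k != repeated_field}
    return [{**parent, **item} for item in row[repeated_field]]

for flat in unnest(order, "items"):
    print(flat)
# {'order_id': 1, 'sku': 'A-1', 'qty': 2}
# {'order_id': 1, 'sku': 'B-7', 'qty': 1}
```

Keeping the items nested inside the order avoids a separate table and a join; UNNEST recovers the flat, relational view only when a query needs it.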
This workload aims to upskill Google Cloud partners to perform specific modernization tasks using LookML on BigQuery. A proof-of-concept will take learners through the process of creating LookML visualizations on BigQuery. During this course, learners will be guided on how to write Looker Modeling Language (LookML) to create semantic data models, and will learn how LookML constructs SQL queries against BigQuery. At a high level, this course focuses on using basic LookML to create and access BigQuery objects, and on optimizing BigQuery objects with LookML.
This course explores how to leverage Looker to create data experiences and gain insights with modern business intelligence (BI) and reporting.
This learning experience guides you through the process of utilizing various data sources and multiple Google Cloud products (including BigQuery and Google Sheets using Connected Sheets) to analyze, visualize, and interpret data to answer specific questions and share insights with key decision makers.
This course explores how to implement a streaming analytics solution using Pub/Sub.
Earn the introductory Monitoring and Logging with Google Cloud Observability skill badge to demonstrate skills in: monitoring virtual machines in Compute Engine, overseeing multiple projects with Cloud Monitoring, extending monitoring and logging capabilities with Cloud Functions, creating and sending custom application metrics, and configuring Cloud Monitoring alerts based on custom metrics.
Earn the intermediate Data Engineering for Predictive Modeling with BigQuery ML skill badge to demonstrate skills in: building data transformation pipelines to BigQuery with Dataprep by Trifacta, building extract, transform, and load (ETL) workflows using Cloud Storage, Dataflow, and BigQuery, and building machine learning models using BigQuery ML.
This course continues to explore the implementation of data load and transformation pipelines for a BigQuery Data Warehouse using Cloud Data Fusion.
Welcome to Cloud Data Fusion, where we discuss how to use Cloud Data Fusion to build complex data pipelines.
This course uses hands-on labs to tackle the real-world challenges you face when building streaming data pipelines. It focuses on using Google Cloud products to manage continuous, unbounded data.
Traditional approaches using data lakes and data warehouses can be effective, but they have drawbacks, especially in large enterprise environments. This course introduces the concept of the data lakehouse and the Google Cloud products used to create one. The lakehouse architecture uses open-standard data sources and combines the strengths of data lakes and data warehouses to address many of their shortcomings.
This quest offers hands-on practice with Cloud Data Fusion, a cloud-native, code-free data integration platform. ETL developers, data engineers, and analysts can benefit greatly from the pre-built transformations and connectors to build and deploy their pipelines without worrying about writing code. The quest starts with a quickstart lab that familiarizes learners with the Cloud Data Fusion UI. Learners then get to try running batch and real-time pipelines, as well as using the built-in Wrangler plugin to perform some interesting transformations on data.
In this intermediate course, you learn how to design, build, and optimize robust batch data pipelines on Google Cloud. Going beyond basic data processing, it explores large-scale data transformation and efficient workflow orchestration, which are essential for timely business intelligence and critical reporting. You practice implementations using Dataflow for Apache Beam and Dataproc Serverless for Apache Spark, and cover key considerations around data quality, monitoring, and alerting to ensure pipeline reliability and operational excellence. Basic knowledge of data warehousing, ETL/ELT, SQL, Python, and Google Cloud concepts is helpful.
Complete the intermediate Implement Cloud Security Fundamentals on Google Cloud skill badge course to demonstrate skills in: creating and assigning roles with Identity and Access Management (IAM), creating and managing service accounts, enabling private connectivity in Virtual Private Cloud (VPC) networks, restricting application access with Identity-Aware Proxy (IAP), managing keys and encrypted data with Cloud Key Management Service (KMS), and creating a private Kubernetes cluster.
This self-paced training course gives participants a broad study of security controls and techniques on Google Cloud. Through recorded lectures, demonstrations, and hands-on labs, participants explore and deploy the components of a secure Google Cloud solution, including Cloud Storage access control technologies, security keys, customer-supplied encryption keys, API access controls, scoping, Shielded VMs, encryption, and signed URLs. It also covers securing Kubernetes environments.
In this self-paced training course, participants learn mitigations for attacks at many points in a Google Cloud-based infrastructure, including Distributed Denial-of-Service attacks, phishing attacks, and threats involving content classification and use. They also learn about the Security Command Center, cloud logging and audit logging, and using Forseti to view overall compliance with your organization's security policies.
This self-paced training course gives participants a broad study of security controls and techniques on Google Cloud. Through recorded lectures, demonstrations, and hands-on labs, participants explore and deploy the components of a secure Google Cloud solution, including Cloud Identity, Resource Manager, IAM, Virtual Private Cloud (VPC) firewalls, Cloud Load Balancing, Cloud Peering, Cloud Interconnect, and VPC Service Controls. This is the first course in the Security in Google Cloud series. After completing this course, enroll in the Security Best Practices in Google Cloud course.
In this course, you learn how to do the kind of data exploration and analysis in Looker that would formerly be done primarily by SQL developers or analysts. Upon completion of this course, you will be able to leverage Looker's modern analytics platform to find and explore relevant content in your organization’s Looker instance, ask questions of your data, create new metrics as needed, and build and share visualizations and dashboards to facilitate data-driven decision making.
This course provides hands-on experience applying advanced LookML concepts in Looker. You learn how to use Liquid to customize and create dynamic dimensions and measures, how to create dynamic SQL derived tables and customized native derived tables, and how to use extends to modularize LookML code.
This course empowers you to develop scalable, performant LookML (Looker Modeling Language) models that provide your business users with the standardized, ready-to-use data that they need to answer their questions. Upon completing this course, you will be able to start building and maintaining LookML models to curate and manage data in your organization’s Looker instance.
Complete the intermediate Manage Data Models in Looker skill badge course to demonstrate the following skills: maintaining LookML project health, utilizing SQL Runner for data validation, employing LookML best practices, optimizing queries and reports for performance, and implementing persistent derived tables and caching policies.
In this quest, you will get hands-on experience with LookML in Looker. You will learn how to write LookML code to create new dimensions and measures, create derived tables and join them to Explores, filter Explores, and define caching policies in LookML.
Complete the introductory Build LookML Objects in Looker skill badge course to demonstrate skills in: building new dimensions, measures, views, and derived tables, setting measure filters and types based on requirements, updating dimensions and measures, building and refining Explores, joining views to existing Explores, and deciding which LookML objects to create based on business requirements.
In this course, you shadow a series of client meetings led by a Looker Professional Services Consultant.
By the end of this course, you should feel confident employing technical concepts to fulfill business requirements and be familiar with common complex design patterns.
In this course you will discover additional tools for your toolbox for working with complex deployments, building robust solutions, and delivering even more value.
Develop technical skills beyond LookML, along with basic administration, for optimizing Looker instances.
This course reviews the processes for creating table calculations, pivots, and visualizations.
This course is designed for Looker users who want to create their own ad-hoc reports. It assumes experience with everything covered in our Get Started with Looker course (logging in, finding Looks and dashboards, adjusting filters, and sending data).
In this course, you will discover Liquid, the templating language invented by Shopify, and explore how it can be used in Looker to create dynamic links, content, formatting, and more.
A hands-on course covering the main uses of extends, the three primary LookML objects that extends are used on, and some advanced usage of extends.
This course is designed to teach you about roles, permission sets, and model sets. These are used together to manage what users can do and what they can see in Looker.
This course aims to introduce you to the basic concepts of Git: what it is and how it's used in Looker. You will also develop an in-depth knowledge of caching on the Looker platform, including why caches are used and how they work.
This course provides an introduction to databases and summarizes the differences among the main database technologies. It will also introduce you to Looker and how Looker scales as a modern data platform. In the lessons, you will build and maintain standard Looker data models and establish the foundation necessary to learn Looker's more advanced features.
Want to learn the core SQL and visualization skills of a Data Analyst? Interested in how to write queries that scale to petabyte-size datasets? Take the BigQuery for Analyst Quest and learn how to query, ingest, optimize, visualize, and even build machine learning models in SQL inside of BigQuery.
Want to scale your data analysis efforts without managing database hardware? Learn the best practices for querying and getting insights from your data warehouse with this interactive series of BigQuery labs. BigQuery is Google's fully managed, NoOps, low cost analytics database. With BigQuery you can query terabytes and terabytes of data without having any infrastructure to manage or needing a database administrator. BigQuery uses SQL and can take advantage of the pay-as-you-go model. BigQuery allows you to focus on analyzing data to find meaningful insights.
Complete the intermediate Build a Data Warehouse with BigQuery skill badge to demonstrate skills in: joining data to create new tables, troubleshooting joins, appending data with unions, creating date-partitioned tables, and working with JSON, arrays, and structs in BigQuery.
Complete the introductory Derive Insights from BigQuery Data skill badge course to demonstrate skills in: writing SQL queries, querying public tables, loading sample data into BigQuery, troubleshooting common syntax errors with BigQuery's query validator, and creating reports in Looker Studio by connecting to BigQuery data.
This course provides an iterative approach to plan, build, launch, and grow a modern, scalable, mature analytics ecosystem and data culture in an organization that consistently achieves established business outcomes. Users will also learn how to design and build a useful, easy-to-use dashboard in Looker. It assumes experience with everything covered in our Getting Started with Looker and Building Reports in Looker courses.
In this course, we'll show you how organizations are aligning their BI strategy to most effectively achieve business outcomes with Looker. We'll follow four iterative steps (Plan, Build, Launch, and Grow) and provide resources to take into your own services delivery, building with Looker toward the goal of achieving business outcomes.
By the end of this course, you should be able to articulate Looker's value propositions and what makes it different from other analytics tools in the market. You should also be able to explain how Looker works, and explain the standard components of successful service delivery.