Serverless Data Processing with Dataflow - Writing an ETL Pipeline using Apache Beam and Dataflow (Python) Reviews

11,434 reviews

VERY BAD>> I am unable to run my jobs with DataflowRunner, as I always get resource constraint errors. The job is not able to spin up in the us-central1 region. I am getting the same error in all labs that require submitting jobs to Dataflow. I am able to run with DirectRunner. Please help with this, as I have spent too many hours but end up getting the same error again and again.

Mallikarjunarao G. · Reviewed about 11 hours ago

Couldn't finish it; lots of errors in the step-by-step instructions and the resources available.

Sebastián P. · Reviewed about 15 hours ago

Couldn't finish because the Dataflow job couldn't get the resources to actually run. Stupid and a waste of my time. Still can't run due to limited resources.

Tyler W. · Reviewed about 21 hours ago

There are 2 issues running the lab: - lab 1: apache_beam.io.filesystem.BeamIOError: Match operation failed with exceptions {'gs://qwiklabs-gcp-04-9d1f7241cd59/events.json': RefreshError(TransportError("Failed to retrieve https://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/?recursive=true from the Google Compute Engine metadata service. Compute Engine Metadata server unavailable. Last exception: HTTPSConnectionPool(host='metadata.google.internal', port=443): Max retries exceeded with url: /computeMetadata/v1/instance/service-accounts/default/?recursive=true (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate - lab 2: Startup of the worker pool in us-central1 failed to bring up any of the desired 1 workers. P.S. Also not working: "http.client transport only supports the http scheme, https was specified". P.P.S. export GCE_METADATA_MTLS_MODE=none resolves the certificate issue, but there are still insufficient resources to run the job.

Igor P. · Reviewed 1 day ago
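The certificate workaround this reviewer mentions can be sketched as a shell step to run before resubmitting the pipeline. Note this is a reviewer-reported mitigation for the metadata-server SSLCertVerificationError, not an official fix, and it does not address the separate worker-pool stockout.

```shell
# Reviewer-reported workaround: tell the google-auth client not to use
# mTLS when contacting the GCE metadata server, which sidesteps the
# "certificate verify failed: unable to get local issuer certificate" error.
export GCE_METADATA_MTLS_MODE=none
echo "GCE_METADATA_MTLS_MODE=$GCE_METADATA_MTLS_MODE"
```

After exporting the variable, rerun the pipeline submission from the same shell so the setting is inherited by the Python process.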

Luis Antonio C. · Reviewed 1 day ago

Lingmin M. · Reviewed 2 days ago

Poor instructions - terrible!

Leighton C. · Reviewed 3 days ago

My conclusion from this lab is that I should not use or depend on this type of platform for production loads. If it does not even work in a controlled lab, then production is a no-go! ERROR:apache_beam.runners.dataflow.dataflow_runner:2026-04-01T09:25:07.346Z: JOB_MESSAGE_ERROR: Startup of the worker pool in us-central1 failed to bring up any of the desired 1 workers. This is likely a quota issue or a Compute Engine stockout. The service will retry. For troubleshooting steps, see https://cloud.google.com/dataflow/docs/guides/common-errors#worker-pool-failure for help troubleshooting. ZONE_RESOURCE_POOL_EXHAUSTED: Instance 'my-pipeline-1775035379230-04010223-mm24-harness-86c2' creation failed: The zone 'projects/qwiklabs-gcp-02-e142b19585b8/zones/us-central1-a' does not have enough resources available to fulfill the request. Try a different zone, or try again later.

Mikael W. · Reviewed 3 days ago

Luis Antonio C. · Reviewed 3 days ago

In the Python script, allow users to pass alternative machine types, so there is no possibility of being unable to complete the lab because the default machines (N1?) are unavailable.

Felix V. · Reviewed 3 days ago
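The suggestion above can be sketched as a small argument-parsing helper: let learners override the worker machine type and zone, and forward them to Beam as pipeline arguments. `--machine_type` and `--worker_zone` are real Dataflow pipeline options; the helper name, defaults, and default machine type here are illustrative assumptions, not the lab's actual code.

```python
import argparse


def build_beam_args(argv=None):
    """Collect user-overridable Dataflow resource flags (hypothetical helper).

    Lets a learner pick a different machine type or zone when the default
    zone is out of capacity, instead of hitting ZONE_RESOURCE_POOL_EXHAUSTED.
    """
    parser = argparse.ArgumentParser()
    parser.add_argument("--machine_type", default="e2-standard-2",
                        help="Dataflow worker machine type")
    parser.add_argument("--worker_zone", default=None,
                        help="Pin workers to a zone with available capacity")
    # Unrecognized flags are passed through untouched to Beam.
    opts, passthrough = parser.parse_known_args(argv)
    beam_args = passthrough + [f"--machine_type={opts.machine_type}"]
    if opts.worker_zone:
        beam_args.append(f"--worker_zone={opts.worker_zone}")
    return beam_args


# The resulting list would be handed to beam.Pipeline(argv=beam_args).
print(build_beam_args(["--machine_type=e2-standard-4",
                       "--worker_zone=us-central1-f"]))
```

This keeps the lab script usable as-is while giving a one-flag escape hatch when a zone is stocked out.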

Luis Antonio C. · Reviewed 4 days ago

Charles F. · Reviewed 4 days ago

The Python file was too confusing, and the instructions were not clear on how to proceed and modify the files. The code runs with errors. Need support.

Venkateswarlu Kuriseti N. · Reviewed 4 days ago

konda l. · Reviewed 4 days ago

Jorge Alberto M. · Reviewed 7 days ago

There is currently an issue with the google.auth library (https://github.com/googleapis/google-cloud-python/issues/16090) that prevented me from finishing the lab - it took a long time to find the root cause and I ran out of time. Additionally, the worker zones and machine types have to be specified, because without them Dataflow cannot spawn workers and jobs keep failing.

Przemyslaw S. · Reviewed 7 days ago
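The point above (jobs only start when the zone and machine type are pinned) can be sketched as a submit command. The script name, project, and bucket are placeholders, not lab values, and us-central1-f is only an example of a zone that may have capacity.

```shell
# Sketch of a Dataflow submit command with explicit resource flags.
# my_pipeline.py, PROJECT, and BUCKET are placeholders, not lab values.
CMD="python3 my_pipeline.py \
  --runner=DataflowRunner \
  --project=${PROJECT:-my-project} \
  --region=us-central1 \
  --worker_zone=us-central1-f \
  --machine_type=e2-standard-2 \
  --temp_location=gs://${BUCKET:-my-bucket}/temp"
echo "$CMD"
```

If one zone is exhausted, changing only `--worker_zone` (or dropping it so the service picks a zone) is usually enough to get workers scheduled.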

Bartłomiej B. · Reviewed 7 days ago

Alice Wu 吳馨妤 E. · Reviewed 10 days ago

Not enough resources to complete the lab.

Stef v. · Reviewed 10 days ago

Sriyansh S. · Reviewed 10 days ago

We cannot guarantee that published reviews come from consumers who have purchased or used the related products. Reviews are not verified by Google.