Serverless Data Processing with Dataflow - Writing an ETL Pipeline using Apache Beam and Dataflow (Python) Reviews
11435 reviews
Mara Malina F. · Reviewed 1 hour ago
VERY BAD. I am unable to run my jobs with DataflowRunner, as I always get resource constraint errors. The job is not able to spin up in the us-central1 region. I get the same error in all labs that require submitting jobs on Dataflow. I am able to run with DirectRunner. Please help with this, as I have spent too many hours but end up getting the same error again and again.
Mallikarjunarao G. · Reviewed 14 hours ago
Couldn't finish it; lots of errors in the step-by-step instructions and the available resources.
Sebastián P. · Reviewed 18 hours ago
Couldn't finish because the Dataflow job couldn't get the resources to actually run. Stupid and a waste of my time. Still can't run due to limited resources.
Tyler W. · Reviewed 1 day ago
There are 2 issues running the lab. Issue 1: apache_beam.io.filesystem.BeamIOError: Match operation failed with exceptions {'gs://qwiklabs-gcp-04-9d1f7241cd59/events.json': RefreshError(TransportError("Failed to retrieve https://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/?recursive=true from the Google Compute Engine metadata service. Compute Engine Metadata server unavailable. Last exception: HTTPSConnectionPool(host='metadata.google.internal', port=443): Max retries exceeded with url: /computeMetadata/v1/instance/service-accounts/default/?recursive=true (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate. Issue 2: Startup of the worker pool in us-central1 failed to bring up any of the desired 1 workers. P.S. Also not working: "http.client transport only supports the http scheme, https was specified". P.P.S. export GCE_METADATA_MTLS_MODE=none resolves the certificate issue, but there are still insufficient resources to run the job.
Igor P. · Reviewed 1 day ago
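The `GCE_METADATA_MTLS_MODE=none` workaround mentioned in the review above could be applied in Cloud Shell before resubmitting the job; the pipeline script name and flags below are hypothetical illustrations, not the lab's actual command:

```shell
# Workaround from the review: stop google-auth from using mTLS when it
# contacts the GCE metadata server, avoiding the certificate error.
export GCE_METADATA_MTLS_MODE=none

# Then rerun the pipeline (script name and flags are illustrative).
python my_pipeline.py --runner=DataflowRunner --region=us-central1
```

Per the review, this clears the certificate failure but does not fix the separate worker-pool resource shortage.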
Luis Antonio C. · Reviewed 2 days ago
Lingmin M. · Reviewed 2 days ago
Poor instructions - terrible!
Leighton C. · Reviewed 3 days ago
My conclusion from this lab is that I should not use or depend on this type of platform for production loads. If it does not even work in a controlled lab, then production is a no-go! ERROR:apache_beam.runners.dataflow.dataflow_runner:2026-04-01T09:25:07.346Z: JOB_MESSAGE_ERROR: Startup of the worker pool in us-central1 failed to bring up any of the desired 1 workers. This is likely a quota issue or a Compute Engine stockout. The service will retry. For troubleshooting steps, see https://cloud.google.com/dataflow/docs/guides/common-errors#worker-pool-failure for help troubleshooting. ZONE_RESOURCE_POOL_EXHAUSTED: Instance 'my-pipeline-1775035379230-04010223-mm24-harness-86c2' creation failed: The zone 'projects/qwiklabs-gcp-02-e142b19585b8/zones/us-central1-a' does not have enough resources available to fulfill the request. Try a different zone, or try again later.
Mikael W. · Reviewed 3 days ago
Luis Antonio C. · Reviewed 3 days ago
In the Python script, allow users to pass alternative machine types, so there is no possibility of being unable to complete the lab due to insufficient resources being available (for N1 machines?).
Felix V. · Reviewed 3 days ago
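The suggestion above could be implemented by passing the flags through to the pipeline; `--machine_type` and `--worker_zone` are real Dataflow pipeline options, but the parsing sketch below (function name, defaults) is a hypothetical illustration, not the lab's actual script:

```python
import argparse

def build_beam_args(argv=None):
    """Collect resource flags so users can steer around N1 stockouts."""
    parser = argparse.ArgumentParser()
    # Defaults are illustrative; e2 machine types avoid the N1 pool.
    parser.add_argument("--machine_type", default="e2-standard-2")
    parser.add_argument("--worker_zone", default="us-central1-f")
    args, beam_args = parser.parse_known_args(argv)
    # Forward the flags to Apache Beam's PipelineOptions as CLI-style args.
    beam_args += [f"--machine_type={args.machine_type}",
                  f"--worker_zone={args.worker_zone}"]
    return beam_args

# Example: a user who hits a stockout retries with a different machine type.
print(build_beam_args(["--machine_type=n2-standard-2"]))
# → ['--machine_type=n2-standard-2', '--worker_zone=us-central1-f']
```

The returned list can be handed to Beam's `PipelineOptions`, so a user who hits `ZONE_RESOURCE_POOL_EXHAUSTED` can retry with another machine type or zone without editing the script.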
There are 2 issues running the lab. Issue 1: apache_beam.io.filesystem.BeamIOError: Match operation failed with exceptions {'gs://qwiklabs-gcp-04-9d1f7241cd59/events.json': RefreshError(TransportError("Failed to retrieve https://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/?recursive=true from the Google Compute Engine metadata service. Compute Engine Metadata server unavailable. Last exception: HTTPSConnectionPool(host='metadata.google.internal', port=443): Max retries exceeded with url: /computeMetadata/v1/instance/service-accounts/default/?recursive=true (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate. Issue 2: Startup of the worker pool in us-central1 failed to bring up any of the desired 1 workers. P.S. Also not working: "http.client transport only supports the http scheme, https was specified". P.P.S. export GCE_METADATA_MTLS_MODE=none resolves the certificate issue, but there are still insufficient resources to run the job.
Igor P. · Reviewed 3 days ago
There are 2 issues running the lab. Issue 1: apache_beam.io.filesystem.BeamIOError: Match operation failed with exceptions {'gs://qwiklabs-gcp-04-9d1f7241cd59/events.json': RefreshError(TransportError("Failed to retrieve https://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/?recursive=true from the Google Compute Engine metadata service. Compute Engine Metadata server unavailable. Last exception: HTTPSConnectionPool(host='metadata.google.internal', port=443): Max retries exceeded with url: /computeMetadata/v1/instance/service-accounts/default/?recursive=true (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate. Issue 2: Startup of the worker pool in us-central1 failed to bring up any of the desired 1 workers. P.S. Also not working: "http.client transport only supports the http scheme, https was specified".
Igor P. · Reviewed 3 days ago
Luis Antonio C. · Reviewed 4 days ago
Charles F. · Reviewed 4 days ago
The Python file was too confusing, and the instructions were not clear on how to proceed and modify the files. The code runs with errors. I need support.
Venkateswarlu Kuriseti N. · Reviewed 4 days ago
konda l. · Reviewed 4 days ago
Jorge Alberto M. · Reviewed 7 days ago
There is currently an issue with the google.auth library (https://github.com/googleapis/google-cloud-python/issues/16090) that prevents me from finishing the lab; it took a long time to find the root cause and I ran out of time. Additionally, the worker zones and machine types have to be specified, because without them Dataflow cannot spawn workers and jobs keep failing.
Przemyslaw S. · Reviewed 7 days ago
There are 2 issues running the lab. Issue 1: apache_beam.io.filesystem.BeamIOError: Match operation failed with exceptions {'gs://qwiklabs-gcp-04-9d1f7241cd59/events.json': RefreshError(TransportError("Failed to retrieve https://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/?recursive=true from the Google Compute Engine metadata service. Compute Engine Metadata server unavailable. Last exception: HTTPSConnectionPool(host='metadata.google.internal', port=443): Max retries exceeded with url: /computeMetadata/v1/instance/service-accounts/default/?recursive=true (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate. Issue 2: Startup of the worker pool in us-central1 failed to bring up any of the desired 1 workers.
Igor P. · Reviewed 7 days ago
Bartłomiej B. · Reviewed 7 days ago
Alice Wu 吳馨妤 E. · Reviewed 10 days ago
Not enough resources to complete the lab.
Stef v. · Reviewed 10 days ago
We do not ensure the published reviews originate from consumers who have purchased or used the products. Reviews are not verified by Google.