I have 3 GCP Cloud Functions written in Python, namely CF1, CF2, and CF3. CF1 checks certain conditions, and when they hold it should execute CF2 and CF3 in parallel.
I have tried:

    if condition is true:
        requests.get("url of CF2")
        print("CF2 executed successfully")
        requests.get("url of CF3")
        print("CF3 executed successfully")
CF1 code:

    import logging
    from datetime import datetime, timedelta

    import requests
    from google.cloud import bigquery

    static_query = "select * from `myproject.mydataset.mytable`"
    try:
        # Execute the query and load the result into a temporary table.
        client = bigquery.Client()
        job_config = bigquery.QueryJobConfig()
        dest_dataset = client.dataset(temporary_dataset, temporary_project)
        dest_table = dest_dataset.table(temporary_table)
        job_config.destination = dest_table
        job_config.create_disposition = 'CREATE_IF_NEEDED'
        job_config.write_disposition = 'WRITE_TRUNCATE'
        query_job = client.query(static_query, location=bq_location, job_config=job_config)
        query_job.result()
        table = client.get_table(dest_table)
        expiration = datetime.now() + timedelta(minutes=expiration_time)
        table.expires = expiration
        table = client.update_table(table, ["expires"])
        logging.info("Query result loaded into temporary table: {}".format(temporary_table))
        # Check the row count of the query result in the temporary table.
        count_query = "select count(*) size from `{}.{}.{}`".format(temporary_project, temporary_dataset, temporary_table)
        job = client.query(count_query)
        results = job.result()
        count = 0
        for row in results:
            count = row.size
        # If the query result is empty, log an error message to Stackdriver.
        if count == 0:
            logging.error("Query executed with empty result set.")
        # If the query result has records, trigger the two cloud functions below (this should be parallel execution).
        else:
            # Trigger the CF2 cloud function.
            requests.get(cf2_endpoint)
            logging.info("CF2 executed successfully.")
            # Trigger the CF3 cloud function.
            requests.get(cf3_endpoint)
            logging.info("CF3 executed successfully.")
    except RuntimeError:
        logging.error("Exception occurred {}".format(error_log_client.report_exception()))
Here, I want to execute CF2 and CF3 asynchronously. Any suggestions or solutions would be appreciated, thanks.
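A minimal sketch of one way to do this, assuming `cf2_endpoint` and `cf3_endpoint` are the HTTPS trigger URLs used above. Because `requests.get` blocks until the triggered function finishes, the two calls in the `else` branch run one after another; submitting them to a `concurrent.futures.ThreadPoolExecutor` from the standard library starts them at roughly the same time. The `call` parameter is a hypothetical hook added only so the pattern can be exercised without network access; it is not part of any GCP API.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

import requests


def trigger(url):
    """GET one Cloud Function endpoint; return (url, HTTP status code)."""
    # Adjust the timeout to your function's configured timeout (assumption: 60s).
    resp = requests.get(url, timeout=60)
    return url, resp.status_code


def trigger_all(urls, call=trigger):
    """Trigger every URL on its own thread and collect the status codes.

    `call` defaults to the real HTTP trigger; it can be swapped for a stub
    when testing this pattern without a network.
    """
    results = {}
    with ThreadPoolExecutor(max_workers=len(urls)) as pool:
        # Submit all requests first so they run concurrently...
        futures = [pool.submit(call, u) for u in urls]
        # ...then gather results as each one completes.
        for fut in as_completed(futures):
            url, status = fut.result()
            results[url] = status
    return results


# In the else branch of CF1:
# results = trigger_all([cf2_endpoint, cf3_endpoint])
```

If CF1 does not need the responses at all, another common option is to make CF2 and CF3 Pub/Sub-triggered functions and have CF1 publish one message per function, which is fire-and-forget by design.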