GCP AI Platform Pipeline Job
Google Cloud AI Platform (Vertex AI) Pipeline Jobs represent individual executions of a machine-learning workflow defined in Kubeflow Pipelines. A Pipeline Job coordinates the ordered set of tasks in the workflow, handles scheduling, retries and caching, and tracks the lineage of the artefacts it produces. Each job is a REST resource named projects/{project}/locations/{location}/pipelineJobs/{pipeline_job} and can be configured with its own service account, VPC network, encryption key and Cloud Storage root.
Official documentation: https://cloud.google.com/vertex-ai/docs/reference/rest/v1/projects.locations.pipelineJobs#PipelineJob
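As a minimal sketch of what such a resource looks like in practice, the snippet below constructs and submits a Pipeline Job with the google-cloud-aiplatform Python SDK. The project, region, template file and bucket are placeholder values, and the template is assumed to be a compiled Kubeflow Pipelines definition:

```python
from google.cloud import aiplatform

# Placeholder project and region; the resulting resource name has the form
# projects/{project}/locations/{location}/pipelineJobs/{pipeline_job_id}.
aiplatform.init(project="my-project", location="us-central1")

job = aiplatform.PipelineJob(
    display_name="demo-pipeline",
    template_path="pipeline.yaml",                 # compiled KFP definition (placeholder)
    pipeline_root="gs://my-bucket/pipeline-root",  # Cloud Storage root (placeholder)
)
job.submit()
print(job.resource_name)
```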
Supported Methods
GET
: Get a gcp-ai-platform-pipeline-job by its "name"

LIST
: List all gcp-ai-platform-pipeline-job

SEARCH
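These methods query the underlying Vertex AI API. As a rough illustration (using the google-cloud-aiplatform Python SDK rather than the tool itself; the resource names are placeholders), the corresponding get and list calls look like this:

```python
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

# GET: fetch one Pipeline Job by its full resource name (placeholder ID).
job = aiplatform.PipelineJob.get(
    "projects/my-project/locations/us-central1/pipelineJobs/my-job-id"
)
print(job.state)

# LIST: enumerate every Pipeline Job in the configured project and location.
for j in aiplatform.PipelineJob.list():
    print(j.resource_name)
```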
Possible Links
gcp-iam-service-account

A Pipeline Job executes under a specified IAM service account (the serviceAccount field). The service account's roles and permissions determine what resources the job can read, write or create during the workflow run.
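As a sketch, the service account is passed at submission time in the Python SDK; the account email and other names below are placeholders:

```python
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")
job = aiplatform.PipelineJob(
    display_name="demo-pipeline",
    template_path="pipeline.yaml",
    pipeline_root="gs://my-bucket/pipeline-root",
)
# Run the workflow as a dedicated, least-privilege identity;
# its IAM roles bound what the tasks can read, write or create.
job.submit(service_account="pipeline-runner@my-project.iam.gserviceaccount.com")
```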
gcp-compute-network

Pipeline Jobs can be attached to a VPC network through the network field, allowing all Pods spawned by the workflow to communicate privately within that network and to enforce egress restrictions.
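A sketch of attaching the job to a VPC network with the Python SDK; the project number and network name are placeholders, and the network is assumed to be already peered with Vertex AI:

```python
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")
job = aiplatform.PipelineJob(
    display_name="demo-pipeline",
    template_path="pipeline.yaml",
    pipeline_root="gs://my-bucket/pipeline-root",
)
# Full resource name of the VPC network (placeholder project number).
job.submit(network="projects/123456789/global/networks/my-vpc")
```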
gcp-cloud-kms-crypto-key

If Customer-Managed Encryption Keys (CMEK) are enabled, the encryptionSpec.kmsKeyName field of a Pipeline Job references a Cloud KMS crypto key that encrypts the job's metadata and any intermediate artefacts it produces.
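In the Python SDK this field is set through the encryption_spec_key_name constructor argument, sketched below with placeholder key ring and key names; the key must live in the same region as the job:

```python
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")
job = aiplatform.PipelineJob(
    display_name="demo-pipeline",
    template_path="pipeline.yaml",
    pipeline_root="gs://my-bucket/pipeline-root",
    # CMEK key used for the job's metadata and artefacts (placeholders).
    encryption_spec_key_name=(
        "projects/my-project/locations/us-central1/"
        "keyRings/my-ring/cryptoKeys/my-key"
    ),
)
job.submit()
```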
gcp-storage-bucket

Every Pipeline Job needs a pipelineRoot, which is a Cloud Storage URI where workflow metadata and output artefacts are stored. The job therefore depends on, and writes to, the referenced Storage bucket.
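As an illustration of how this link can be derived, the bucket name is simply the authority component of the pipelineRoot URI; the URI below is a placeholder:

```python
from urllib.parse import urlparse

# pipelineRoot is a gs:// URI; its authority component names the bucket
# the job depends on (placeholder URI).
pipeline_root = "gs://my-bucket/pipeline-root"
bucket = urlparse(pipeline_root).netloc
print(bucket)  # -> my-bucket
```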