GCP AI Platform Pipeline Job

Google Cloud AI Platform (Vertex AI) Pipeline Jobs represent individual executions of a machine-learning workflow defined with Kubeflow Pipelines. A Pipeline Job coordinates the ordered set of tasks in the workflow, handles scheduling, retries and caching, and tracks the lineage of the produced artefacts. Each job is a REST resource in the projects.locations.pipelineJobs collection (resource names take the form projects/{project}/locations/{location}/pipelineJobs/{pipelineJob}) and can be configured with its own service account, VPC network, encryption key and Cloud Storage root.
Official documentation: https://cloud.google.com/vertex-ai/docs/reference/rest/v1/projects.locations.pipelineJobs#PipelineJob
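
For illustration, the resource can be read directly over REST with an authorized HTTP client. This is a minimal sketch; the project, region and job ID are placeholders, not values taken from this page:

```python
import google.auth
from google.auth.transport.requests import AuthorizedSession

# Application Default Credentials; project, region and job ID are placeholders.
credentials, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
session = AuthorizedSession(credentials)

name = "projects/my-project/locations/us-central1/pipelineJobs/my-pipeline-job"
endpoint = "https://us-central1-aiplatform.googleapis.com/v1"

# GET on projects.locations.pipelineJobs returns the full PipelineJob resource,
# including the serviceAccount, network, encryptionSpec and runtimeConfig fields.
response = session.get(f"{endpoint}/{name}")
response.raise_for_status()
print(response.json()["state"])
```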

Supported Methods

  • GET: Get a gcp-ai-platform-pipeline-job by its "name"
  • LIST: List all gcp-ai-platform-pipeline-job resources (see the sketch below)
  • SEARCH
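
A minimal sketch of the GET and LIST methods using the google-cloud-aiplatform Python SDK; the project, region, job ID and filter string are hypothetical:

```python
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

# GET: fetch a single job by its full resource name.
job = aiplatform.PipelineJob.get(
    resource_name="projects/my-project/locations/us-central1/pipelineJobs/my-pipeline-job"
)
print(job.state)

# LIST: enumerate jobs in the project and region, optionally filtered.
for j in aiplatform.PipelineJob.list(filter='display_name="nightly-training"'):
    print(j.resource_name)
```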

gcp-iam-service-account

A Pipeline Job executes under a specified IAM service account (serviceAccount field). The service account's roles and permissions determine what resources the job can read, write or create during the workflow run.
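
A sketch of supplying the service account at submission time with the Python SDK; the account email, bucket and template path are illustrative assumptions:

```python
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

job = aiplatform.PipelineJob(
    display_name="training-run",
    template_path="gs://my-bucket/pipeline.json",  # compiled pipeline spec
    pipeline_root="gs://my-bucket/pipeline-root",
)

# The workflow's tasks authenticate as this account, so its IAM role grants
# determine what the job can read, write or create.
job.submit(service_account="pipeline-runner@my-project.iam.gserviceaccount.com")
```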

gcp-compute-network

Pipeline Jobs can be attached to a VPC network through the network field, allowing all Pods spawned by the workflow to communicate privately within that network, subject to the network's egress restrictions.
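
A sketch of attaching a job to a VPC network with the Python SDK. Note the field expects the network's full relative resource name keyed by project number; all names below are placeholders:

```python
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

job = aiplatform.PipelineJob(
    display_name="private-run",
    template_path="gs://my-bucket/pipeline.json",
    pipeline_root="gs://my-bucket/pipeline-root",
)

# The network field takes the full relative name, keyed by project number:
# projects/{project_number}/global/networks/{network_name}.
job.submit(network="projects/123456789012/global/networks/my-vpc")
```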

gcp-cloud-kms-crypto-key

If Customer-Managed Encryption Keys (CMEK) are enabled, the encryptionSpec.kmsKeyName field of a Pipeline Job references a Cloud KMS crypto key that encrypts the job's metadata and any intermediate artefacts it produces.
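
A sketch of setting a CMEK key on a job via the SDK's encryption_spec_key_name parameter, which populates encryptionSpec.kmsKeyName; the key ring and key names are placeholders:

```python
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

# encryption_spec_key_name populates encryptionSpec.kmsKeyName; the key must
# reside in the same region as the job.
job = aiplatform.PipelineJob(
    display_name="cmek-run",
    template_path="gs://my-bucket/pipeline.json",
    pipeline_root="gs://my-bucket/pipeline-root",
    encryption_spec_key_name=(
        "projects/my-project/locations/us-central1/"
        "keyRings/my-ring/cryptoKeys/my-key"
    ),
)
job.submit()
```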

gcp-storage-bucket

Every Pipeline Job needs a pipelineRoot, which is a Cloud Storage URI where workflow metadata and output artefacts are stored. The job therefore depends on, and writes to, the referenced Storage bucket.
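
A sketch of setting the Cloud Storage root with the Python SDK; on the REST resource this value surfaces as runtimeConfig.gcsOutputDirectory. Bucket and paths are placeholders:

```python
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

# pipeline_root maps to runtimeConfig.gcsOutputDirectory on the REST resource;
# task metadata and output artefacts are written under this prefix.
job = aiplatform.PipelineJob(
    display_name="storage-demo",
    template_path="gs://my-bucket/pipeline.json",
    pipeline_root="gs://my-bucket/pipeline-root",
)
job.submit()
```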