GCP AI Platform Model
A GCP AI Platform Model represents a logical container for one or more model versions that you wish to serve using Google Cloud’s managed machine-learning infrastructure. Each model groups together different versions (for example, trained with different hyper-parameters), manages IAM access control, and acts as the deployable artefact for prediction endpoints. Further details can be found in the official Google Cloud documentation: https://cloud.google.com/vertex-ai/docs/reference/rest/v1/projects.locations.models#Model
Supported Methods
GET
: Get a gcp-ai-platform-model by its "name"

LIST
: List all gcp-ai-platform-model

SEARCH
: Search for gcp-ai-platform-model
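The GET method looks a model up by its fully-qualified resource name. A minimal sketch of how that name and the corresponding v1 REST URL are assembled (the project, location, and model IDs below are placeholder assumptions, not values from this document):

```python
def model_resource_name(project: str, location: str, model: str) -> str:
    """Build the "name" field that GET expects:
    projects/{project}/locations/{location}/models/{model}."""
    return f"projects/{project}/locations/{location}/models/{model}"


def get_model_url(name: str) -> str:
    """REST URL for the v1 projects.locations.models.get method."""
    return f"https://aiplatform.googleapis.com/v1/{name}"


# Placeholder identifiers for illustration only.
name = model_resource_name("my-project", "us-central1", "1234567890")
url = get_model_url(name)
```

An authenticated request to that URL would return the Model resource described in the linked reference documentation.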
Possible Links
gcp-artifact-registry-docker-image
Custom training and prediction images stored in Artifact Registry can be referenced by a model version. If the model relies on a bespoke container image for serving, Overmind links the model to the corresponding gcp-artifact-registry-docker-image so you can trace vulnerabilities in the image back to the models that expose them.
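A model version references its serving container by an Artifact Registry image URI. A small sketch of pulling the registry coordinates out of that URI, which is how the link to the image resource can be resolved (the URI shown is an assumed example in the standard `LOCATION-docker.pkg.dev/PROJECT/REPOSITORY/IMAGE[:TAG]` form):

```python
def parse_artifact_registry_uri(image_uri: str) -> dict:
    """Split LOCATION-docker.pkg.dev/PROJECT/REPOSITORY/IMAGE[:TAG]
    into its Artifact Registry coordinates."""
    host, project, repository, image = image_uri.split("/", 3)
    location = host.removesuffix("-docker.pkg.dev")
    image_name, _, tag = image.partition(":")
    return {
        "location": location,
        "project": project,
        "repository": repository,
        "image": image_name,
        "tag": tag or "latest",  # untagged references default to "latest"
    }


# Hypothetical serving image for illustration.
info = parse_artifact_registry_uri(
    "us-central1-docker.pkg.dev/my-project/serving-images/churn-model:v3"
)
```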
gcp-ai-platform-pipeline-job
Pipeline jobs frequently train models and automatically register the resulting artefacts as new versions under a given model. Overmind establishes this relationship to show which pipeline produced the model and to surface risks introduced by the training workflow.
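The model-to-pipeline relationship amounts to matching a model's resource name against the artifacts each pipeline job produced. A toy sketch of that matching, assuming a hypothetical `output_model` field on each job record (real lineage data comes from the Vertex AI metadata APIs, not this shape):

```python
def jobs_producing(model_name: str, pipeline_jobs: list[dict]) -> list[str]:
    """Return the names of pipeline jobs whose output matches the model."""
    return [j["name"] for j in pipeline_jobs if j.get("output_model") == model_name]


# Illustrative job records; names and fields are assumptions.
jobs = [
    {"name": "pipelines/train-2024-01", "output_model": "projects/p/locations/l/models/123"},
    {"name": "pipelines/etl-2024-01"},  # no model output
]
producers = jobs_producing("projects/p/locations/l/models/123", jobs)
```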
gcp-ai-platform-endpoint
A model can be deployed to one or more online prediction endpoints. Linking models to gcp-ai-platform-endpoint resources lets you see where the model is currently serving traffic and assess the blast radius of any model change.
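Each endpoint lists its deployed models, and each entry carries the deployed model's resource name, so finding every endpoint that serves a given model is a simple filter. A sketch under that assumption (the endpoint records below are illustrative, not real API responses):

```python
def endpoints_serving(model_name: str, endpoints: list[dict]) -> list[str]:
    """Return names of endpoints with a deployed-model entry for the model."""
    return [
        e["name"]
        for e in endpoints
        if any(d.get("model") == model_name for d in e.get("deployedModels", []))
    ]


# Hypothetical endpoint records for illustration.
endpoints = [
    {"name": "endpoints/online-1",
     "deployedModels": [{"model": "projects/p/locations/l/models/123"}]},
    {"name": "endpoints/online-2", "deployedModels": []},
]
serving = endpoints_serving("projects/p/locations/l/models/123", endpoints)
```

The result set is exactly the blast radius of a change to that model: every endpoint in it would serve the new behaviour.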
gcp-cloud-kms-crypto-key
If Customer-Managed Encryption Keys (CMEK) are used, the model’s metadata and artefacts are encrypted with a specific Cloud KMS crypto key. Overmind records this linkage so that you can audit key usage and verify that the correct encryption policy is applied to your ML assets.
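When CMEK is in use, the model records the key as a Cloud KMS resource name in the standard `projects/*/locations/*/keyRings/*/cryptoKeys/*` format. A minimal sketch of decomposing that name so the key can be audited (the example key name is a placeholder assumption):

```python
def parse_kms_key_name(key_name: str) -> dict:
    """Split a Cloud KMS crypto-key resource name into its components."""
    parts = key_name.split("/")
    # Even positions must be the fixed collection identifiers.
    if parts[0::2] != ["projects", "locations", "keyRings", "cryptoKeys"]:
        raise ValueError(f"unexpected kmsKeyName format: {key_name!r}")
    return dict(zip(("project", "location", "key_ring", "crypto_key"), parts[1::2]))


# Placeholder key name for illustration.
key = parse_kms_key_name(
    "projects/my-project/locations/us-central1/keyRings/ml-ring/cryptoKeys/model-key"
)
```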