Authentication
Pick an Auth class on k8s.Config and the plugin handles the rest: token refresh, request signing, per-provider quirks. You configure auth once and the plugin keeps it valid for every API call the target makes.
Pick your auth class
| Cluster | Auth class | How it stays valid |
|---|---|---|
| Local kubectl context | KubeconfigAuth | Reads your kubeconfig at apply time |
| Formae running as a pod | InClusterAuth | ServiceAccount token from /var/run/secrets/... |
| AWS EKS | EKSAuth | Presigned STS token, refreshed on every request |
| Azure AKS | AKSAuth | Azure AD token, auto-refreshed |
| GCP GKE | GKEAuth | OAuth2 access token, auto-refreshed |
| Oracle OKE | OCIAuth | OCI signed request, signed each call |
OVHCloud K8s also works via OVHAuth (endpoint, certificateAuthority, serviceName, clusterId). See the plugin schema if you need it.
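If you do need it, the shape mirrors the other auth classes. A minimal sketch assuming the four fields listed above; all values are placeholders:

```
auth = new k8s.OVHAuth {
  endpoint = "https://..."     // placeholder: cluster API endpoint
  certificateAuthority = "..." // placeholder: cluster CA data
  serviceName = "..."          // placeholder: your OVHcloud service name
  clusterId = "..."            // placeholder: the managed cluster's ID
}
```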
KubeconfigAuth - local development
The default for kubectl-reachable clusters. With no fields set, the plugin uses whatever your current context points at:
```
auth = new k8s.KubeconfigAuth {}
```
Override either field if you want something specific:
```
auth = new k8s.KubeconfigAuth {
  context = "kind-formae-test"
  kubeconfig = "/path/to/kubeconfig" // defaults to $KUBECONFIG or ~/.kube/config
}
```
Use this for OrbStack, kind, minikube, k3s, or any remote cluster you already have credentials for.
InClusterAuth - formae as a pod
When the formae agent runs inside the cluster it manages:
```
auth = new k8s.InClusterAuth {}
```
The plugin picks up the ServiceAccount token mounted at /var/run/secrets/kubernetes.io/serviceaccount/. No fields, no manual rotation. The pod needs the RBAC to do whatever the forma describes.
EKSAuth - AWS EKS
```
auth = new k8s.EKSAuth {
  endpoint = eksCluster.res.endpoint
  certificateAuthority = eksCluster.res.certificateAuthorityData
  clusterName = eksCluster.res.name
  region = "us-west-2" // optional, defaults to AWS_REGION env
}
```
Endpoint, CA, and clusterName come from the EKS cluster resource. Reference them by $ref and formae waits until the cluster exists before reading them. No race, no manual sequencing. Region falls back to the AWS_REGION env var or your AWS SDK config.
The plugin uses your AWS credentials (whatever the AWS SDK finds) to fetch a presigned STS token, then signs every K8s API call. Refresh is transparent.
The AWS principal making the call needs the eks:DescribeCluster permission. Whatever K8s RBAC is mapped to that principal (via the EKS access-entry API or the legacy aws-auth ConfigMap) is what it can do inside the cluster.
AKSAuth - Azure AKS
```
auth = new k8s.AKSAuth {
  endpoint = aksCluster.res.fqdn
  certificateAuthority = aksCluster.res.certificateAuthority
  resourceGroup = network.resourceGroup.res.name
  clusterName = aksCluster.res.name
}
```
The AKS create API doesn't return the cluster CA directly. The plugin makes a follow-up call with the cluster's admin credentials to fetch it. Use $ref for certificateAuthority and that follow-up runs automatically before your workload deploys.
The plugin uses DefaultAzureCredential (env vars, then az CLI, then managed identity) to get an Azure AD token, scoped to the cluster's resource group.
For AAD-RBAC-enabled clusters, your principal needs an Azure role bound to the cluster. The cross-cloud examples in the plugin repo (see examples) set Azure Kubernetes Service RBAC Cluster Admin for the calling user.
GKEAuth - GCP GKE
```
auth = new k8s.GKEAuth {
  endpoint = gkeCluster.res.endpoint
  certificateAuthority = gkeCluster.res.clusterCaCertificate
}
```
Two fields. The plugin uses application default credentials (gcloud auth application-default login locally, attached service account in CI) to mint an OAuth2 access token, then calls the cluster.
Your principal needs container.developer at minimum, or more granular RBAC bound at the cluster level.
OCIAuth - Oracle OKE
```
auth = new k8s.OCIAuth {
  endpoint = okeBundle.okeCluster.res.endpoint
  certificateAuthority = okeBundle.okeCluster.res.certificateAuthority
  clusterOcid = okeBundle.okeCluster.res.id
  region = "us-chicago-1" // optional, defaults to ~/.oci/config
}
```
The plugin signs each K8s API call with your OCI keys (api-key or session token, per your ~/.oci/config setup). No long-lived bearer token. Every request is signed fresh.
Your OCI user needs OKE_CLUSTER_USE policy plus K8s RBAC inside the cluster. OKE bootstraps an admin token for the cluster creator, so an apply that creates the OKE cluster gets RBAC for free.
When to use which
If formae runs outside the cluster:
- Local dev: KubeconfigAuth.
- CI deploying to a managed cluster: the matching cloud-native class (EKSAuth/AKSAuth/GKEAuth/OCIAuth). Skip kubeconfig juggling entirely.
- Provisioned the cluster in the same forma: the cloud-native class, with $ref into the cluster resource. No race, no manual kubeconfig step.
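As a sketch of that same-forma pattern (the aws.EKSCluster resource type and its fields are hypothetical, not the plugin's actual schema; the EKSAuth fields mirror the EKS example above):

```
// Hypothetical: an EKS cluster declared in the same forma
eksCluster = new aws.EKSCluster {
  name = "demo"
  region = "us-west-2"
}

// Auth $refs into the cluster's outputs; formae waits for the
// cluster to exist before reading them
auth = new k8s.EKSAuth {
  endpoint = eksCluster.res.endpoint
  certificateAuthority = eksCluster.res.certificateAuthorityData
  clusterName = eksCluster.res.name
}
```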
If formae runs inside the cluster:
- InClusterAuth. The pod's RBAC drives access.