Kubernetes
Deploying to a Kubernetes cluster? Point the K8s plugin at it. You get typed Pkl resources, schemas that match your cluster's exact version, and auth that works with EKS, AKS, GKE, OKE, or any kubeconfig.
Configuration
Target
```pkl
import "@formae/formae.pkl"
import "@k8s/k8s.pkl" as k8s

target: formae.Target = new formae.Target {
  label = "k8s-local"
  config = new k8s.Config {
    kubernetesVersion = "1.31"
    auth = new k8s.KubeconfigAuth {}
  }
}
```
Apply it against your current kubectl context with `formae apply`.
For managed clusters, swap the Auth class. See authentication for the full set and per-provider examples.
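As a sketch, a managed-cluster target swaps only the `auth` block; the `EksAuth` class name and its fields below are illustrative placeholders, not the plugin's confirmed API, so check the authentication reference for the real names:

```pkl
// Hypothetical EKS target -- the auth class and its fields are
// placeholders; see the authentication docs for the actual API.
target: formae.Target = new formae.Target {
  label = "k8s-eks"
  config = new k8s.Config {
    kubernetesVersion = "1.31"
    auth = new k8s.EksAuth {      // placeholder class name
      clusterName = "prod-cluster" // placeholder field
      region = "eu-west-1"         // placeholder field
    }
  }
}
```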
Kubernetes version
Set `kubernetesVersion` to your cluster's Kubernetes version (e.g. `"1.31"`). The plugin matches it to the right schema, so fields that don't exist in that version of Kubernetes fail at `pkl eval` time instead of failing against your live cluster.
```pkl
config = new k8s.Config {
  kubernetesVersion = "1.31" // must match the @k8s/v<X.Y>/* imports below
  auth = new k8s.KubeconfigAuth {}
}
```
If you omit it, the plugin assumes `1.34` (the newest version it ships schemas for); set it explicitly for anything older.
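To illustrate the pairing the comment above refers to, resource modules come from the schema package matching `kubernetesVersion`. The module paths below are assumptions about the package layout, so mirror what the plugin's schema package actually exposes:

```pkl
// Assumed module paths -- adjust to the plugin's actual layout.
import "@k8s/v1.31/deployment.pkl" as deployment
import "@k8s/v1.31/namespace.pkl" as namespace

config = new k8s.Config {
  kubernetesVersion = "1.31" // matches the v1.31 imports above
  auth = new k8s.KubeconfigAuth {}
}
```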
Each supported minor ships its own schema package and runs its own conformance suite on every push to main. The current supported set is visible in the conformance badges on the plugin's README.
Set the namespace on every namespaced resource
Every namespaced resource must set `metadata.namespace` explicitly; the plugin never falls back to the cluster's `default` namespace, and a missing value produces an error at apply time. Declare a `Namespace` in the same forma and reference its name so the value lives in one place:
```pkl
local appNs = new namespace.Namespace {
  metadata = new namespace.NamespaceMetadata { name = "my-app" }
}

forma {
  appNs
  new deployment.Deployment {
    metadata = new k8s.NamespacedObjectMeta {
      name = "api"
      namespace = appNs.res.name // resolvable ref into the namespace above
    }
    spec { ... }
  }
}
```
Heads up: namespaced kinds use `NamespacedObjectMeta`. Cluster-scoped kinds (`Namespace`, `ClusterRole`, `ClusterRoleBinding`, `PersistentVolume`, `StorageClass`, ...) use `ObjectMeta`. Pod templates and PVC templates also use `ObjectMeta`. Mix them up and `pkl eval` tells you exactly where.
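For contrast with the Deployment above, a cluster-scoped kind takes plain `ObjectMeta`, which has no namespace field at all. A sketch, assuming a `clusterrole` module alias (illustrative, not a confirmed import path):

```pkl
// Cluster-scoped kind: ObjectMeta, no namespace field.
// The "clusterrole" import alias is illustrative.
new clusterrole.ClusterRole {
  metadata = new k8s.ObjectMeta { name = "pod-reader" }
  rules {
    new {
      apiGroups { "" }
      resources { "pods" }
      verbs {
        "get"
        "list"
        "watch"
      }
    }
  }
}
```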
Examples
Three workload examples that run on any provider:
| Example | What it deploys |
|---|---|
| bookstore | Frontend + backend webapp. Smoke test. |
| crossplane | Crossplane control plane. |
| lgtm | Grafana + Loki + Tempo + Mimir + OTel + MinIO. ~25 pods. |
Pick a cloud at apply time:
```shell
formae apply --mode reconcile --provider aws examples/bookstore/main.pkl
formae apply --mode reconcile --provider azure examples/lgtm/main.pkl
formae apply --mode reconcile --provider local examples/crossplane/main.pkl
```
`--provider` picks one of `aws`, `azure`, `gcp`, `oci`, or `local`. `local` uses your current kubectl context.
Each example has its own README walking through prerequisites, deploy, verification, and tear-down.
Helm charts
Need a Helm chart deployed through formae? The formae-helm Pkl wrapper lets you reference a chart by name and version, set values inline, and apply it alongside the rest of your forma.
```pkl
import "@formae-helm/v1.31/HelmChart.pkl"

local chart = new HelmChart {
  chart = "bitnami/nginx"
  version = "22.4.7"
  releaseName = "my-nginx"
  namespace = "demo"
  values = new Dynamic {
    replicaCount = 2
    service { type = "ClusterIP" }
  }
}

forma {
  ...chart.resources
}
```
Full reference: Helm integration.
Supported resources
| Type | Discoverable | Extractable | Comment |
|---|---|---|---|
| K8S::Admissionregistration::MutatingWebhookConfiguration | ✅ | ✅ | |
| K8S::Admissionregistration::ValidatingWebhookConfiguration | ✅ | ✅ | |
| K8S::Apps::DaemonSet | ✅ | ✅ | |
| K8S::Apps::Deployment | ✅ | ✅ | |
| K8S::Apps::ReplicaSet | ✅ | ✅ | |
| K8S::Apps::StatefulSet | ✅ | ✅ | |
| K8S::Autoscaling::HorizontalPodAutoscaler | ✅ | ✅ | |
| K8S::Batch::CronJob | ✅ | ✅ | |
| K8S::Batch::Job | ✅ | ✅ | |
| K8S::Coordination::Lease | ✅ | ✅ | |
| K8S::Core::ConfigMap | ✅ | ✅ | |
| K8S::Core::Endpoints | ✅ | ✅ | |
| K8S::Core::LimitRange | ✅ | ✅ | |
| K8S::Core::Namespace | ✅ | ✅ | |
| K8S::Core::PersistentVolume | ✅ | ✅ | |
| K8S::Core::PersistentVolumeClaim | ✅ | ✅ | |
| K8S::Core::Pod | ✅ | ✅ | |
| K8S::Core::ResourceQuota | ✅ | ✅ | |
| K8S::Core::Secret | ✅ | ✅ | |
| K8S::Core::Service | ✅ | ✅ | |
| K8S::Core::ServiceAccount | ✅ | ✅ | |
| K8S::Flowcontrol::FlowSchema | ✅ | ✅ | |
| K8S::Flowcontrol::PriorityLevelConfiguration | ✅ | ✅ | |
| K8S::Networking::Ingress | ✅ | ✅ | |
| K8S::Networking::IngressClass | ✅ | ✅ | |
| K8S::Networking::NetworkPolicy | ✅ | ✅ | |
| K8S::Node::RuntimeClass | ❌ | ❌ | |
| K8S::Policy::PodDisruptionBudget | ✅ | ✅ | |
| K8S::Rbac::ClusterRole | ❌ | ❌ | |
| K8S::Rbac::ClusterRoleBinding | ✅ | ✅ | |
| K8S::Rbac::Role | ✅ | ✅ | |
| K8S::Rbac::RoleBinding | ✅ | ✅ | |
| K8S::Scheduling::PriorityClass | ✅ | ✅ | |
| K8S::Storage::CSIDriver | ❌ | ❌ | |
| K8S::Storage::StorageClass | ❌ | ❌ | |
CRDs and arbitrary custom resources aren't supported yet. The full per-kind schema lives in the plugin repo.
Discovery filters
`formae discover` skips a default set of system-installed resources so a fresh managed cluster doesn't drag control-plane noise into your inventory. Skipped by default:
- System namespaces: `kube-system`, `kube-public`, `kube-node-lease`
- Default ServiceAccounts and their tokens
- Controller-owned Pods (ReplicaSet, DaemonSet, Job, etc.)
- `system:*` ClusterRoles and ClusterRoleBindings
- Bootstrap FlowSchemas
- Cloud-provider default StorageClasses (`gp2`, `standard`, `local-path`)
- Cloud-provider admission webhooks prefixed `eks-`, `gke-`, `aks-`
Want to manage one of these resources instead of skipping it? You'll need to fork the plugin, remove the matching entry from `DiscoveryFilters()`, and rebuild.
Release notes
See release notes.