Kubernetes 1.33: What's New and What It Means
— kubernetes, k8s, cloud-native, release-notes, k8s-1.33, octarine — 4 min read
Kubernetes v1.33 "Octarine" delivers a broad set of enhancements across the cluster – from batch processing and scheduling to security, networking, and storage. Highlights include graduations to GA, beta releases of new capabilities, alpha enhancements, and several notable deprecations/removals.
High-Level Summary
- In-Place Pod Vertical Scaling (beta): Adjust CPU/memory for Pods without restarts
- Batch Jobs (GA): Introducing `successPolicy`, `backoffLimitPerIndex`, and `maxFailedIndexes`
- Sidecar Containers (GA): Stable support using init containers with `restartPolicy: Always`
- Pod Lifecycle Hooks: `stopSignal` is now configurable, and `SleepAction` supports zero-second sleeps
- Structured API Streaming: The API server can stream large List responses, drastically reducing memory usage
- Security & Networking: Default user namespaces, `supplementalGroupsPolicy`, bound ServiceAccount tokens, multi-CIDR Services, DSR on Windows
- Storage & CSI: GA Volume Populators, reclaim-policy fixes, dynamic CSI capacity, Image Volumes (beta) with `subPath`
- Observability: New metrics and Pod conditions for resize and image-volume workflows
- Deprecations/Removals: Legacy Endpoints API, `gitRepo` volumes, Windows host-network alpha
Major Feature Deep Dives
In-Place Pod Vertical Scaling (Beta)
Pods can now scale their CPU/memory without restarting:
```shell
kubectl patch pod mypod --subresource=resize \
  -p '{"spec":{"containers":[{"name":"app","resources":{"requests":{"cpu":"500m","memory":"512Mi"}}}]}}'
```
- The kubelet introduces Pod conditions such as `PodResizePending` and `PodResizeInProgress`
- Ideal for stateful workloads that must scale without disruption
- Requires kubectl v1.32+
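Containers can also declare a `resizePolicy` to control how each resource reacts to a resize. A minimal sketch (Pod and container names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resizable-app        # illustrative name
spec:
  containers:
    - name: app
      image: nginx
      resizePolicy:
        - resourceName: cpu
          restartPolicy: NotRequired       # CPU can change in place
        - resourceName: memory
          restartPolicy: RestartContainer  # memory changes restart this container
      resources:
        requests:
          cpu: "250m"
          memory: "256Mi"
```

`NotRequired` is the default; `RestartContainer` is useful for runtimes that cannot apply memory changes live.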
Batch Jobs GA Enhancements
Example workload:
```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: index-job
spec:
  parallelism: 10
  completions: 10        # Indexed Jobs require completions to be set
  completionMode: Indexed
  successPolicy:
    rules:
      - succeededCount: 5
  backoffLimitPerIndex: 2
  maxFailedIndexes: 1
```
- Stops after any 5 successful tasks
- Retries each failed index up to 2 times
- Reduces wasted work in distributed/batch processing scenarios
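`successPolicy` rules can also target specific indexes rather than a count — for example, declaring the Job successful once a "leader" index finishes. A sketch, not taken from the release notes:

```yaml
successPolicy:
  rules:
    - succeededIndexes: "0"   # Job succeeds once index 0 (the leader) completes
```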
Sidecar Containers GA
Use an init container with `restartPolicy: Always` as a sidecar helper:

```yaml
spec:
  initContainers:
    - name: sidecar
      image: busybox
      command: ["sleep", "86400"]
      restartPolicy: Always
```
This provides an officially supported sidecar pattern without a CRD or workaround.
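A slightly fuller sketch, showing a sidecar shipping logs for a main container via a shared volume (names and images are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  volumes:
    - name: logs
      emptyDir: {}
  initContainers:
    - name: log-shipper              # illustrative sidecar
      image: busybox
      command: ["sh", "-c", "tail -F /logs/app.log"]
      restartPolicy: Always          # marks this init container as a sidecar
      volumeMounts:
        - name: logs
          mountPath: /logs
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: logs
          mountPath: /logs
```

The sidecar starts before the main container and keeps running for the Pod's lifetime.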
Pod Lifecycle Hooks
You can customize shutdown signals:
```yaml
spec:
  containers:
    - name: myapp
      image: nginx
      lifecycle:
        stopSignal: SIGUSR1
```

- `SleepAction` now supports a zero-second delay
- `stopSignal` is now configurable (alpha)
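The zero-second `SleepAction` is handy as an explicit no-op `preStop` hook when a policy requires one to be set. A sketch:

```yaml
lifecycle:
  preStop:
    sleep:
      seconds: 0   # zero is now accepted; previously rejected by validation
```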
Scheduler & Control-Plane Enhancements
- API List Streaming: API server memory usage for large lists dropped ~80 GB → ~3 GB
- CPU Manager SMT Policy: Logical CPU assignment aligned to SMT siblings for better throughput
- kubectl Subresources: Full support for editing & patching Pod subresources (resize, eviction, statuses)
Security & Networking Upgrades
User Namespaces on by Default
Setting `hostUsers: false` runs the Pod in a fresh user namespace, isolating container UIDs/GIDs from the host.
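Opting a Pod into a user namespace is a one-line spec change (names and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: userns-demo
spec:
  hostUsers: false   # container UIDs map to unprivileged host IDs
  containers:
    - name: app
      image: nginx
```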
Supplemental Groups Policy (Beta)
```yaml
securityContext:
  runAsUser: 1000
  runAsGroup: 3000
  supplementalGroups: [4000]
  supplementalGroupsPolicy: Strict
```
Ensures unprivileged pods can't escalate privileges via image-defined groups.
Additional Security Enhancements
- Bound ServiceAccount Tokens (Beta): Short-lived, audience-bound tokens reduce risk if leaked
- Image-Pull Secret Validation (Alpha): Ensures `imagePullSecrets` are respected even for cached images
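Bound tokens can be requested explicitly via a projected volume; the audience and lifetime below are illustrative:

```yaml
volumes:
  - name: api-token
    projected:
      sources:
        - serviceAccountToken:
            audience: my-service        # illustrative audience
            expirationSeconds: 3600     # short-lived token
            path: token
```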
Networking Improvements
- Multi-CIDR & DSR:
  - Support for IPv4/IPv6 dual-stack Service CIDRs
  - Windows Direct Server Return (DSR) mode for Services (beta)
  - nftables backend for kube-proxy (GA in v1.33)
- Endpoints Deprecation: Migrate to `EndpointSlice` before removal
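Selecting the nftables backend is a kube-proxy configuration change; a minimal sketch:

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: nftables
```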
Storage & CSI Enhancements
Honor PV Reclaim Policy (GA)
The `Delete` reclaim policy is now honored even when a PV is removed before its PVC, preventing orphaned storage.
Volume Populators (GA)
Pre-seed PV content from images or archives without initContainers.
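Populated PVCs reference their source via `dataSourceRef`; the API group and kind below are placeholders for whatever populator you install:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: populated-pvc
spec:
  dataSourceRef:
    apiGroup: populator.example.com   # placeholder populator CRD group
    kind: Seed                        # placeholder kind
    name: seed-data
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
```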
CSI Node Allocatable Count (Alpha)
CSI drivers can dynamically report node capacity.
Storage Capacity Scoring (Alpha)
Scheduler can prioritize nodes based on free disk space:
```yaml
shape:
  - {utilization: 0, score: 0}
  - {utilization: 100, score: 10}
```
Image Volumes (Beta) with `subPath`

```yaml
volumes:
  - name: myvol
    image:
      reference: quay.io/crio/artifact:v2
      pullPolicy: IfNotPresent
```

```yaml
volumeMounts:
  - name: myvol
    mountPath: /data
    subPath: some/subdir
```
Mount specific directory paths from container images.
Observability & Debugging
- New `kubelet_image_volume_*` and `pod_resize_*` Prometheus metrics
- New Pod conditions for status tracking (resize, bind failures)
- Structured logging and improved error output
Deprecations & Removals
- Endpoints API → migrate to `EndpointSlice`
- Removal of the `kubeProxyVersion` field from node `status.nodeInfo`
- Removal of the `gitRepo` volume type → use git-sync
- Windows host-network pods (alpha) removed
- Review release notes for additional minor removals
Use Cases & Operator Recommendations
| Feature | Use Case | Recommendation |
|---|---|---|
| Pod Resize | Stateful apps needing scale flexibility | Deploy on non-critical clusters; patch via the resize subresource |
| Batch Jobs | Large indexed jobs | Use `successPolicy` and per-index backoff |
| Security Policies | High-security environments | Enable default UID/GID mapping, Strict groups, token gating |
| Storage | CSI-managed dynamic environments | Enable capacity scoring and Volume Populators |
| Clusters @ scale | Large workloads en masse | Watch API server logs/metrics for streaming efficiency |
Implementation Tips
- Upgrade Path: Backup and test migration; enable beta features gradually
- Developer Tips: Update manifests with sidecar, group policies, Image Volume usage
- Monitoring: Ingest new metrics and inspect Pod conditions in Prometheus/Grafana
Further Reading
- Kubernetes 1.33 Release Notes
- SIG Blogs (batch, CSI, API, security)
- Official docs on Pod vertical scaling, Volume Populators, Image Volumes, and other features
Summary
Kubernetes 1.33 is a milestone release that brings essential infrastructure improvements. Whether you're scaling up apps dynamically, reinforcing cluster security, or optimizing large-scale job workflows, this version delivers. As always, test thoroughly, adopt GA features confidently, and plan for deprecations ahead. Happy clustering! 🚀