ComputePool Configuration Reference
This page provides technical reference information for configuring ComputePools.
Complete ComputePool Example
The following example shows a complete, production-ready ComputePool manifest with all major fields. Use this as a template and refer to the detailed sections below for field-specific guidance.
```yaml
apiVersion: spectrocloud.com/v1alpha1
kind: ComputePool
metadata:
  name: ml-cluster
  namespace: my-project
  labels:
    environment: production
    team: ml-platform
  annotations:
    description: 'Production ML compute cluster'
    owner: 'ml-platform-team'
spec:
  # ProfileBundle reference - defines infrastructure and application stacks
  profileBundleRef:
    name: edge-profilebundle
    namespace: my-project
    cloudType: edge-native
  # Cluster variant configuration
  clusterVariant:
    # Optional: Resource groups filter which hosts are eligible for this cluster
    controlPlaneResourceGroups:
      'palette.ai': 'true'
    workerResourceGroups:
      'palette.ai': 'true'
      'gpu-enabled': 'true'
    # Dedicated cluster configuration
    dedicated:
      paletteClusterDeploymentConfig:
        # Cloud type for deployment
        cloudType: edge-native
        # Node pool requirements
        nodePoolRequirements:
          # Control plane configuration
          controlPlanePool:
            nodeCount: 3
            workerNodeEligible: false
            architecture: amd64
            cpu:
              cpuCount: 4
            memory:
              memoryMiB: 16384
            labels:
              node-role: control-plane
            annotations:
              description: 'Control plane node'
          # Worker pools
          workerPools:
            - name: cpu-pool
              architecture: amd64
              cpu:
                cpuCount: 8
              memory:
                memoryMiB: 32768
              labels:
                workload-type: cpu-intensive
              annotations:
                description: 'CPU worker pool'
            - name: gpu-pool
              architecture: amd64
              cpu:
                cpuCount: 16
              memory:
                memoryMiB: 65536
              gpu:
                family: NVIDIA-A100
                gpuCount: 2
                gpuMemory: 40960
              labels:
                workload-type: gpu-training
              annotations:
                description: 'GPU worker pool for ML training'
        # Deletion policy - what happens when ComputePool is deleted
        deletionPolicy: delete
        # Optional: SSH keys for cluster node access
        sshKeys:
          - 'ssh-rsa AAAAB3Nza... your-ssh-key'
        # Edge configuration (overrides ComputeConfig defaults)
        edge:
          # Required: Virtual IP for cluster control plane
          vip: '10.10.162.130'
          # Optional: NTP servers for time synchronization
          ntpServers:
            - time.google.com
            - time.cloudflare.com
          # Optional: Network overlay configuration
          networkOverlayConfig:
            enabled: false
            staticIp: false
            cidr: '192.168.1.0/24'
            overlayNetworkType: VXLAN
          # Optional: Two-node deployment mode
          isTwoNode: false
```
All configuration shown in the sections below fits into the structure shown in this complete example. Pay special attention to the nesting hierarchy to avoid placement errors.
Key nesting paths (a bare skeleton follows this list):
- Control plane config: `spec.clusterVariant.dedicated.paletteClusterDeploymentConfig.nodePoolRequirements.controlPlanePool`
- Worker pools config: `spec.clusterVariant.dedicated.paletteClusterDeploymentConfig.nodePoolRequirements.workerPools`
- Edge config: `spec.clusterVariant.dedicated.paletteClusterDeploymentConfig.edge`
- Deletion policy: `spec.clusterVariant.dedicated.paletteClusterDeploymentConfig.deletionPolicy`
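As a quick orientation aid, the skeleton below shows only those nesting paths, with all other fields omitted. It mirrors the complete example above and introduces no new fields.

```yaml
spec:
  profileBundleRef: {}              # ProfileBundle reference
  clusterVariant:
    dedicated:
      paletteClusterDeploymentConfig:
        nodePoolRequirements:
          controlPlanePool: {}      # control plane config
          workerPools: []           # worker pools config
        deletionPolicy: delete      # deletion policy
        edge: {}                    # edge config
```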
Control Plane Configuration
Use `nodePoolRequirements.controlPlanePool` to configure the control plane node pool.

- Node count: `1`, `3`, or `5`.
- Worker eligibility: For single-node clusters (`nodeCount: 1`), you must set `workerNodeEligible: true`. Without this setting, single-node clusters cannot schedule workloads and will not become Ready (a scheduling check follows the examples below). For multi-node clusters (3 or 5 nodes), set this to `false` to keep the control plane dedicated.
- Architecture: `amd64` or `arm64`.
- Resource requirements: Set `cpu.cpuCount` and optional `memory.memoryMiB`.
Example for a single-node cluster:
```yaml
controlPlanePool:
  nodeCount: 1
  workerNodeEligible: true
  architecture: amd64
  cpu:
    cpuCount: 4
  memory:
    memoryMiB: 16384
```

Example for a multi-node cluster:

```yaml
controlPlanePool:
  nodeCount: 3
  workerNodeEligible: false
  architecture: amd64
  cpu:
    cpuCount: 4
  memory:
    memoryMiB: 16384
```
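If a single-node cluster stays NotReady, one way to confirm that the control plane node actually accepts workloads is to check it for scheduling taints once you have kubeconfig access. This is a generic Kubernetes check rather than a PaletteAI-specific command, and the node name is a placeholder.

```shell
# List nodes and confirm the single node reports Ready
kubectl get nodes

# Inspect the node for control-plane taints that block scheduling
# (replace <node-name> with the name reported above)
kubectl describe node <node-name> | grep -i taint
```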
Worker Pool Configuration
Use `nodePoolRequirements.workerPools` to define worker pools.

- Create multiple worker pools to separate workload types (for example, CPU-only and GPU-enabled).
- Each worker pool can have different resource requirements.
- Set `gpu` values for GPU-enabled worker pools.
```yaml
workerPools:
  - name: cpu-pool
    architecture: amd64
    cpu:
      cpuCount: 4
  - name: gpu-pool
    architecture: amd64
    cpu:
      cpuCount: 8
    gpu:
      family: NVIDIA-A100
      gpuCount: 2
      gpuMemory: 40960
```
GPU fields:
- `family`. GPU family name (for example, `NVIDIA-A100`).
- `gpuCount`. Number of GPUs per node.
- `gpuMemory`. GPU memory in MiB.
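For example, the `gpuMemory: 40960` value used in the sample manifests corresponds to a 40 GiB card (40 × 1024 MiB = 40960 MiB).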
Edge Configuration
Use the `edge` block to configure edge clusters.
VIP Requirements
The Virtual IP (VIP) is the control plane endpoint for the Kubernetes cluster. Proper VIP planning is critical for successful cluster deployment.
VIP Planning Checklist:
Before provisioning a Compute Pool, complete the following VIP requirements:
- Reserve the VIP from your network team or IPAM system. The VIP must be allocated and documented before cluster creation.
- Verify network placement:
  - The VIP must be in the same Layer 2 network segment as the control plane nodes.
  - The VIP must be reachable from all edge hosts that will join the cluster.
  - The VIP must not be in a CIDR range routed through a proxy.
- Confirm VIP availability:
  - The VIP must not be assigned to any other device or cluster.
  - Verify no IP conflicts exist using Address Resolution Protocol (ARP) or ping tests.
- VIP format:
  - The VIP can be an IPv4 address (for example, `10.10.162.130`) or a Fully Qualified Domain Name (FQDN) that resolves to an IP address.
  - Ensure DNS resolution is configured if using an FQDN.
- VIP advertisement mechanism:
  - PaletteAI edge clusters use kube-vip to advertise the VIP using Layer 2 ARP or BGP.
  - Layer 2 (ARP) mode is the default and most common configuration:
    - The VIP is advertised via ARP (Address Resolution Protocol).
    - Requires that control plane nodes are on the same L2 network segment.
    - Confirm that your network allows ARP traffic between control plane nodes.
    - No additional configuration is needed in the ComputePool manifest.
  - BGP mode (advanced) requires additional Palette cluster profile configuration:
    - The VIP is advertised via BGP peering with network routers.
    - Requires BGP configuration in the Palette cluster profile (not in the ComputePool manifest).
    - Use when control plane nodes span multiple L2 segments.
    - Requires coordination with your network team for BGP peering setup.
  - How to determine which mode is used (see the check after this list):
    - By default, kube-vip operates in Layer 2 (ARP) mode.
    - BGP mode must be explicitly configured in the Palette infrastructure profile.
    - Check your ProfileBundle's referenced Palette cluster profile for kube-vip BGP settings.
    - If no BGP configuration exists, Layer 2 mode is active.
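If you need to confirm which mode is active on a running cluster, one option is to inspect the kube-vip pod specification in the `kube-system` namespace and look for BGP-related settings. This is a generic inspection sketch; the exact pod name and setting names depend on the kube-vip version packaged in your cluster profile.

```shell
# Find the kube-vip pod(s) on the cluster
kubectl get pods -n kube-system | grep kube-vip

# Inspect the pod spec for BGP-related configuration
# (replace <kube-vip-pod> with a name from the previous command)
kubectl get pod <kube-vip-pod> -n kube-system -o yaml | grep -i bgp

# No BGP entries typically indicates Layer 2 (ARP) mode
```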
Pre-flight VIP validation:
Before applying the Compute Pool manifest, validate the VIP from each edge host:
```shell
# Verify VIP is not in use (should time out or fail)
ping -c 3 <vip-address>

# Check ARP table for conflicts (should return no results)
arp -a | grep <vip-address>

# Verify VIP is in same subnet as control plane nodes
ip route get <vip-address>

# If using FQDN, verify DNS resolution
nslookup <vip-fqdn>
```
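To run these checks consistently across several edge hosts, you can wrap them in a small loop over SSH. This is a convenience sketch, assuming you have SSH access to each host; the VIP and host names below are placeholders for your own values.

```shell
#!/usr/bin/env bash
# Illustrative values; replace the VIP and host list with your own
VIP="10.10.162.130"
HOSTS="edge-host-1 edge-host-2 edge-host-3"

for host in ${HOSTS}; do
  echo "=== Checking ${VIP} from ${host} ==="
  # A response here means the VIP is already in use somewhere
  ssh "${host}" "ping -c 3 ${VIP} && echo 'WARNING: VIP responded' || echo 'OK: VIP not answering'"
  # Confirm the VIP routes over a local interface rather than a gateway
  ssh "${host}" "ip route get ${VIP}"
done
```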
VIP constraints:
- The VIP is immutable after cluster creation. You cannot change the VIP without deleting and recreating the cluster (a read-back check follows this list).
- The VIP must be unique across all Compute Pools in your environment.
- The VIP must not be used by any other device, service, or load balancer on your network.
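Because the VIP cannot be changed in place, it is worth confirming the value on an existing Compute Pool before planning any change. A sketch of one way to read it back, assuming the resource is exposed to kubectl as `computepool` (the plural or short name may differ in your installation):

```shell
kubectl get computepool ml-cluster -n my-project \
  -o jsonpath='{.spec.clusterVariant.dedicated.paletteClusterDeploymentConfig.edge.vip}'
```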
Example edge configuration with VIP:
```yaml
# This configuration goes under:
# spec.clusterVariant.dedicated.paletteClusterDeploymentConfig.edge
edge:
  vip: '10.10.162.130'
  ntpServers:
    - time.google.com
    - time.cloudflare.com
  networkOverlayConfig:
    enabled: true
    cidr: '192.168.1.0/24'
    overlayNetworkType: VXLAN
```
Optional Edge Settings
YAML path: `spec.clusterVariant.dedicated.paletteClusterDeploymentConfig.edge`

Additional edge configuration options (all are optional except the VIP; a combined example follows this list):

- NTP servers (`ntpServers`): Configure time synchronization servers. Use this when edge hosts do not have default NTP configuration or when specific time sources are required.

  ```yaml
  edge:
    ntpServers:
      - time.google.com
      - time.cloudflare.com
  ```

- Network overlay (`networkOverlayConfig`): Configure a VXLAN overlay network for pod-to-pod communication across nodes. Use this for advanced networking scenarios or when the underlying network does not support native pod routing.

  ```yaml
  edge:
    networkOverlayConfig:
      enabled: true
      staticIp: false
      cidr: '192.168.1.0/24'
      overlayNetworkType: VXLAN
  ```

  Fields:
  - `enabled` (boolean) - Enable or disable the overlay network.
  - `staticIp` (boolean) - Use static IP assignment for the overlay.
  - `cidr` (string) - CIDR range for the overlay network.
  - `overlayNetworkType` (string) - Type of overlay (typically `VXLAN`).

- SSH keys (`sshKeys`): Configure SSH access to cluster nodes. Use this for operational access and troubleshooting. This field is under `paletteClusterDeploymentConfig` (a sibling of `edge`), not inside `edge`.

  ```yaml
  paletteClusterDeploymentConfig:
    sshKeys:
      - 'ssh-rsa AAAAB3Nza... your-key-1'
      - 'ssh-rsa AAAAB3Nza... your-key-2'
  ```

- Two-node deployment (`isTwoNode`): Enable two-node cluster mode. Special configuration for edge deployments with exactly two nodes.

  ```yaml
  edge:
    isTwoNode: true
  ```
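Putting the optional settings together, a `paletteClusterDeploymentConfig` fragment with every optional edge field populated might look like the following. This is a consolidation of the examples above; the values are illustrative placeholders.

```yaml
paletteClusterDeploymentConfig:
  sshKeys:
    - 'ssh-rsa AAAAB3Nza... your-key-1'
  edge:
    vip: '10.10.162.130'          # required
    ntpServers:
      - time.google.com
    networkOverlayConfig:
      enabled: true
      staticIp: false
      cidr: '192.168.1.0/24'
      overlayNetworkType: VXLAN
    isTwoNode: false
```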
ProfileBundle Reference
`profileBundleRef` defines the infrastructure stack and, optionally, application stacks for deployment. This is a top-level field under `spec`.

YAML path: `spec.profileBundleRef`

Required fields:
- `name` - Name of the ProfileBundle resource.
- `namespace` - Namespace of the ProfileBundle (must match the Project namespace; see the lookup sketch after this list).

Optional fields:
- `cloudType` - Cloud provider type (`edge-native` or `maas`).

ProfileBundle types:
- `infrastructure` - Contains only infrastructure profiles (Kubernetes, networking, storage).
- `fullstack` - Contains both infrastructure and application profiles.
- `application` - Contains only application profiles (for Imported clusters only).
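To confirm which ProfileBundles are available to reference, and that the one you plan to use lives in the same namespace as the ComputePool, you can query the cluster directly. A sketch, assuming the resource is exposed to kubectl under the plural `profilebundles`; adjust the name if your installation registers it differently.

```shell
# List ProfileBundles in the project namespace
kubectl get profilebundles -n my-project
```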
Example:
```yaml
spec:
  profileBundleRef:
    name: edge-profilebundle
    namespace: my-project
    cloudType: edge-native
```
Deletion Policy
`deletionPolicy` controls what happens to the Palette cluster when you delete the Compute Pool resource from PaletteAI.

YAML path: `spec.clusterVariant.dedicated.paletteClusterDeploymentConfig.deletionPolicy`

- `delete` (default). Deletes the cluster from Palette when the Compute Pool is deleted. Use this for ephemeral or development clusters where you want complete cleanup.
- `orphan`. Keeps the cluster running in Palette when the Compute Pool is deleted. The cluster becomes unmanaged by PaletteAI. Use this when you want to preserve the cluster infrastructure but stop managing it through PaletteAI (a policy-switch sketch appears at the end of this section).
Example:
```yaml
spec:
  clusterVariant:
    dedicated:
      paletteClusterDeploymentConfig:
        deletionPolicy: delete
```
Orphan mode cleanup:
When using `deletionPolicy: orphan`, you must manually clean up resources:
- The Kubernetes cluster remains running in Palette and continues to consume infrastructure resources.
- Any workloads deployed by PaletteAI remain running but are no longer managed.
- You must manually delete the cluster from Palette if you no longer need it.
- Edge hosts remain allocated to the cluster until you delete the cluster from Palette.
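If you decide to preserve a cluster that was originally created with the default policy, one option is to switch the policy before deleting the Compute Pool. This is a sketch using a merge patch against the documented field path, assuming the resource is exposed to kubectl as `computepool` and that the controller allows updating this field after creation.

```shell
# Switch the deletion policy to orphan (assumes the field is mutable)
kubectl patch computepool ml-cluster -n my-project --type merge -p \
  '{"spec":{"clusterVariant":{"dedicated":{"paletteClusterDeploymentConfig":{"deletionPolicy":"orphan"}}}}}'

# Then delete the Compute Pool; the Palette cluster keeps running
kubectl delete computepool ml-cluster -n my-project
```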
UI Field to YAML Mapping
The following table maps User Interface wizard fields to their corresponding YAML paths in the ComputePool manifest. Use this when translating UI configurations created in Canvas to Kubernetes manifests for kubectl or GitOps workflows.
| UI Wizard Step | UI Field Name | YAML Path | Notes |
|---|---|---|---|
| General | Compute pool name | metadata.name | Must be unique in namespace |
| General | Description | metadata.annotations.description | Optional annotation |
| General | Labels / Annotations | metadata.labels / metadata.annotations | Key-value pairs |
| Mode | Dedicated resources | spec.clusterVariant.dedicated | Mutually exclusive with shared |
| Profile Bundle | Profile Bundle | spec.profileBundleRef.name | Must exist in same namespace |
| Profile Bundle | Cloud Type | spec.profileBundleRef.cloudType | Typically edge-native |
| Profile Bundle | Version | spec.profileBundleRef.name | To select a specific revision, use the format <profile-bundle-name>@v<revision> (for example, edge-profilebundle@v3). |
| Node Config | Node Count (control plane) | spec.clusterVariant.dedicated.paletteClusterDeploymentConfig.nodePoolRequirements.controlPlanePool.nodeCount | Values: 1, 3, or 5 |
| Node Config | Workload Eligible | spec.clusterVariant.dedicated.paletteClusterDeploymentConfig.nodePoolRequirements.controlPlanePool.workerNodeEligible | Boolean: true for single-node |
| Node Config | Architecture | spec.clusterVariant.dedicated.paletteClusterDeploymentConfig.nodePoolRequirements.controlPlanePool.architecture | Values: amd64, arm64 |
| Node Config | CPU Count | spec.clusterVariant.dedicated.paletteClusterDeploymentConfig.nodePoolRequirements.controlPlanePool.cpu.cpuCount | Integer |
| Node Config | Memory | spec.clusterVariant.dedicated.paletteClusterDeploymentConfig.nodePoolRequirements.controlPlanePool.memory.memoryMiB | Integer in MiB. UI may accept GB notation which is converted. |
| Node Config | Worker Pool Architecture | spec.clusterVariant.dedicated.paletteClusterDeploymentConfig.nodePoolRequirements.workerPools[].architecture | Array element |
| Node Config | Worker Pool CPU Count | spec.clusterVariant.dedicated.paletteClusterDeploymentConfig.nodePoolRequirements.workerPools[].cpu.cpuCount | Integer |
| Node Config | Worker Pool Memory | spec.clusterVariant.dedicated.paletteClusterDeploymentConfig.nodePoolRequirements.workerPools[].memory.memoryMiB | Integer in MiB |
| Node Config | Min Worker Nodes | spec.clusterVariant.dedicated.paletteClusterDeploymentConfig.nodePoolRequirements.workerPools[].minWorkerNodes | Minimum nodes to provision |
| Node Config | GPU Family | spec.clusterVariant.dedicated.paletteClusterDeploymentConfig.nodePoolRequirements.workerPools[].gpu.family | e.g., NVIDIA-A100 |
| Node Config | GPU Count | spec.clusterVariant.dedicated.paletteClusterDeploymentConfig.nodePoolRequirements.workerPools[].gpu.gpuCount | Integer |
| Node Config | GPU Memory | spec.clusterVariant.dedicated.paletteClusterDeploymentConfig.nodePoolRequirements.workerPools[].gpu.gpuMemory | Integer in MiB |
| Deployment | Deletion Policy | spec.clusterVariant.dedicated.paletteClusterDeploymentConfig.deletionPolicy | Values: delete, orphan |
| Deployment | Settings Ref | spec.clusterVariant.dedicated.paletteClusterDeploymentConfig.settingsRef.name | Settings resource name used for Palette integration. |
| Deployment | SSH Keys | spec.clusterVariant.dedicated.paletteClusterDeploymentConfig.sshKeys | Array of SSH public keys |
| Deployment | VIP | spec.clusterVariant.dedicated.paletteClusterDeploymentConfig.edge.vip | Required. Immutable. |
| Deployment | Network Overlay (enabled) | spec.clusterVariant.dedicated.paletteClusterDeploymentConfig.edge.networkOverlayConfig.enabled | Boolean |
| Deployment | Network Overlay CIDR | spec.clusterVariant.dedicated.paletteClusterDeploymentConfig.edge.networkOverlayConfig.cidr | CIDR notation |
| Deployment | Overlay Network Type | spec.clusterVariant.dedicated.paletteClusterDeploymentConfig.edge.networkOverlayConfig.overlayNetworkType | Typically VXLAN |
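After translating a UI configuration into a manifest, the usual kubectl or GitOps workflow is to apply it and then watch the resource until the underlying cluster is provisioned. A minimal sketch, assuming the manifest is saved as `computepool.yaml` and that the resource is exposed to kubectl as `computepool`:

```shell
# Apply the ComputePool manifest to the project namespace
kubectl apply -f computepool.yaml

# Watch the resource until the cluster reports Ready
kubectl get computepool ml-cluster -n my-project -w
```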