# Install PaletteAI on Kubernetes
This guide covers installing PaletteAI on self-managed Kubernetes clusters where you have full control over the API server configuration. The deployment uses the hub-as-spoke pattern with Zot as the Open Container Initiative (OCI) registry. Use this guide if installing PaletteAI on:
- Self-managed clusters (kubeadm, k3s, RKE, etc.), except those running on AWS EC2 instances
- Self-hosted (on-premises) Kubernetes deployments
- Edge environments
- Any cluster where you can configure the Kubernetes API server to trust Dex as an OIDC provider
If you are installing PaletteAI on AWS (IaaS or EKS) or GKE, use our dedicated guides instead.
## Prerequisites

- An existing Kubernetes cluster. This is the hub cluster PaletteAI will be installed on.

- Cluster admin rights to the hub cluster.

- The following minimum Kubernetes versions:

  | Cluster Type | Kubernetes Version |
  | ------------ | ------------------ |
  | Hub          | >= 1.32.0          |
  | Spoke        | >= 1.32.0          |

- The following minimum resource requests:

  | Cluster Type | CPU   | Memory  | Storage |
  | ------------ | ----- | ------- | ------- |
  | Hub          | 3388m | 2732 Mi | 10Gi    |
  | Spoke        | 1216m | 972 Mi  | 10Gi    |

- The ability to install the PaletteAI Helm chart, which is hosted publicly via AWS ECR.

- The following binaries installed locally:

  - `curl` or `wget` to download the Helm chart values file.
  - A text editor, such as `vi`, to edit the Helm chart values file.
  - `helm` version >= 3.17.0. You must have network access to the hub cluster's Kubernetes API server from the machine where you will issue the `helm install` command.
  - `kubectl` version >= 1.31.0.

- The `KUBECONFIG` environment variable set to the path of the PaletteAI hub cluster's `kubeconfig` file.

  ```shell
  export KUBECONFIG=<kubeconfig-location>
  ```

- Your hub cluster requires the Kubernetes API server to trust Dex as an identity provider. Dex is deployed as part of the PaletteAI installation. This requirement applies only to the hub cluster, not to the spoke clusters. To learn more, refer to our Configure Kubernetes API Server to Trust OIDC Provider guide.

- Your hub cluster must be able to provision load balancer services. For on-premises or bare-metal clusters, this requires a load balancer implementation such as MetalLB. For cloud-hosted clusters, ensure the appropriate cloud controller manager is configured.
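For context, configuring the API server to trust Dex typically means setting the standard Kubernetes OIDC flags to point at the Dex issuer. The kubeadm `ClusterConfiguration` excerpt below is an illustrative sketch only: the flag names are upstream Kubernetes options, while the claim names and issuer value are assumptions you must adapt to your deployment. The Configure Kubernetes API Server to Trust OIDC Provider guide remains the authoritative reference.

```yaml
# Illustrative kubeadm excerpt, not PaletteAI-specific. Claim names and
# issuer value are assumptions; adapt them to your deployment.
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  extraArgs:
    oidc-issuer-url: 'https://replace.with.your.domain/dex'
    oidc-client-id: 'kubernetes'
    oidc-username-claim: 'email'
    oidc-groups-claim: 'groups'
```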
## Enablement

- Download the latest Helm chart values file. This example uses `curl`.

  ```shell
  curl --output values.yaml --silent https://docs.palette-ai.com/resources/assets/hosted/helm/values.yaml
  ```

- Open the Helm chart values file in a text editor of your choice and complete the following sections. This example uses `vi`.

  ```shell
  vi values.yaml
  ```

### Global
The `global` configuration is used to configure overarching settings for the PaletteAI deployment. Review and modify the following values as necessary.

- Set `global.dns.domain` to the primary domain for the deployment. Do not include a protocol. For example, use `example.org`, not `https://example.org`.

  ```yaml
  global:
    dns:
      domain: 'example.acme.org'
  ```

- In `global.auditLogging.basicAuth`, change the default `username` and `password` for audit logging. The session secret is used for encoding and decoding the PaletteAI session cookie. Credentials are not stored in the browser; the cookie maps the session to the user so that the server can retrieve the user's credentials.

  ```yaml
  global:
    auditLogging:
      basicAuth:
        username: REPLACE_WITH_YOUR_USERNAME
        password: REPLACE_WITH_YOUR_PASSWORD
  ```

- Configure the metrics collection settings. By default, the appliance deploys a Prometheus server on the hub cluster at port `30090`. Spoke clusters use Prometheus agents to collect metrics and ship them to the Prometheus server via `remote_write`. Set `global.metrics.prometheusBaseUrl` to the domain or VIP address of your leader node with port `30090`. Do not include any API paths; specify only the protocol, host, and port.

  ```yaml
  global:
    metrics:
      prometheusBaseUrl: 'https://example.acme.org:30090'
      timeout: '5s'
      scrapeInterval: '15s'
      agentType: 'prometheus-agent-minimal'
      username: ''
      password: ''
  ```

  The `agentType` is set to `prometheus-agent-minimal` by default. This agent collects only spoke cluster CPU and GPU utilization metrics. If you are using an external Prometheus server instead of the hub-based deployment, set `global.metrics.prometheusBaseUrl` to your external Prometheus server's URL, for example, `https://your-external-prometheus:9090`. In this case, you may also change `global.metrics.agentType` to `prometheus-agent` to ship all node-exporter and dcgm-exporter metrics from spoke clusters for comprehensive observability.

  If your Prometheus server requires basic authentication, configure the `username` and `password` fields. Leave these empty if authentication is not required.

  :::tip
  If you prefer to use an external Prometheus server, you may find the Deploy Monitoring Stack guide helpful for setting up a comprehensive monitoring solution.
  :::
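The "protocol, host, and port only" rule for `prometheusBaseUrl` can be sanity-checked mechanically. The snippet below is a small illustrative sketch (the helper name is ours, not part of PaletteAI) that rejects URLs carrying an API path:

```python
from urllib.parse import urlparse

def is_valid_prometheus_base_url(url: str) -> bool:
    """Accept only URLs of the form <protocol>://<host>:<port>, with no API path."""
    parsed = urlparse(url)
    return (
        parsed.scheme in ("http", "https")
        and parsed.hostname is not None
        and parsed.path in ("", "/")
        and not parsed.query
    )

print(is_valid_prometheus_base_url("https://example.acme.org:30090"))         # True
print(is_valid_prometheus_base_url("https://example.acme.org:30090/api/v1"))  # False
```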
### Canvas
Canvas controls the user interface. Review and modify the following values as necessary.

- To configure the ingress for Canvas, set `canvas.ingress.enabled` to `true`. Enter your own domain name for `canvas.ingress.domain`, omitting the HTTP/HTTPS prefix.

  ```yaml
  canvas:
    ingress:
      enabled: true
      annotations: {}
      ingressClassName: nginx
      domain: replace.with.your.domain # No HTTP/HTTPS prefix.
      matchAllHosts: false
      tls: []
      paths:
        - path: /ai
          pathType: ImplementationSpecific
          backend:
            service:
              name: canvas
              port:
                number: 2999
  ```

- Set `canvas.enableHTTP` to `true`. This supports TLS termination at the load balancer. `canvas.ingress.tls` remains empty as a result.

  ```yaml
  canvas:
    enableHTTP: true
  ```
- The last portion of the Canvas configuration is the OIDC configuration. If you defer configuring OIDC for Dex, you may do the same for Canvas and configure it later.

  In the `canvas.oidc` section, enter a unique string for the `sessionSecret`. For `redirectURL`, replace `replace.with.your.domain` with your domain. Do not remove the `/ai/callback` path.

  ```yaml
  canvas:
    oidc:
      sessionSecret: 'REPLACE_WITH_A_UNIQUE_STRING'
      sessionDir: '/app/sessions'
      issuerK8sService: 'https://dex.mural-system.svc.cluster.local:5554/dex'
      skipSSLCertificateVerification: true
      redirectURL: 'https://replace.with.your.domain/ai/callback'
  ```

  If you did not configure your Kubernetes cluster to trust Dex as an OIDC provider, then you must configure the `canvas.impersonationProxy` section to enable user impersonation.

  The example below shows how to map the local Dex user `admin@example.com` to an example Kubernetes group `admin`. Refer to our Configure User Impersonation guide to learn how to configure user impersonation for OIDC groups and other use cases.

  ```yaml
  canvas:
    impersonationProxy:
      enabled: true
      userMode: 'passthrough'
      groupsMode: 'map'
      userMap: {}
      groupMap: {}
      dexGroupMap:
        'admin@example.com': ['admin']
  ```
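Values such as `canvas.oidc.sessionSecret` (and the Dex static client secrets configured later) only need to be unique, hard-to-guess strings. As one way to generate one, a sketch using Python's standard library:

```python
import secrets

# Generate a 32-byte, URL-safe random string suitable for use as
# canvas.oidc.sessionSecret or a Dex static client secret.
print(secrets.token_urlsafe(32))
```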
### Dex
Dex authenticates users to PaletteAI through SSO. You can configure Dex to connect to an upstream OIDC provider or to a local user database. For this installation, you will configure Dex to connect to an upstream OIDC provider. If you want to configure an OIDC provider later, you can do so; however, Dex still requires some basic configuration.

- Set `dex.config.issuer` to your domain. Do not remove the `/dex` path.

  ```yaml
  dex:
    config:
      issuer: 'https://replace.with.your.domain/dex'
  ```
- This part may be deferred, but we strongly recommend configuring at least one connector. Set `dex.config.connectors` to the connectors you want to use. The Dex documentation has examples for each connector.

  Below is an example of an OIDC connector that connects to AWS Cognito. The `oidc` type can be used for any OIDC provider that does not have a native Dex connector. Different OIDC providers may require different configurations.

  ```yaml
  dex:
    config:
      connectors:
        - type: oidc
          id: aws
          name: AWS Cognito
          config:
            issuer: https://cognito-idp.us-east-1.amazonaws.com/us-east-1_xxxxxx
            clientID: xxxxxxxxxxxxxxx
            clientSecret: xxxxxxxxxxxxxxxxx
            redirectURI: https://replace.with.your.domain/dex/callback # Dex's callback URL for the authorization code flow; Dex then redirects to the application's callback URL.
            getUserInfo: true
            userNameKey: email
            insecureSkipEmailVerified: true
            insecureEnableGroups: true
            scopes:
              - openid
              - email
              - profile
            promptType: consent
            claimMapping:
              groups: groups
  ```
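When a connector misbehaves, a quick first check is whether the provider's OIDC discovery document is reachable. Per the OpenID Connect Discovery specification, its URL is derived from the issuer; a small sketch (the helper name is ours):

```python
def discovery_url(issuer: str) -> str:
    """Return the OIDC discovery document URL for an issuer, per
    OpenID Connect Discovery ('/.well-known/openid-configuration')."""
    return issuer.rstrip("/") + "/.well-known/openid-configuration"

# The Cognito issuer shown above (placeholder pool ID).
print(discovery_url("https://cognito-idp.us-east-1.amazonaws.com/us-east-1_xxxxxx"))
```

Fetching that URL should return a JSON document whose `issuer` field exactly matches the `issuer` value in your connector configuration; a mismatch is a common cause of login failures.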
- Proceed to the `dex.config.staticClients` section. Replace `REPLACE_WITH_A_UNIQUE_STRING` with a unique string and `replace.with.your.domain` with your domain. Do not remove the `/ai/callback` path for the `mural` client.

  ```yaml
  dex:
    config:
      staticClients:
        - id: mural
          redirectURIs:
            - 'https://replace.with.your.domain/ai/callback'
          name: 'mural'
          secret: 'REPLACE_WITH_A_UNIQUE_STRING'
          public: false
          trustedPeers:
            - kubernetes
        - id: kubernetes
          redirectURIs:
            - 'https://replace.with.your.domain'
          name: kubernetes
          secret: 'REPLACE_WITH_A_UNIQUE_STRING'
          public: false
          trustedPeers:
            - mural
  ```
- Next, configure the `dex.config.staticPasswords` section. We strongly recommend changing the default user (`admin`) and password (`password`) to strong values. The following example shows the default user and password, with the password as a bcrypt hash. Use a bcrypt hash generator to generate the password hash. The `userID` can be any unique string.

  :::warning
  If you did not configure any OIDC connectors, you must configure at least one static user, which is used to access the PaletteAI UI. Static Dex users automatically inherit admin privileges through the service account. Dex does not support groups for local static users. To use groups for local static users, you must use the User Impersonation feature.
  :::

  ```yaml
  dex:
    config:
      staticPasswords:
        - email: 'admin@example.com'
          hash: '$2a$12$Ot2dJ0pmdIC2oXUDW/Ez1OIfhkSzLZIbsumsxkByuU3CUr02DtiC.'
          username: 'admin'
          userID: '08a8684b-db88-4b73-90a9-3cd1661f5466'
  ```
- Configure the `dex.ingress` section to expose Dex. For `host`, replace `replace.with.your.domain` with your domain. Do not change the `className` or the `path`. Because TLS is terminated at the load balancer, the `tls` section is empty.

  ```yaml
  dex:
    ingress:
      enabled: true
      className: 'nginx'
      annotations: {}
      hosts:
        - host: replace.with.your.domain
          paths:
            - path: /dex
              pathType: ImplementationSpecific
      tls: []
  ```

### Flux2
- Set `flux2.policies.create` to `false` to disable the Flux network policies. If enabled, these policies prevent ingress traffic from reaching their target services.

  ```yaml
  flux2:
    policies:
      create: false
  ```

  :::info
  This step is not required if the hub and all spoke clusters are configured to use a common, external OCI registry. An external OCI registry is configured in the `fleetConfig.spokes[*].ociRegistry` and `hue.ociRegistry` sections of the `values.yaml` file.
  :::

### Ingress-Nginx
- Configure `ingress-nginx.controller.service` to use the HTTP listener and terminate TLS at the load balancer. Change the value for `targetPorts.https` to `http`.

  ```yaml
  ingress-nginx:
    controller:
      service:
        targetPorts:
          http: http
          https: http
  ```

### Helm Install
- Install the `mural-crds` Helm chart. This chart contains the Custom Resource Definitions (CRDs) required by PaletteAI and must be installed before the `mural` Helm chart.

  ```shell
  helm install mural-crds oci://public.ecr.aws/mural/mural-crds --version 0.6.0 \
    --namespace mural-system --create-namespace --wait
  ```

  Example output:

  ```shell
  NAME: mural-crds
  LAST DEPLOYED: Tue May 27 09:34:33 2025
  NAMESPACE: mural-system
  STATUS: deployed
  REVISION: 1
  ```

- Next, install PaletteAI using the `mural` Helm chart, passing the `values.yaml` file you configured in the previous steps.

  ```shell
  helm install mural oci://public.ecr.aws/mural/mural --version 1.0.0 \
    --namespace mural-system --create-namespace --values values.yaml --wait
  ```

  Example output:

  ```shell
  NAME: mural
  LAST DEPLOYED: Tue May 27 09:39:48 2025
  NAMESPACE: mural-system
  STATUS: deployed
  REVISION: 1
  ```

### DNS
- Once PaletteAI is deployed, fetch the `EXTERNAL-IP` of the load balancer deployed by `ingress-nginx-controller`.

  ```shell
  kubectl get service ingress-nginx-controller --namespace mural-system
  ```

  Example output:

  ```shell
  NAME                       TYPE           CLUSTER-IP       EXTERNAL-IP                                                                PORT(S)                      AGE
  ingress-nginx-controller   LoadBalancer   10.104.129.101   a9d221d65b2fd41b3929574458e8ce05-1177779699.us-east-1.elb.amazonaws.com   80:31952/TCP,443:30926/TCP   41m
  ```

- Create a DNS record pointing the `canvas.ingress.domain` you configured in `values.yaml` to the load balancer created by Ingress Nginx. Use an A record for IP addresses or a CNAME/alias record for hostnames, depending on your DNS provider's capabilities.
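Before logging in, you can confirm that the record resolves. A quick sketch using Python's standard library; the hostname shown is the placeholder from the examples above, so substitute your own domain:

```python
import socket

def resolves(hostname: str) -> bool:
    """Return True if the hostname resolves to at least one address."""
    try:
        socket.gethostbyname(hostname)
        return True
    except socket.gaierror:
        return False

# Replace with the domain you configured for canvas.ingress.domain.
print(resolves("replace.with.your.domain"))
```

Keep in mind that newly created DNS records can take time to propagate, so a `False` result immediately after creating the record is not necessarily an error.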
You have now deployed PaletteAI on your Kubernetes cluster. The cluster is configured to trust Dex as an identity provider. Assuming you have configured Dex with an OIDC connector, you can now log in to PaletteAI using your Identity Provider (IdP). Alternatively, you can use the default Dex local user to log in to PaletteAI.

If you need to make changes to PaletteAI, review the Helm Chart Configuration page. You can trigger an upgrade to the PaletteAI installation by updating the `values.yaml` file with the changes you want to make and issuing the following command.

```shell
helm upgrade mural oci://public.ecr.aws/mural/mural --version 1.0.0 \
  --namespace mural-system --values values.yaml --wait
```
## Validate

Take the following steps to verify that PaletteAI is deployed and configured correctly.

- Open a browser and navigate to the domain URL you configured for PaletteAI.
- Log in with the default username and password. If you configured Dex with an OIDC connector, log in with your identity provider instead.
## Next Steps

Once PaletteAI is installed on your cluster, you must integrate Palette with PaletteAI using PaletteAI's Settings resource. This resource requires a Palette tenant, project, and API key to communicate with Palette and deploy AI/ML applications and models to the appropriate location.

Proceed to the Integrate with Palette guide to learn how to prepare your Palette environment.