Prepare Helm Chart Values
In this section, you will prepare the necessary Helm chart values for the PaletteAI cluster. The Helm chart is used to deploy PaletteAI on the cluster. Sections that are not covered in this guide can be left as is.
To learn more about each section in the Helm chart values file, refer to our Helm Configuration Reference page.
Assumptions
The following assumptions are made in this section:
- PaletteAI will use the Zot registry that comes with the appliance cluster to store your application configurations. You may use a different registry, but for this installation, the built-in registry will be used.

- Hub-spoke mode will be used for the PaletteAI cluster. This is the default mode for the PaletteAI cluster.

- The PaletteAI cluster trusts Dex as the OIDC provider. This prevents the need to configure user impersonation for PaletteAI.

- A Prometheus server will be deployed to the hub cluster during the appliance install step for metrics collection. Alternatively, if you have configured an external Prometheus server, you may use that instead.
Prerequisites
- Completed the Prepare Infrastructure section.

- The ability to host a file server to store the Helm chart values. This can be a local, cloud, or shared file server.

- The network domain that you will use for the PaletteAI cluster. You will need to create a DNS record for the domain that points to the leader node's IP address or a VIP address. You can verify the record with the check shown after this list.

- Information about your OpenID Connect (OIDC) provider. This is used to authenticate users to PaletteAI through Single Sign-On (SSO). You may skip the OIDC configuration if you do not want to use SSO, but be aware that this is not a recommended practice.

- `curl` or `wget` to download the Helm chart values file.

- In your OIDC provider, you must configure your application to allow the following callback URLs. Replace `<domain-or-vip-address>` with the domain or VIP address you configured for the cluster.

  - `https://<domain-or-vip-address>/ai/callback`
  - `https://<domain-or-vip-address>/dex/callback`
  - `https://<domain-or-vip-address>`
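Optionally, you can confirm that the DNS record resolves to the leader node's IP address or the VIP address before continuing. A minimal sketch, assuming the hypothetical domain `palette.example.org` and standard DNS tooling:

```shell
# Resolve the domain and confirm it returns the leader node IP or VIP address.
# Replace palette.example.org with your own domain (hypothetical value shown).
dig +short palette.example.org

# nslookup provides the same information if dig is not installed.
nslookup palette.example.org
```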
Prepare Helm Chart Values
- Download the latest Helm chart values file. This example uses `curl`; a `wget` alternative is shown after these steps.

  ```shell
  curl --output values.yaml --silent https://docs.palette-ai.com/resources/assets/hosted/helm/values.yaml
  ```

- Open the Helm chart values file in a text editor of your choice and complete the following sections. This example uses `vi`.

  ```shell
  vi values.yaml
  ```
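If you prefer `wget`, which the prerequisites list as an alternative to `curl`, the equivalent download command would look like the following sketch.

```shell
# Download the same values file with wget instead of curl.
wget --output-document values.yaml --quiet https://docs.palette-ai.com/resources/assets/hosted/helm/values.yaml
```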
Global
- The `global` configuration is used to configure overarching settings for the PaletteAI deployment. Review and modify the following values as necessary.

  Warning: Make sure the domain has a DNS record that points to the leader node's IP address or a VIP address.

- Set `global.dns.domain` to the primary domain for the deployment. Do not include a protocol. For example, use `example.org`, not `https://example.org`.

  ```yaml
  global:
    dns:
      domain: 'example.acme.org'
  ```

- In `global.auditLogging.basicAuth`, change the default `username` and `password` for audit logging. The session secret is used for encoding and decoding the PaletteAI session cookie. Credentials are not stored in the browser. The cookie is used to map the session to the user so that the server can retrieve the user's credentials.

  ```yaml
  global:
    auditLogging:
      basicAuth:
        username: REPLACE_WITH_YOUR_USERNAME
        password: REPLACE_WITH_YOUR_PASSWORD
  ```

- Configure the metrics collection settings. By default, the appliance deploys a Prometheus server on the hub cluster at port `30090`. Spoke clusters use Prometheus agents to collect metrics and ship them to the Prometheus server via `remote_write`. Set `global.metrics.prometheusBaseUrl` to the domain or VIP address of your leader node with port `30090`. Ensure you do not include any API paths, only the protocol, host, and port.

  ```yaml
  global:
    metrics:
      prometheusBaseUrl: 'https://example.acme.org:30090'
      timeout: '5s'
      scrapeInterval: '15s'
      agentType: 'prometheus-agent-minimal'
      username: ''
      password: ''
  ```

  The `agentType` is set to `prometheus-agent-minimal` by default. This agent collects only spoke cluster CPU and GPU utilization metrics. If you are using an external Prometheus server instead of the hub-based deployment, configure `global.metrics.prometheusBaseUrl` to point to your external Prometheus server's URL (for example, `https://your-external-prometheus:9090`). In this case, you may also change `global.metrics.agentType` to `prometheus-agent` to ship all node-exporter and dcgm-exporter metrics from spoke clusters for comprehensive observability. If you are pointing at an external Prometheus server, you can verify that it is reachable using the check shown after these steps.

  If your Prometheus server requires basic authentication, configure the `username` and `password` fields. Leave these empty if authentication is not required.

  Tip: If you prefer to use an external Prometheus server, you may find the Deploy Monitoring Stack guide helpful for setting up a comprehensive monitoring solution.
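Because the hub-based Prometheus server is only deployed during the appliance install step, a pre-install reachability check mainly applies to an external Prometheus server. A minimal sketch, assuming a hypothetical external server URL; Prometheus exposes a `/-/healthy` endpoint that returns HTTP 200 when the server is up.

```shell
# Expect an HTTP 200 response when the Prometheus server is healthy.
# Add --user 'username:password' if your server requires basic authentication.
curl --insecure --silent --output /dev/null --write-out '%{http_code}\n' \
  https://your-external-prometheus:9090/-/healthy
```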
FleetConfig
- FleetConfig orchestrates multi-cluster management support through Open Cluster Management (OCM). Review and modify the following values as necessary.
- Set `fleetConfig.hub.apiServer` to the API server of the domain or VIP address for the leader node. Ensure that you include the Kubernetes API server port and the communication protocol.

  ```yaml
  fleetConfig:
    hub:
      apiServer: 'https://acme.example.org:6443'
  ```
- Navigate to the `fleetConfig.spokes` section and locate the `hub-as-spoke` spoke. Set `ociRegistry.endpoint` to the endpoint of the Zot registry. Set your domain after the `oci://` protocol and end with port `30003`. Set the `username` for the Zot registry to `admin` and replace `REPLACE_WITH_YOUR_PASSWORD` with the password you want to use. You can optionally confirm that the API server and registry endpoints respond using the checks shown after these steps.

  ```yaml
  fleetConfig:
    spokes:
      - name: hub-as-spoke
        # OCI registry configuration for the spoke cluster.
        ociRegistry:
          endpoint: 'oci://acme.example.org:30003'
          repository: 'mural-workloads'
          insecure: true
          timeout: 60s
          basicAuth:
            username: 'admin'
            password: 'REPLACE_WITH_YOUR_PASSWORD'
          basicAuthSecretName: ''
          certData: {}
  ```
- Use the `fleetConfig.spokeValuesOverrides` section to configure what PaletteAI components are installed on the spoke cluster and what OCI registry is used. Set `enabled` to `true`.

  ```yaml
  fleetConfig:
    spokeValuesOverrides:
      enabled: true
  ```

- Copy and paste the following code block below the `fleetConfig.spokeValuesOverrides.enabled` section. Set the `ociRegistry.endpoint` to the domain or VIP address for the leader node. Use the same value as the `ociRegistry.endpoint` in the `fleetConfig.spokes` section. Replace `REPLACE_WITH_YOUR_PASSWORD` with the same value you set earlier.

  ```yaml
  hue: |
    enabled: true
    clusterType: spoke
    defaultDefinitions:
      enabled: false
    ociRegistry:
      enabled: true
      endpoint: 'oci://acme.example.org:30003'
      repository: 'mural-workloads'
      insecure: true
      interval: 5m
      timeout: 60s
      # basicAuthSecretName: "oci-creds"
      basicAuth:
        username: 'admin'
        password: 'REPLACE_WITH_YOUR_PASSWORD'
  cert-manager: |
    enabled: false
  flux2: |
    enabled: true
    policies:
      create: false
  zot: |
    enabled: false
  ```
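Optionally, you can confirm that the endpoints referenced in the `fleetConfig` section respond from your workstation. A minimal sketch, assuming the hypothetical domain `acme.example.org` and the default ports used in this guide; Zot implements the OCI Distribution Spec, so its `/v2/` endpoint should return HTTP 200 when the registry is up and the credentials are valid.

```shell
# Kubernetes API server: an HTTP 401 or 403 response still confirms the endpoint is reachable.
curl --insecure --silent --output /dev/null --write-out '%{http_code}\n' \
  https://acme.example.org:6443/version

# Zot registry: depending on how the registry is exposed, you may need http:// instead of https://.
curl --insecure --silent --output /dev/null --write-out '%{http_code}\n' \
  --user 'admin:REPLACE_WITH_YOUR_PASSWORD' \
  https://acme.example.org:30003/v2/
```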
Alertmanager
- Navigate to the `alertmanager` section. Update credentials for the `alertmanager` instance based on the credentials you configured in the `global` section.

  You must provide a Base64 encoded string for the `Authorization` header. Generate the Base64 encoded string using the following command. Replace `username` and `password` with the username and password you configured in the `global` section. You can verify the encoded value with the round-trip check shown after these steps.

  ```shell
  echo -n "username:password" | base64
  ```

  The following shows the `livenessProbe` and `readinessProbe` sections with the Base64 encoded string. Replace `REPLACE_WITH_YOUR_BASE64_ENCODED_STRING` with the Base64 encoded string you generated.

  ```yaml
  alertmanager:
    livenessProbe:
      httpGet:
        path: /-/healthy
        port: http
        scheme: HTTPS
        httpHeaders:
          - name: Authorization
            value: 'Basic REPLACE_WITH_YOUR_BASE64_ENCODED_STRING'
    readinessProbe:
      httpGet:
        path: /-/ready
        port: http
        scheme: HTTPS
        httpHeaders:
          - name: Authorization
            value: 'Basic REPLACE_WITH_YOUR_BASE64_ENCODED_STRING'
  ```
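As a quick sanity check, you can round-trip the encoded value to confirm it decodes back to the exact `username:password` pair you configured in the `global` section. A minimal sketch with placeholder credentials:

```shell
# Encode the credentials (placeholder values shown).
ENCODED=$(echo -n "REPLACE_WITH_YOUR_USERNAME:REPLACE_WITH_YOUR_PASSWORD" | base64)
echo "${ENCODED}"

# Decode it again; the output should match the original username:password pair.
# Some systems use -D or --decode instead of -d.
echo -n "${ENCODED}" | base64 -d
```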
Canvas
- Canvas controls part of the user interface. Review and modify the following values as necessary.
- Set `canvas.ingress.enabled` to `true`. Set `canvas.ingress.matchAllHosts` to `true`.

  ```yaml
  canvas:
    ingress:
      enabled: true
      annotations: {}
      ingressClassName: nginx
      matchAllHosts: true
      tls: []
      paths:
        - path: /ai
          pathType: ImplementationSpecific
          backend:
            service:
              name: canvas
              port:
                number: 2999
  ```
- Set `canvas.enableHTTP` to `true`. This supports TLS termination at the load balancer. `canvas.ingress.tls` remains empty as a result.

  ```yaml
  canvas:
    enableHTTP: true
  ```
- In the `canvas.oidc` section, enter a unique string for the `sessionSecret`. A sketch for generating a suitable random secret is shown after these steps. For `redirectURL`, replace `replace.with.your.domain` with the domain or VIP address for the leader node. Do not remove the `/ai/callback` path.

  ```yaml
  canvas:
    oidc:
      sessionSecret: 'REPLACE_WITH_A_UNIQUE_STRING'
      sessionDir: '/app/sessions'
      issuerK8sService: 'https://dex.mural-system.svc.cluster.local:5554/dex'
      skipSSLCertificateVerification: true
      redirectURL: 'https://replace.with.your.domain/ai/callback'
  ```
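The `sessionSecret` should be a long, random value. One way to generate one, assuming `openssl` is available, is sketched below; any other cryptographically random string works as well.

```shell
# Generate a 32-byte random value, hex encoded, to use as the session secret.
openssl rand -hex 32
```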
Dex
- Dex authenticates users to PaletteAI through SSO. You can configure Dex to connect to an upstream OIDC provider or to a local user database. For this installation, you will configure Dex to connect to an upstream OIDC provider. If you want to configure an OIDC provider later, you can do so; however, Dex still requires some basic configuration.
- Set `dex.config.issuer` to the domain or VIP address for the leader node. Do not remove the `/dex` path.

  ```yaml
  dex:
    config:
      issuer: 'https://replace.with.your.domain/dex'
  ```
- This step may be deferred until later, but we strongly recommend configuring at least one connector. Set `dex.config.connectors` to the connectors you want to use. The Dex documentation has examples for each of the connectors.

  Below is an example of an OIDC connector that connects to AWS Cognito. The `oidc` type can be used for any OIDC provider that does not have a native Dex connector. Different OIDC providers may require different configurations. You can confirm that your provider publishes an OIDC discovery document using the check shown after these steps.

  Example AWS Cognito configuration:

  ```yaml
  dex:
    config:
      connectors:
        - type: oidc
          id: aws
          name: AWS Cognito
          config:
            issuer: https://cognito-idp.us-east-1.amazonaws.com/us-east-1_xxxxxx
            clientID: xxxxxxxxxxxxxxx
            clientSecret: xxxxxxxxxxxxxxxxx
            redirectURI: https://replace.with.your.domain/dex/callback # Dex's callback URL for the authorization code flow, which redirects to the application's callback URL
            getUserInfo: true
            userNameKey: email
            insecureSkipEmailVerified: true
            insecureEnableGroups: true
            scopes:
              - openid
              - email
              - profile
            promptType: consent
            claimMapping:
              groups: groups
  ```
- Once you have configured the connectors, proceed to the `dex.config.staticClients` section. Replace `REPLACE_WITH_A_UNIQUE_STRING` with a unique string and `replace.with.your.domain` with the domain or VIP address for the leader node. Do not remove the `/ai/callback` path.

  ```yaml
  dex:
    config:
      staticClients:
        - id: mural
          redirectURIs:
            - 'https://replace.with.your.domain/ai/callback'
          name: 'mural'
          secret: 'REPLACE_WITH_A_UNIQUE_STRING'
          public: false
          trustedPeers:
            - kubernetes
        - id: kubernetes
          redirectURIs:
            - 'https://replace.with.your.domain'
          name: kubernetes
          secret: 'REPLACE_WITH_A_UNIQUE_STRING'
          public: false
          trustedPeers:
            - mural
  ```
- Next, configure the `dex.config.staticPasswords` section. We strongly recommend changing the default user and password to strong values. The following example shows the default user and password, with the password stored as a bcrypt hash. Remember to use a bcrypt hash generator to generate the password hash; a sketch is shown after these steps. The `userID` can be any unique string.

  Warning: If you did not configure any OIDC connectors, you must configure at least one static user, which is used to access the PaletteAI UI. Static Dex users automatically inherit admin privileges through the service account. Dex does not support groups for local static users. To use groups for local static users, you must use the User Impersonation feature.

  ```yaml
  dex:
    config:
      staticPasswords:
        - email: 'admin@example.com'
          hash: '$2a$12$Ot2dJ0pmdIC2oXUDW/Ez1OIfhkSzLZIbsumsxkByuU3CUr02DtiC.'
          username: 'admin'
          userID: '08a8684b-db88-4b73-90a9-3cd1661f5466'
  ```
- Lastly, configure the `dex` and `dexIngress` sections to expose Dex. PaletteAI replaces the default Dex ingress configuration with its own custom configuration to circumvent a known issue. Set `dex.ingress.enabled` to `false` and `dexIngress.enabled` to `true`. Within `dexIngress`, set `matchAllHosts` to `true`. Do not remove the `/dex` path. Because TLS is terminated at the load balancer, the `tls` section is empty.

  ```yaml
  dex:
    ingress:
      enabled: false

  dexIngress:
    enabled: true
    annotations: {}
    className: 'nginx'
    matchAllHosts: true
    hosts:
      - host: ''
        paths:
          - path: /dex
            pathType: ImplementationSpecific
    tls: []
  ```
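Two optional helpers for this section are sketched below: generating a bcrypt hash for a static password, and confirming that an upstream OIDC issuer publishes a discovery document. Both assume commonly available tooling (`htpasswd` from the Apache utilities package and `curl`); the issuer URL shown is a hypothetical placeholder.

```shell
# Generate a bcrypt hash for a static Dex password (bcrypt cost 12).
# htpasswd prints "user:hash"; with an empty user, strip the leading colon and newline.
htpasswd -bnBC 12 "" 'REPLACE_WITH_YOUR_PASSWORD' | tr -d ':\n'

# Confirm the upstream OIDC issuer exposes its discovery document.
# Replace the issuer URL with your provider's issuer (hypothetical value shown).
curl --silent https://cognito-idp.us-east-1.amazonaws.com/us-east-1_xxxxxx/.well-known/openid-configuration
```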
Flux2

- Disable Flux. The appliance will stand up its own Flux instance.

  ```yaml
  flux2:
    enabled: false
  ```
Hue

- Hue requires OCI registry information to work correctly. Use the same values you provided in the `fleetConfig` section.

  ```yaml
  hue:
    ociRegistry:
      enabled: true
      endpoint: 'oci://replace.with.your.domain:30003'
      repository: 'mural-workloads'
      insecure: true
      interval: 5m
      timeout: 60s
      basicAuth:
        username: 'admin'
        password: 'REPLACE_WITH_YOUR_PASSWORD'
  ```
Ingress-Nginx

- Disable Ingress Nginx. The appliance will stand up its own Ingress Nginx instance.

  ```yaml
  ingress-nginx:
    enabled: false
  ```
Zot

- Disable Zot. The appliance will stand up its own Zot instance. The `enabled` parameter is located at the beginning of the `zot` section.

  ```yaml
  zot:
    enabled: false
  ```
This concludes the Helm chart configuration for the PaletteAI appliance. The next step is to ensure the YAML file is hosted on a file server and made available on the same network as the PaletteAI appliance. For guidance on how to do this, refer to the following section.

The file server must remain accessible at all times, including post-installation, in order to support Day-2 operations such as upgrading PaletteAI.
Exposing and hosting a file server
There are many options for hosting a file server, ranging from a Python module to an Apache HTTP server. A lightweight option is to use Caddy to host the `values.yaml` file; a Python-based alternative is sketched after the Caddy steps.
- Download and install Caddy. Refer to the Caddy installation guide for more information.

- Navigate to the directory where you want to host the `values.yaml` file. Ensure the `values.yaml` file is in the directory.

- Issue the following command to start the Caddy server and send it to the background. Replace `8080` with the port you want to use for the file server.

  ```shell
  nohup caddy file-server --listen 0.0.0.0:8080 --browse &
  ```

- Using the IP address of the machine where Caddy is running, you can now access the `values.yaml` file at `http://<IP_ADDRESS>:8080/values.yaml`.
Caddy offers additional capabilities, such as authentication and automatic HTTPS. However, these features require additional configuration and are beyond the scope of this guide.
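If you prefer the Python option mentioned above, the standard library `http.server` module can serve the file with no extra installation. A minimal sketch, assuming Python 3 is installed and you run the command from the directory containing `values.yaml`:

```shell
# Serve the current directory on all interfaces at port 8080 and send the server to the background.
nohup python3 -m http.server 8080 --bind 0.0.0.0 &
```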
Validation
- Verify you have properly configured the following sections: `global`, `fleetConfig`, `alertmanager`, `canvas`, `dex`, and `hue`.

- Ensure you have disabled the following components: `flux2`, `ingress-nginx`, and `zot`.

- Lastly, ensure you have a file server that is hosting the `values.yaml` file and that it is accessible at all times, including post-installation. A reachability check is sketched after this list.
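As a final check, you can confirm the hosted file is downloadable from another machine on the same network as the appliance. A minimal sketch, assuming a hypothetical file server address of `192.168.1.50` and port `8080`:

```shell
# --fail makes curl exit non-zero on HTTP errors, so the message reflects the actual result.
curl --fail --silent --output /tmp/values-check.yaml http://192.168.1.50:8080/values.yaml \
  && echo "values.yaml is reachable" \
  || echo "values.yaml could not be downloaded"
```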
Next Steps
You are now ready to proceed to the Deploy PaletteAI guide and begin installing self-hosted Palette and PaletteAI.