Prepare Helm Chart Values
In this section, you will prepare the Helm chart values for the PaletteAI cluster. The Helm chart values are used to deploy Mural and PaletteAI on the cluster. Much of this information is reused from Mural's documentation site, specifically the Install Mural on Kubernetes page.
If you want to learn more about each of the sections in the Helm chart values file, refer to the Helm Configuration Reference page.
This guide walks you through all the sections of the Helm chart values file that need to be configured. Sections not covered in this guide can be left as is.
Assumptions
The following assumptions are made in this section:
- Mural will use the Zot registry that comes with the appliance cluster to store workloads. You may use a different registry, but for this installation, the built-in registry will be used.
- Hub-as-spoke mode will be used for the PaletteAI cluster. This is the default mode for the PaletteAI cluster.
- The PaletteAI cluster trusts Dex as the OIDC provider. This avoids the need to configure User Impersonation for Mural and PaletteAI.
Prerequisites
- Completed the Prepare Infrastructure section.
- The ability to host a file server that will be used to store the Helm chart values. This can be a local file server, a cloud file server, or a shared file server.
- The network domain that you will use for the PaletteAI cluster. You will need to create a DNS record for the domain that points to the leader node's IP address or a VIP address. A quick way to verify the record is shown after this list.
- Information about your OpenID Connect (OIDC) provider. This is used to authenticate users to PaletteAI and Mural through Single Sign-On (SSO). You may skip this if you do not want to use SSO, but be aware that this is not a recommended practice.
- `curl` or `wget` to download the Helm chart values file.
- In your OIDC provider, you must configure your application to allow the following callback URLs:

  - `https://<domain-or-vip-address>/mural/callback`
  - `https://<domain-or-vip-address>/ai/callback`
  - `https://<domain-or-vip-address>/dex/callback`
  - `https://<domain-or-vip-address>`

  Replace `<domain-or-vip-address>` with the domain or VIP address you configured for the cluster.
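Once the DNS record is created, you can confirm that the domain resolves to the expected address before continuing. A minimal check, assuming you are using a domain rather than a raw VIP address and that the `dig` utility is available:

```shell
# Replace example.acme.org with your domain.
# The output should be the leader node's IP address or the VIP address.
dig +short example.acme.org
```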
Prepare Helm Chart Values
- Download the latest Helm chart values file from Mural's documentation site.

```shell
curl --output values.yaml --silent https://docs.mural.sh/resources/assets/hosted/helm/values.yaml
```
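If `curl` is not available, the same file can be downloaded with `wget`, as mentioned in the prerequisites:

```shell
# Equivalent download using wget.
wget --output-document values.yaml https://docs.mural.sh/resources/assets/hosted/helm/values.yaml
```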
- Open the Helm chart values file in a text editor.
Global
- Set `global.dns.domain` to the primary domain for the deployment. Do not include a protocol, e.g., use `example.org`, not `https://example.org`. Next, set `global.dns.rootApp` to `palette-ai`.

  Warning: Make sure the domain has a DNS record that points to the leader node's IP address or a VIP address.
The complete global configuration section is shown below for reference:

```yaml
global:
  dns:
    domain: example.acme.org
    rootApp: palette-ai
```
FleetConfig
- Up next is the FleetConfig section. FleetConfig orchestrates multi-cluster management through Open Cluster Management (OCM). The following values need to be configured.
Set `hub.apiServer` to the API server address of the leader node's domain or VIP address. Ensure that you include the Kubernetes API server port and the communication protocol.

```yaml
hub:
  apiServer: 'https://acme.example.org:6443'
```

Navigate to the `spokes` section and locate the `hub-as-spoke` spoke. Set the `ociRegistry.endpoint` to the endpoint of the Zot registry. Add your domain after the `oci://` protocol and end with port `:30003`. During the installation, you will be able to set a password for the Zot registry. Replace `REPLACE_WITH_YOUR_PASSWORD` with the password you want to use. The Zot user will be `admin`.

```yaml
spokes:
  - name: hub-as-spoke
    # OCI registry configuration for the spoke cluster.
    ociRegistry:
      endpoint: 'oci://acme.example.org:30003'
      repository: 'mural-workloads'
      insecure: true
      timeout: 60s
      basicAuth:
        username: 'admin'
        password: 'REPLACE_WITH_YOUR_PASSWORD'
      basicAuthSecretName: ""
      certData: {}
```

The last portion of the FleetConfig section is the `spokeValuesOverrides` section. Set the `enabled` value to `true`.

```yaml
spokeValuesOverrides:
  enabled: true
```

Copy the snippet below and paste it after the `spokeValuesOverrides.enabled` section. Set the `ociRegistry.endpoint` to the domain or VIP address for the leader node. Use the same value as the `ociRegistry.endpoint` in the `spokes` section, and replace `REPLACE_WITH_YOUR_PASSWORD` with the same value you set earlier.
```yaml
spokeValuesOverrides:
  enabled: true
  hue: |
    enabled: true
    clusterType: spoke
    defaultDefinitions:
      enabled: false
    ociRegistry:
      enabled: true
      endpoint: 'oci://acme.example.org:30003'
      repository: 'mural-workloads'
      insecure: true
      interval: 5m
      timeout: 60s
      # basicAuthSecretName: "oci-creds"
      basicAuth:
        username: 'admin'
        password: 'REPLACE_WITH_YOUR_PASSWORD'
  cert-manager: |
    enabled: false
  flux2: |
    enabled: true
    policies:
      create: false
  zot: |
    enabled: false
```
`spokeValuesOverrides` is how you configure which Mural components will be installed on the spoke cluster and which OCI registry will be used.
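If you have not chosen a registry password yet, one option is to generate a random value and use it wherever `REPLACE_WITH_YOUR_PASSWORD` appears. A minimal sketch, assuming `openssl` is available:

```shell
# Generates a 24-byte random password, base64 encoded.
openssl rand -base64 24
```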
Canvas
- Navigate to the Canvas section. Canvas is the Mural user interface. The following values need to be configured.
Set `ingress.enabled` to `true`. Set `ingress.domain` to the domain or VIP address for the leader node.

```yaml
ingress:
  enabled: true
  annotations: {}
  ingressClassName: nginx
  domain: example.acme.org
  tls: []
  paths:
    - path: /mural
      pathType: ImplementationSpecific
      backend:
        service:
          name: canvas
          port:
            number: 2999
```

Set `enableHTTP` to `true`.

```yaml
enableHTTP: true
```

The last portion of Canvas to modify is the `oidc` section. Replace `REPLACE_WITH_A_UNIQUE_STRING` with a unique string and `replace.with.your.domain` with the domain or VIP address for the leader node.

```yaml
oidc:
  sessionSecret: 'REPLACE_WITH_A_UNIQUE_STRING'
  sessionDir: '/app/sessions'
  issuerK8sService: 'https://dex.mural-system.svc.cluster.local:5554/dex'
  skipSSLCertificateVerification: true
  redirectURL: 'https://replace.with.your.domain/mural/callback'
```
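The `sessionSecret` only needs to be a long, unpredictable string. One way to generate one, assuming `openssl` is available:

```shell
# Generates a 64-character hexadecimal string to use as a session secret.
openssl rand -hex 32
```

Generate a fresh value for each `REPLACE_WITH_A_UNIQUE_STRING` placeholder rather than reusing the same string.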
Cert-manager
- Disable cert-manager. The appliance will set up its own cert-manager instance. The `enabled` parameter is located at the end of the cert-manager section.

```yaml
cert-manager:
  enabled: false
```
Dex
- Navigate to the Dex section. Dex authenticates users to PaletteAI and Mural through Single Sign-On (SSO). You can configure Dex to connect to an upstream OIDC provider or to a local user database. For this installation, you will configure Dex to connect to an upstream OIDC provider. If you want to configure an OIDC provider later, you can do so; however, Dex still requires some basic configuration.
Set `config.issuer` to the domain or VIP address for the leader node. Do not remove the `/dex` path.

```yaml
config:
  issuer: 'https://replace.with.your.domain/dex'
```

This next part may be deferred until later, but we strongly recommend configuring at least one connector. Set `config.connectors` to the connectors you want to use. The Dex documentation has examples for each of the connectors.

Below is an example of an OIDC connector that connects to AWS Cognito. The `oidc` type can be used for any OIDC provider that does not have a native Dex connector. Different OIDC providers may require different configurations.

```yaml
connectors:
  - type: oidc
    id: aws
    name: AWS Cognito
    config:
      issuer: https://cognito-idp.us-east-1.amazonaws.com/us-east-1_xxxxxx
      clientID: xxxxxxxxxxxxxxx
      clientSecret: xxxxxxxxxxxxxxxxx
      redirectURI: https://replace.with.your.domain/dex/callback # Dex's callback URL for the authorization code flow; it redirects to our application's callback URL
      getUserInfo: true
      userNameKey: email
      insecureSkipEmailVerified: true
      insecureEnableGroups: true
      scopes:
        - openid
        - email
        - profile
      promptType: consent
      claimMapping:
        groups: groups
```
Once you have configured the connectors, you can move on to the `staticClients` section. Replace `REPLACE_WITH_A_UNIQUE_STRING` with a unique string and `replace.with.your.domain` with the domain or VIP address for the leader node. Do not remove the `/mural/callback` or the `/ai/callback` paths.

```yaml
staticClients:
  - id: mural
    redirectURIs:
      - 'https://replace.with.your.domain/mural/callback'
    name: 'mural'
    secret: 'REPLACE_WITH_A_UNIQUE_STRING'
    public: false
    trustedPeers:
      - kubernetes
      - palette-ai
  - id: kubernetes
    redirectURIs:
      - 'https://replace.with.your.domain'
    name: kubernetes
    secret: 'REPLACE_WITH_A_UNIQUE_STRING'
    public: false
    trustedPeers:
      - mural
      - palette-ai
  - id: palette-ai
    redirectURIs:
      - 'https://replace.with.your.domain/ai/callback'
    name: palette-ai
    secret: 'REPLACE_WITH_A_UNIQUE_STRING'
    public: false
    trustedPeers:
      - mural
      - kubernetes
```

Next, configure the `staticPasswords` section. We recommend setting the password to a strong value. Below is the default user and password in bcrypt format. We strongly recommend changing the default user and password. Remember to use a bcrypt hash generator to generate the password hash; an example command is shown after the snippet below. The `userID` can be any unique string.

Warning: If you did not configure any OIDC connectors, you must configure at least one static user. This is how you will access the PaletteAI UI and Mural UI. These static Dex users automatically inherit admin privileges through the service account. Dex does not support groups for local static users. A workaround is available through the User Impersonation feature.

```yaml
staticPasswords:
  - email: 'admin@example.com'
    hash: '$2a$12$Ot2dJ0pmdIC2oXUDW/Ez1OIfhkSzLZIbsumsxkByuU3CUr02DtiC.'
    username: 'admin'
    userID: '08a8684b-db88-4b73-90a9-3cd1661f5466'
```
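One common way to generate a bcrypt hash, assuming the `htpasswd` utility (from the Apache `apache2-utils`/`httpd-tools` package) is installed:

```shell
# Prints a bcrypt hash (cost factor 12) of the given password; paste the output into the hash field.
htpasswd -bnBC 12 "" 'REPLACE_WITH_YOUR_PASSWORD' | tr -d ':\n'
```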
Lastly, configure the `ingress` section. Replace `replace.with.your.domain` with the domain or VIP address for the leader node. Do not remove the `/dex` path.

```yaml
ingress:
  enabled: true
  className: 'nginx'
  annotations: {}
  hosts:
    - host: replace.with.your.domain
      paths:
        - path: /dex
          pathType: ImplementationSpecific
```
Flux
- Disable Flux. The appliance will stand up its own Flux instance.

```yaml
flux2:
  enabled: false
```
Hue
- Hue requires OCI registry information to work correctly. Use the same values you provided in the FleetConfig section.

```yaml
hue:
  ociRegistry:
    enabled: true
    endpoint: 'oci://replace.with.your.domain:30003'
    repository: 'mural-workloads'
    insecure: true
    interval: 5m
    timeout: 60s
    basicAuth:
      username: 'admin'
      password: 'REPLACE_WITH_YOUR_PASSWORD'
```
Ingress-nginx
- Disable Ingress-nginx. The appliance will stand up its own Ingress-nginx instance.

```yaml
ingress-nginx:
  enabled: false
```
PaletteAI
- The PaletteAI section will require some configuration.
Next, navigate to the `ui` section and configure its `ingress` section. Set `enabled` to `true`. Set `domain` to the domain or VIP address for the leader node. Do not remove the `/ai` path.

```yaml
ingress:
  enabled: true
  annotations: {}
  ingressClassName: nginx
  domain: replace.with.your.domain
  tls: []
  paths:
    - path: /ai
      pathType: ImplementationSpecific
      backend:
        service:
          name: palette-ai
          port:
            number: 3999
```

Set `enableHTTP` to `true`.

```yaml
enableHTTP: true
```

The last portion of the PaletteAI section is the `oidc` section. Replace `REPLACE_WITH_A_UNIQUE_STRING` with a unique string and `replace.with.your.domain` with the domain or VIP address for the leader node. Do not remove the `/ai/callback` path.

```yaml
oidc:
  sessionSecret: 'REPLACE_WITH_A_UNIQUE_STRING'
  sessionDir: '/app/sessions'
  issuerK8sService: 'https://dex.mural-system.svc.cluster.local:5554/dex'
  skipSSLCertificateVerification: true
  redirectURL: 'https://replace.with.your.domain/ai/callback'
```
Zot
- Disable Zot. The appliance will stand up its own Zot instance. The `enabled` parameter is located at the very end of the Zot section.

```yaml
zot:
  enabled: false
```

That concludes the Helm chart configuration for the PaletteAI appliance. The next step is to host the YAML file on a file server that is reachable on the same network where the PaletteAI appliance will be deployed. If you need guidance on how to do this, review the following section.
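Before hosting the file, it is worth confirming that your edits still produce valid YAML. A minimal check, assuming Python 3 with the PyYAML package is available:

```shell
# Prints a confirmation if values.yaml parses; prints a traceback if the YAML is malformed.
python3 -c "import yaml; yaml.safe_load(open('values.yaml')); print('values.yaml is valid YAML')"
```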
Exposing and Hosting a File Server
There are many options for hosting a file server, ranging from a Python module to an Apache HTTP server. A lightweight option is to use Caddy to host the values.yaml file.
If you download and install Caddy, you can get started in a few steps.
- Download and install Caddy. Refer to the Caddy installation guide for more information.
- Navigate to the directory where you want to host the values.yaml file. Ensure the values.yaml file is in the directory.
- Issue the following command to start the Caddy server and send it to the background. Replace `8080` with the port you want to use for the file server.

```shell
nohup caddy file-server --listen 0.0.0.0:8080 --browse &
```

- Using the IP address of the machine where Caddy is actively running, you can now access the values.yaml file at `http://<IP_ADDRESS>:8080/values.yaml`.
You can do much more with Caddy, such as authentication and automatic HTTPS. However, these features require additional configuration and are beyond the scope of this guide.
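As noted above, a Python module can serve the file just as well. A minimal alternative, assuming Python 3 is installed, run from the directory that contains values.yaml:

```shell
# Serves the current directory over HTTP on port 8080; send it to the background with nohup if needed.
python3 -m http.server 8080 --bind 0.0.0.0
```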
Warning: The file server must remain accessible at all times, including after installation. This is required to support Day-2 operations such as upgrading PaletteAI.
Validation
- Verify that you completed the configuration for the following sections:

  - Global
  - FleetConfig
  - Canvas
  - Dex
  - Hue
  - PaletteAI
- Ensure that you disabled the following components:

  - Cert-manager
  - Flux
  - Ingress-nginx
  - Zot
- Lastly, ensure you have a file server hosting the values.yaml file and that it is accessible at all times, including after installation. A quick reachability check is shown after this list.
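A quick way to confirm the file is reachable from the network, assuming the example port `8080` used in the Caddy sketch:

```shell
# A 200 OK response indicates the file server is serving values.yaml.
curl --head http://<IP_ADDRESS>:8080/values.yaml
```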
Next Steps
You are now ready to proceed to the Deploy Cluster and Install guide and kick off the installation.