
Prepare Helm Chart Values

In this section, you will prepare the Helm chart values for the PaletteAI cluster. The Helm chart values are used to deploy Mural and PaletteAI on the cluster. Much of this information is reused from Mural's documentation site, specifically the Install Mural on Kubernetes page.

tip

If you want to learn more about each of the sections in the Helm chart values file, refer to the Helm Configuration Reference page.

This guide walks you through all the sections of the Helm chart values file that need to be configured. Sections not covered in this guide can be left as is.

Assumptions

The following assumptions are made in this section:

  • Mural will use the Zot registry that comes with the appliance cluster to store workloads. You may use a different registry, but for this installation, the built-in registry will be used.

  • Hub-as-spoke mode will be used for the PaletteAI cluster. This is the default mode for the PaletteAI cluster.

  • The PaletteAI cluster trusts Dex as the OIDC provider. This will prevent the need to configure User Impersonation for Mural and PaletteAI.

Prerequisites

  • Completed the Prepare Infrastructure section.

  • The ability to host a file server that will be used to store the Helm chart values. This can be a local file server, a cloud file server, or a shared file server.

  • The network domain that you will use for the PaletteAI cluster. You will need to create a DNS record for the domain that points to the leader node's IP address or a VIP address.

  • Information about your OpenID Connect (OIDC) provider. This is used to authenticate users to PaletteAI and Mural through Single Sign-On (SSO). You may skip this prerequisite if you do not want to use SSO, but be aware that this is not a recommended practice.

  • curl or wget to download the Helm chart values file.

  • In your OIDC provider, you must configure your application to allow the following callback URLs:

    • https://<domain-or-vip-address>/mural/callback
    • https://<domain-or-vip-address>/ai/callback
    • https://<domain-or-vip-address>/dex/callback
    • https://<domain-or-vip-address>

Replace <domain-or-vip-address> with the domain or VIP address you configured for the cluster.
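The callback URL list above can be generated from your domain to avoid typos. A small shell sketch; example.acme.org is a placeholder for your own domain or VIP address:

```shell
# Print the four required callback URLs for a given domain or VIP address.
DOMAIN="example.acme.org"  # placeholder; substitute your own value
for path in /mural/callback /ai/callback /dex/callback ""; do
  echo "https://${DOMAIN}${path}"
done
```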

Prepare Helm Chart Values

  1. Download the latest Helm chart values file from Mural's documentation site.

    curl --output values.yaml --silent https://docs.mural.sh/resources/assets/hosted/helm/values.yaml
  2. Open the Helm chart values file in a text editor.

Global

  1. Set global.dns.domain to the primary domain for the deployment. Do not include a protocol, e.g., use example.org, not https://example.org. Next, set global.dns.rootApp to palette-ai.

    warning

    Make sure the domain has a DNS record that points to the leader node's IP address or a VIP address.

    Click to review the complete global configuration section
    global:
      dns:
        domain: example.acme.org
        rootApp: palette-ai

FleetConfig

  1. Up next is the FleetConfig section. FleetConfig orchestrates multi-cluster management support through Open Cluster Management (OCM). The following values need to be configured.

    Click to review the complete FleetConfig configuration section

    Set hub.apiServer to the Kubernetes API server URL, using the domain or VIP address for the leader node. Ensure that you include the API server port and the communication protocol.

    hub:
      apiServer: 'https://acme.example.org:6443'

    Navigate to the spokes section and locate the hub-as-spoke spoke. Set the ociRegistry.endpoint to the endpoint of the Zot registry. Set your domain after the oci:// protocol and end with port :30003. During the installation, you will be able to set a password for the Zot registry. Replace the REPLACE_WITH_YOUR_PASSWORD with the password you want to use. The Zot user will be admin.

    spokes:
      - name: hub-as-spoke
        # OCI registry configuration for the spoke cluster.
        ociRegistry:
          endpoint: 'oci://acme.example.org:30003'
          repository: 'mural-workloads'
          insecure: true
          timeout: 60s
          basicAuth:
            username: 'admin'
            password: 'REPLACE_WITH_YOUR_PASSWORD'
          basicAuthSecretName: ''
          certData: {}

    The last portion of the FleetConfig section is the spokeValuesOverrides section. Set the enabled value to true.

    spokeValuesOverrides:
      enabled: true

    Copy the snippet below and paste it after the spokeValuesOverrides.enabled section. Set the ociRegistry.endpoint to the domain or VIP address for the leader node, using the same value as the ociRegistry.endpoint in the spokes section. Replace REPLACE_WITH_YOUR_PASSWORD with the same value you set earlier.

    spokeValuesOverrides:
      enabled: true
      hue: |
        enabled: true
        clusterType: spoke
        defaultDefinitions:
          enabled: false
        ociRegistry:
          enabled: true
          endpoint: 'oci://acme.example.org:30003'
          repository: 'mural-workloads'
          insecure: true
          interval: 5m
          timeout: 60s
          # basicAuthSecretName: "oci-creds"
          basicAuth:
            username: 'admin'
            password: 'REPLACE_WITH_YOUR_PASSWORD'
      cert-manager: |
        enabled: false
      flux2: |
        enabled: true
        policies:
          create: false
      zot: |
        enabled: false
info

spokeValuesOverrides is how you configure what Mural components will be installed on the spoke cluster and what OCI registry will be used.

Canvas

  1. Navigate to the Canvas section. This is the Mural User Interface. The following values need to be configured.

    Click to review the complete Canvas configuration section

    Set the ingress.enabled to true. Set the ingress.domain to the domain or VIP address for the leader node.

    ingress:
      enabled: true
      annotations: {}
      ingressClassName: nginx
      domain: example.acme.org
      tls: []
      paths:
        - path: /mural
          pathType: ImplementationSpecific
          backend:
            service:
              name: canvas
              port:
                number: 2999

    Set enableHTTP to true.

    enableHTTP: true

    The last portion of Canvas to modify is the oidc section. Replace REPLACE_WITH_A_UNIQUE_STRING with a unique string and replace.with.your.domain with the domain or VIP address for the leader node.

    oidc:
      sessionSecret: 'REPLACE_WITH_A_UNIQUE_STRING'
      sessionDir: '/app/sessions'
      issuerK8sService: 'https://dex.mural-system.svc.cluster.local:5554/dex'
      skipSSLCertificateVerification: true
      redirectURL: 'https://replace.with.your.domain/mural/callback'
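A unique string for values such as sessionSecret can be produced by any cryptographically random generator. A minimal sketch using Python's standard library:

```python
# Generate a random 64-character hex string suitable for a session secret.
import secrets

print(secrets.token_hex(32))  # 32 random bytes -> 64 hex characters
```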

Cert-manager

  1. Disable cert-manager. The appliance will set up its own cert-manager instance. The enabled parameter is located at the end of the cert-manager section.

    cert-manager:
      enabled: false

Dex

  1. Navigate to the Dex section. Dex authenticates users to PaletteAI and Mural through Single Sign On (SSO). You can configure Dex to connect to an upstream OIDC provider or to a local user database. For this installation, you will configure Dex to connect to an upstream OIDC provider. If you want to configure an OIDC provider later, you can do so; however, Dex still requires some basic configuration.

    Click to review the complete Dex configuration section

    Set the config.issuer to the domain or VIP address for the leader node. Do not remove the /dex path.

    config:
      issuer: 'https://replace.with.your.domain/dex'

    You can defer this next part, but we strongly recommend configuring at least one connector. Set config.connectors to the connectors you want to use. The Dex documentation has examples for each of the connectors.

    Below is an example of an OIDC connector that connects to AWS Cognito. The oidc type can be used for any OIDC provider that does not have a native Dex connector. Different OIDC providers may require different configurations.

    connectors:
      - type: oidc
        id: aws
        name: AWS Cognito
        config:
          issuer: https://cognito-idp.us-east-1.amazonaws.com/us-east-1_xxxxxx
          clientID: xxxxxxxxxxxxxxx
          clientSecret: xxxxxxxxxxxxxxxxx
          redirectURI: https://replace.with.your.domain/dex/callback # Dex's callback URL for the authorization code flow; it redirects to our application's callback URL
          getUserInfo: true
          userNameKey: email
          insecureSkipEmailVerified: true
          insecureEnableGroups: true
          scopes:
            - openid
            - email
            - profile
          promptType: consent
          claimMapping:
            groups: groups

    Once you have configured the connectors, you can move on to the staticClients section. Replace REPLACE_WITH_A_UNIQUE_STRING with a unique string and replace.with.your.domain with the domain or VIP address for the leader node. Do not remove the /mural/callback or the /ai/callback paths.

    staticClients:
      - id: mural
        redirectURIs:
          - 'https://replace.with.your.domain/mural/callback'
        name: 'mural'
        secret: 'REPLACE_WITH_A_UNIQUE_STRING'
        public: false
        trustedPeers:
          - kubernetes
          - palette-ai
      - id: kubernetes
        redirectURIs:
          - 'https://replace.with.your.domain'
        name: kubernetes
        secret: 'REPLACE_WITH_A_UNIQUE_STRING'
        public: false
        trustedPeers:
          - mural
          - palette-ai
      - id: palette-ai
        redirectURIs:
          - 'https://replace.with.your.domain/ai/callback'
        name: palette-ai
        secret: 'REPLACE_WITH_A_UNIQUE_STRING'
        public: false
        trustedPeers:
          - mural
          - kubernetes

    Lastly, configure the staticPasswords section. Below is the default user with its password hash in bcrypt format. We strongly recommend changing the default user and password, and choosing a strong password. Remember to use a bcrypt hash generator to generate the password hash. The userID can be any unique string.

    warning

    If you did not configure any OIDC connectors, you must configure at least one static user. This is how you will access the PaletteAI UI and Mural UI. These static Dex users automatically inherit admin privileges through the service account. Dex does not support groups for local static users. There is a workaround by using the User Impersonation feature.

    staticPasswords:
      - email: 'admin@example.com'
        hash: '$2a$12$Ot2dJ0pmdIC2oXUDW/Ez1OIfhkSzLZIbsumsxkByuU3CUr02DtiC.'
        username: 'admin'
        userID: '08a8684b-db88-4b73-90a9-3cd1661f5466'

    Lastly, configure the ingress section. Replace replace.with.your.domain with the domain or VIP address for the leader node. Do not remove the /dex path.

    ingress:
      enabled: true
      className: 'nginx'
      annotations: {}
      hosts:
        - host: replace.with.your.domain
          paths:
            - path: /dex
              pathType: ImplementationSpecific

Flux

  1. Disable Flux. The appliance will stand up its own Flux instance.

    flux2:
      enabled: false

Hue

  1. Hue requires OCI registry information to work correctly. Use the same values you provided in the FleetConfig section.

    hue:
      ociRegistry:
        enabled: true
        endpoint: 'oci://replace.with.your.domain:30003'
        repository: 'mural-workloads'
        insecure: true
        interval: 5m
        timeout: 60s
        basicAuth:
          username: 'admin'
          password: 'REPLACE_WITH_YOUR_PASSWORD'

Ingress-nginx

  1. Disable Ingress-nginx. The appliance will stand up its own Ingress-nginx instance.

    ingress-nginx:
      enabled: false

PaletteAI

  1. The PaletteAI section will require some configuration.

    Click to review the complete PaletteAI configuration section

    Navigate down to the ui section and configure its ingress section. Set enabled to true. Set domain to the domain or VIP address for the leader node. Do not remove the /ai path.

    ingress:
      enabled: true
      annotations: {}
      ingressClassName: nginx
      domain: replace.with.your.domain
      tls: []
      paths:
        - path: /ai
          pathType: ImplementationSpecific
          backend:
            service:
              name: palette-ai
              port:
                number: 3999

    Set the enableHTTP to true.

    enableHTTP: true

    The last portion of the PaletteAI section is the oidc section. Replace REPLACE_WITH_A_UNIQUE_STRING with a unique string and replace.with.your.domain with the domain or VIP address for the leader node. Do not remove the /ai/callback path.

    oidc:
      sessionSecret: 'REPLACE_WITH_A_UNIQUE_STRING'
      sessionDir: '/app/sessions'
      issuerK8sService: 'https://dex.mural-system.svc.cluster.local:5554/dex'
      skipSSLCertificateVerification: true
      redirectURL: 'https://replace.with.your.domain/ai/callback'

Zot

  1. Disable Zot. The appliance will stand up its own Zot instance. The enabled parameter is located at the end of the Zot section.

    zot:
      enabled: false

    That concludes the Helm chart configuration for the PaletteAI appliance. The next step is to host the YAML file on a file server that is reachable from the network where the PaletteAI appliance will be deployed. If you need guidance on how to do this, expand the following section.

    Exposing and Hosting a File Server

    There are many options for hosting a file server. Options range from using a Python module to using an Apache HTTP server. A lightweight option is to use Caddy to host the values.yaml file.

    With Caddy, you can get started in a few steps.

    1. Download and install Caddy. Refer to the Caddy installation guide for more information.

    2. Navigate to the directory where you want to host the values.yaml file. Ensure the values.yaml file is in the directory.

    3. Issue the following command to start the Caddy server and send it to the background. Replace 8080 with the port you want to use for the file server.

      nohup caddy file-server --listen 0.0.0.0:8080 --browse &

    4. Using the IP address of the machine where Caddy is actively running, you can now access the values.yaml file at http://<IP_ADDRESS>:8080/values.yaml.

    You can do much more with Caddy, such as authentication and automatic HTTPS. However, these features require additional configuration and are beyond the scope of this guide.
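    As noted above, a Python module is another lightweight option. A hypothetical end-to-end demo using Python's built-in http.server module, serving a temporary directory and fetching the file back:

```shell
# Serve a directory over HTTP with Python's http.server and fetch values.yaml back.
# The file contents and port are placeholders; substitute your own values.
DIR=$(mktemp -d)
echo "global: {}" > "$DIR/values.yaml"
cd "$DIR"
python3 -m http.server 8080 --bind 127.0.0.1 >/dev/null 2>&1 &
SERVER_PID=$!
sleep 1
curl -s http://127.0.0.1:8080/values.yaml  # prints the file contents
kill "$SERVER_PID"
```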

    warning

    The file server must remain available and accessible at all times, including post-installation. This is required to support Day-2 operations such as upgrading PaletteAI.

Validation

  1. Verify you completed configuration for the following sections:

    • Global
    • FleetConfig
    • Canvas
    • Dex
    • Hue
    • PaletteAI
  2. Ensure you disabled the following components:

    • Cert-manager
    • Flux
    • Ingress-nginx
    • Zot
  3. Lastly, ensure you have a file server hosting the values.yaml file and that it is accessible at all times, including post-installation.

Next Steps

You are now ready to proceed to the Deploy Cluster and Install guide and kick off the installation.