Deploy Cluster and Install
This is the final guide in the PaletteAI appliance installation process. In this guide, you will trigger the actual cluster creation process and begin the installation of PaletteAI.
Prerequisites
- Completed the Prepare Infrastructure section.
- Completed the Prepare Helm Chart Values section.
- The OCI password you configured in the Prepare Helm Chart Values section.
- The domain or VIP address you configured in the Prepare Helm Chart Values section.
- `kubectl` installed and available in your PATH.
- `k9s` installed and available in your PATH. This is optional, but it helps with visualizing deployed resources.
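As a quick sanity check, you can confirm both tools are available from your terminal. This is an optional sketch; the exact version output will vary with your installation.

```shell
# Confirm kubectl is installed and on your PATH.
kubectl version --client

# Optional: confirm k9s is installed.
k9s version
```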
Deploy Cluster and Install
- Log in to Local UI on the leader host. Use the IP address of the leader host and port `5080` to access the Local UI. For example, if the IP address of the leader host is `10.10.10.10`, the Local UI will be accessible at `https://10.10.10.10:5080`.
- Verify all linked Edge hosts are ready and the content is synced.

- From the left main Menu, select Cluster.
- Click Create Cluster.
- Assign the cluster a name and provide tags if desired. Click Next.
- The next screen will display the cluster configuration. Review the configuration and click Next.
- The next screen displays a set of fields. Use the table below to help you fill out the required fields.

  | Field | Description | Example |
  | ----- | ----------- | ------- |
  | Ingress Domain | Enter the domain you provided in the Prepare Helm Chart Values section. Do not include the network protocol (http/https). This value will be added to the apiServer's `certSAN` field. Do not add any paths to the URL. | `example.acme.org` |
  | OCI Pack Registry Password | The password for the OCI registry. Use the OCI password you configured in the Prepare Helm Chart Values section. | `Replac3MeWithaGoodPassword!` |
  | Value File URL | The URL of the file server that is hosting the values.yaml file. You can confirm this URL is reachable with the curl check shown after this procedure. | `https://mycompany.com:8443/mural-values.yaml` |

  Take a moment to verify that the desired Mural version is specified in the Mural Install Version field. Use the latest version of Mural, which you can find on the Mural documentation landing page.
- Click Next.
- In the Cluster Config screen, you can specify the following:

  - A Network Time Protocol (NTP) server
  - Any SSH keys
  - The VIP address to assign to the cluster

  Warning: Do not assign the leader node's IP address as the VIP address. It must be a different IP address.

  Click Next to proceed to the Node configuration screen.
- In Node Config, you must add three control plane nodes and three worker nodes. The tables below describe the additional options that you can configure.

  **Node Pool Options**

  **Control Plane Pool Options**

  | Option | Description | Type | Default |
  | ------ | ----------- | ---- | ------- |
  | Node pool name | The name of the control plane node pool. This name is used to identify the node pool. | String | `control-plane-pool` |
  | Allow worker capability | Whether to allow workloads to be scheduled on this control plane node pool. Ensure that this is enabled. | Boolean | True |
  | Additional Kubernetes Node Labels (Optional) | Tags for the node pool in `key:value` format. These tags can be used to filter and search for node pools. | String | No default |
  | Taints | Taints for the node pool in `key=value:effect` format. Taints prevent pods from being scheduled on the nodes in this pool unless they tolerate the taint. | Key = string, Value = string, Effect = string (enum) | No default |

  **Worker Pool Options**

  | Option | Description | Type | Default |
  | ------ | ----------- | ---- | ------- |
  | Node pool name | The name of the worker node pool. This name is used to identify the node pool. | String | `worker-pool` |
  | Additional Kubernetes Node Labels (Optional) | Tags for the node pool in `key:value` format. These tags can be used to filter and search for node pools. | String | No default |
  | Taints | Taints for the node pool in `key=value:effect` format. Taints prevent pods from being scheduled on the nodes in this pool unless they tolerate the taint. | Key = string, Value = string, Effect = string (enum) | No default |

  For an illustration of the taint format, see the example after this procedure.

  **Pool Configuration**

  The following options are available for both the control plane and worker node pools. Configure them to your requirements. You can also remove worker pools if not needed.

  | Option | Description | Type | Default |
  | ------ | ----------- | ---- | ------- |
  | Architecture | The CPU architecture of the nodes. This is used to ensure compatibility with the applications operating on the nodes. | String (enum) | `amd64` |
  | Add Edge Hosts | Click Add Item and select the other hosts that you installed using the ISO. These hosts are added to the node pool. Each pool must contain at least one node. | N/A | Control Plane Pool = Current host selected. Worker Pool = No host selected |
  | NIC Name | The name of the network interface card (NIC) to use for the nodes. Leave on Auto to let the system choose the appropriate NIC, or select one manually from the drop-down menu. | N/A | Auto |
  | Host Name (Optional) | The hostname for the nodes. This is used to identify the nodes in the cluster. A generated hostname is provided automatically, which you can adjust to your requirements. | String | `edge-*` |

- Click Next when you are done adding three control plane nodes and three worker nodes.

- In Review, check that your configuration is correct. If you need to make changes, click any of the sections in the left sidebar to go back and edit the configuration.

  When you are satisfied with your configuration, click Deploy Cluster. This starts the cluster creation process.

  The cluster creation process will take 30 to 45 minutes to complete. You can monitor progress from the Overview tab on the Cluster page in the left Main Menu. The cluster is fully provisioned when the status changes to Running and the health status is Healthy.
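The field table above references a curl check for the values file URL. The following is a minimal sketch of that check, using the example URL from the table; substitute the address of your own file server. If the server uses a self-signed certificate, add `--insecure`, and if it does not support HEAD requests, drop `--head`.

```shell
# Confirm the values.yaml file is reachable before deploying the cluster.
# Replace the URL with your own file server address.
curl --fail --silent --show-error --head https://mycompany.com:8443/mural-values.yaml
```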
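As a point of reference for the `key=value:effect` taint format mentioned in the node pool tables, the same syntax is used by kubectl when tainting a node directly. The node name, key, and value below are hypothetical placeholders; the valid effects in Kubernetes are NoSchedule, PreferNoSchedule, and NoExecute.

```shell
# Example of the key=value:effect taint format. The key (dedicated), value (gpu),
# and node name are placeholders; NoSchedule is one of the valid effects.
kubectl taint nodes <node-name> dedicated=gpu:NoSchedule
```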
Validation
- Log in to Local UI on the leader host.
- Verify the cluster is running and the health status is Healthy. Download the kubeconfig file for the cluster.
- Set up the kubeconfig file to access the cluster. This will allow you to use `kubectl` to access the cluster.

  ```shell
  export KUBECONFIG=path/to/kubeconfig
  ```
- Verify all pods are in the Running state.

  ```shell
  kubectl get pods --all-namespaces
  ```
- Verify you can access the Palette system console using the virtual IP address (VIP) you configured earlier. Open your web browser and go to `https://<vip-address>/system`. Replace `<vip-address>` with the VIP you configured for the cluster.
- Verify you can access PaletteAI using the VIP you configured earlier. Open your web browser and go to `https://<vip-address>/`. Replace `<vip-address>` with the VIP you configured for the cluster.
- Verify you can access Mural using the domain or VIP address you configured earlier. Open your web browser and go to `https://<domain-or-vip-address>/mural`. Replace `<domain-or-vip-address>` with the domain or VIP you configured for the cluster. If you prefer the command line, you can use the curl checks shown after this list.
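Optionally, you can also confirm from the command line that all six nodes joined the cluster and are in the Ready state. This is a sketch that assumes the kubeconfig you exported above.

```shell
# All three control plane nodes and all three worker nodes should report Ready.
kubectl get nodes -o wide
```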
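The following curl commands are a minimal sketch of the endpoint checks above for use from a terminal. `<vip-address>` and `<domain-or-vip-address>` are the same placeholders used in the steps; the `--insecure` flag is included on the assumption that the cluster may serve a self-signed certificate.

```shell
# Each check should print an HTTP status code rather than a connection error.
curl --insecure --silent --output /dev/null --write-out '%{http_code}\n' https://<vip-address>/system
curl --insecure --silent --output /dev/null --write-out '%{http_code}\n' https://<vip-address>/
curl --insecure --silent --output /dev/null --write-out '%{http_code}\n' https://<domain-or-vip-address>/mural
```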
Next Steps
Once you have completed validation, we recommend that you follow the steps outlined below for the respective platform.
Palette
Start by activating the Palette license. There are many additional settings and configurations available in the system console. We recommend you check out the Appliance Installation Next Steps page for more information.
From an end-user perspective, you will need to create a tenant and an API key to access the Palette API, as well as a `settingRef` for your PaletteAI tenant and project.