Deploy PaletteAI
This is the final guide in the PaletteAI appliance installation process. In this guide, you trigger the cluster creation process and install PaletteAI on the resulting cluster.
Prerequisites
- Completed the Prepare Infrastructure section.
- Completed the Prepare Helm Chart Values section.
- The OCI password you configured in the Prepare Helm Chart Values section.
- The domain or VIP address you configured in the Prepare Helm Chart Values section.
- kubectl installed and available in your PATH.
- (Optional) k9s installed and available in your PATH to help visualize deployed resources.
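Before continuing, you can confirm the CLI prerequisites are on your PATH. This is a small sketch, not part of the official procedure; k9s is optional, so a "missing" result for it is not a blocker.

```shell
# Check that the prerequisite CLI tools are available on the PATH.
# k9s is optional, so "missing" there is informational only.
for tool in kubectl k9s; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: missing"
  fi
done
```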
Deploy Cluster and Install PaletteAI
1. Log in to Local UI on the leader host. Use the IP address of the leader host and port 5080 to access Local UI. For example, if the IP address of the leader host is 10.10.10.10, access Local UI at `https://10.10.10.10:5080`.

2. Verify all linked Edge hosts are ready and the content is synced.

3. From the left main menu, select Cluster > Create cluster.

4. Assign the cluster a name and provide tags, if desired. Select Next.

5. The Cluster Profile screen displays the imported cluster configuration. Review the configuration and select Next.

6. Complete all applicable fields on the Profile Config screen. Refer to the following table for guidance on certain fields. Select Next when finished.
   | Field | Description | Example |
   | --- | --- | --- |
   | Root Domain | The domain you provided in the Prepare Helm Chart Values section. Do not include the network protocol (http/https). This value will be added to the apiServer's `certSAN` field. Do not add any paths to the URL. | `example.acme.org` |
   | Ingress Domain | The same value as Root Domain. | `example.acme.org` |
   | OCI Pack Registry Password | The password for the OCI registry. Use the OCI password you configured in the `fleetConfig` and `hue` sections of the PaletteAI Helm chart `values.yaml` file. | `Replac3MeWithaGoodPassword!` |
   | Value File URL | The URL of the file server hosting the `values.yaml` file. | `https://mycompany.com:8443/mural-values.yaml` |
   | Mural Install Version | The desired PaletteAI version. This must match the value used for `global.muralVersion` in your Helm chart. Use semantic versioning. | `0.5.0` |
   | Admin Grafana Password | The password for the Grafana administrator user. This is used to access the Grafana dashboard. | `""` |
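Several of these fields mirror values in the Helm chart `values.yaml` you prepared earlier. The sketch below illustrates that mapping; `global.muralVersion` is the only key path named by this guide, and the password keys under `fleetConfig` and `hue` are hypothetical placeholders for your chart's actual field names.

```yaml
# Sketch of how Profile Config fields map back to values.yaml.
# Only global.muralVersion is named by this guide; the password keys
# below are hypothetical placeholders, not the chart's real field names.
global:
  muralVersion: "0.5.0"                        # must match Mural Install Version
fleetConfig:
  ociPassword: "Replac3MeWithaGoodPassword!"   # placeholder key name
hue:
  ociPassword: "Replac3MeWithaGoodPassword!"   # placeholder; same OCI password
```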
7. On the Cluster Config screen, specify any of the following, as necessary. Select Next when finished.

   - Network Time Protocol (NTP) server
   - SSH keys
   - VIP address to assign to the cluster

   Warning: Do not assign the leader node's IP address as the VIP address. It must be a different IP address.
8. On the Node Config screen, add three control plane nodes and three worker nodes. The following tables describe additional options that you can configure. Select Next when finished.
   Node Pool Options

   Control Plane Pool Options

   | Option | Description | Type | Default |
   | --- | --- | --- | --- |
   | Node pool name | The name of the control plane node pool. This is used to identify the node pool. | String | `control-plane-pool` |
   | Allow worker capability | Allow applications to be scheduled on this control plane node pool. This option must be enabled. | Boolean | `true` |
   | Additional Kubernetes Node Labels | (Optional) Tags for the node pool in `key:value` format. These tags are used to filter and search for node pools. | String | No default |
   | Taints | Taints for the node pool in `key=value:effect` format. Taints prevent pods from being scheduled on the nodes in this pool unless they tolerate the taint. | Key - String; Value - String; Effect - String (enum) | No default |

   Worker Pool Options

   | Option | Description | Type | Default |
   | --- | --- | --- | --- |
   | Node pool name | The name of the worker node pool. This is used to identify the node pool. | String | `worker-pool` |
   | Additional Kubernetes Node Labels | (Optional) Tags for the node pool in `key:value` format. These tags are used to filter and search for node pools. | String | No default |
   | Taints | Taints for the node pool in `key=value:effect` format. Taints prevent pods from being scheduled on the nodes in this pool unless they tolerate the taint. | Key - String; Value - String; Effect - String (enum) | No default |

   Pool Configuration

   The following options are available for both the control plane and worker node pools. Configure these options to your requirements. You can also remove worker pools if not needed.

   | Option | Description | Type | Default |
   | --- | --- | --- | --- |
   | Architecture | The CPU architecture of the nodes. This is used to ensure compatibility with the applications operating on the nodes. | String (enum) | `amd64` |
   | Add Edge Hosts | Select Add Item and choose the other hosts that you installed using the ISO. These hosts will be added to the node pool. Each pool must contain at least one node. | N/A | Control Plane Pool - current host selected; Worker Pool - no host selected |
   | NIC Name | The name of the network interface card (NIC) to use for the nodes. Leave on Auto to let the system choose the appropriate NIC, or select one manually from the drop-down menu. | N/A | Auto |
   | Host Name | (Optional) The hostname for the nodes. This is used to identify the nodes in the cluster. A generated hostname is provided automatically, which you can adjust to fit your requirements. | String | `edge-*` |
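To illustrate the `key=value:effect` taint format above, the snippet below splits an example taint into its parts with plain shell parameter expansion. The taint `dedicated=gpu:NoSchedule` is a hypothetical example, not a required value; in Kubernetes, valid effects are `NoSchedule`, `PreferNoSchedule`, and `NoExecute`.

```shell
# The Taints field uses key=value:effect format. This sketch splits a
# hypothetical example taint into its parts; no cluster is required.
taint="dedicated=gpu:NoSchedule"

key="${taint%%=*}"      # everything before the first "=" -> dedicated
rest="${taint#*=}"      # everything after the first "="  -> gpu:NoSchedule
value="${rest%%:*}"     # everything before the first ":" -> gpu
effect="${rest#*:}"     # everything after the first ":"  -> NoSchedule

echo "key=$key value=$value effect=$effect"
```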
9. On the Review screen, ensure that your configuration is correct. If you need to make changes, select the applicable section on the left sidebar to go back and edit the configuration.

   When you are satisfied with your configuration, select Deploy Cluster to begin the cluster creation process.

   The cluster creation process takes approximately 30 to 45 minutes to complete. You can monitor the cluster's progress by navigating to the left main menu and selecting Cluster > Overview. The cluster is fully provisioned when the status changes to Running and the health status is Healthy.
Validation
1. Log in to Local UI on the leader host.

2. Verify the cluster is running and the health status is Healthy. Download the kubeconfig file for the cluster.

3. Set up the kubeconfig file to access the cluster. This will allow you to use kubectl to access the cluster.

   ```shell
   export KUBECONFIG=path/to/kubeconfig
   ```

4. Verify all pods are in the Running state.

   ```shell
   kubectl get pods --all-namespaces
   ```

5. Verify you can access the following components. Replace `<domain-or-vip-address>` with the domain or VIP you configured for the cluster.

   - Palette system console - `https://<domain-or-vip-address>/system`
   - Palette - `https://<domain-or-vip-address>`
   - PaletteAI - `https://<domain-or-vip-address>/ai`
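When checking pod status, it can help to filter the `kubectl get pods --all-namespaces` output down to pods that are not yet Running. The sketch below does this with awk; the sample input is illustrative so the sketch runs without a live cluster, and the pod names in it are made up.

```shell
# Sketch: print pods whose STATUS column is not Running or Completed.
# With a live cluster, pipe real kubectl output into not_ready instead
# of the illustrative sample below (pod names are made up).
not_ready() {
  awk 'NR > 1 && $4 != "Running" && $4 != "Completed"'
}

sample='NAMESPACE     NAME          READY   STATUS    RESTARTS   AGE
kube-system   coredns-abc   1/1     Running   0          5m
hue           hue-web-xyz   0/1     Pending   0          1m'

printf '%s\n' "$sample" | not_ready
# On the real cluster: kubectl get pods --all-namespaces | not_ready
```

An empty result from the live pipeline means every pod has reached Running (or Completed) and the cluster is ready for the endpoint checks above.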
Next Steps
Once Palette and PaletteAI are installed on your new cluster, you must integrate Palette with PaletteAI using PaletteAI's Settings resource. This resource requires a Palette tenant, project, and API key in order to communicate with Palette and deploy AI/ML applications to the appropriate location.
Proceed to the Integrate with Palette guide to learn how to prepare your Palette environment.