
Set Up GKE Spokes

info

This guide is only required if you are deploying PaletteAI with dedicated spoke clusters separate from your hub cluster. If using the default hub-as-spoke pattern, skip this guide and proceed to Install PaletteAI on GKE.

To deploy PaletteAI on GKE using the dedicated spoke pattern, you must configure additional ClusterRoles and ClusterRoleBindings in each spoke cluster. These grant the hub cluster's FleetConfig controller the permissions it needs to:

  • Create namespaces, service accounts, and secrets
  • Install Open Cluster Management (OCM) CRDs
  • Deploy and manage the Klusterlet agent (OCM's spoke-side component)
  • Configure RBAC for OCM components

Without these permissions, the hub cluster cannot bootstrap OCM on the spoke clusters, and they cannot join the fleet.
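
Once a spoke has been enabled using the steps below, you can review the aggregate permissions granted to the hub controller's identity on that spoke. The following is an illustrative check only; the namespace and service account name shown match the identity used in the Validate section of this guide and may differ in your installation.

    export KUBECONFIG=<path-to-GKE-spoke>
    kubectl auth can-i --list --as=system:serviceaccount:mural-system:fleetconfig-controller-manager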

Prerequisites

  • An existing GKE spoke cluster.
  • Access to the GKE spoke cluster's kubeconfig.
  • kubectl version >= 1.31.0
  • curl version >= 8.5.0 (version checks for both tools are shown after this list)
  • A text editor of your choice (for example, vi or nano)
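
To confirm that your local tooling meets the version requirements listed above, you can run checks such as the following:

    kubectl version --client
    curl --version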

Enablement

  1. Download the scripts required for this setup to a gke-scripts directory by issuing the commands below.

    mkdir -p gke-scripts
    curl --output gke-scripts/spoke.sh https://docs.palette-ai.com/resources/assets/hosted/scripts/gke/spoke.sh
  2. Open ./gke-scripts/spoke.sh in a text editor of your choice and configure all environment variables that are required for the spoke.sh script. The following table explains each environment variable and its intended use.

    | Environment Variable | Description                                          | Example Value               |
    | -------------------- | ---------------------------------------------------- | --------------------------- |
    | GCP_PROJECT_ID       | The GCP project ID of the spoke GKE cluster.         | mural                       |
    | SPOKE_CLUSTER_NAME   | The name of the spoke GKE cluster.                   | spoke-1                     |
    | SPOKE_KUBECONFIG     | The path to the spoke GKE cluster's kubeconfig file. | /path/to/spoke-1.kubeconfig |

    export GCP_PROJECT_ID=<gcp-project-id>
    export SPOKE_CLUSTER_NAME=<spoke-cluster-name>
    export SPOKE_KUBECONFIG=<spoke-kubeconfig>
  3. After confirming all environment variables required for spoke.sh are configured, open a shell session with access to the spoke GKE cluster and execute the command below.

    ./gke-scripts/spoke.sh

    The following output confirms that the ClusterRoles and ClusterRoleBindings have been successfully configured in your selected spoke GKE cluster.

    Example output
    [spoke.sh] ✅ FleetConfig Spoke GKE setup complete for <SPOKE_CLUSTER_NAME>.
  4. Repeat steps 2 - 3 for each spoke GKE cluster you want to join the hub GKE cluster. A consolidated example for a single spoke is shown after this list.
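
The following is a consolidated, illustrative run for a single spoke, using the example values from the table in step 2. The variables are shown as shell exports, matching the template above; you can instead set them directly in spoke.sh as described in step 2. The chmod command is only needed if the downloaded script is not already executable.

    export GCP_PROJECT_ID=mural
    export SPOKE_CLUSTER_NAME=spoke-1
    export SPOKE_KUBECONFIG=/path/to/spoke-1.kubeconfig

    chmod +x gke-scripts/spoke.sh
    ./gke-scripts/spoke.sh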

Validate

  1. Set your KUBECONFIG environment variable to the path of your GKE spoke cluster's kubeconfig.

    export KUBECONFIG=<path-to-GKE-spoke>
  2. Use the following kubectl auth command to validate that the correct ClusterRoles and ClusterRoleBindings have been created in the spoke GKE cluster.

    kubectl auth can-i create klusterlets.operator.open-cluster-management.io --as=system:serviceaccount:mural-system:fleetconfig-controller-manager --all-namespaces

    An output of yes confirms that the fleetconfig-controller-manager service account has the necessary permissions to create and manage PaletteAI resources in the spoke GKE cluster. Additional optional permission checks are sketched after this list.

  3. Repeat steps 1 - 2 for each spoke GKE cluster.
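
Beyond the klusterlet check in step 2, you can optionally spot-check some of the other permissions listed at the top of this guide (namespaces, service accounts, and secrets). These are illustrative commands, not an exhaustive list of what spoke.sh configures; each should return yes if the grants cover the resource cluster-wide, though the exact rules may be scoped differently in your installation.

    kubectl auth can-i create namespaces --as=system:serviceaccount:mural-system:fleetconfig-controller-manager
    kubectl auth can-i create serviceaccounts --as=system:serviceaccount:mural-system:fleetconfig-controller-manager --all-namespaces
    kubectl auth can-i create secrets --as=system:serviceaccount:mural-system:fleetconfig-controller-manager --all-namespaces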

Next Steps

After configuring all spoke clusters, proceed to the Install PaletteAI on GKE guide. Ensure you follow the steps specific to dedicated spokes in the FleetConfig section.