Create and Manage Projects
A Project is a team workspace within a Tenant. Projects are where most day-to-day work happens; they give teams their own space, controlled access, and dedicated resources to build and run AI/ML applications and models.
A default Tenant and Project are automatically created when you install PaletteAI with global.featureFlags.systemDefaultResources: true in your Helm values. For production workloads, we recommend creating a separate Project per team, product, or business use case. This prevents teams from accidentally modifying AI/ML applications and models, as well as important configurations outside of their purview.
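For reference, the feature flag mentioned above lives under `global.featureFlags` in your Helm values. The following is a minimal excerpt showing only that key; your own values file will contain additional settings around it.

```yaml
# Helm values excerpt -- enables creation of the default Tenant and Project.
# Only the systemDefaultResources key is taken from this guide; all other
# values in your file are unaffected.
global:
  featureFlags:
    systemDefaultResources: true
```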
Projects cannot be created in the mural-system namespace. This namespace is reserved for system resources shared across all projects.
You can create Projects using the PaletteAI user interface (UI) or Kubernetes YAML manifests.
Create Project
Create a new Project to give your team a dedicated workspace with controlled access and isolated resources for building and deploying AI/ML applications and models.
Prerequisites
- UI Workflow
- YAML Workflow
- An existing Tenant.
- A user with Tenant admin permissions for the Tenant the Project will belong to.
- `kubectl` installed and available in your `$PATH`.
- The `KUBECONFIG` environment variable set to the path of the PaletteAI hub cluster's `kubeconfig` file.

```shell
export KUBECONFIG=<kubeconfig-location>
```

- A Tenant namespace for the Tenant the Project will belong to.
Enablement
- UI Workflow
- YAML Workflow
- Log in to the PaletteAI Tenant the Project will belong to.
- From the Projects Menu in the top-left, select All Projects.
- From the left main menu, select Projects.
- In the top-right, select Create Project.
- Use the following table to enter Basic information about the Project. Select Next when finished.
Basic Information
| Parameter | Description | Required |
|---|---|---|
| Project name | The Project display name. Only letters, numbers, and spaces are permitted. | ✅ |
| Description | A brief description of the Project. | ❌ |
| Tags | Key-value pairs used to organize and filter Projects. | ❌ |

- To use the default Tenant Settings resource for this Project, proceed to the Next screen.
To use a custom Settings resource for this Project instead of the default Tenant Settings, toggle the Define custom settings switch and add one or more integrations. A Palette integration is required for custom Settings. You can have one integration of each type (Palette, Hugging Face, and NVIDIA NGC). For more information on integration types, refer to the Settings concept page.
Select Add integration to add an integration. Choose the integration type and provide the required information.
Palette Integration
The Palette integration connects PaletteAI to the Spectro Cloud Palette platform for cluster lifecycle management. A Palette integration is required to deploy Compute Pools.
| Parameter | Description | Required |
|---|---|---|
| Integration Name | The name for this Palette integration. Each integration name must be unique within the scope of the Project. | ✅ |
| Host URL | The endpoint of your Palette instance. | ✅ |
| Tenant | The name of the Palette Tenant that contains the Palette Project to deploy Compute Pools on. This value is stored for display purposes in the Settings status and is not used for authentication or API calls. | ✅ |
| Project ID | The ID of your Palette Project to deploy Compute Pools on. | ✅ |
| Skip SSL Certificate Verification | If enabled, the server's SSL certificate is not verified when making API calls to the host URL. Enable only for servers that use a self-signed SSL certificate. Defaults to disabled. Enabling hides all Cert Secret Ref fields. | ❌ |
| Cert Secret Type | Choose between Provide CA or TLS certificates or Use Existing Secret. Depending on your selection, you either provide certificate details and a name, allowing PaletteAI to create the secret for you, or point to an existing secret within the same namespace as the tenant. | ❌ |
| API Key | The Palette API key that belongs to the Palette user under which Compute Pools will be deployed. | ✅ |
| Annotations | (Reserved for future use) The Kubernetes annotations assigned to the integration. | ❌ |
| Labels | (Reserved for future use) The Kubernetes labels assigned to the integration. | ❌ |

Once all required information is entered, select Validate to test your Palette credentials. Once validated, Save your Palette integration.
Hugging Face Integration
The Hugging Face integration provides API access to Hugging Face Hub for model management. This integration enables per-Project control over which Hugging Face model repositories are available to teams.
| Parameter | Description | Required |
|---|---|---|
| Integration Name | The name for this Hugging Face integration. Each integration name must be unique within the scope of the Project. | ✅ |
| API Key | A Hugging Face API token with read or write access. | ✅ |
| Annotations | (Reserved for future use) The Kubernetes annotations assigned to the integration. | ❌ |
| Labels | (Reserved for future use) The Kubernetes labels assigned to the integration. | ❌ |

Once all required information is entered, select Validate to test your Hugging Face API key. Once validated, Save your integration.
NVIDIA NGC Integration
The NVIDIA NGC integration provides credentials for pulling NIM (NVIDIA Inference Microservice) container images from the NVIDIA container registry (`nvcr.io`). This integration enables per-Project control over which NIM images are available to teams.

| Parameter | Description | Required |
|---|---|---|
| Integration Name | The name for this NVIDIA NGC integration. Each integration name must be unique within the scope of the Project. | ✅ |
| API Key | An NVIDIA NGC API key for authenticating with the `nvcr.io` container registry. | ✅ |
| Annotations | (Reserved for future use) The Kubernetes annotations assigned to the integration. | ❌ |
| Labels | (Reserved for future use) The Kubernetes labels assigned to the integration. | ❌ |

Once all required information is entered, select Validate to test your NVIDIA NGC API key. Once validated, Save your integration.
All configured integrations appear on the Project settings screen. If any changes are needed, select Edit; otherwise, proceed to the Next screen.
- Use the Model Management screen to control which models and container images are available for deployment within the Project. This step is optional and only applies if Hugging Face or NVIDIA NGC integrations are configured in the previous step.
Model as a Service Mappings
The Model as a Service Mappings section allows you to map model sources to Profile Bundles. Configure source filters to define which models should use a specific Profile Bundle. When a model deployment matches all specified filters, the system automatically selects the targeted Profile Bundle.
Select Add Mapping to create a mapping:
- Choose a Source Type (Hugging Face or NVIDIA NGC).
- Add Model Match Filters as key-value pairs. A model must match all specified filters for the mapping to apply (for example, `app: vllm`).
- Select a Target Profile Bundle by name and version from the available system-level Profile Bundles.
- The system automatically generates selector labels from your source filters in the format `<source>-<filterKey>-<filterValue>` (lowercased, truncated to 63 characters if needed) and applies them to the selected Profile Bundle.
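The label-generation rule above can be sketched in shell. This is only an illustration of the documented format; the exact source identifier PaletteAI emits (for example, whether Hugging Face is rendered as `huggingface`) is an assumption here.

```shell
# Sketch of the documented selector-label format:
#   <source>-<filterKey>-<filterValue>, lowercased, truncated to 63 characters.
# The "huggingface" source string is a hypothetical example value.
source="huggingface"
filter_key="app"
filter_value="vllm"

label=$(printf '%s-%s-%s' "$source" "$filter_key" "$filter_value" \
  | tr '[:upper:]' '[:lower:]' | cut -c1-63)
echo "$label"   # huggingface-app-vllm
```

The 63-character truncation mirrors the Kubernetes limit on label value length.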
| Parameter | Description | Required |
|---|---|---|
| Source Type | The model provider. Choose Hugging Face or NVIDIA NGC. | ✅ |
| Model Match Filters | Key-value pairs that define conditions on the model's attributes. A model must match all specified filters for this mapping to apply. | ✅ |
| Target Profile Bundle | The Profile Bundle to use for models matching the filters. Select by name and version (semantic versioning) or revision (basic versioning). | ✅ |

Models List
The Models List section contains tabs for Hugging Face and NVIDIA NGC. Each tab allows you to define an access control list (ACL) that controls which models or container images can be used within the Project. If an integration is not configured, its tab is disabled.
For each integration, you can configure allow and disallow lists:
- Allow all (default) — All models are allowed by default. Add entries to the Deny list to block specific models.
- Deny all — No models are allowed by default. Add entries to the Allow list to permit specific models.
For Hugging Face, entries are model repository names (for example, `moonshotai/Kimi-K2-Thinking`). For NVIDIA NGC, entries are container image references (for example, `nvcr.io/nvidia/pytorch:24.01-py3`).

Info: Hugging Face model repository entries must follow the format `owner/model-name`. Entries that do not match this pattern will show a validation error.

Select Next when finished.
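As a quick sanity check outside the UI, the `owner/model-name` pattern can be approximated with a regular expression. This is a sketch of the documented constraint, not the exact validation PaletteAI performs, which may be stricter.

```shell
# Rough check that an entry matches owner/model-name: exactly one "/"
# separating two non-empty, whitespace-free segments.
is_valid_repo() {
  printf '%s' "$1" | grep -Eq '^[^/[:space:]]+/[^/[:space:]]+$'
}

is_valid_repo "moonshotai/Kimi-K2-Thinking" && echo "valid"    # prints "valid"
is_valid_repo "Kimi-K2-Thinking" || echo "invalid"             # prints "invalid"
```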
- Use the Access control screen to map OpenID Connect (OIDC) groups to Project roles. At least one viewer, editor, and admin group is required per Project. Select Add group to add groups or the Delete icon to remove them. Proceed to the Next screen when finished.
Info: The OIDC groups must match the groups from your identity provider (Azure AD, Okta, etc.). For static Dex users, these groups must be configured in the `dexGroupMap` in your Helm values. Refer to User Impersonation for additional information on configuring static Dex users.

- The Compute Config screen determines the default settings for all Compute Pools deployed through the Project. Select each menu item, making changes as necessary. Select Next when finished.
Compute Config - General
| Parameter | Description | Required |
|---|---|---|
| Compute Config Name | The name for this Compute Config resource. Each Compute Config name must be unique within the scope of the Project. | ✅ |
| Deletion Policy | Choose whether to Delete or Orphan the cluster in Palette when the Compute Pool is deleted in PaletteAI. Delete (default) removes the cluster from Palette when you delete the Compute Pool in PaletteAI; Orphan retains the cluster in Palette. | ✅ |
| SSH Keys | Add public Secure Shell (SSH) keys to allow users to access Compute Pool nodes via SSH. | ❌ |
Compute Config - Edge Configuration
| Parameter | Description | Required |
|---|---|---|
| NTP Servers | Network Time Protocol (NTP) servers used to synchronize time across all nodes in the Compute Pool. | ❌ |
| Enable Network Overlay | Creates a virtual network on top of the physical network of the Compute Pool, allowing cluster components within the Compute Pool to communicate using stable, virtual IP addresses, regardless of underlying physical IP address changes. The fields in the following Compute Config - Enable Network Overlay table appear when you enable the overlay network. For more information on overlay networks, refer to Palette's Enable Network Overlay guide. | ❌ |
Compute Config - Enable Network Overlay
| Parameter | Description | Required |
|---|---|---|
| Enable static IP | Use an IP allocation type of static instead of Dynamic Host Configuration Protocol (DHCP) for the overlay VIP. | ❌ |
| CIDR | The CIDR range for the overlay network. The first IP address in the overlay CIDR range is used as the overlay VIP. This VIP is the internal overlay VIP used by the Compute Pool. | ✅ |
| Overlay Network Type | The type of overlay network protocol to use. Only `VXLAN` is supported. | ❌ |
Compute Config - Control Plane Defaults
| Parameter | Description | Required |
|---|---|---|
| Node Count | The number of control plane nodes per Compute Pool. Choose 1, 3, or 5 nodes. If 1 is selected, Worker node eligible must be enabled. | ✅ |
| Worker node eligible | Allow AI/ML applications and models to be deployed on control plane nodes. If Node Count is 1, this option cannot be disabled. | ❌ |
| CPU Count | The minimum number of CPU cores required (requested) for each control plane node in the Compute Pool. | ❌ |
| Memory | The minimum amount of memory required (requested) for each control plane node in the Compute Pool. A measurement unit is required (for example, GB, GiB, MB, MiB, KB, KiB, etc.). | ❌ |
| Architecture | The architecture of the control plane nodes in the Compute Pool. Choose between AMD64 and ARM64. | ✅ |
| Annotations | The Kubernetes annotations assigned to each control plane node in the Compute Pool. | ❌ |
| Labels | The Kubernetes labels assigned to each control plane node in the Compute Pool. | ❌ |
| Taints | The Kubernetes taints assigned to each control plane node in the Compute Pool. | ❌ |
Compute Config - Worker Pool Defaults
Use the Add Worker Pool button to add additional worker pools to the Compute Pool. Each worker pool can have its own defaults. Worker pools are optional.
| Parameter | Description |
|---|---|
| Architecture | The architecture of the worker nodes in the worker pool. Choose between AMD64 and ARM64. |
| Min Worker Nodes | The minimum number of worker nodes required in the worker pool. |
| CPU Count | The minimum number of CPU cores required (requested) for each worker node in the worker pool. |
| Memory | The minimum amount of memory required (requested) for each worker node in the worker pool. A measurement unit is required (for example, GB, GiB, MB, MiB, KB, KiB, etc.). |
| GPU Family | The GPU compute family for the worker pool (for example, NVIDIA A100). |
| GPU Count | The total number of GPUs across all worker nodes in the pool for the given architecture and GPU family. |
| GPU Memory | The total GPU memory across all worker nodes in the pool. A measurement unit is required (for example, GB, GiB, MB, MiB, KB, KiB, etc.). |
| Annotations | The Kubernetes annotations assigned to each worker node in the Compute Pool. |
| Labels | The Kubernetes labels assigned to each worker node in the Compute Pool. |
| Taints | The Kubernetes taints assigned to each worker node in the worker pool. Taints can only be configured on worker pools when control plane defaults have no taints configured. |

- (Optional) Use the next screen to set GPU Limits and Requests across all Compute Pools deployed in this Project. The Key parameter defines the GPU family, whereas the Value parameter specifies the GPU limit. To set a GPU limit for all other GPU families that are not defined, enter `Default` in the Key field. Select Next when finished.

When configuring GPU resource limits and requests, the following validation rules apply:
- Keys must be unique. Duplicate keys are not allowed.
- Both key and value fields must be filled in. Empty entries are not allowed.
- Values must be positive numbers (greater than 0).
For more information on GPU limits, refer to the Tenants and Projects concept page.
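If you prefer to see what such limits look like as configuration, the following is a loose, hypothetical sketch only. The exact field names under the Project's `gpuResources` are assumptions here and may differ; only the key/value semantics (GPU family as the key, a positive limit as the value, with `Default` as the fallback key) come from this guide. Consult the Project resource spec for the authoritative schema.

```yaml
# HYPOTHETICAL sketch of Project-level GPU limits. Field names under
# gpuResources are assumptions and may not match the real schema.
spec:
  gpuResources:
    limits:
      NVIDIA A100: "8"   # limit for the NVIDIA A100 family
      Default: "2"       # fallback for all other GPU families
```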
- Review your Project setup. Expand the Access Control, Compute Config, and Limits sections for additional details. If changes are required, use the Previous button to return to the applicable screen or select the appropriate step from the progress menu on the left side of the screen. If no changes are needed, select Submit to create the Project.
- Create a directory for the PaletteAI Project and navigate to the directory. Use this directory to consolidate all Project-scoped manifests.

```shell
mkdir <project-name>
cd <project-name>
```

- Create a Namespace resource for your Project. Use the following command to create a basic Namespace manifest with the required parameters.
```shell
cat << EOF > namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: <project-namespace>
EOF
```

- Create a ComputeConfig resource that defines the default configurations to apply to nodes provisioned to Compute Pools. Use the following command to create a basic ComputeConfig manifest with the required parameters. Replace all placeholders with the necessary `metadata` and `spec.paletteClusterConfig` values.

For a complete list of parameters, refer to the ComputeConfig resource spec.
```shell
cat << EOF > compute-config.yaml
apiVersion: spectrocloud.com/v1alpha1
kind: ComputeConfig
metadata:
  name: <compute-config-name>
  namespace: <project-namespace>
spec:
  paletteClusterConfig:
    defaults:
      controlPlanePool:
        nodeCount: <node-count>
EOF
```

ComputeConfig - metadata
| Parameter | Description | Example Value |
|---|---|---|
| `name` | The name of the ComputeConfig resource. The ComputeConfig name must be unique within the Project namespace. | `edge-compute-config` |
| `namespace` | The namespace under which to create the ComputeConfig resource. The ComputeConfig must be created in the Project namespace. | `project-docs` |

ComputeConfig - spec.paletteClusterConfig
| Parameter | Description | Example Value |
|---|---|---|
| `defaults.controlPlanePool.nodeCount` | The number of control plane nodes deployed per Compute Pool. Must be `1`, `3`, or `5` to maintain quorum. | `1` |

- (Optional) Create a Settings resource to configure integration credentials for your Project. If you skip this step, the Project inherits the Tenant's default Settings.
A Settings resource requires at least a Palette integration. You can optionally include Hugging Face and NVIDIA NGC integrations to enable model management features. Each integration references a Kubernetes Secret that you must create in the Project namespace.
For details on secret formats and all available fields, refer to the Settings concept page. For a complete list of parameters, refer to the Settings resource spec.
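A minimal Settings manifest might be shaped like the following. This is a sketch only: the `spec.integrations.palette` secret reference mirrors the fragment shown later in this guide, while the `apiVersion` and all placeholder names are assumptions. Refer to the Settings resource spec for the authoritative schema and to the Settings concept page for the expected secret contents.

```yaml
# SKETCH of a Project-scoped Settings resource. The apiVersion is assumed to
# match the other spectrocloud.com resources in this guide; verify against
# the Settings resource spec before use.
apiVersion: spectrocloud.com/v1alpha1
kind: Settings
metadata:
  name: <settings-name>
  namespace: <project-namespace>
spec:
  integrations:
    palette:
      name: <palette-secret-name>    # Kubernetes Secret in the Project namespace
      namespace: <project-namespace>
```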
- Create the Project resource. Use the following command to create a basic Project manifest with the required `metadata` and `spec` parameters.

For a complete list of parameters, refer to the Project resource spec.
```shell
cat << EOF > project.yaml
apiVersion: spectrocloud.com/v1alpha1
kind: Project
metadata:
  name: <project-name>
  namespace: <project-namespace>
spec:
  displayName: '<project-display-name>'
  tenantRef:
    name: '<tenant-name>'
  roleMapping:
    viewer:
      - '<viewer-role>'
    editor:
      - '<editor-role>'
    admin:
      - '<admin-role>'
  computeConfigRef:
    name: '<compute-config-name>'
    namespace: '<project-namespace>'
EOF
```

Project - metadata
| Parameter | Description | Example Value |
|---|---|---|
| `name` | The name of the Project resource. The Project name must be unique within the Project namespace. | `project-docs` |
| `namespace` | The namespace of the Project resource. The Project must be created in the Project namespace. | `project-docs` |

Project - spec
Info: The `roleMapping` groups must match the OIDC groups from your identity provider (Azure AD, Okta, etc.). For static Dex users, these groups must be configured in the `dexGroupMap` in your Helm values. Refer to User Impersonation for additional information on configuring static Dex users.

| Parameter | Description | Example Value |
|---|---|---|
| `displayName` | The display name of your Project. This will be the name of the Project as viewed from the Projects Menu. Only letters, numbers, and spaces are permitted. | `Docs` |
| `tenantRef.name` | The `metadata.name` of the Tenant resource the Project will belong to. For a list of all Tenant names, issue the command `kubectl get tenants --all-namespaces`. | `default` |
| `roleMapping.viewer` | A list of OIDC groups with read-only access to all Project resources. At least one group is required. | `vision-read` |
| `roleMapping.editor` | A list of OIDC groups that can deploy and manage AI/ML applications and models within the Project. At least one group is required. | `vision-write` |
| `roleMapping.admin` | A list of OIDC groups with full control over all Project resources and configurations. At least one group is required. | `platform-admin` |
| `computeConfigRef.name` | The `metadata.name` of the ComputeConfig resource created in the previous step. | `edge-compute-config` |
| `computeConfigRef.namespace` | The `metadata.namespace` of the ComputeConfig created in the previous step. The ComputeConfig resource must be created in the Project namespace. | `project-docs` |
To use a custom Settings resource for this Project instead of the default Tenant Settings, you must reference an existing Settings resource within the Project spec. The referenced Settings resource must be in the Project's namespace. Refer to the Settings concept page for more information on integration types and secret formats.
```yaml
spec:
  integrations:
    palette:
      name: palette-api-secret
      namespace: primary-dev
    nvidia:
      # ... additional fields omitted
    huggingFace:
      # ... additional fields omitted
```
- Create your PaletteAI Project and associated resources using the correct dependency order. Omit the Settings resource and associated secrets if using the default Tenant Settings.

```shell
kubectl apply --filename namespace.yaml
kubectl apply --filename settings.yaml
kubectl apply --filename compute-config.yaml
kubectl apply --filename project.yaml
```

Example output:

```shell
namespace/project-docs created
settings.spectrocloud.com/docs-settings created
computeconfig.spectrocloud.com/edge-compute-config created
project.spectrocloud.com/project-docs created
```
Validate
- UI Workflow
- YAML Workflow
- Log in to PaletteAI as a Tenant or Project admin.
- Navigate to your Project using either of the following methods:
- (Tenant or Project admin) From the Projects Menu, select your Project.
- (Tenant admin) From the Projects Menu, select All Projects. Next, from the left main menu, select Projects. Your Project is visible as a tile. Select it to change to the Project scope.
- From the left main menu, select Project Settings.
- Navigate through the menu items on the left to verify all Project configurations are correct.
- Verify the Project's Namespace resource exists. In the following example, we created the namespace `project-docs`.

```shell
kubectl get namespaces
```

Example output:

```shell
NAME                                    STATUS   AGE
# ... additional output omitted for readability
open-cluster-management-hub             Active   21d
palette-system                          Active   21d
piraeus-system                          Active   21d
project-docs                            Active   19m
spectro-mgmt-plane                      Active   21d
spectro-system                          Active   21d
spectro-task-693f654a6dd7fb8a0a2a6e3d   Active   21d
ui-system                               Active   21d
zot-system                              Active   21d
```
- Verify the dependent resources exist.

Info: If you created the Project referencing an alternate Settings resource, the `SETTINGS` column in the Project output is populated, and an additional Settings resource is created.

```shell
kubectl get computeconfig,projects,settings --all-namespaces
```

Example output:

```shell
NAMESPACE      NAME                                                  AGE
default        computeconfig.spectrocloud.com/default                21d
project-docs   computeconfig.spectrocloud.com/edge-compute-config    19m

NAMESPACE      NAME                                    DISPLAYNAME       READY   TENANT    SETTINGS        COMPUTE CONFIG        ROLES   CREATED   AGE
default        project.spectrocloud.com/default        Default Project   true    default                   default               true              21d
project-docs   project.spectrocloud.com/project-docs   Docs              true    default   docs-settings   edge-compute-config   true              19m

NAMESPACE      NAME                                READY   AGE
default        settings.spectrocloud.com/default   true    21d
project-docs   docs-settings                       true    19m
```
- Confirm a resource has the expected configuration.

```shell
kubectl describe <resource-type> <resource-name> --namespace <project-namespace>
```

The following example fetches the current status of the ComputeConfig resource. Note the value `nodeCount: 1`.

Example command:

```shell
kubectl describe computeconfig edge-compute-config --namespace project-docs
```

Example output:

```shell
Name:         edge-compute-config
Namespace:    project-docs
Labels:       <none>
Annotations:  <none>
API Version:  spectrocloud.com/v1alpha1
Kind:         ComputeConfig
Metadata:
  Creation Timestamp:  2026-01-06T15:26:04Z
  Finalizers:          spectrocloud.com/computeconfig-finalizer
  Generation:          1
  Resource Version:    20819572
  UID:                 6bf27d65-b998-4503-923a-34ef602814d6
Spec:
  Palette Cluster Config:
    Defaults:
      Control Plane Defaults:
        Architecture:          AMD64
        Node Count:            1
        Single Node Cluster:   false
        Worker Node Eligible:  false
    Deletion Policy:           delete
Events:  <none>
```
Modify Project
As your team's needs evolve, you can adjust Project configurations to grant new team members access, modify default Compute Pool settings, enforce GPU limits, or integrate with a different Palette instance.
Prerequisites
- UI Workflow
- YAML Workflow
- A user with Tenant admin or Project admin permissions for the Project.
- `kubectl` installed and available in your `$PATH`.
- The `KUBECONFIG` environment variable set to the path of the PaletteAI hub cluster's `kubeconfig` file.

```shell
export KUBECONFIG=<kubeconfig-location>
```

- A text editor to modify existing manifests. This example uses Vi.
Enablement
- UI Workflow
- YAML Workflow
- Log in to the PaletteAI Tenant the Project belongs to.
- Navigate to your Project using either of the following methods:
- (Tenant or Project admin) From the Projects Menu, select your Project.
- (Tenant admin only) From the Projects Menu, select All Projects. Next, from the left main menu, select Projects. Your Project is visible as a tile. Select it to change to the Project scope.
- From the left main menu, select Project Settings.
- Navigate through the menu items on the left. Here, you can modify existing Project configurations, as well as create new ones or swap those currently in use.
| Menu Item | Description |
|---|---|
| General | Modify the Project's display name, description, and tags. |
| Access control | Map OIDC groups to Project roles. At least one viewer, editor, and admin group is required per Project. Select Add group to add groups or the Delete icon to remove them. The OIDC groups must match the groups from your identity provider (Azure AD, Okta, etc.). For static Dex users, these groups must be configured in the `dexGroupMap` in your Helm values. Refer to User Impersonation for additional information on configuring static Dex users. |
| Compute | View the list of control-plane- and worker-node-eligible Compute resources available for deploying Compute Pools and AI/ML applications and models. PaletteAI populates this list based on the Palette integration defined in the Settings and the Edge hosts with the appropriate tags. |
| Compute Config | Select the default Compute Config to use when deploying Compute Pools in this Project. Use the Create Compute Config button to create additional Compute Configs, which can be used to override the default Compute Config during Compute Pool creation. Use the three-dot menu to Edit or Delete existing Compute Configs. |
| Limits | Set GPU Limits and Requests across all Compute Pools deployed in this Project. |
| Model as a Service Mappings | Map model sources to Profile Bundles. When a model from a matching source and filters is deployed, the system automatically selects the targeted Profile Bundle. Only applicable for Hugging Face and NVIDIA NGC integrations. |
| Repositories List | Configure which models and container images are available for deployment within the Project. Define allow/disallow lists for Hugging Face model repositories and NVIDIA NGC NIM images. |
| Settings Ref | Select the default Settings used to locate Compute resources. The title above the search bar indicates if the Settings resource is sourced from the Tenant or Project scope. To use a different Settings resource, select Create Project Settings or Change Settings Ref. To modify existing Project Settings, reference it as the default Settings resource, and use the three-dot menu to Edit the Settings. Tenant Settings cannot be modified at the Project scope, and all Settings must have at least one integration configured. |
- Open the applicable resource manifest using an editor of your choice and make any necessary changes.
Use the following table for an overview of changes you can make to required parameters for different resources. For a complete list of parameters per resource, refer to the applicable Resource page.
| Resource | Modifications | Additional Information |
|---|---|---|
| Project | Update displayName, description, tags, and roleMapping groups to modify basic information and access control. Update the Project's gpuResources to modify GPU limits and requests across all Compute Pools. Update modelSettings to enable or disable Hugging Face and NVIDIA NGC integrations, modify model repository and NIM ACLs, or configure profile bundle mappings. | - Project - metadata - Project - spec - Tenants and Projects - GPU Quotas |
| ComputeConfig | Update paletteClusterConfig to modify Edge host configuration, control plane defaults, and worker pool defaults. Create additional ComputeConfig resources and update the Project's default computeConfigRef. | - ComputeConfig - metadata - ComputeConfig - spec.paletteClusterConfig |
| Settings | Create a new Settings resource in the Project namespace and update the Project's settingsRef to use custom integration Settings. A Settings can include Palette, Hugging Face, and NVIDIA NGC integrations. | - Settings |
In this example, we used Vi to modify the `nodeCount` from `1` to `3`.

```shell
vi compute-config.yaml
```

```yaml
apiVersion: spectrocloud.com/v1alpha1
kind: ComputeConfig
metadata:
  name: edge-compute-config
  namespace: project-docs
spec:
  paletteClusterConfig:
    defaults:
      controlPlanePool:
        nodeCount: 3
```
- When you are finished, save the file and apply your changes.

```shell
kubectl apply --filename <manifest-location>
```

Example command:

```shell
kubectl apply --filename compute-config.yaml
```
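Alternatively, for a one-off change you can patch the live resource directly instead of editing the manifest. This is a standard `kubectl patch` invocation, which works on any custom resource; the field path below matches the ComputeConfig manifest used in this guide.

```shell
# Merge-patch the live ComputeConfig to set nodeCount to 3. Equivalent to
# editing compute-config.yaml and re-applying it, but note that the manifest
# on disk will no longer match the live object afterward.
kubectl patch computeconfig edge-compute-config \
  --namespace project-docs \
  --type merge \
  --patch '{"spec":{"paletteClusterConfig":{"defaults":{"controlPlanePool":{"nodeCount":3}}}}}'
```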
Validate
- UI Workflow
- YAML Workflow
- Log in to PaletteAI as a Tenant or Project admin.
- Navigate to your Project using either of the following methods:
- (Tenant or Project admin) From the Projects Menu, select your Project.
- (Tenant admin) From the Projects Menu, select All Projects. Next, from the left main menu, select Projects. Your Project is visible as a tile. Select it to change to the Project scope.
- From the left main menu, select Project Settings.
- Navigate through the menu items on the left. Verify that your updated Project configurations are displayed and set to default, where applicable.
Verify your changes were applied by inspecting the updated resource.

```shell
kubectl describe <resource-type> <resource-name> --namespace <project-namespace>
```

The following example verifies the ComputeConfig was modified with the new value of `nodeCount: 3`.

Example command:

```shell
kubectl describe computeconfig edge-compute-config --namespace project-docs
```

Example output:

```shell
Name:         edge-compute-config
Namespace:    project-docs
Labels:       <none>
Annotations:  <none>
API Version:  spectrocloud.com/v1alpha1
Kind:         ComputeConfig
Metadata:
  Creation Timestamp:  2026-01-06T15:26:04Z
  Finalizers:          spectrocloud.com/computeconfig-finalizer
  Generation:          2
  Resource Version:    21630027
  UID:                 6bf27d65-b998-4503-923a-34ef602814d6
Spec:
  Palette Cluster Config:
    Defaults:
      Control Plane Defaults:
        Architecture:          AMD64
        Node Count:            3
        Single Node Cluster:   false
        Worker Node Eligible:  false
    Deletion Policy:           delete
Events:  <none>
```
Delete Project
If you no longer need a Project, you can remove it, along with all Project-scoped configurations and resources, such as Compute Configs and Settings.
To reuse certain Project configurations in another Project, change the namespace referenced in the resource's manifest to point to the other Project's namespace. For details, refer to the YAML Workflow - Enablement section.
Prerequisites
- UI Workflow
- YAML Workflow
- A user with Tenant admin permissions for the Tenant the Project belongs to.
- `kubectl` installed and available in your `$PATH`.
- The `KUBECONFIG` environment variable set to the path of the PaletteAI hub cluster's `kubeconfig` file.

```shell
export KUBECONFIG=<kubeconfig-location>
```
Enablement
- UI Workflow
- YAML Workflow
- Log in to the PaletteAI Tenant the Project belongs to.
- From the Projects Menu in the top-left, select All Projects.
- From the left main menu, select Projects.
- Select the three-dot menu beside the applicable Project tile and Delete the Project.
To delete the Project and all resources within the Project, such as the Project's Compute Config, delete the Project's Namespace resource.
```shell
kubectl delete --filename <namespace-location>
```

Example command:

```shell
kubectl delete --filename namespace.yaml
```
If you do not have access to the Namespace manifest, delete the Namespace resource directly.
```shell
kubectl delete namespace <project-namespace>
```

Example command:

```shell
kubectl delete namespace project-docs
```
After you delete the Project's namespace, you can migrate existing resources to other Projects. For example, to migrate a ComputeConfig resource, change the namespace referenced in `metadata.namespace` to the alternate Project's namespace, and apply your changes with `kubectl apply --filename <manifest-location>`.

```yaml
apiVersion: spectrocloud.com/v1alpha1
kind: ComputeConfig
metadata:
  name: edge-compute-config
  namespace: <other-project-namespace>
spec:
  paletteClusterConfig:
    defaults:
      controlPlanePool:
        nodeCount: 3
```
Validate
- UI Workflow
- YAML Workflow
- Log in to the PaletteAI Tenant the Project belongs to.
- From the Projects Menu in the top-left, select All Projects.
- From the left main menu, select Projects.
- Verify the applicable Project tile is no longer available.
- Verify the Project's namespace no longer exists. In the following example, we removed the namespace `project-docs`.

```shell
kubectl get namespaces
```

Example output:

```shell
NAME                                    STATUS   AGE
# ... additional output omitted for readability
open-cluster-management-hub             Active   21d
palette-system                          Active   21d
piraeus-system                          Active   21d
spectro-mgmt-plane                      Active   21d
spectro-system                          Active   21d
spectro-task-693f654a6dd7fb8a0a2a6e3d   Active   21d
ui-system                               Active   21d
zot-system                              Active   21d
```
- Verify the dependent resources no longer exist. In the following example, the ComputeConfig, Project, and Settings resources for the `project-docs` namespace were removed as a result of deleting the Namespace resource.

```shell
kubectl get computeconfig,projects,settings --all-namespaces
```

Example output:

```shell
NAMESPACE   NAME                                     AGE
default     computeconfig.spectrocloud.com/default   21d

NAMESPACE   NAME                               DISPLAYNAME       READY   TENANT    SETTINGS   COMPUTE CONFIG   ROLES   CREATED   AGE
default     project.spectrocloud.com/default   Default Project   true    default              default          true              21d

NAMESPACE   NAME                                READY   AGE
default     settings.spectrocloud.com/default   true    21d
```
Next Steps
Once you have a Project, you can create or import a Profile Bundle. Profile Bundles are used to configure reusable infrastructure stacks for Compute Pools and application stacks for App Deployments.