
Create and Manage Projects

A Project is a team workspace within a Tenant. Projects are where most day-to-day work happens; they give teams their own space, controlled access, and dedicated resources to build and run AI/ML applications and models.

A default Tenant and Project are automatically created when you install PaletteAI with global.featureFlags.systemDefaultResources: true in your Helm values. For production workloads, we recommend creating a separate Project per team, product, or business use case. This prevents teams from accidentally modifying AI/ML applications, models, and important configurations outside their purview.
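
In Helm values YAML, the dotted path above corresponds to the following nesting (shown as an excerpt; the rest of your values file is unchanged):

```yaml
# values.yaml (excerpt): enables creation of the default Tenant and Project.
global:
  featureFlags:
    systemDefaultResources: true
```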

info

Projects cannot be created in the mural-system namespace. This namespace is reserved for system resources shared across all projects.

You can create Projects using the PaletteAI user interface (UI) or using YAML Kubernetes manifests.

Create Project

Create a new Project to give your team a dedicated workspace with controlled access and isolated resources for building and deploying AI/ML applications and models.

Prerequisites

  • An existing Tenant.

  • A user with Tenant admin permissions for the Tenant the Project will belong to.

Enablement

  1. Log in to the PaletteAI Tenant the Project will belong to.

  2. From the Projects Menu in the top-left, select All Projects.

  3. From the left main menu, select Projects.

  4. In the top-right, select Create Project.

  5. Use the following table to enter Basic information about the Project. Select Next when finished.

    Basic Information

    | Parameter | Description | Required |
    | --- | --- | --- |
    | Project name | The Project display name. Only letters, numbers, and spaces are permitted. | |
    | Description | A brief description of the Project. | |
    | Tags | Key-value pairs used to organize and filter Projects. | |

  6. To use the default Tenant Settings resource for this Project, proceed to the Next screen.

    To use a custom Settings resource for this Project instead of the default Tenant Settings, toggle the Define custom settings switch and add one or more integrations. A Palette integration is required for custom Settings. You can have one integration of each type (Palette, Hugging Face, and NVIDIA NGC). For more information on integration types, refer to the Settings concept page.

    Select Add integration to add an integration. Choose the integration type and provide the required information.

    Palette Integration

    The Palette integration connects PaletteAI to the Spectro Cloud Palette platform for cluster lifecycle management. A Palette integration is required to deploy Compute Pools.

    | Parameter | Description | Required |
    | --- | --- | --- |
    | Integration Name | The name for this Palette integration. Each integration name must be unique within the scope of the Project. | |
    | Host URL | The endpoint of your Palette instance. | |
    | Tenant | The name of the Palette Tenant that contains the Palette Project to deploy Compute Pools on. This value is stored for display purposes in the Settings status and is not used for authentication or API calls. | |
    | Project ID | The ID of your Palette Project to deploy Compute Pools on. | |
    | Skip SSL Certificate Verification | If enabled, the server's SSL certificate is not verified when making API calls to the host URL. Enable only for servers that use a self-signed SSL certificate. Disabled by default. Enabling this option hides all Cert Secret Ref fields. | |
    | Cert Secret Type | Choose between Provide CA or TLS certificates or Use Existing Secret. Depending on your selection, you either provide certificate details and a name, allowing PaletteAI to create the secret for you, or point to an existing secret within the same namespace as the Tenant. | |
    | API Key | The Palette API key that belongs to the Palette user under which Compute Pools will be deployed. | |
    | Annotations | (Reserved for future use) The Kubernetes annotations assigned to the integration. | |
    | Labels | (Reserved for future use) The Kubernetes labels assigned to the integration. | |

    Once all required information is entered, select Validate to test your Palette credentials. Once validated, Save your Palette integration.

    Hugging Face Integration

    The Hugging Face integration provides API access to Hugging Face Hub for model management. This integration enables per-Project control over which Hugging Face model repositories are available to teams.

    | Parameter | Description | Required |
    | --- | --- | --- |
    | Integration Name | The name for this Hugging Face integration. Each integration name must be unique within the scope of the Project. | |
    | API Key | A Hugging Face API token with read or write access. | |
    | Annotations | (Reserved for future use) The Kubernetes annotations assigned to the integration. | |
    | Labels | (Reserved for future use) The Kubernetes labels assigned to the integration. | |

    Once all required information is entered, select Validate to test your Hugging Face API key. Once validated, Save your integration.

    NVIDIA NGC Integration

    The NVIDIA NGC integration provides credentials for pulling NIM (NVIDIA Inference Microservice) container images from the NVIDIA container registry (nvcr.io). This integration enables per-Project control over which NIM images are available to teams.

    | Parameter | Description | Required |
    | --- | --- | --- |
    | Integration Name | The name for this NVIDIA NGC integration. Each integration name must be unique within the scope of the Project. | |
    | API Key | An NVIDIA NGC API key for authenticating with the nvcr.io container registry. | |
    | Annotations | (Reserved for future use) The Kubernetes annotations assigned to the integration. | |
    | Labels | (Reserved for future use) The Kubernetes labels assigned to the integration. | |

    Once all required information is entered, select Validate to test your NVIDIA NGC API key. Once validated, Save your integration.

    All configured integrations appear on the Project settings screen. If any changes are needed, select Edit; otherwise, proceed to the Next screen.

  7. Use the Model Management screen to control which models and container images are available for deployment within the Project. This step is optional and only applies if Hugging Face or NVIDIA NGC integrations are configured in the previous step.

    Model as a Service Mappings

    The Model as a Service Mappings section allows you to map model sources to Profile Bundles. Configure source filters to define which models should use a specific Profile Bundle. When a model deployment matches all specified filters, the system automatically selects the targeted Profile Bundle.

    Select Add Mapping to create a mapping:

    1. Choose a Source Type (Hugging Face or NVIDIA NGC).
    2. Add Model Match Filters as key-value pairs. A model must match all specified filters for the mapping to apply (for example, app: vllm).
    3. Select a Target Profile Bundle by name and version from the available system-level Profile Bundles.
    4. The system automatically generates selector labels from your source filters in the format <source>-<filterKey>-<filterValue> (lowercased, truncated to 63 characters if needed) and applies them to the selected Profile Bundle.
    | Parameter | Description | Required |
    | --- | --- | --- |
    | Source Type | The model provider. Choose Hugging Face or NVIDIA NGC. | |
    | Model Match Filters | Key-value pairs that define conditions on the model's attributes. A model must match all specified filters for this mapping to apply. | |
    | Target Profile Bundle | The Profile Bundle to use for models matching the filters. Select by name and version (semantic versioning) or revision (basic versioning). | |
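
The generated selector-label format described above can be sketched as a small helper. This is an illustration of the documented format only, not PaletteAI code, and the "huggingface" source token in the example is an assumption:

```python
def selector_label(source: str, filter_key: str, filter_value: str) -> str:
    """Sketch of the documented format <source>-<filterKey>-<filterValue>,
    lowercased and truncated to 63 characters (the Kubernetes label length
    limit). Function name and signature are illustrative."""
    label = f"{source}-{filter_key}-{filter_value}".lower()
    return label[:63]

# Example mapping from the text: Source Type Hugging Face, filter app: vllm.
print(selector_label("huggingface", "app", "vllm"))  # huggingface-app-vllm
```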

    Models List

    The Models List section contains tabs for Hugging Face and NVIDIA NGC. Each tab allows you to define an access control list (ACL) that controls which models or container images can be used within the Project. If an integration is not configured, its tab is disabled.

    For each integration, you can configure allow and disallow lists:

    • Allow all (default) — All models are allowed by default. Add entries to the Deny list to block specific models.
    • Deny all — No models are allowed by default. Add entries to the Allow list to permit specific models.

    For Hugging Face, entries are model repository names (for example, moonshotai/Kimi-K2-Thinking). For NVIDIA NGC, entries are container image references (for example, nvcr.io/nvidia/pytorch:24.01-py3).
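
The two ACL modes can be expressed as a minimal sketch. The function name, signature, and mode strings below are illustrative assumptions, not a PaletteAI API; only the allow-all/deny-all semantics come from this guide:

```python
def model_allowed(name: str, mode: str, entries: set[str]) -> bool:
    """Sketch of the two ACL modes described above.

    - "allow-all": everything is allowed unless listed (Deny list).
    - "deny-all": nothing is allowed unless listed (Allow list).
    """
    if mode == "allow-all":
        return name not in entries  # entries act as a Deny list
    if mode == "deny-all":
        return name in entries      # entries act as an Allow list
    raise ValueError(f"unknown mode: {mode}")

# Deny all by default, but explicitly allow one Hugging Face repository.
print(model_allowed("moonshotai/Kimi-K2-Thinking", "deny-all",
                    {"moonshotai/Kimi-K2-Thinking"}))  # True
```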

    info

    Hugging Face model repository entries must follow the format owner/model-name. Entries that do not match this pattern will show a validation error.
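
A quick way to pre-check entries against the owner/model-name format before pasting them into the UI. The exact characters PaletteAI permits in each segment are an assumption here; this sketch accepts word characters, dots, and hyphens:

```python
import re

# Assumed character set for each segment of owner/model-name.
REPO_PATTERN = re.compile(r"[\w.-]+/[\w.-]+")

def valid_hf_repo(entry: str) -> bool:
    """Return True if the entry looks like owner/model-name."""
    return bool(REPO_PATTERN.fullmatch(entry))

print(valid_hf_repo("moonshotai/Kimi-K2-Thinking"))  # True
print(valid_hf_repo("no-owner-segment"))             # False
```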

    Select Next when finished.

  8. Use the Access control screen to map OpenID Connect (OIDC) groups to Project roles. At least one viewer, editor, and admin group is required per Project. Select Add group to add groups or the Delete icon to remove them. Proceed to the Next screen when finished.

    info

    The OIDC groups must match the groups from your identity provider (Azure AD, Okta, etc.). For static Dex users, these groups must be configured in the dexGroupMap in your Helm values. Refer to User Impersonation for additional information on configuring static Dex users.
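
As a loosely sketched example of what a static Dex group mapping might look like in Helm values — the exact dexGroupMap schema is not shown in this guide, so treat the shape below as an assumption and confirm it against the User Impersonation page:

```yaml
# Hypothetical Helm values excerpt: maps a static Dex user to OIDC groups.
# Only the dexGroupMap name comes from this guide; key and value shapes
# are assumptions.
dexGroupMap:
  admin@example.com:
    - project-admins
```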

  9. The Compute Config screen determines the default settings for all Compute Pools deployed through the Project. Select each menu item, making changes as necessary. Select Next when finished.

    Compute Config - General
    | Parameter | Description | Required |
    | --- | --- | --- |
    | Compute Config Name | The name for this Compute Config resource. Each Compute Config name must be unique within the scope of the Project. | |
    | Deletion Policy | Choose whether to Delete or Orphan the cluster in Palette when the Compute Pool is deleted in PaletteAI.<br />Delete (default): delete the cluster from Palette when you delete the Compute Pool in PaletteAI.<br />Orphan: retain the cluster in Palette when you delete the Compute Pool in PaletteAI. | |
    | SSH Keys | Add public Secure Shell (SSH) keys to allow users to access Compute Pool nodes via SSH. | |

    Compute Config - Edge Configuration

    | Parameter | Description | Required |
    | --- | --- | --- |
    | NTP Servers | Network Time Protocol (NTP) servers used to synchronize time across all nodes in the Compute Pool. | |
    | Enable Network Overlay | Creates a virtual network on top of the physical network of the Compute Pool, allowing cluster components within the Compute Pool to communicate using stable, virtual IP addresses, regardless of underlying physical IP address changes. The fields in the following Compute Config - Enable Network Overlay table appear when you enable the overlay network. | |

    For more information on overlay networks, refer to Palette's Enable Network Overlay guide.

    Compute Config - Enable Network Overlay
    | Parameter | Description | Required |
    | --- | --- | --- |
    | Enable static IP | Use an IP allocation type of static instead of Dynamic Host Configuration Protocol (DHCP) for the overlay VIP. | |
    | CIDR | The CIDR range for the overlay network. The first IP address in the overlay CIDR range is used as the overlay VIP. This VIP is the internal overlay VIP used by the Compute Pool. | |
    | Overlay Network Type | The type of overlay network protocol to use. Only VXLAN is supported. | |
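
Since the first IP address in the overlay CIDR range becomes the overlay VIP, you can preview which address that will be with Python's standard ipaddress module. The CIDR below is an arbitrary example, and whether "first" means the network address or the first usable host is not specified in this guide:

```python
import ipaddress

# Arbitrary example CIDR for an overlay network.
net = ipaddress.ip_network("100.64.0.0/24")
print(net[0])  # 100.64.0.0 (first address in the range)
print(net[1])  # 100.64.0.1 (first usable host)
```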

    Compute Config - Control Plane Defaults

    | Parameter | Description | Required |
    | --- | --- | --- |
    | Node Count | The number of control plane nodes per Compute Pool. Choose 1, 3, or 5 nodes. If 1 is selected, Worker node eligible must be enabled. | |
    | Worker node eligible | Allow AI/ML applications and models to be deployed on control plane nodes. If Node Count is 1, this option cannot be disabled. | |
    | CPU Count | The minimum number of CPU cores required (requested) for each control plane node in the Compute Pool. | |
    | Memory | The minimum amount of memory required (requested) for each control plane node in the Compute Pool. A measurement unit is required (for example, GB, GiB, MB, MiB, KB, KiB, etc.). | |
    | Architecture | The architecture of the control plane nodes in the Compute Pool. Choose between AMD64 and ARM64. | |
    | Annotations | The Kubernetes annotations assigned to each control plane node in the Compute Pool. | |
    | Labels | The Kubernetes labels assigned to each control plane node in the Compute Pool. | |
    | Taints | The Kubernetes taints assigned to each control plane node in the Compute Pool. | |

    Compute Config - Worker Pool Defaults

    Use the Add Worker Pool button to add additional worker pools to the Compute Pool. Each worker pool can have its own defaults. Worker pools are optional.

    | Parameter | Description |
    | --- | --- |
    | Architecture | The architecture of the worker nodes in the worker pool. Choose between AMD64 and ARM64. |
    | Min Worker Nodes | The minimum number of worker nodes required in the worker pool. |
    | CPU Count | The minimum number of CPU cores required (requested) for each worker node in the worker pool. |
    | Memory | The minimum amount of memory required (requested) for each worker node in the worker pool. A measurement unit is required (for example, GB, GiB, MB, MiB, KB, KiB, etc.). |
    | GPU Family | The GPU compute family for the worker pool (for example, NVIDIA A100). |
    | GPU Count | The total number of GPUs across all worker nodes in the pool for the given architecture and GPU family. |
    | GPU Memory | The total GPU memory across all worker nodes in the pool. A measurement unit is required (for example, GB, GiB, MB, MiB, KB, KiB, etc.). |
    | Annotations | The Kubernetes annotations assigned to each worker node in the worker pool. |
    | Labels | The Kubernetes labels assigned to each worker node in the worker pool. |
    | Taints | The Kubernetes taints assigned to each worker node in the worker pool. Taints can only be configured on worker pools when control plane defaults have no taints configured. |

  10. (Optional) Use the next screen to set GPU Limits and Requests across all Compute Pools deployed in this Project. The Key parameter defines the GPU family, whereas the Value parameter specifies the GPU limit. To set a GPU limit for all other GPU families that are not defined, enter Default in the Key field. Select Next when finished.

    When configuring GPU resource limits and requests, the following validation rules apply:

    • Keys must be unique. Duplicate keys are not allowed.
    • Both key and value fields must be filled in. Empty entries are not allowed.
    • Values must be positive numbers (greater than 0).
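
The three validation rules above can be sketched as a small checker. The helper name and return shape are illustrative, not PaletteAI code:

```python
def validate_gpu_limits(entries: list[tuple[str, str]]) -> list[str]:
    """Apply the three documented rules to (key, value) pairs and
    return a list of human-readable problems (empty means valid)."""
    problems = []
    keys = [k for k, _ in entries]
    if len(keys) != len(set(keys)):
        problems.append("duplicate keys are not allowed")
    for key, value in entries:
        if not key or not value:
            problems.append("empty key or value")
            continue
        try:
            if float(value) <= 0:
                problems.append(f"{key}: value must be greater than 0")
        except ValueError:
            problems.append(f"{key}: value must be a number")
    return problems

print(validate_gpu_limits([("NVIDIA A100", "4"), ("Default", "2")]))  # []
print(validate_gpu_limits([("Default", "0")]))  # ['Default: value must be greater than 0']
```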

    For more information on GPU limits, refer to the Tenants and Projects concept page.

  11. Review your Project setup. Expand the Access Control, Compute Config, and Limits sections for additional details. If changes are required, use the Previous button to return to the applicable screen or select the appropriate step from the progress menu on the left side of the screen. If no changes are needed, select Submit to create the Project.

Validate

  1. Log in to PaletteAI as a Tenant or Project admin.

  2. Navigate to your Project using either of the following methods:

    • (Tenant or Project admin) From the Projects Menu, select your Project.
    • (Tenant admin) From the Projects Menu, select All Projects. Next, from the left main menu, select Projects. Your Project is visible as a tile. Select it to change to the Project scope.
  3. From the left main menu, select Project Settings.

  4. Navigate through the menu items on the left to verify all Project configurations are correct.

Modify Project

As your team's needs evolve, you can adjust Project configurations to grant new team members access, modify default Compute Pool settings, enforce GPU limits, or integrate with a different Palette instance.

Prerequisites

  • A user with Tenant admin or Project admin permissions for the Project.

Enablement

  1. Log in to the PaletteAI Tenant the Project belongs to.

  2. Navigate to your Project using either of the following methods:

    • (Tenant or Project admin) From the Projects Menu, select your Project.
    • (Tenant admin only) From the Projects Menu, select All Projects. Next, from the left main menu, select Projects. Your Project is visible as a tile. Select it to change to the Project scope.
  3. From the left main menu, select Project Settings.

  4. Navigate through the menu items on the left. Here, you can modify existing Project configurations, create new ones, or swap those currently in use.

    | Menu Item | Description |
    | --- | --- |
    | General | Modify the Project's display name, description, and tags. |
    | Access control | Map OIDC groups to Project roles. At least one viewer, editor, and admin group is required per Project. Select Add group to add groups or the Delete icon to remove them.<br />The OIDC groups must match the groups from your identity provider (Azure AD, Okta, etc.). For static Dex users, these groups must be configured in the dexGroupMap in your Helm values. Refer to User Impersonation for additional information on configuring static Dex users. |
    | Compute | View the list of control-plane- and worker-node-eligible Compute resources available for deploying Compute Pools and AI/ML applications and models. PaletteAI populates this list based on the Palette integration defined in the Settings and the Edge hosts with the appropriate tags. |
    | Compute Config | Select the default Compute Config to use when deploying Compute Pools in this Project. Use the Create Compute Config button to create additional Compute Configs, which can be used to override the default Compute Config during Compute Pool creation. Use the three-dot menu to Edit or Delete existing Compute Configs. |
    | Limits | Set GPU Limits and Requests across all Compute Pools deployed in this Project. |
    | Model as a Service Mappings | Map model sources to Profile Bundles. When a model from a matching source and filters is deployed, the system automatically selects the targeted Profile Bundle. Only applicable for Hugging Face and NVIDIA NGC integrations. |
    | Repositories List | Configure which models and container images are available for deployment within the Project. Define allow/disallow lists for Hugging Face model repositories and NVIDIA NGC NIM images. |
    | Settings Ref | Select the default Settings used to locate Compute resources. The title above the search bar indicates whether the Settings resource is sourced from the Tenant or Project scope.<br />To use a different Settings resource, select Create Project Settings or Change Settings Ref. To modify existing Project Settings, reference the resource as the default Settings, then use the three-dot menu to Edit it. Tenant Settings cannot be modified at the Project scope, and all Settings must have at least one integration configured. |

Validate

  1. Log in to PaletteAI as a Tenant or Project admin.

  2. Navigate to your Project using either of the following methods:

    • (Tenant or Project admin) From the Projects Menu, select your Project.
    • (Tenant admin) From the Projects Menu, select All Projects. Next, from the left main menu, select Projects. Your Project is visible as a tile. Select it to change to the Project scope.
  3. From the left main menu, select Project Settings.

  4. Navigate through the menu items on the left. Verify that your updated Project configurations are displayed and set as default, where applicable.

Delete Project

If you no longer need a Project, you can remove it, along with all Project-scoped configurations and resources, such as Compute Configs and Settings.

tip

To reuse certain Project configurations in another Project, change the namespace referenced in the resource's manifest to point to the other Project's namespace. For details, refer to the YAML Workflow - Enablement section.

Prerequisites

  • A user with Tenant admin permissions for the Tenant the Project belongs to.

Enablement

  1. Log in to the PaletteAI Tenant the Project belongs to.

  2. From the Projects Menu in the top-left, select All Projects.

  3. From the left main menu, select Projects.

  4. Select the three-dot menu beside the applicable Project tile and Delete the Project.

Validate

  1. Log in to the PaletteAI Tenant the Project belongs to.

  2. From the Projects Menu in the top-left, select All Projects.

  3. From the left main menu, select Projects.

  4. Verify the applicable Project tile is no longer available.

Next Steps

Once you have a Project, you can create or import a Profile Bundle. Profile Bundles are used to configure reusable infrastructure stacks for Compute Pools and application stacks for App Deployments.