Architecture

PaletteAI abstracts away the complexity of deploying AI and ML application stacks on Kubernetes. Built on established orchestration technologies such as Kubernetes and Flux, PaletteAI enables data science teams to deploy and manage their own AI and ML application stacks while platform engineering teams retain control over infrastructure and security.

Hub and Spoke Model

PaletteAI uses a hub-spoke architecture to separate the control plane from the data plane. The hub cluster is where you manage and configure applications. Spoke clusters are where your AI/ML applications actually run. This separation allows a single control plane to orchestrate workloads across many environments.
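
The sketch below is a purely illustrative Python model of that separation, not PaletteAI's API: the names `HubCluster`, `SpokeCluster`, and `schedule` are hypothetical, and it captures only the idea that the hub records intent while each spoke runs the workloads assigned to it.

```python
from dataclasses import dataclass, field


@dataclass
class SpokeCluster:
    """A data-plane cluster that actually runs AI/ML workloads."""
    name: str
    workloads: list[str] = field(default_factory=list)


@dataclass
class HubCluster:
    """The control-plane cluster: holds configuration and decides what runs where."""
    spokes: dict[str, SpokeCluster] = field(default_factory=dict)

    def register_spoke(self, spoke: SpokeCluster) -> None:
        # The hub tracks every spoke it manages.
        self.spokes[spoke.name] = spoke

    def schedule(self, workload: str, target: str) -> None:
        # The hub only records intent; the spoke is responsible for running it.
        self.spokes[target].workloads.append(workload)


hub = HubCluster()
hub.register_spoke(SpokeCluster("gpu-cluster-east"))
hub.schedule("llm-inference", target="gpu-cluster-east")
print(hub.spokes["gpu-cluster-east"].workloads)  # ['llm-inference']
```

In this model, a single hub can register any number of spokes, which is the property that lets one control plane orchestrate workloads across many environments.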

OCI Registries

PaletteAI uses OCI (Open Container Initiative) registries to store and distribute workload artifacts between hub and spoke clusters. When you deploy an application using the App Deployment workflow, PaletteAI renders your Workload Profile into Kubernetes manifests, packages them as OCI artifacts, and stores them in a registry. Flux controllers running on each spoke cluster then pull these artifacts and apply them to the cluster.
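
To make the render, package, and pull sequence concrete, here is a toy Python sketch of the flow. The in-memory `registry` dictionary, artifact layout, and function names are invented for illustration only and do not reflect PaletteAI's Workload Profile format or Flux's real OCI artifact schema.

```python
import hashlib


def render_workload_profile(profile: dict) -> list[str]:
    """Render a workload profile into plain Kubernetes manifests (YAML strings)."""
    return [
        f"apiVersion: apps/v1\nkind: Deployment\nmetadata:\n  name: {profile['name']}\n"
        f"spec:\n  replicas: {profile.get('replicas', 1)}"
    ]


def package_as_oci_artifact(manifests: list[str]) -> dict:
    """Bundle manifests into a content-addressed blob, as an OCI registry would store it."""
    blob = "\n---\n".join(manifests).encode()
    return {"digest": "sha256:" + hashlib.sha256(blob).hexdigest(), "blob": blob}


# Stand-in for an OCI registry, keyed by tag.
registry: dict[str, dict] = {}

# Hub side: render the profile and push the resulting artifact.
artifact = package_as_oci_artifact(render_workload_profile({"name": "notebook", "replicas": 2}))
registry["workloads/notebook:v1"] = artifact

# Spoke side: a Flux-like controller pulls the artifact and applies its manifests.
pulled = registry["workloads/notebook:v1"]
for manifest in pulled["blob"].decode().split("\n---\n"):
    print("applying:\n" + manifest)
```

The content-addressed digest is what lets a spoke-side controller detect that a new artifact version is available and reconcile the cluster toward it.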