Topology OCM

Type: Policy

The Open Cluster Management (OCM) topology policy enables multi-cluster-aware Workload deployment by creating OCM resources on the hub: a Placement and a ManifestWorkReplicaSet that references the Placement and contains the Workload. The topology-ocm policy also supports automatic namespace creation and other workload lifecycle settings.
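
For orientation, the following is a simplified, hand-written sketch of the two hub resources the policy generates for a component. Resource names, the namespace, and the field values are illustrative placeholders; the exact output is produced by the CUE template shown later on this page.

apiVersion: cluster.open-cluster-management.io/v1beta1
kind: Placement
metadata:
  name: demo-component                      # illustrative name
  namespace: managed-cluster-set-global     # taken from the hubNamespace property
  labels:
    wl.spectrocloud.com/name: demo
    wl.spectrocloud.com/component: demo-component
spec:
  numberOfClusters: 3                       # optional cluster selection settings
---
apiVersion: work.open-cluster-management.io/v1alpha1
kind: ManifestWorkReplicaSet
metadata:
  name: demo-component                      # illustrative name
  namespace: managed-cluster-set-global
spec:
  placementRefs:
    - name: demo-component                  # targets the Placement above
      rolloutStrategy:
        type: All
        all: {}
  manifestWorkTemplate:
    workload:
      manifests: []                         # the rendered Workload manifests are injected here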

Example Usage

The following examples show different ways to use the topology-ocm policy.


Number of Clusters

In the following example, the number of clusters to target is set to 3. If the numberOfClusters property is not specified, the policy targets all clusters in the cluster sets bound to the namespace specified by the hubNamespace property.

apiVersion: spectrocloud.com/v1beta1
kind: Environment
metadata:
  name: demo
  namespace: demo-workload
spec:
  topology:
    type: topology-ocm
    properties:
      manifestWorkReplicaSet:
        manifestWorkTemplate:
          lifecycle:
            spokeNamespace:
              create: true
              orphan: false
            workload:
              orphan: false
        rolloutStrategy:
          all: {}
          type: All
      hubNamespace: managed-cluster-set-global
      placement:
        numberOfClusters: 3

warning

The environment is considered "Ready" only when its Placement reports back the same number of clusters as specified in the numberOfClusters property. Until then, the environment remains "Not Ready".
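
Readiness is reflected in the Placement status on the hub. The following is a brief sketch of the relevant status fields, assuming the standard OCM Placement status schema; values are illustrative:

status:
  numberOfSelectedClusters: 3      # must equal numberOfClusters for the environment to become Ready
  conditions:
    - type: PlacementSatisfied     # reported as "False" when fewer clusters match than requested
      status: "True"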


ClusterSets

The following example shows a cluster selection strategy that targets all clusters in the spokes cluster set. The hubNamespace property references the namespace on the hub that contains the ManagedClusterSetBinding.

apiVersion: spectrocloud.com/v1beta1
kind: Environment
metadata:
  name: demo
  namespace: demo-workload
spec:
  topology:
    type: topology-ocm
    properties:
      manifestWorkReplicaSet:
        manifestWorkTemplate:
          lifecycle:
            spokeNamespace:
              create: true
              orphan: false
            workload:
              orphan: false
        rolloutStrategy:
          all: {}
          type: All
      hubNamespace: managed-cluster-set-spokes
      placement:
        clusterSets:
          - spokes

Label Filtering

The following example shows a cluster selection strategy based on label filtering that targets managed clusters with both the environment: dev and region: south-america-dev labels. The numberOfClusters property is set to 1 to ensure that only one cluster is targeted. Note that the hubNamespace property references the namespace on the hub that contains the ManagedClusterSetBinding. The example uses managed-cluster-set-global, which makes all spoke clusters eligible for the placement if they meet the label filtering criteria.

apiVersion: spectrocloud.com/v1beta1
kind: Environment
metadata:
  name: demo
  namespace: demo-workload
spec:
  topology:
    type: topology-ocm
    properties:
      hubNamespace: managed-cluster-set-global
      manifestWorkReplicaSet:
        rolloutStrategy:
          all: {}
          type: All
      placement:
        predicates:
          - requiredClusterSelector:
              labelSelector:
                matchLabels:
                  environment: dev
                  region: south-america-dev
        numberOfClusters: 1

Parameters

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| hubNamespace | string | Yes | "managed-cluster-set-global" | Enter the hub namespace for the Placement and ManifestWorkReplicaSet. ManagedClusterSets referenced by the Placement must exist in this namespace. |
| manifestWorkReplicaSet | object | Yes |  | Defines resources and lifecycle configuration for applying workloads to selected clusters. |
| placement | object | Yes |  | Configure managed cluster selection for workload distribution. |

Template

The following tabs display the definition's CUE template and the rendered YAML. The rendered YAML is the output of the CUE template when the definition is applied to a cluster.

import (
  "encoding/yaml"
  "strings"
)

"topology-ocm": {
  description: "The OCM topology policy enables multi-cluster-aware Workload deployment via the creation of OCM resources: a Placement and a ManifestWorkReplicaSet that targets the Placement and contains the Workload."
  annotations: {}
  labels: {
    "policydefinition.spectrocloud.com/type": "topology"
  }
  attributes: {}
  type: "policy"
}

template: {
  output: {
    apiVersion: "cluster.open-cluster-management.io/v1beta1"
    kind: "Placement"
    metadata: {
      labels: {
        if parameter.placement.labels != _|_ {
          parameter.placement.labels
        }
        "wl.spectrocloud.com/name": context.workloadName
        "wl.spectrocloud.com/component": context.name
      }
      if parameter.placement.annotations != _|_ {
        annotations: parameter.placement.annotations
      }
      namespace: parameter.hubNamespace
    }
    spec: {
      if parameter.placement.clusterSets != _|_ {
        clusterSets: parameter.placement.clusterSets
      }
      if parameter.placement.numberOfClusters != _|_ {
        numberOfClusters: parameter.placement.numberOfClusters
      }
      if parameter.placement.predicates != _|_ {
        predicates: parameter.placement.predicates
      }
      if parameter.placement.decisionStrategy != _|_ {
        decisionStrategy: parameter.placement.decisionStrategy
      }
      if parameter.placement.prioritizerPolicy != _|_ {
        prioritizerPolicy: parameter.placement.prioritizerPolicy
      }
      if parameter.placement.spreadPolicy != _|_ {
        spreadPolicy: parameter.placement.spreadPolicy
      }
      if parameter.placement.tolerations != _|_ {
        tolerations: parameter.placement.tolerations
      }
    }
  }

  outputs: {
    _lifecycle: parameter.manifestWorkReplicaSet.manifestWorkTemplate.lifecycle
    _template: parameter.manifestWorkReplicaSet.manifestWorkTemplate

    // Pre-compute orphaning logic to avoid optional field reference issues
    _spokeNsOrphan: *false | bool
    if _lifecycle.spokeNamespace.orphan != _|_ {
      _spokeNsOrphan: _lifecycle.spokeNamespace.orphan
    }
    _workloadOrphan: *false | bool
    if _lifecycle.workload.orphan != _|_ {
      _workloadOrphan: _lifecycle.workload.orphan
    }
    _spokeNsCreate: *true | bool
    if _lifecycle.spokeNamespace.create != _|_ {
      _spokeNsCreate: _lifecycle.spokeNamespace.create
    }
    _needsOrphaning: _workloadOrphan == true || _spokeNsOrphan == true
    _orphanNamespace: _spokeNsOrphan == true || (_spokeNsCreate == true && _workloadOrphan == true)

    manifestWorkReplicaSet: {
      apiVersion: "work.open-cluster-management.io/v1alpha1"
      kind: "ManifestWorkReplicaSet"
      metadata: {
        labels: {
          "wl.spectrocloud.com/name": context.workloadName
          "wl.spectrocloud.com/component": context.name
        }
        namespace: parameter.hubNamespace
      }
      spec: {
        cascadeDeletionPolicy: "Foreground"
        manifestWorkTemplate: {
          manifestConfigs: [
            {
              resourceIdentifier: {
                group: "spectrocloud.com"
                name: context.workloadName
                namespace: context.namespace
                resource: "workloads"
              }
              feedbackRules: [
                {
                  type: "JSONPaths"
                  jsonPaths: [
                    {
                      name: "phase"
                      path: ".status.phase"
                    },
                    {
                      name: "priorityPhases"
                      path: ".status.priorityPhases[*]"
                    },
                    {
                      name: "conditionReasons"
                      path: ".status.conditions[*].reason"
                    },
                    {
                      name: "conditionStatuses"
                      path: ".status.conditions[*].status"
                    },
                    {
                      name: "conditionTypes"
                      path: ".status.conditions[*].type"
                    },
                    {
                      name: "conditionTransitionTimes"
                      path: ".status.conditions[*].lastTransitionTime"
                    },
                    {
                      name: "components"
                      path: ".status.components"
                    },
                    {
                      name: "definitionOutputs"
                      path: ".status.definitionOutputs[*]"
                    },
                    {
                      name: "objectOutputs"
                      path: ".status.objectOutputs"
                    },
                  ]
                },
              ]
            },
            if _template.manifestConfigs != _|_ for mc in _template.manifestConfigs {
              mc
            },
          ]
          if _needsOrphaning {
            deleteOption: {
              propagationPolicy: "SelectivelyOrphan"
              selectivelyOrphans: orphaningRules: [
                if _orphanNamespace {
                  {
                    name: context.namespace
                    resource: "namespaces"
                  }
                },
                if _workloadOrphan == true {
                  group: "spectrocloud.com"
                  name: context.workloadName
                  namespace: context.namespace
                  resource: "workloads"
                },
              ]
            }
          }
          if _template.executor != _|_ {
            executor: _template.executor
          }
          if _lifecycle.spokeNamespace.create == false {
            workload: manifests: [for v in context.workloadYamls {yaml.Unmarshal(v)}]
          }
          if _lifecycle.spokeNamespace.create == true {
            workload: manifests: [
              {#ns},
              for v in context.workloadYamls {
                yaml.Unmarshal(v)
              },
            ]
          }
        }
        placementRefs: [{
          name: context.name
          rolloutStrategy: parameter.manifestWorkReplicaSet.rolloutStrategy
        }]
      }
    }
  }

  #ns: {
    apiVersion: *"v1" | string
    kind: *"Namespace" | string
    metadata: {
      // +usage=Enter namespace name
      name: *context.namespace | string
      if context.workloadNamespaceAnnotations != _|_ {
        annotations: context.workloadNamespaceAnnotations
      }
      if context.workloadNamespaceLabels != _|_ {
        labels: context.workloadNamespaceLabels
      }
    }
  }

  #matchExpression: {
    // +usage=Enter label key or cluster claim key for selector
    key: string

    // +usage=Select how key relates to a set of values
    operator: *"In" | "NotIn" | "Exists" | "DoesNotExist"

    // +usage=Add string values for the selector [[ShowIf: .operator=="In" || .operator=="NotIn"]] [[RequiredIf: .operator=="In" || .operator=="NotIn"]] [[Tooltip: The array will be fully replaced during a strategic merge patch.]]
    values?: [...string]
  }

  #clusterSelector: {
    // +usage=Configure cluster claim selector for ManagedClusters
    claimSelector?: {
      // +usage=Add cluster claim selector requirements [[ListType: LogicalAND]] [[Tooltip: All requirements must be satisfied]] [[ItemTitleFrom: $.key]]
      matchExpressions: [...#matchExpression]
    }

    // +usage=Configure label selector for selecting ManagedClusters
    labelSelector?: {
      // +usage=Map key-value pairs to match cluster labels [[Tooltip: Each pair is equivalent to a matchExpression with operator "In" and a single value. All conditions are ANDed.]]
      matchLabels?: [string]: string

      // +usage=Add label selector requirements [[ListType: LogicalAND]] [[Tooltip: All requirements must be satisfied]] [[ItemTitleFrom: $.key]]
      matchExpressions?: [...#matchExpression]
    }
  }

  // +usage=Define rules to select a set of ManagedClusters for workload placement [[Tooltip: Placements select ManagedClusters from ManagedClusterSets bound to the placement's namespace. Selection involves: (1) clusters are registered as ManagedClusters; (2) grouped into ManagedClusterSets; (3) cluster sets are bound to namespaces; (4) the Placement selects clusters from those sets using label and claim selectors; (5) PlacementDecisions are created to record selected clusters. A cluster must be part of a bound ManagedClusterSet to be considered. Workloads may be deployed to selected clusters and are removed if the cluster is no longer in PlacementDecisions.]]
  #placement: {
    // +usage=Add annotations for the Placement resource
    annotations?: [string]: string

    // +usage=Add labels for the Placement resource
    labels?: [string]: string

    // +usage=Select ManagedClusterSets to use for cluster selection [[Tooltip: If this list is empty, all ManagedClusterSets bound to the placement's namespace are used. If you specify values, only ManagedClusters from the intersection of this list and the bound sets will be considered.]]
    clusterSets?: *["global"] | [...string]

    // +usage=Set desired number of clusters to select from matching clusters [[EmptyTip: Selects all matching clusters]] [[Tooltip: If more clusters match, a random subset is selected. If an equal number match, all are selected. If fewer match, all are selected and PlacementConditionSatisfied is set to False]]
    numberOfClusters?: int

    // +usage=Define cluster selection predicates [[ListType: LogicalOR]] [[Tooltip: One or more predicates may match; clusters satisfying any predicate will be selected]]
    predicates?: [...{
      // +usage=Configure required cluster selector using cluster claim and/or label [[Tooltip: Selects clusters using claim and/or label selectors. If specified: 1. ManagedClusters that do not match the selector will not be selected by this ClusterPredicate; 2. If a selected ManagedCluster later stops matching (e.g., due to an update), it will eventually be removed from placement decisions; 3. If an unselected ManagedCluster starts matching, it may be selected or considered for selection if NumberOfClusters is specified.]]
      requiredClusterSelector: #clusterSelector
    }]

    // +usage=Configure decision grouping strategy [[Tooltip: Divides placement decisions into groups, each with a defined number of clusters]]
    decisionStrategy?: {
      // +usage=Define strategy to divide selected clusters into decision groups
      groupStrategy: {
        // +usage=Define the number or percentage of clusters per decision group [[Tooltip: If a number is provided, clusters are split into groups with that maximum size. If a percentage is used, group size is based on total selected clusters (e.g., 20% of 100 = 5 groups of 20). Default is one group. If a predefined decision group exceeds this limit, it is split into multiple groups using the same GroupName and different GroupIndex values.]]
        clustersPerDecisionGroup: string

        // +usage=Define a list of predefined decision groups for placing selected clusters [[Tooltip: Clusters listed here are assigned to these groups first. Remaining clusters are divided into additional groups. Each group must not exceed ClustersPerDecisionGroup.]] [[ItemTitleFrom: $.groupName]]
        decisionGroups: [...{
          // +usage=Specify the group name to set as the value for the label key cluster.open-cluster-management.io/decision-group-name on PlacementDecisions
          groupName: string

          // +usage=Configure cluster selector using claims or labels [[Tooltip: Selects a subset of clusters using claim and/or label selectors]]
          groupClusterSelector: #clusterSelector
        }]
      }
    }

    // +usage=Configure how clusters are ranked using prioritizers [[Tooltip: If this field is unset, default prioritizers (Balance and Steady) are used in Additive mode with weight 1. All other built-in prioritizers are disabled unless explicitly configured.]]
    prioritizerPolicy?: {
      // +usage=Define one or more prioritizers to score and rank clusters [[Tooltip: Each configuration specifies a prioritizer (built-in or add-on) and its weight, which influences the final cluster score]]
      configurations: [...{
        // +usage=Configure prioritizer score source and type
        scoreCoordinate: {
          // +usage=Configure AddOn prioritizer settings [[ShowIf: .type=="AddOn"]]
          addOn?: {
            // +usage=Enter the resource name of the AddOnPlacementScore [[Tooltip: The placement prioritizer uses this name to select the AddOnPlacementScore resource]]
            resourceName: string

            // +usage=Specify the score name to use from AddOnPlacementScore [[Tooltip: AddOnPlacementScore contains a list of score names and values; this field selects which score the prioritizer should use]]
            scoreName: string
          }
          // +usage=Select the name of a built-in prioritizer [[ShowIf: .type=="BuiltIn"]] [[Tooltip: Valid values include: Balance – balances decisions across clusters; Steady – keeps existing decisions stable; ResourceAllocatableCPU and ResourceAllocatableMemory – sort clusters based on available resources; Spread – distributes workloads evenly across topologies.]]
          builtIn?: string

          // +usage=Select the type of prioritizer score [[Tooltip: BuiltIn uses predefined prioritizers like Balance or Steady. AddOn uses custom prioritizers from AddOnPlacementScore resources.]]
type: *"BuiltIn" | "Addon" | ""
        }

        // +usage=Set the weight of the prioritizer (-10 to 10) [[Tooltip: Prioritizer scores range from -100 to 100. The final cluster score is computed as the weighted sum across all prioritizers, sum(weight * prioritizer_score). Higher weights increase influence; 0 disables the prioritizer; negative weights favor lower scores.]]
        weight: int
      }]

      // +usage=Select the prioritization mode: "Additive" or "Exact" [[Tooltip: "" defaults to Additive. In Additive mode, the prioritizers you specify are used along with built-in Steady and Balance (if not already listed), each with default weight 1. Other built-in prioritizers are disabled unless explicitly configured. In Exact mode, only the prioritizers you list are used. Use Exact for full control and to avoid changes across releases.]]
      mode: *"Additive" | "Exact" | ""
    }

    // +usage=Configure how placement decisions are distributed across a set of ManagedClusters
    spreadPolicy?: {
      // +usage=Defines how the placement decision should be distributed among a set of ManagedClusters [[Tooltip: SpreadConstraints are evaluated in order. The scheduler first considers constraints with a lower index before evaluating those with a higher index.]] [[ItemTitleFrom: $.topologyKey]]
      spreadConstraints: [...{
        // +usage=Set the degree to which the workload may be unevenly distributed [[Tooltip: MaxSkew is the maximum difference between the number of selected ManagedClusters in one topology domain and the global minimum across all domains for the same topologyKey. Minimum value is 1. Default is 1.]]
        maxSkew: *1 | int

        // +usage=Enter topology key (label key or cluster claim name)
        topologyKey: string

        // +usage=Select topology key type
        topologyKeyType: *"Label" | "Claim"

        // +usage=Select action when MaxSkew cannot be satisfied [[Tooltip: DoNotSchedule - instructs the scheduler not to schedule more ManagedClusters when MaxSkew is not satisfied. ScheduleAnyway - instructs the scheduler to keep scheduling even if MaxSkew is not satisfied.]]
        whenUnsatisfiable: *"ScheduleAnyway" | "DoNotSchedule"
      }]
    }

    // +usage=Configure cluster taint tolerations [[Tooltip: Tolerations applied to placements allow (but don't require) selecting clusters with matching taints]]
    tolerations?: [...{
      // +usage=Select taint effect to tolerate [[EmptyTip: tolerates all effects]]
      effect?: "NoSelect" | "PreferNoSelect" | "NoSelectIfNew"

      // +usage=Enter taint key to tolerate [[EmptyTip: match all taint keys]] [[EmptyIf: .operator=="Exists"]] [[Tooltip: Must be empty if operator is "Exists". If key is empty and operator is "Exists", the toleration matches all keys and values.]]
      key?: string

      // +usage=Select toleration operator [[Tooltip: Defines how the key relates to the value. Equal requires an exact match. Exists matches any value for the given key. If key is empty with Exists, all taints are tolerated.]]
      operator: *"Equal" | "Exists"

      // +usage=Enter taint value to match [[ShowIf: .operator=="Equal"]]
      value?: string

      // +usage=Set toleration duration in seconds [[ShowIf: .effect=="NoSelect" || .effect=="PreferNoSelect"]] [[EmptyTip: tolerates forever]] [[Tooltip: Leave empty to tolerate taint indefinitely. Duration is counted from the taint's TimeAdded field, not from when the toleration is set or the cluster is scheduled.]]
      tolerationSeconds?: int
    }]
  }

  #lifecycle: {
    // +usage=Configure workload and namespace lifecycle on spoke clusters [[Tooltip: Controls creation and deletion behavior for the workload and its namespace on spoke clusters]]
    spokeNamespace: {
      // +usage=Set whether to create workload namespace on spoke clusters [[Tooltip: If false, namespace must already exist on spoke clusters or deployment will fail. When true, Mural includes the namespace in ManifestWork for automatic management and the orphan setting controls deletion behavior. When false, Mural excludes the namespace from ManifestWork, preventing accidental deletion regardless of orphan setting]]
      create: *true | bool

      // +usage=Set whether to keep the auto-created namespace on the spoke cluster after the workload is deleted from the hub [[ShowIf: .create==true]] [[RequiredIf: .create==true]] [[Tooltip: If true, namespace will remain after workload deletion. If false, the automatically created namespace will be deleted when the workload deployment is deleted on the hub cluster]]
      orphan?: *false | bool
    }

    // +usage=Configure workload lifecycle
    workload: {
      // +usage=Set whether to retain workloads on spoke after hub deletion [[Tooltip: If true, the workload will not be deleted from the spoke cluster when removed from the hub. The workload namespace will also be retained, regardless of spokeNamespace.orphan.]]
      orphan: *false | bool
    }
  }

  #resourceId: {
    // +usage=Enter API group name of the Kubernetes resource [[EmptyTip: Core group]]
    group?: string

    // +usage=Enter the name of the Kubernetes resource
    name: string

    // +usage=Enter the namespace of the Kubernetes resource [[EmptyTip: Cluster-scoped resource]]
    namespace?: string

    // +usage=Enter the Kubernetes resource type
    resource: string
  }

  #feedbackRule: {
    // +usage=Define JSON paths to retrieve selected fields from the resource's status [[RequiredIf: .type=="JSONPaths"]] [[ShowIf: .type=="JSONPaths"]] [[ItemTitleFrom: $.name]]
    jsonPaths?: [...{
      // +usage=Enter alias name for this feedback field
      name: string

      // +usage=Enter JSON path to a field under status [[Tooltip: The path must resolve to a valid field. If it points to a non-existent field, no feedback is reported and the StatusFeedbackSynced condition is set to false. See kubectl JSONPath documentation (https://kubernetes.io/docs/reference/kubectl/jsonpath).]]
      path: string

      // +usage=Enter the API version of the Kubernetes resource [[EmptyTip: semantically latest API version]]
      version?: string
    }]

    // +usage=Select the type of feedback to collect from the resource status [[Tooltip: WellKnownStatus publishes common status fields for specific resource types, including Kubernetes resources like Deployment, Job, Pod, and DaemonSet, and Open Cluster Management resources like ManifestWork. If the expected status fields are not present, no values will be reported. JSONPaths collects and publishes status fields based on one or more specified JSON paths.]]
    type: *"WellKnownStatus" | "JSONPaths"
  }

  #rolloutConfig: {
    // +usage=Set maximum failure threshold as a percentage or number (e.g., 5, 25%, 100) [[Tooltip: Rollout stops when the number of failed clusters meets or exceeds this threshold. For Progressive: threshold is based on total clusters. For ProgressivePerGroup: based on current group size. This does not apply to MandatoryDecisionGroups, which always tolerate zero failures. A failure means the cluster reaches failed or timeout status (i.e., does not become successful within the ProgressDeadline). The default is that no failures are tolerated. Pattern: ^((100|[0-9]{1,2})%|[0-9]+)$]]
    maxFailures: *"0" | =~"^((100|[0-9]{1,2})%|[0-9]+)$"

    // +usage=Set minimum success duration before proceeding (e.g., 2h, 90m, 360s) [[Tooltip: "Soak time" — minimum wait from the start of each rollout before moving to the next phase. Applies only if a successful state is reached and MaxFailures is not breached. Default is 0, meaning proceed immediately after a successful state is reached. Pattern: ^(([0-9])+[h|m|s])$]]
    minSuccessTime: *"0" | =~"^(([0-9])+[h|m|s])$"

    // +usage=Set progress timeout duration (e.g., 2h, 90m, 360s) [[Tooltip: Defines how long workload applier controller will wait for the workload to reach a successful state in the cluster. If the workload does not reach a successful state after ProgressDeadline, will stop waiting and workload will be treated as "timeout" and be counted into MaxFailures. Once the MaxFailures is breached, the rollout will stop. ProgressDeadline default value is "None", meaning the workload applier will wait for a successful state indefinitely. Pattern: ^(([0-9])+[h|m|s])|None$]]
    progressDeadline: *"None" | =~"^(([0-9])+[h|m|s])|None$"
  }

  #mandatoryDecisionGroup: {
    // +usage=Enter decision group index [[Tooltip: Must match an existing placementDecision's label value for key cluster.open-cluster-management.io/decision-group-index]]
    groupIndex: int

    // +usage=Enter decision group name [[Tooltip: Must match an existing placementDecision's label value for key cluster.open-cluster-management.io/decision-group-name]]
    groupName: string
  }

  // +usage=Configure ManifestWork replication across clusters [[Tooltip: This resource creates ManifestWorks in the namespaces of selected ManagedClusters based on PlacementDecisions. When the ManifestWorkReplicaSet is deleted, the associated ManifestWorks are also removed. It continuously updates the per-cluster ManifestWorks to reflect changes in PlacementDecisions. Supports 0 to many ManagedClusters.]]
  #manifestWorkReplicaSet: {
    // +usage=Configure the ManifestWorkSpec used to generate a ManifestWork for each cluster [[Tooltip: This template defines the structure of each per-cluster ManifestWork based on the ManifestWorkSpec schema]]
    manifestWorkTemplate: {
      // +usage=Configure workload and namespace lifecycle on spoke clusters [[Tooltip: Controls creation and deletion behavior for the workload and its namespace on spoke clusters]]
      lifecycle: #lifecycle

      // +usage=Configure executor settings for the work agent [[Tooltip: The executor identity allows the work agent to perform pre-request processing, such as verifying it has permission to apply workloads to the local managed cluster. If not set, no additional actions are performed before applying resources (supported for backward compatibility).]]
      executor?: {
        // +usage=Configure the subject identity used by the work agent to apply resources to the local cluster
        subject: {
          // +usage=Configure the service account used by the work agent [[ShowIf: .type=="ServiceAccount"]] [[RequiredIf: .type=="ServiceAccount"]]
          serviceAccount?: {
            // +usage=Enter the name of the service account [[Tooltip: Must consist of lower case alphanumeric characters, hyphens, or periods. Must start and end with an alphanumeric character. Maximum 253 characters. Pattern: ^([a-z0-9]([-a-z0-9]*[a-z0-9])?(\\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*)$]]
            name: =~"^([a-z0-9]([-a-z0-9]*[a-z0-9])?(\\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*)$" & strings.MinRunes(1) & strings.MaxRunes(253)

            // +usage=Enter the namespace of the service account [[Tooltip: Must consist of lower case alphanumeric characters, hyphens, or periods. Must start and end with an alphanumeric character. Maximum 253 characters. Pattern: ^([a-z0-9]([-a-z0-9]*[a-z0-9])?(\\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*)$]]
            namespace: =~"^([a-z0-9]([-a-z0-9]*[a-z0-9])?(\\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*)$" & strings.MinRunes(1) & strings.MaxRunes(253)
          }

          // +usage=Select subject identity type
          type: "ServiceAccount"
        }
      }

      // +usage=Configure manifest resource settings [[Tooltip: Defines feedback rules and update strategies for workload resources. Each configuration must specify either feedbackRules, updateStrategy, or both to be meaningful.]] [[ItemTitleFrom: $.resourceIdentifier.name]]
      manifestConfigs?: [...{
        // +usage=Specify the group, resource, name, and namespace of a resource [[Tooltip: Rules will only be executed if the resource was created by this manifest work]]
        resourceIdentifier: #resourceId

        // +usage=Define resource status feedback rules [[Tooltip: Feedback rules determine which status fields are reported from the managed resource. If not set, no feedback will be collected.]]
        feedbackRules?: [...#feedbackRule]

        // +usage=Configure how this manifest should be updated on the cluster [[Tooltip: If not set, no update strategy will be applied]]
        updateStrategy?: {
          // +usage=Configure server-side apply settings [[ShowIf: .type=="ServerSideApply"]]
          serverSideApply?: {
            // +usage=Enter the name of the field manager used to apply the resource [[Tooltip: Defaults to work-agent, but can be any name with work-agent as the prefix. Pattern: ^work-agent(-[a-zA-Z0-9]+)*$]]
            fieldManager: *"work-agent" | =~"^work-agent(-[a-zA-Z0-9]+)*$"

            // +usage=Set whether to force apply the resource
            force: bool
          }

          // +usage=Select update strategy type [[Tooltip: Update – Updates the resource using a standard update call. CreateOnly – Creates the resource once; no updates after creation. ReadOnly – Checks only for the existence of the resource via metadata; does not create or update. Feedback via statusFeedbackRules is still allowed. ServerSideApply – Uses server-side apply with work-controller as the field manager. On conflict, the Applied condition will be False with reason ApplyConflict.]]
          type: *"Update" | "CreateOnly" | "ReadOnly" | "ServerSideApply"
        }
      }]
    }

    // +usage=Configure workload rollout strategy
    rolloutStrategy: {
      // +usage=Configure rollout for all clusters simultaneously [[ShowIf: .type=="All"]]
      all?: {
        // +usage=Set progress timeout duration (e.g., 2h, 90m, 360s) [[Tooltip: Defines how long workload applier controller will wait for the workload to reach a successful state in the cluster. If the workload does not reach a successful state after ProgressDeadline, will stop waiting and workload will be treated as "timeout" and be counted into MaxFailures. Once the MaxFailures is breached, the rollout will stop. ProgressDeadline default value is "None", meaning the workload applier will wait for a successful state indefinitely. Pattern: ^(([0-9])+[h|m|s])|None$]]
        progressDeadline: *"None" | =~"^(([0-9])+[h|m|s])|None$"
      }

      // +usage=Configure progressive rollout strategy [[ShowIf: .type=="Progressive"]]
      progressive?: {
        #rolloutConfig

        // +usage=Define mandatory decision groups that must succeed first [[Tooltip: These groups are applied before others. If they don't reach a successful state, the rollout fails. GroupName or GroupIndex must match the decisionGroups defined in the placement's decisionStrategy.]] [[ItemTitleFrom: $.groupName]]
        mandatoryDecisionGroups: [...#mandatoryDecisionGroup]

        // +usage=Set maximum concurrent cluster deployments as a number or percentage (e.g., 5, 25%, 100) [[DefaultFrom: parameter.placement.decisionStrategy.groupStrategy.clustersPerDecisionGroup]] [[Tooltip: Maximum number of clusters to deploy the workload to at the same time. If not set, the value is determined from clustersPerDecisionGroup in the placement's DecisionStrategy. Pattern: ^((100|[0-9]{1,2})%|[0-9]+)$]]
        maxConcurrency?: =~"^((100|[0-9]{1,2})%|[0-9]+)$"
      }

      // +usage=Configure progressive per group rollout strategy [[ShowIf: .type=="ProgressivePerGroup"]]
      progressivePerGroup?: {
        #rolloutConfig

        // +usage=Specify decision groups that must succeed before others [[Tooltip: These groups are rolled out first. If any group fails to reach a successful state, the rollout is halted. GroupName or GroupIndex must match decisionGroups defined in the placement's decisionStrategy.]] [[ItemTitleFrom: $.groupName]]
        mandatoryDecisionGroups: [...#mandatoryDecisionGroup]
      }

      // +usage=Select rollout strategy type [[Tooltip: All: Apply workload to all clusters at once; Progressive: Apply workload progressively per cluster; ProgressivePerGroup: Apply workload to clusters progressively per group]]
      type: *"All" | "Progressive" | "ProgressivePerGroup"
    }
  }

  parameter: {
    // +usage=Configure managed cluster selection for workload distribution
    placement: #placement

    // +usage=Defines resources and lifecycle configuration for applying workloads to selected clusters
    manifestWorkReplicaSet: #manifestWorkReplicaSet

    // +usage=Enter hub namespace for Placement and ManifestWorkReplicaSet [[Tooltip: ManagedClusterSets referenced by the Placement must exist in this namespace]]
    hubNamespace: *"managed-cluster-set-global" | string
  }
}