General Resource Requirements for Kubernetes and Default Port Configuration

The following sections outline the various resources required to run Solace Cloud in a Customer-Controlled Cluster. The resources consumed by event broker services are provided by worker nodes, which are part of the Kubernetes cluster. Your Kubernetes cluster must provide at least enough resources to host an Enterprise 1K event broker service and will require additional capacity if you plan to deploy larger service classes.

The required resources for each pod must be sufficient for the type of service it is hosting:

  • The Enterprise 100 Standalone and Standalone service classes use a single messaging node, which requires one pod. Enterprise 250 and larger service classes are HA groups that require two messaging pods (primary and backup) and one monitoring pod. Standalone event broker services are available only after they have been added as a service class to your Service Limits; to use them, contact Solace to request a limit change.

For more information about the general resource requirements for your Solace Cloud deployment in Kubernetes and the default port configurations, see:

If you are deploying Solace Cloud to an on-premises Kubernetes cluster, you must review the Resource Requirements for Kubernetes for On-Premises Deployments in Customer-Controlled Clusters.

You can deploy additional Solace services alongside your event broker services, including:

You must make sure your Kubernetes cluster meets the additional resource requirements for these features. For more information, see Resource Requirements for Additional Solace Service Deployments.

After you have reviewed the general resource requirements for Solace Cloud, you can view more detailed instructions on deploying Solace Cloud to specific Kubernetes platforms:

Mission Control Agent Pod Requirements

The Mission Control Agent has the following resource requirements:

Type      Request     Limit
CPU       750m        750m
Memory    1024 MiB    1024 MiB
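In a Kubernetes pod specification, these values correspond to a resources stanza like the following sketch. This is for illustration only; the actual Mission Control Agent manifest is managed by Solace (note that 1024 MiB equals 1Gi):

```yaml
# Illustrative resources stanza matching the table above
resources:
  requests:
    cpu: 750m      # 0.75 CPU cores
    memory: 1Gi    # 1024 MiB
  limits:
    cpu: 750m
    memory: 1Gi
```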

Approximately once per week, Solace upgrades the Mission Control Agent using a rolling upgrade. During the upgrade, the operation requires double the resources listed above to run successfully. In an auto-scaling environment, Kubernetes provides these resources as required. In a non-auto-scaling environment, you must account for the additional resources required during the upgrade, including ensuring that any resource quotas applied to the namespace allow for the rolling upgrade.
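If you apply a ResourceQuota to the agent's namespace, it must leave headroom for the temporary second pod created during the rolling upgrade. A minimal sketch, doubling the request and limit values from the table above (the name and namespace are illustrative, not Solace-defined):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: mca-quota                    # illustrative name
  namespace: mission-control-agent   # illustrative namespace
spec:
  hard:
    requests.cpu: 1500m     # 2 x 750m, to accommodate the rolling upgrade
    requests.memory: 2Gi    # 2 x 1024 MiB
    limits.cpu: 1500m
    limits.memory: 2Gi
```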

Multiple Mission Control Agents per Kubernetes Cluster

You can run more than one Mission Control Agent in a Kubernetes cluster. This is useful when you want a single cluster to serve multiple environments, such as development, QA, and production: installing a Mission Control Agent for each environment lets those environments reside together in the same cluster.

The Mission Control Agent requires a dedicated namespace. Multiple Mission Control Agents require multiple namespaces.
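Because each agent needs its own dedicated namespace, a multi-environment setup might be provisioned with one Namespace manifest per agent, along these lines (the namespace names are illustrative assumptions, not Solace conventions):

```yaml
# One dedicated namespace per Mission Control Agent
apiVersion: v1
kind: Namespace
metadata:
  name: solace-agent-dev    # illustrative: agent for the development environment
---
apiVersion: v1
kind: Namespace
metadata:
  name: solace-agent-prod   # illustrative: agent for the production environment
```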

Each Mission Control Agent represents one data center in Solace Cloud, which means that a Kubernetes cluster with multiple Mission Control Agents hosts multiple data centers from the Solace Cloud Console point of view. The worker nodes are shared among these Mission Control Agents, so they must provide enough resources to schedule all the services created by the different Mission Control Agents. When worker nodes are sized to run multiple software broker pods, pods from different Mission Control Agents can also be scheduled on the same node.