diff --git a/.prow.yaml b/.prow.yaml index 7ac9b0d80..053287fe8 100644 --- a/.prow.yaml +++ b/.prow.yaml @@ -56,6 +56,6 @@ presubmits: resources: requests: cpu: 200m - memory: 128Mi + memory: 512Mi limits: - memory: 1Gi + memory: 2Gi diff --git a/content/kubermatic-virtualization/_index.md b/content/kubermatic-virtualization/_index.md new file mode 100644 index 000000000..b9b17e94d --- /dev/null +++ b/content/kubermatic-virtualization/_index.md @@ -0,0 +1,7 @@ ++++ +title = "Kubermatic Virtualization Docs" +description = "Seamlessly modernize your infrastructure by building your private cloud entirely with Kubernetes" +sitemapexclude = true ++++ + +Seamlessly modernize your infrastructure by building your private cloud entirely with Kubernetes \ No newline at end of file diff --git a/content/kubermatic-virtualization/main/_index.en.md b/content/kubermatic-virtualization/main/_index.en.md new file mode 100644 index 000000000..81ee5fd83 --- /dev/null +++ b/content/kubermatic-virtualization/main/_index.en.md @@ -0,0 +1,32 @@ ++++ +title = "" +date = 2025-07-18T16:06:34+02:00 ++++ + +## What is Kubermatic Virtualization (Kube-V)? +Kubermatic Virtualization (Kube-V) provides a unified platform that enables organizations to seamlessly orchestrate and manage both traditional virtual machines (VMs) and modern containerized applications. + +It extends the powerful automation and operational benefits of Kubernetes to your VM-based workloads, allowing for a more consistent and efficient approach to infrastructure management. + +Kubermatic Virtualization leverages Kubernetes-native management by unifying VM and container orchestration as it integrates virtual machines (VMs) directly into Kubernetes as native, first-class objects by managing, scaling, and deploying VMs using the same familiar Kubernetes tools, APIs, and workflows you already use for your containerized applications. +## Features +Kubermatic Virtualization offers a comprehensive set of features designed to modernize infrastructure and streamline operations by converging virtual machine and container management. + +### Streamlined Transition and Unified Control + +* Effortless Migration: Tools are provided to simplify the migration of existing VMs from diverse environments to the unified platform, making infrastructure modernization more accessible. +* Centralized Operations: Gain single-pane-of-glass management for the entire lifecycle of both VMs and containers. This includes everything from creation, networking, and storage to scaling and monitoring, all accessible from a centralized interface or command-line tools. + +### Infrastructure Modernization and Efficiency + +* Gradual Modernization Path: Integrate VMs into a cloud-native environment, offering a practical pathway to modernize legacy applications without the immediate need for extensive refactoring into containers. You can run new containerized applications alongside existing virtualized ones. +* Optimized Resource Use: By running VMs and containers on the same underlying physical infrastructure, organizations can achieve better hardware resource utilization and significantly reduce operational overhead. + +### Enhanced Development and Reliability + +* Improved Developer Experience: Developers can leverage familiar, native Kubernetes tools and workflows for managing both VMs and containers, which minimizes learning curves and speeds up development cycles. 
* Automated Workflows (CI/CD): Integrate VMs seamlessly into Kubernetes-native CI/CD pipelines, enabling automated testing and deployment processes.
* Built-in Resilience: Benefit from the platform's inherent high availability and fault tolerance features, including automated restarts and live migration of VMs between nodes, ensuring continuous application uptime.
* Integrated Networking and Storage: VMs natively use the platform's software-defined networking (SDN) and storage capabilities, providing consistent network policies, enhanced security, and streamlined storage management.

See [kubermatic.com](https://www.kubermatic.com/).
diff --git a/content/kubermatic-virtualization/main/architecture/_index.en.md b/content/kubermatic-virtualization/main/architecture/_index.en.md
new file mode 100644
index 000000000..7f70fdd10
--- /dev/null
+++ b/content/kubermatic-virtualization/main/architecture/_index.en.md
@@ -0,0 +1,38 @@
+++
title = "Architecture"
date = 2025-07-18T16:06:34+02:00
weight = 5

+++

## Architecture Overview
Kubermatic-Virtualization (Kube-V) is an advanced platform engineered to construct private cloud infrastructures founded
entirely on Kubernetes. Its core design principle is the seamless integration of Kubernetes-native workloads (containers)
and traditional virtualized workloads (Virtual Machines - VMs) under a unified management umbrella. Kube-V achieves this
by building upon Kubernetes as its foundational layer and incorporating KubeVirt to orchestrate and manage VMs alongside
containerized applications.

Here's a breakdown of the architecture and how these components interact:
### Host Nodes
Host nodes can run any popular Linux-based operating system, such as Ubuntu or Rocky Linux, with (nested) virtualization
enabled so that KVM-based virtual machines can be executed.

### Kubernetes
The foundation, providing the orchestration, scheduling, and management plane for all workloads, along with the
declarative API and custom resources (CRDs) that the other components build on.

### KubeVirt
An extension to Kubernetes that enables running and managing VMs as native Kubernetes objects. It uses Kubernetes pods
as the execution unit: each running VM is encapsulated within a standard Kubernetes pod, specifically a virt-launcher pod.

### OVN (Open Virtual Network)
The network fabric, providing advanced SDN (Software-Defined Networking) capabilities for VMs and Pods, replacing or
augmenting the default CNI (Container Network Interface). The network fabric introduces VPCs (Virtual Private Clouds) as
isolated operational environments, structured through subnets and network policies.

### CSI Drivers
A standardized interface that allows Kubernetes to connect to various storage systems, providing persistent storage for
VMs and containers. Kube-V is agnostic about the underlying storage infrastructure: any CSI driver can be used, enabling
dynamic provisioning, attachment, and management of persistent volumes for VMs and Pods.
+ +![Kubermatic-Virtualization](kube-v-architecture.png) \ No newline at end of file diff --git a/content/kubermatic-virtualization/main/architecture/compatibility/_index.en.md b/content/kubermatic-virtualization/main/architecture/compatibility/_index.en.md new file mode 100644 index 000000000..451b0ea62 --- /dev/null +++ b/content/kubermatic-virtualization/main/architecture/compatibility/_index.en.md @@ -0,0 +1,5 @@ ++++ +title = "Compatibility" +date = 2025-07-18T16:06:34+02:00 +weight = 5 ++++ diff --git a/content/kubermatic-virtualization/main/architecture/compatibility/kubev-components-versioning/_index.en.md b/content/kubermatic-virtualization/main/architecture/compatibility/kubev-components-versioning/_index.en.md new file mode 100644 index 000000000..d5c64a8a8 --- /dev/null +++ b/content/kubermatic-virtualization/main/architecture/compatibility/kubev-components-versioning/_index.en.md @@ -0,0 +1,21 @@ ++++ +title = "Kubermatic Virtualization Components" +date = 2025-07-18T16:06:34+02:00 +weight = 5 ++++ + +The following list is only applicable for the Kube-V version that is currently available. Kubermatic has a strong emphasis +on security and reliability of provided software and therefore releases updates regularly that also include component updates. + + +| Kube-V Component | Version | +|:---------------------------------:|:-------:| +| Kubernetes | v1.33.0 | +| KubeVirt | v1.5.2 | +| Containerized Data Importer (CDI) | v1.62.0 | +| KubeOVN | v1.14.4 | +| KubeOne | v1.11.1 | +| Kyverno | v1.14.4 | +| Cert Manager | v1.18.2 | +| MetalLB | v0.15.2 | + diff --git a/content/kubermatic-virtualization/main/architecture/compatibility/operating-system/_index.en.md b/content/kubermatic-virtualization/main/architecture/compatibility/operating-system/_index.en.md new file mode 100644 index 000000000..d4ea5d2a6 --- /dev/null +++ b/content/kubermatic-virtualization/main/architecture/compatibility/operating-system/_index.en.md @@ -0,0 +1,21 @@ ++++ +title = "Operating Systems" +date = 2025-07-18T16:06:34+02:00 +weight = 3 ++++ + +## Supported Operating Systems + +The following operating systems are supported: + +* Ubuntu 20.04 (Focal) +* Ubuntu 22.04 (Jammy Jellyfish) +* Ubuntu 24.04 (Noble Numbat) +* Rocky Linux 8 +* RHEL 8.0, 8.1, 8.2, 8.3, 8.4 +* Flatcar + +{{% notice warning %}} +The minimum kernel version for Kubernetes 1.32 clusters is 4.19. Some operating system versions, such as RHEL 8, +do not meet this requirement and therefore do not support Kubernetes 1.32 or newer. +{{% /notice %}} \ No newline at end of file diff --git a/content/kubermatic-virtualization/main/architecture/concepts/_index.en.md b/content/kubermatic-virtualization/main/architecture/concepts/_index.en.md new file mode 100644 index 000000000..1ea21ecb8 --- /dev/null +++ b/content/kubermatic-virtualization/main/architecture/concepts/_index.en.md @@ -0,0 +1,7 @@ ++++ +title = "Concepts" +date = 2025-07-18T16:06:34+02:00 +weight = 1 ++++ + +Get to know the concepts behind Kubermatic Virtualization (KubeV). 
diff --git a/content/kubermatic-virtualization/main/architecture/concepts/compute/_index.en.md b/content/kubermatic-virtualization/main/architecture/concepts/compute/_index.en.md new file mode 100644 index 000000000..989f60e28 --- /dev/null +++ b/content/kubermatic-virtualization/main/architecture/concepts/compute/_index.en.md @@ -0,0 +1,5 @@ ++++ +title = "Compute" +date = 2025-07-18T16:06:34+02:00 +weight = 15 ++++ diff --git a/content/kubermatic-virtualization/main/architecture/concepts/compute/virtual-machines/_index.en.md b/content/kubermatic-virtualization/main/architecture/concepts/compute/virtual-machines/_index.en.md new file mode 100644 index 000000000..a912c3f48 --- /dev/null +++ b/content/kubermatic-virtualization/main/architecture/concepts/compute/virtual-machines/_index.en.md @@ -0,0 +1,241 @@ ++++ +title = "VirtualMachines Resources" +date = 2025-07-18T16:06:34+02:00 +weight = 15 ++++ + +## VirtualMachines +As the name suggests, a VirtualMachine(VM) represents a long-running, stateful virtual machine. It's similar to a +Kubernetes Deployment for Pods, meaning you define the desired state (e.g., "this VM should be running," "it should +have 2 CPUs and 4GB RAM") and Kubermatic-Virtualization ensures that state is maintained. It allows you to start, stop, and configure VMs. + +Here is an example of how users can create a VM: +```yaml +apiVersion: kubevirt.io/v1 +kind: VirtualMachine +metadata: + name: my-vm-with-http-data-volume +spec: + runStrategy: RerunOnFailure + template: + metadata: + labels: + app: my-vm-with-http-data-volume + annotations: + kubevirt.io/allow-pod-bridge-network-live-migration: "true" + spec: + domain: + cpu: + cores: 1 + memory: + guest: 2Gi + devices: + disks: + - name: rootdisk + disk: + bus: virtio + interfaces: + - name: default + masquerade: {} + volumes: + - name: rootdisk + dataVolume: + name: my-http-data-volume + networks: + - name: default + pod: {} + dataVolumeTemplates: + - metadata: + name: my-http-data-volume + spec: + sourceRef: + kind: DataSource + name: my-http-datasource + apiGroup: cdi.kubevirt.io + pvc: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 10Gi # <--- IMPORTANT: Adjust to your desired disk size + # storageClassName: my-storage-class # <--- OPTIONAL: Uncomment and replace with your StorageClass name if needed +--- +apiVersion: cdi.kubevirt.io/v1beta1 +kind: DataSource +metadata: + name: my-http-datasource +spec: + source: + http: + url: "http://example.com/path/to/your/image.qcow2" # <--- IMPORTANT: Replace with the actual URL of your disk image + # certConfig: # <--- OPTIONAL: Uncomment and configure if your HTTP server uses a custom CA + # caBundle: "base64encodedCABundle" + # secretRef: + # name: "my-http-cert-secret" + # cert: + # secretRef: + # name: "my-http-cert-secret" + # key: + # secretRef: + # name: "my-http-key-secret" +``` +### 1. `VirtualMachine` (apiVersion: `kubevirt.io/v1`) + +This is the main KubeVirt resource that defines your virtual machine. + +- **`spec.template.spec.domain.devices.disks`**: + Defines the disk attached to the VM. We reference `rootdisk` here, which is backed by our DataVolume. + +- **`spec.template.spec.volumes`**: + Links the `rootdisk` to a `dataVolume` named `my-http-data-volume`. + +- **`spec.dataVolumeTemplates`**: + This is the crucial part. It defines a template for a DataVolume that will be created automatically when the VM is started. + +--- + +### 2. 
`DataVolumeTemplate` (within `VirtualMachine.spec.dataVolumeTemplates`) + +- **`metadata.name`**: + The name of the DataVolume that will be created (referenced in `spec.template.spec.volumes`). + +- **`spec.sourceRef`**: + Points to a `DataSource` resource that defines the actual source of the disk image. A `DataSource` is used here to encapsulate HTTP details. + +- **`spec.pvc`**: + Defines the characteristics of the PersistentVolumeClaim (PVC) that will be created for this DataVolume: + + - **`accessModes`**: Typically `ReadWriteOnce` for VM disks. + - **`resources.requests.storage`**: + ⚠️ **Crucially, set this to the desired size of your VM's disk.** It should be at least as large as your source image. + - **`storageClassName`**: *(Optional)* Specify a StorageClass if needed; otherwise, the default will be used. + +--- + +### 3. `DataSource` (apiVersion: `cdi.kubevirt.io/v1beta1`) + +This is a CDI (Containerized Data Importer) resource that encapsulates the details of where your disk image comes from. + +- **`metadata.name`**: + The name of the `DataSource` (referenced in `dataVolumeTemplate.spec.sourceRef`). + +- **`spec.source.http.url`**: + 🔗 This is where you put the direct URL to your disk image (e.g., a `.qcow2`, `.raw`, etc. file). + +- **`spec.source.http.certConfig`**: *(Optional)* + If your HTTP server uses a custom CA or requires client certificates, configure them here. + +--- + +### VirtualMachinePools +KubeVirt's VirtualMachinePool is a powerful resource that allows you to manage a group of identical Virtual Machines (VMs) +as a single unit, similar to how a Kubernetes Deployment manages a set of Pods. It's designed for scenarios where you need +multiple, consistent, and often ephemeral VMs that can scale up or down based on demand. 
+ +Here's a breakdown of the key aspects of KubeVirt VirtualMachinePools: + + +```yaml +apiVersion: kubevirt.io/v1alpha1 +kind: VirtualMachinePool +metadata: + name: my-vm-http-pool +spec: + replicas: 3 # <--- IMPORTANT: Number of VMs in the pool + selector: + matchLabels: + app: my-vm-http-pool-member + virtualMachineTemplate: + metadata: + labels: + app: my-vm-http-pool-member + annotations: + kubevirt.io/allow-pod-bridge-network-live-migration: "true" + spec: + runStrategy: RerunOnFailure # Or Always, Halted, Manual + domain: + cpu: + cores: 1 + memory: + guest: 2Gi + devices: + disks: + - name: rootdisk + disk: + bus: virtio + interfaces: + - name: default + masquerade: {} + volumes: + - name: rootdisk + dataVolume: + name: my-pool-vm-data-volume # This name will have a unique suffix appended by KubeVirt + networks: + - name: default + pod: {} + dataVolumeTemplates: + - metadata: + name: my-pool-vm-data-volume # This name will be the base for the unique DataVolume names + spec: + sourceRef: + kind: DataSource + name: my-http-datasource + apiGroup: cdi.kubevirt.io + pvc: + accessModes: + - ReadWriteOnce + resources: + requests: + storage: 10Gi # <--- IMPORTANT: Adjust to your desired disk size for each VM + # storageClassName: my-storage-class # <--- OPTIONAL: Uncomment and replace with your StorageClass name if needed +--- +apiVersion: cdi.kubevirt.io/v1beta1 +kind: DataSource +metadata: + name: my-http-datasource +spec: + source: + http: + url: "http://example.com/path/to/your/image.qcow2" # <--- IMPORTANT: Replace with the actual URL of your disk image + # certConfig: # <--- OPTIONAL: Uncomment and configure if your HTTP server uses a custom CA + # caBundle: "base64encodedCABundle" + # secretRef: + # name: "my-http-cert-secret" + # cert: + # secretRef: + # name: "my-http-cert-secret" + # key: + # secretRef: + # name: "my-http-key-secret" + +``` +### VirtualMachinePool (apiVersion: `kubevirt.io/v1alpha1`) + +1. **`API Version`** + - Use `apiVersion: kubevirt.io/v1alpha1` for `VirtualMachinePool`. + - This is a slightly different API version than `VirtualMachine`. + +2. **`spec.replicas`** + - Specifies how many `VirtualMachine` instances the pool should maintain. + +3. **`spec.selector`** + - Essential for the `VirtualMachinePool` controller to manage its VMs. + - `matchLabels` must correspond to the `metadata.labels` within `virtualMachineTemplate`. + +4. **spec.virtualMachineTemplate** + - This section contains the full `VirtualMachine` spec that serves as the template for each VM in the pool. + +5. **`dataVolumeTemplates` Naming in a Pool** + - `VirtualMachinePool` creates `DataVolumes` from `dataVolumeTemplates`. + - A unique suffix is appended to the `metadata.name` of each `DataVolume` (e.g., `my-pool-vm-data-volume-abcde`), ensuring each VM gets a distinct PVC. + +--- + +### How It Works (Similar to Deployment for Pods) + +1. Apply the `VirtualMachinePool` manifest. KubeVirt ensures the `my-http-datasource` `DataSource` exists. +2. The `VirtualMachinePool` controller creates the defined number of `VirtualMachine` replicas. +3. Each `VirtualMachine` triggers the creation of a `DataVolume` using the specified `dataVolumeTemplate` and `my-http-datasource`. +4. CDI (Containerized Data Importer) downloads the image into a new unique `PersistentVolumeClaim` (PVC) for each VM. +5. Each `VirtualMachine` then starts using its dedicated PVC. 
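
Because every VM created from the pool carries the labels defined in `virtualMachineTemplate.metadata.labels`, the pool members can be placed behind a regular Kubernetes Service. The following is a minimal sketch under the assumption that the guests listen on port 80; the Service name and ports are illustrative.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-vm-http-pool-svc       # illustrative name
spec:
  selector:
    app: my-vm-http-pool-member   # matches the labels set in virtualMachineTemplate
  ports:
  - name: http
    port: 80         # port exposed by the Service
    targetPort: 80   # port the guest OS is assumed to listen on
```

Traffic sent to the Service is then distributed across the pool members in the same way it would be across the Pods of a Deployment.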
+ diff --git a/content/kubermatic-virtualization/main/architecture/concepts/networks/_index.en.md b/content/kubermatic-virtualization/main/architecture/concepts/networks/_index.en.md new file mode 100644 index 000000000..662b0d8ee --- /dev/null +++ b/content/kubermatic-virtualization/main/architecture/concepts/networks/_index.en.md @@ -0,0 +1,5 @@ ++++ +title = "Networking" +date = 2025-07-18T16:06:34+02:00 +weight = 15 ++++ diff --git a/content/kubermatic-virtualization/main/architecture/concepts/networks/vms-networks-assignment/_index.en.md b/content/kubermatic-virtualization/main/architecture/concepts/networks/vms-networks-assignment/_index.en.md new file mode 100644 index 000000000..99285fd7d --- /dev/null +++ b/content/kubermatic-virtualization/main/architecture/concepts/networks/vms-networks-assignment/_index.en.md @@ -0,0 +1,159 @@ ++++ +title = "VMs Network Assignment" +date = 2025-07-18T16:06:34+02:00 +weight = 15 ++++ + +Assigning a Virtual Machine (VM) to a VPC and Subnet typically involves integrating VM’s network interface using +Multus CNI with a Kube-OVN network attachment definition (NAD). Assigning a Virtual Machine (VM) to a VPC and +Subnet involves a few key steps: + +### 1. Define or use an existing VPC: + +If you require isolated network spaces for different tenants or environments, you'll first define a Vpc resource. +This acts as a logical router for your Subnets. +```yaml +apiVersion: kubeovn.io/v1 +kind: Vpc +metadata: + name: my-vpc # Name of your VPC +spec: + # Optional: You can specify which namespaces are allowed to use this VPC. + # If left empty, all namespaces can use it. + # namespaces: + # - my-namespace + # - my-namespace-1 +``` +--- + +### 2. Define or use an existing Subnet: + +Next, you create a Subnet resource, associating it with your Vpc (or the default ovn-cluster VPC if you're not using a +custom VPC). You also define the CIDR range and, crucially, the Namespaces that will use this Subnet. +```yaml +apiVersion: kubeovn.io/v1 +kind: Subnet +metadata: + name: my-vm-subnet # Name of your Subnet +spec: + # Associate this subnet with your VPC. If omitted, it defaults to 'ovn-cluster'. + vpc: my-vpc + cidrBlock: 10.10.0.0/24 # The IP range for this subnet + gateway: 10.10.0.1 # The gateway IP for this subnet (Kube-OVN often sets this automatically) + namespaces: + - vm-namespace # The Namespace where your VMs will reside +``` + +--- +### 3. Create a Kubernetes Namespace (if it doesn't exist): + +Ensure the Namespace you defined in your Subnet exists. +```yaml +apiVersion: v1 +kind: Namespace +metadata: + name: vm-namespace +``` + +--- + +### 4. Define a NetworkAttachmentDefinition: + +While Kube-OVN can work directly by binding a Namespace to a Subnet, using a NetworkAttachmentDefinition (NAD) with +Multus provides more explicit control, especially if your VM needs multiple network interfaces or a specific CNI configuration. 
+ +```yaml +apiVersion: k8s.cni.cncf.io/v1 +kind: NetworkAttachmentDefinition +metadata: + name: vm-network # Name of the NAD + namespace: vm-namespace # Must be in the same namespace as the VMs using it +spec: + config: | + { + "cniVersion": "0.3.1", + "name": "vm-network", + "type": "kube-ovn", + "server_socker": "/run/openvswitch/kube-ovn-daemon.sock", + "netAttachDefName": "vm-namespace/vm-network" + } +``` +{{% notice note %}} +Note: For a VM to automatically pick up the correct Subnet via the Namespace binding, you often don't strictly +need a `NetworkAttachmentDefinition` for the primary interface if the Namespace is directly linked to the Subnet. However, +it's crucial for secondary interfaces or explicit network definitions. +{{% /notice %}} + +--- + +### 5. Assign the KubeVirt Virtual Machine to the Subnet/VPC: + +When defining your `VirtualMachine` (or `VirtualMachinePool`), you ensure it's created in the `vm-namespace` that is +bound to your `my-vm-subnet`. + +#### Option 1: Relying on Namespace-Subnet Binding (Simplest) + +If your `vm-namespace` is explicitly listed in the `spec.namespaces` of `my-vm-subnet`, any `VM` (or `Pod`) created in +`vm-namespace` will automatically get an IP from `my-vm-subnet`. + +#### Option 2: Explicitly Specifying the Subnet/NAD via Annotations (For Multiple NICs or Specificity) + +If you're using a `NetworkAttachmentDefinition` (`NAD`) or need to explicitly control which subnet is used, especially +for secondary interfaces, you'd use Multus annotations on your `VM` definition. + +```yaml +apiVersion: kubevirt.io/v1 +kind: VirtualMachine +metadata: + name: my-kubeovn-vm-multus + namespace: vm-namespace + annotations: + # Reference the NetworkAttachmentDefinition for the primary interface + # The format is / + k8s.v1.cni.cncf.io/networks: vm-network + # Optional: For static IP assignment from the subnet + # ovn.kubernetes.io/ip_address: 10.10.0.10 +spec: + runStrategy: Always + template: + spec: + domain: + devices: + disks: + - name: containerdisk + disk: + bus: virtio + - name: cloudinitdisk + disk: + bus: virtio + interfaces: + - name: primary-nic + # This interface will use the network defined by the NAD + bridge: {} # Or masquerade: {} + # Example for a secondary NIC on a different Kube-OVN Subnet/NAD + # - name: secondary-nic + # bridge: {} + resources: + requests: + memory: 2Gi + volumes: + - name: containerdisk + containerDisk: + image: kubevirt/fedora-cloud-container-disk-demo + - name: cloudinitdisk + cloudInitNoCloud: + userData: | + #cloud-config + +``` +Important Kube-OVN Annotations for VMs/Pods: + +- `ovn.kubernetes.io/logical_switch`: Explicitly assigns the workload to a specific Kube-OVN logical switch (which +corresponds to a Subnet). This overrides the Namespace's default subnet. + +- `ovn.kubernetes.io/ip_address`: Assigns a specific static IP address from the subnet. Make sure this IP is excluded from +the subnet's dynamic IP range (excludeIps in the Subnet definition) to avoid conflicts. + +- `ovn.kubernetes.io/network_attachment`: When using Multus, this annotation on the `NetworkAttachmentDefinition`'s config +can specify the Kube-OVN provider or other details if you have multiple Kube-OVN deployments or specific requirements. 
+ diff --git a/content/kubermatic-virtualization/main/architecture/concepts/networks/vpc-subnets/_index.en.md b/content/kubermatic-virtualization/main/architecture/concepts/networks/vpc-subnets/_index.en.md new file mode 100644 index 000000000..9707271e9 --- /dev/null +++ b/content/kubermatic-virtualization/main/architecture/concepts/networks/vpc-subnets/_index.en.md @@ -0,0 +1,107 @@ ++++ +title = "Networking" +date = 2025-07-18T16:06:34+02:00 +weight = 15 ++++ +Kubermatic-Virtualization uses KubeOVN as a software defined network(SDN) and it supercharges Kubernetes networking by +integrating it with Open Virtual Network (OVN) and Open vSwitch (OVS). These aren't new players; OVN and OVS are long-standing, +industry-standard technologies in the Software-Defined Networking (SDN) space, predating Kubernetes itself. By leveraging +their robust, mature capabilities, Kube-OVN significantly expands what Kubernetes can do with its network. + +## VPC +A VPC (Virtual Private Cloud) in Kube-OVN represents an isolated layer-3 network domain that contains one or more subnets. +Each VPC provides its own routing table and default gateway, allowing you to logically separate network traffic between +tenants or workloads. + +Kubermatic Virtualization simplifies network setup by providing a default Virtual Private Cloud (VPC) and a default Subnet +right out of the box. These are pre-configured to connect directly to the underlying node network, offering a seamless link +to your existing infrastructure. This means you don't need to attach external networks to get started. + +This design is a huge win for new users. It allows customers to dive into Kubermatic Virtualization and quickly establish +network connectivity between their workloads and the hypervisor without wrestling with complex network configurations, +external appliances, or advanced networking concepts. It's all about making the initial experience as straightforward +and efficient as possible, letting you focus on your applications rather than network plumbing. + + +Here is an example of a VPC definition: +```yaml +apiVersion: kubeovn.io/v1 +kind: Vpc +metadata: + name: custom-vpc +spec: + cidr: 10.200.0.0/16 + enableNAT: false + defaultGateway: "" + staticRoutes: + - cidr: 0.0.0.0/0 + nextHopIP: 10.200.0.1 +``` + +| Field | Description | +| ---------------- | --------------------------------------------------------------------------------------- | +| `metadata.name` | Name of the VPC. Must be unique within the cluster. | +| `spec.cidr` | The overall IP range for the VPC. Subnets under this VPC should fall within this range. | +| `enableNAT` | Whether to enable NAT for outbound traffic. Useful for internet access. | +| `defaultGateway` | IP address used as the default gateway for this VPC. Usually left blank for automatic. | +| `staticRoutes` | List of manually defined routes for the VPC. | + +## Subnet + +Subnets are the fundamental building blocks for network and IP management. They serve as the primary organizational unit +for configuring network settings and IP addresses. + +- Namespace-Centric: Each Kubernetes Namespace can be assigned to a specific Subnet. +- Automatic IP Allocation: Pods deployed within a Namespace automatically receive their IP addresses from the Subnet that +Namespace is associated with. +- Shared Network Configuration: All Pods within a Namespace inherit the network configuration defined by their Subnet. This includes: + - CIDR (Classless Inter-Domain Routing): The IP address range for the Subnet. 
+ - Gateway Type: How traffic leaves the Subnet. + - Access Control: Network policies and security rules. + - NAT Control: Network Address Translation settings. + +Here is an example of a VPC definition: +```yaml +apiVersion: kubeovn.io/v1 +kind: Subnet +metadata: + name: my-custom-subnet + namespace: kube-system +spec: + cidrBlock: 10.16.0.0/16 + gateway: 10.16.0.1 + gatewayType: distributed + excludeIps: + - 10.16.0.1 + - 10.16.0.2..10.16.0.10 + protocol: IPv4 + natOutgoing: true + private: false + vpc: custom-vpc + enableDHCP: true + allowSubnets: [] + vlan: "" + namespaces: + - default + - dev + subnetType: overlay +``` +| Field | Description | +|----------------------|---------------------------------------------------------------------------------------| +| `apiVersion` | Must be `kubeovn.io/v1`. | +| `kind` | Always set to `Subnet`. | +| `metadata.name` | Unique name for the subnet resource. | +| `metadata.namespace` | Namespace where the subnet object resides. Usually `kube-system`. | +| `spec.cidrBlock` | The IP range (CIDR notation) assigned to this subnet. | +| `spec.gateway` | IP address used as the gateway for this subnet. | +| `spec.gatewayType` | `centralized` or `distributed`. `distributed` allows egress from local node gateways. | +| `spec.excludeIps` | IPs or IP ranges excluded from dynamic allocation. | +| `spec.protocol` | Can be `IPv4`, `IPv6`, or `Dual`. | +| `spec.natOutgoing` | If true, pods using this subnet will have outbound NAT enabled. | +| `spec.private` | If true, pod traffic is restricted to this subnet only. | +| `spec.vpc` | Is the name of the VPC that the subnet belongs to. | +| `spec.enableDHCP` | Enables DHCP services in the subnet. | +| `spec.allowSubnets` | List of subnets allowed to communicate with this one (used with private=true). | +| `spec.vlan` | Optional VLAN name (empty string means no VLAN). | +| `spec.namespaces` | Namespaces whose pods will be assigned IPs from this subnet. | +| `spec.subnetType` | Can be `overlay`, `underlay`, `VLAN`, or `external`. | diff --git a/content/kubermatic-virtualization/main/architecture/concepts/storage/_index.en.md b/content/kubermatic-virtualization/main/architecture/concepts/storage/_index.en.md new file mode 100644 index 000000000..760102919 --- /dev/null +++ b/content/kubermatic-virtualization/main/architecture/concepts/storage/_index.en.md @@ -0,0 +1,133 @@ ++++ +title = "Storage" +date = 2025-07-18T16:06:34+02:00 +weight = 15 ++++ + +At its heart, Kubermatic Virtualization uses KubeVirt, a Kubernetes add-on. KubeVirt allows you to run virtual machines +(VMs) right alongside your containers, and it's built to heavily use Kubernetes' existing storage model. The Container +Storage Interface (CSI) driver is a crucial component in this setup because it allows KubeVirt to leverage the vast and +diverse storage ecosystem of Kubernetes for its VMs. + +The Container Storage Interface (CSI) is a standard for exposing arbitrary block and file storage systems to containerized +workloads on Container Orchestration Systems (COs) like Kubernetes. Before CSI, storage integrations were tightly coupled +with Kubernetes' core code. CSI revolutionized this by providing a pluggable architecture, allowing storage vendors to +develop drivers that can integrate with Kubernetes without modifying Kubernetes itself. + +# KubeVirt + CSI Drivers: How It Works + +KubeVirt’s integration with CSI (Container Storage Interface) drivers is fundamental to how it manages VM storage. 
This document explains how CSI enables dynamic volume provisioning, image importing, and advanced VM disk features in KubeVirt.

---

## 1. Dynamic Volume Provisioning for VM Disks

### PersistentVolumeClaims (PVCs)
KubeVirt does not directly interact with the underlying storage backend (e.g., SAN, NAS, cloud block storage). Instead, it uses Kubernetes’ PVC abstraction. When a VM is defined, KubeVirt requests a PVC.

### StorageClasses
PVCs reference a `StorageClass`, which is configured to use a specific CSI driver as its "provisioner".

### Driver’s Role
The CSI driver associated with the `StorageClass` handles the provisioning of persistent storage by interfacing with external systems (e.g., vCenter, Ceph, cloud providers).

### VM Disk Attachment
Once the PV is bound, KubeVirt uses the `virt-launcher` pod to attach the volume as a virtual disk to the VM.

---

## 2. Containerized Data Importer (CDI) Integration

### Importing VM Images
KubeVirt works with the CDI project to import disk images (e.g., `.qcow2`, `.raw`) from HTTP, S3, and other sources into PVCs.

### CDI Uses CSI
CDI relies on CSI drivers to provision the PVCs that will store the imported images. After import, KubeVirt consumes the PVC as a disk.

### DataVolume Resource
KubeVirt’s `DataVolume` custom resource simplifies image importing and ties CDI with PVC creation in a declarative way.

---

## 3. Advanced Storage Features (via CSI Capabilities)

CSI drivers enable powerful features that were previously complex to achieve for VM setups:

- **Snapshots**: If supported, KubeVirt can create `VolumeSnapshot` objects for point-in-time backups.
- **Cloning**: Allows fast provisioning of VM disks from existing PVCs without re-importing.
- **Volume Expansion**: Resize VM disks dynamically with `allowVolumeExpansion`.
- **ReadWriteMany (RWX) Mode**: Enables live migration by allowing shared access across nodes.
- **Block vs. Filesystem Modes**: CSI supports both `Filesystem` and `Block`. Choose based on workload performance needs.

---

## 4. Example Scenario
An admin creates a `StorageClass`:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-fast-storage
provisioner: csi.my-storage-vendor.com # This points to the specific CSI driver
parameters:
  type: "ssd"
volumeBindingMode: WaitForFirstConsumer # Important for VM scheduling
allowVolumeExpansion: true
```
A user defines a `VirtualMachine` with a `DataVolume`:
```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: my-vm
spec:
  dataVolumeTemplates:
  - metadata:
      name: my-vm-disk
    spec:
      source:
        http:
          url: "http://example.com/my-vm-image.qcow2"
      pvc:
        storageClassName: my-fast-storage # References the StorageClass
        accessModes:
        - ReadWriteOnce # Or ReadWriteMany for live migration
        resources:
          requests:
            storage: 20Gi
  template:
    spec:
      domain:
        devices:
          disks:
          - name: my-vm-disk
            disk:
              bus: virtio
        # ... other VM specs
      volumes:
      - name: my-vm-disk
        dataVolume:
          name: my-vm-disk
```
In this flow:

- KubeVirt sees the `DataVolumeTemplate` and requests a PVC (`my-vm-disk`) using `my-fast-storage`.

- The `my-fast-storage` StorageClass directs the request to `csi.my-storage-vendor.com` (the CSI driver).

- The CSI driver provisions a 20Gi volume on the backend storage.

- CDI then imports `my-vm-image.qcow2` into this newly provisioned PVC.

- Once the data import is complete, KubeVirt starts the VM, and the PVC is attached as the VM's disk.
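
Section 3 lists snapshots among the CSI-enabled features. The sketch below shows a point-in-time snapshot of the PVC backing the VM disk from the example above; it assumes the volume snapshot CRDs are installed and that a `VolumeSnapshotClass` named `my-snapshot-class` (an illustrative name) exists and is backed by a CSI driver with snapshot support.

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: my-vm-disk-snapshot
spec:
  volumeSnapshotClassName: my-snapshot-class   # assumed snapshot class provided by the CSI driver
  source:
    persistentVolumeClaimName: my-vm-disk      # the PVC created from the DataVolume template above
```

When snapshotting a whole VM rather than a single disk, KubeVirt's `VirtualMachineSnapshot` API builds on these CSI volume snapshots.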
+ +--- + +## Summary + +KubeVirt uses CSI to: +- Abstract storage provisioning and attachment. +- Enable features like cloning, snapshots, and expansion. +- Import images using CDI with CSI-provisioned PVCs. +- Support enterprise-grade live migration and scalability. + diff --git a/content/kubermatic-virtualization/main/architecture/kube-v-architecture.png b/content/kubermatic-virtualization/main/architecture/kube-v-architecture.png new file mode 100644 index 000000000..f20425cf9 Binary files /dev/null and b/content/kubermatic-virtualization/main/architecture/kube-v-architecture.png differ diff --git a/content/kubermatic-virtualization/main/architecture/requirements/_index.en.md b/content/kubermatic-virtualization/main/architecture/requirements/_index.en.md new file mode 100644 index 000000000..76d2e4549 --- /dev/null +++ b/content/kubermatic-virtualization/main/architecture/requirements/_index.en.md @@ -0,0 +1,7 @@ ++++ +title = "Requirements" +date = 2025-06-28T12:07:15+02:00 +weight = 3 ++++ + +Find out about the requirements for the operation of Kubermatic Virtualization (Kube-V). diff --git a/content/kubermatic-virtualization/main/architecture/requirements/high-availability-deployment/_index.en.md b/content/kubermatic-virtualization/main/architecture/requirements/high-availability-deployment/_index.en.md new file mode 100644 index 000000000..4185212fc --- /dev/null +++ b/content/kubermatic-virtualization/main/architecture/requirements/high-availability-deployment/_index.en.md @@ -0,0 +1,82 @@ ++++ +title = "High-Availability Deployment" +date = 2025-06-28T12:07:15+02:00 +weight = 15 ++++ + +## High-Availability Deployment + +The hardware foundation for Kubermatic Virtualization is multi-faceted, encompassing requirements for the Kubermatic +Virtualization (Kube-V) management layer, the KubeVirt infrastructure nodes that host virtual machines, in addition to +various services that are running as part of the ecosystem. + +### Control Plane Nodes + +* Nodes: Minimum 3 control plane nodes to ensure a quorum for etcd (Kubernetes' key-value store) and prevent a single point of failure. +These should ideally be distributed across different failure domains (e.g., availability zones, racks). +* CPU: At least 2 vCPUs per control plane node. +* RAM: At least 4 GB RAM per control plane node. Recommended: 8-16 GB for robust performance. +* Storage: Fast, persistent storage for etcd (SSD-backed recommended) with sufficient capacity. + +### Worker Nodes + +* Minimum 2 worker nodes (for KubeVirt VMs): For HA, you need more than one node to run VMs. This allows for live migration +and VM rescheduling in case of a node failure. +* CPU: A minimum of 8 CPU cores per node is suggested for testing environments. For production deployments, 16 CPU cores +or more per node are recommended to accommodate multiple VMs and their workloads effectively. Each worker node must have +Intel VT-x or AMD-V hardware virtualization extensions enabled in the BIOS/UEFI. +This is a fundamental requirement for KubeVirt to leverage KVM (Kernel-based Virtual Machine) for efficient VM execution. +Without this, KubeVirt can fall back to software emulation, but it's significantly slower and not suitable for production HA. +* RAM: At least 8 GB RAM per node. Recommended: 16-32 GB, depending on the number and memory requirements of your VMs. +* Storage: SSDs or NVMe drives are highly recommended for good VM performance in addition to sufficient storage capacity +based on the disk images of your VMs and any data they store. 
+ +### Storage + +* CSI Driver Capabilities (Crucial for HA/Live Migration): This is perhaps the most critical component for KubeVirt HA and live migration. + You need a shared storage backend that supports ReadWriteMany (RWX) access mode or Block-mode (volumeMode: Block) volumes. +* Capacity: Sufficient storage capacity based on the disk images of your VMs and any data they store. +* Performance: SSDs or NVMe drives are highly recommended for good VM performance where high-throughput services, + low-latency, high-IOPS storage (often block storage) is critical. +* Replication and Redundancy: To achieve HA, data must be replicated across multiple nodes or availability zones. + If a node fails, the data should still be accessible from another. + +### Networking + +A well-planned and correctly configured network infrastructure is fundamental to the stability and performance of +Kubermatic Virtualization. This includes considerations for IP addressing, DNS, load balancing, and inter-component communication. + +* High-bandwidth, low-latency connections: 1 Gbps NICs are a minimum; 10 Gbps or higher is recommended for performance-sensitive +workloads and efficient live migration. +* Load Balancing: External/internal load balancers for distributing traffic across control planes and worker nodes. +* Dedicated network for live migration (recommended): While not strictly minimal, a dedicated Multus network for live +migration can significantly reduce network saturation on tenant workloads during migrations. +* Connectivity: Full and unrestricted network connectivity is paramount between all host nodes. Firewalls and security +groups must be configured to permit all necessary Kubernetes control plane traffic, KubeVirt communication, and KubeV-specific +inter-cluster communication. +* DNS: DNS resolution is crucial for the Kube-V environment, enabling all nodes to find each other and external services. +A potential conflict can arise if both the KubeVirt infrastructure and guest user clusters +use NodeLocal DNSCache with the same default IP address, leading to DNS resolution issues for guest VMs. This can be +mitigated by adjusting the dnsConfig and dnsPolicy of the guest VMs. 
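
The DNS point above mentions adjusting `dnsConfig` and `dnsPolicy` on guest VMs; the following is a minimal sketch of what that could look like in a KubeVirt `VirtualMachine` template. The nameserver address is a placeholder for your own resolver.

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: my-vm
spec:
  template:
    spec:
      # Override the default cluster DNS settings for this guest to avoid
      # clashes with a NodeLocal DNSCache address (resolver IP is a placeholder).
      dnsPolicy: "None"
      dnsConfig:
        nameservers:
        - 8.8.8.8
      domain:
        devices: {}
        resources:
          requests:
            memory: 2Gi
```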
+ + +| Component | Port(s) | Protocol | Direction | Purpose | +|:------------------:| :------------------: | :------: | :----------: | :-----------------------------------------------------: | +| API Server | 6443 | TCP | Inbound | All API communication with the cluster | +| etcd | 2379-2380 | TCP | Inbound | etcd database communication | +| Kubelet | 10250 | TCP | Inbound | Kubelet API for control plane communication | +| Kube-Scheduler | 10259 | TCP | Inbound | Kube-Scheduler component | +| Controller-Manager | 10257 | TCP | Inbound | Kube-Controller-Manager component | +| Kube-Proxy | 10256 | TCP | Inbound | Kube-Proxy health checks and service routing | +| NodePort Services | 30000-32767 | TCP/UDP | Inbound | Default range for exposing services on node IPs | +| KubeVirt API | 8443 | TCP | Internal | KubeVirt API communication | +| Live Migration | 61000-61009 (approx) | TCP | Node-to-Node | For migrating VM state between nodes | +| OVN NB DB | 6641 | TCP | Internal | OVN Northbound Database | +| OVN SB DB | 6642 | TCP | Internal | OVN Southbound Database | +| OVN Northd | 6643 | TCP | Internal | OVN Northd process | +| OVN Raft | 6644 | TCP | Internal | OVN Raft consensus (for HA OVN DBs) | +| Geneve Tunnel | 6081 | UDP | Node-to-Node | Default overlay network for pod communication (OVN) | +| OVN Controller | 10660 | TCP | Internal | Metrics for OVN Controller | +| OVN Daemon | 10665 | TCP | Internal | Metrics for OVN Daemon (on each node) | +| OVN Monitor | 10661 | TCP | Internal | Metrics for OVN Monitor | + diff --git a/content/kubermatic-virtualization/main/architecture/requirements/single-node-deployment/_index.en.md b/content/kubermatic-virtualization/main/architecture/requirements/single-node-deployment/_index.en.md new file mode 100644 index 000000000..b9786ee3a --- /dev/null +++ b/content/kubermatic-virtualization/main/architecture/requirements/single-node-deployment/_index.en.md @@ -0,0 +1,64 @@ ++++ +title = "Single Node Deployment" +date = 2025-06-28T12:07:15+02:00 +weight = 15 ++++ + +## Single Node Deployment + +The hardware foundation for Kubermatic Virtualization is multi-faceted, encompassing requirements for the Kubermatic +Virtualization (Kube-V) management layer, the KubeVirt infrastructure node that host virtual machines, in addition to +various services that are running as part of the ecosystem. + +### Host Configuration + +* CPU: A minimum of 8 CPU cores is suggested for testing environments. For production deployments, 16 CPU cores + or more are recommended to accommodate multiple VMs and their workloads effectively. The host node must have + Intel VT-x or AMD-V hardware virtualization extensions enabled in the BIOS/UEFI. + This is a fundamental requirement for KubeVirt to leverage KVM (Kernel-based Virtual Machine) for efficient VM execution. + Without this, KubeVirt can fall back to software emulation, but it's significantly slower and not suitable for production HA. +* RAM: At least 8 GB RAM per node. Recommended: 16-32 GB, depending on the number and memory requirements of your VMs. +* Storage: SSDs or NVMe drives are highly recommended for good VM performance in addition to sufficient storage capacity + based on the disk images of your VMs and any data they store. + +### Storage + +* CSI Driver Capabilities (Crucial for HA/Live Migration): This is perhaps the most critical component for KubeVirt. + You need a shared storage backend that supports ReadWriteMany (RWX) access mode or Block-mode (volumeMode: Block) volumes. 
+* Capacity: Sufficient storage capacity based on the disk images of your VMs and any data they store. +* Performance: SSDs or NVMe drives are highly recommended for good VM performance where high-throughput services, + low-latency, high-IOPS storage (often block storage) is critical. + +### Networking + +A well-planned and correctly configured network infrastructure is fundamental to the stability and performance of +Kubermatic Virtualization. This includes considerations for IP addressing, DNS, load balancing, and inter-component communication. + +* High-bandwidth, low-latency connections: 1 Gbps NICs are a minimum; 10 Gbps or higher is recommended for performance-sensitive + workloads. +* DNS: DNS resolution is crucial for the Kube-V environment, enabling all nodes to find each other and external services. + A potential conflict can arise if both the KubeVirt infrastructure and guest user clusters + use NodeLocal DNSCache with the same default IP address, leading to DNS resolution issues for guest VMs. This can be + mitigated by adjusting the dnsConfig and dnsPolicy of the guest VMs. + + +| Component | Port(s) | Protocol | Direction | Purpose | +|:------------------:| :------------------: | :------: | :----------: | :-----------------------------------------------------: | +| API Server | 6443 | TCP | Inbound | All API communication with the cluster | +| etcd | 2379-2380 | TCP | Inbound | etcd database communication | +| Kubelet | 10250 | TCP | Inbound | Kubelet API for control plane communication | +| Kube-Scheduler | 10259 | TCP | Inbound | Kube-Scheduler component | +| Controller-Manager | 10257 | TCP | Inbound | Kube-Controller-Manager component | +| Kube-Proxy | 10256 | TCP | Inbound | Kube-Proxy health checks and service routing | +| NodePort Services | 30000-32767 | TCP/UDP | Inbound | Default range for exposing services on node IPs | +| KubeVirt API | 8443 | TCP | Internal | KubeVirt API communication | +| Live Migration | 61000-61009 (approx) | TCP | Node-to-Node | For migrating VM state between nodes | +| OVN NB DB | 6641 | TCP | Internal | OVN Northbound Database | +| OVN SB DB | 6642 | TCP | Internal | OVN Southbound Database | +| OVN Northd | 6643 | TCP | Internal | OVN Northd process | +| OVN Raft | 6644 | TCP | Internal | OVN Raft consensus (for HA OVN DBs) | +| Geneve Tunnel | 6081 | UDP | Node-to-Node | Default overlay network for pod communication (OVN) | +| OVN Controller | 10660 | TCP | Internal | Metrics for OVN Controller | +| OVN Daemon | 10665 | TCP | Internal | Metrics for OVN Daemon (on each node) | +| OVN Monitor | 10661 | TCP | Internal | Metrics for OVN Monitor | + diff --git a/content/kubermatic-virtualization/main/installation/_index.en.md b/content/kubermatic-virtualization/main/installation/_index.en.md new file mode 100644 index 000000000..87bf41899 --- /dev/null +++ b/content/kubermatic-virtualization/main/installation/_index.en.md @@ -0,0 +1,161 @@ ++++ +title = "Installation" +date = 2025-06-28T12:07:15+02:00 +weight = 15 ++++ + +This chapter offers guidance on how to install Kubermatic Virtualization + +## Installing Kubermatic Virtualization with CLI + +Kubermatic Virtualization comes with an interactive installer, a CLI tool that helps administrators and users provision the entire platform easily. With just a few inputs, you can deploy and configure the stack in no time. + +{{% notice note %}} +To get started with the CLI, you will first need a Kubermatic Virtualization license. 
This license grants you access to the necessary resources, including the CLI. Please [contact sales](mailto:sales@kubermatic.com) to obtain your license.
{{% /notice %}}

### **1. Navigating the Interactive CLI Installer**

The Kubermatic Virtualization installer is an interactive CLI that guides you through the installation process with clear instructions and prompts for user input. Each page contains important information and features a help bar at the bottom to assist with navigation.

![Welcome Page](./assets/0-welcome-page.png)

---

### **2. Configuring the Network Stack**

One of the foundational steps in setting up Kubermatic Virtualization is defining the network configuration. This step ensures that your virtual machines and containerized workloads have a dedicated IP range to operate within, similar to a default VPC. Proper network configuration is crucial for seamless communication and resource management.


#### **Key Components**
- **Network (CIDR)**: Specify the IP range where your virtual machines and containerized workloads will reside. This defines the subnet they will use by default.
- **DNS Server**: Provide the DNS server address to ensure proper name resolution for your workloads.
- **Gateway IP**: Define the gateway IP to facilitate network routing and connectivity.

{{% notice warning %}}

When setting up your network configuration, it is crucial to provide a working DNS server address so that your virtual machines and containerized workloads can resolve names properly. Failure to do so can result in the inability to resolve domain names, failed connections to external services, or degraded functionality due to unresolved hostnames. If you do not have an internal DNS server configured, it is recommended to use a public and trusted DNS server such as Google Public DNS (`8.8.8.8` or `8.8.4.4`) or Cloudflare DNS (`1.1.1.1` or `1.0.0.1`) to ensure smooth operation and connectivity for your workloads.

{{% /notice %}}

![Kubermatic Virtualization Network Configuration](./assets/1-network-page.png)


---

### **3. Configuring the Load Balancer Service**

In this step, you can enable Kubermatic Virtualization’s default Load Balancer service, **MetalLB**, to simplify the creation of load balancers for your workloads. MetalLB is ideal for evaluation and non-production environments but should not be used in production scenarios without proper enterprise-grade solutions.

#### **Key Steps**
- **Enable MetalLB**: Toggle the checkbox to enable the Load Balancer service.
- **Define IP Range**: If MetalLB is enabled, specify the IP range that will be used by the Load Balancer.

{{% notice warning %}}

When defining the IP range for MetalLB, ensure that the range is valid and exists within your network infrastructure. This range will be reserved for load balancer IP addresses, and any misconfiguration or overlap with existing IPs can lead to network conflicts, service disruptions, or inaccessible workloads. Always verify the availability and uniqueness of the IP range before proceeding.

{{% /notice %}}

![Kubermatic Virtualization Load Balancer Configuration](./assets/2-lb-page.png)

By configuring these settings, you ensure that your workloads have access to a simple yet effective Load Balancer solution for testing and development purposes. Let’s move on to the next configuration step!

---

### **4. 
Configuring Nodes** + +In this section, you will define the number of nodes in your Kubermatic Virtualization cluster and provide detailed information about each node, including their IP addresses, usernames, and SSH key paths. Accurate configuration is crucial for ensuring smooth communication and management of your cluster. + +#### **Step 1: Specify the Number of Nodes** + +The first step is to determine how many nodes you want in your cluster. This number will dictate the scale of your infrastructure and influence the subsequent configuration steps. + +- **Input**: Enter the total number of nodes you plan to use. + +![Kubermatic Virtualization - How Many Nodes?](./assets/3-node-count.png) + + +#### **Step 2: Configure Each Node** + +After specifying the number of nodes, you will be prompted to configure each node individually. For each node, you need to provide the following details: + +1. **Node Address**: The IP address of the node. +2. **Username**: The username used to access the node via SSH. +3. **SSH Key Path**: The path to the SSH private key file used to authenticate with the node. + +![Kubermatic Virtualization - Node Configuration](./assets/4-node-config-page.png) + +Repeat this process for each node until all nodes are configured. + +#### **Why Accurate Configuration Matters** +- **Node Addresses**: Ensure that the IP addresses are correct and reachable within your network. +- **Usernames and SSH Keys**: Provide secure access to the nodes, enabling proper communication and management. + +By carefully configuring the number of nodes and providing accurate details for each node, you lay the foundation for a robust and manageable Kubermatic Virtualization environment. Let’s proceed to the next configuration step! + +--- +### **5. Configuring the Storage CSI Driver** + +In this step, you will decide whether to use the default Container Storage Interface (CSI) driver provided by Kubermatic Virtualization. The default CSI driver is designed for evaluation and staging environments and is not recommended for production use. + + + +#### **Key Information** +- **Default CSI Driver**: The default CSI driver (e.g., Longhorn) is included for testing purposes only. +- **Purpose**: It provides baseline storage functionality during evaluation and staging phases. +- **Recommendation**: For production environments, it is strongly advised to use a fully supported and robust storage solution. + +#### **Disclaimer** +Please be advised: +- The default CSI driver is **not intended or supported** for production environments. +- Its use in production is at your own risk and is not recommended. +- Kubermatic does not guarantee ongoing maintenance, reliability, or performance of the default CSI driver. + +![Kubermatic Virtualization - Storage CSI Driver Configuration](./assets/5-csi-page.png) + +By making an informed decision about the CSI driver, you ensure that your environment is configured appropriately for its intended purpose—whether it’s for testing or production. Let’s proceed to the next configuration step! + +--- + +### **6. Reviewing the Configuration** + +Before proceeding with the installation, it is crucial to review the full configuration to ensure all settings are correct. This step allows you to verify details such as Control Plane nodes, Worker nodes, network configurations, and other critical parameters. Once confirmed, the installation process will begin, and you will not be able to go back. 
+ +#### **Key Information** +- **Purpose**: Confirm that all configurations are accurate before applying them. +- **Irreversible Step**: After confirming this page, the installation process will start, and changes cannot be made without restarting the entire setup. + +#### **What to Review** +- **Cluster Nodes**: + - Addresses + - Usernames + - SSH key file paths + +- **Other Configurations**: + - Network settings (CIDR, DNS server, Gateway IP) + - Load Balancer configuration (if enabled) + - Storage CSI driver selection + +![Kubermatic Virtualization - Configuration Review](./assets/6-review-page.png) + + +{{% notice warning %}} +**No Going Back**: Once you confirm this page, the installation process will begin, and you cannot modify the configuration without starting over. +By carefully reviewing the configuration, you ensure that your Kubermatic Virtualization environment is set up correctly from the start. Proceed with confidence when you’re ready! + +{{% /notice %}} + +--- + +### **8. Finalizing the Installation** + +Once you confirm the configuration, the installation process will begin, and you’ll be able to monitor its progress in real-time through detailed logs displayed on the screen. These logs provide transparency into each step of the deployment, ensuring you stay informed throughout the process. + +#### **What Happens During Installation** +- **Progress Monitoring**: Watch as the installer provisions the Control Plane, Worker Nodes, and other components. +- **Health Checks**: After deployment, the installer verifies that all parts of the stack are healthy and running as expected. +- **Completion**: Upon successful installation, the installer will generate and display the **kubeconfig** file for your Kubermatic Virtualization cluster. + +### **Congratulations!** +Your Kubermatic Virtualization environment is now up and running. With the kubeconfig file in hand, you’re ready to start managing your cluster and deploying workloads. Enjoy the power of seamless virtualization on Kubernetes! 
🚀 \ No newline at end of file diff --git a/content/kubermatic-virtualization/main/installation/assets/0-welcome-page.png b/content/kubermatic-virtualization/main/installation/assets/0-welcome-page.png new file mode 100644 index 000000000..4617462b6 Binary files /dev/null and b/content/kubermatic-virtualization/main/installation/assets/0-welcome-page.png differ diff --git a/content/kubermatic-virtualization/main/installation/assets/1-network-page.png b/content/kubermatic-virtualization/main/installation/assets/1-network-page.png new file mode 100644 index 000000000..4cc7ee25b Binary files /dev/null and b/content/kubermatic-virtualization/main/installation/assets/1-network-page.png differ diff --git a/content/kubermatic-virtualization/main/installation/assets/2-lb-page.png b/content/kubermatic-virtualization/main/installation/assets/2-lb-page.png new file mode 100644 index 000000000..9d0e0798b Binary files /dev/null and b/content/kubermatic-virtualization/main/installation/assets/2-lb-page.png differ diff --git a/content/kubermatic-virtualization/main/installation/assets/3-node-count.png b/content/kubermatic-virtualization/main/installation/assets/3-node-count.png new file mode 100644 index 000000000..aecb6f70f Binary files /dev/null and b/content/kubermatic-virtualization/main/installation/assets/3-node-count.png differ diff --git a/content/kubermatic-virtualization/main/installation/assets/4-node-config-page.png b/content/kubermatic-virtualization/main/installation/assets/4-node-config-page.png new file mode 100644 index 000000000..a25486df4 Binary files /dev/null and b/content/kubermatic-virtualization/main/installation/assets/4-node-config-page.png differ diff --git a/content/kubermatic-virtualization/main/installation/assets/5-csi-page.png b/content/kubermatic-virtualization/main/installation/assets/5-csi-page.png new file mode 100644 index 000000000..23e25fa54 Binary files /dev/null and b/content/kubermatic-virtualization/main/installation/assets/5-csi-page.png differ diff --git a/content/kubermatic-virtualization/main/installation/assets/6-review-page.png b/content/kubermatic-virtualization/main/installation/assets/6-review-page.png new file mode 100644 index 000000000..2163ceb22 Binary files /dev/null and b/content/kubermatic-virtualization/main/installation/assets/6-review-page.png differ diff --git a/data/products.yaml b/data/products.yaml index 9756ada6b..95fb296d3 100644 --- a/data/products.yaml +++ b/data/products.yaml @@ -110,6 +110,17 @@ machine-controller: versions: - release: main name: main +kubermatic-virtualization: + name: kubermatic-virtualization + logo: "img/logo-kubermatic.svg" + title: Kubermatic-Virtualization + textName: Kube-V + description: Seamlessly modernize your infrastructure by building your private cloud entirely with Kubernetes. + weight: 1 + shareImage: img/share-kubermatic.png + versions: + - release: main + name: main # do not yet reveal KDP to the public # developer-platform: # name: developer-platform