# Architecture Overview

The Toyota TMNA POC deploys Spectro Cloud Palette in an air-gapped configuration with a self-hosted Palette Management Appliance (PMA) managing a 3-node bare-metal Kubernetes cluster. The platform stack includes VMO for virtual machine orchestration and Portworx for software-defined storage across local SSDs and NVMe drives.

## High-Level Architecture

```mermaid
graph TB
    subgraph PMA["Palette Management Appliance (VM)"]
        UI["Palette UI/API<br/>:443"]
        REG["Internal Registry<br/>:30003"]
        LUI["Local UI<br/>:5080"]
    end

    subgraph Cluster["3-Node Bare Metal Cluster"]
        subgraph N1["STG-WAHVP004<br/>10.25.233.4"]
            K1["PXKe Control Plane"]
            PX1["Portworx"]
        end
        subgraph N2["STG-WAHVP005<br/>10.25.233.5"]
            K2["PXKe Control Plane"]
            PX2["Portworx"]
        end
        subgraph N3["STG-WAHVP006<br/>10.25.233.6"]
            K3["PXKe Control Plane"]
            PX3["Portworx"]
        end
    end

    PMA -->|"TCP 6443<br/>K8s API"| Cluster
    Cluster -->|"TCP 443<br/>Agent Heartbeat"| PMA
    Cluster -->|"TCP 30003<br/>Image Pull"| REG

    subgraph Stack["Platform Stack"]
        OS["Ubuntu 24.04"]
        K8S["PXKe (kubeadm 1.33.6)"]
        CNI["Cilium 1.18.4"]
        CSI["Portworx"]
        VMO["VMO 4.8.9"]
        VMA["VMA 4.8.8"]
    end
```

## Platform Stack

The cluster is built from layered packs managed through a Palette cluster profile:

| Layer | Component | Version | Purpose |
|-------|-----------|---------|---------|
| OS | Ubuntu 24.04 (edge-native-byoi) | 2.1.0 | Base operating system with Kairos |
| Kubernetes | PXKe (Palette K8s Engine) | 1.33.6 | Kubernetes distribution (kubeadm-based) |
| CNI | Cilium | 1.18.4 | Container networking with eBPF |
| CSI | Portworx | TBD | Software-defined storage on local disks |
| Addon | MetalLB | 0.15.2 | Bare metal load balancer |
| Addon | NGINX Ingress | 1.14.1 | Ingress controller |
| Addon | Prometheus Operator | 80.4.2 | Monitoring and alerting |
| Addon | VMO | 4.8.9 | Virtual Machine Orchestrator (KubeVirt) |
| Addon | VMA | 4.8.8 | VM Migration Assistant |
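
In Palette, these layers are defined in a cluster profile. The sketch below is a hypothetical rendering of that layering as a profile manifest; the pack names, the manifest schema, and the `ClusterProfile` kind are illustrative assumptions, and the authoritative definition lives in the Palette UI/API.

```yaml
# Hypothetical sketch of the POC profile layering; pack names and schema
# are illustrative, not the exact Palette API shape.
kind: ClusterProfile
metadata:
  name: tmna-poc-baremetal        # assumed name
spec:
  packs:
    - name: edge-native-byoi      # OS: Ubuntu 24.04 with Kairos
      tag: 2.1.0
    - name: pxke                  # Kubernetes: PXKe (kubeadm-based)
      tag: 1.33.6
    - name: cni-cilium            # CNI
      tag: 1.18.4
    - name: csi-portworx          # CSI (version TBD)
    - name: lb-metallb
      tag: 0.15.2
    - name: nginx-ingress
      tag: 1.14.1
    - name: prometheus-operator
      tag: 80.4.2
    - name: virtual-machine-orchestrator   # VMO (KubeVirt)
      tag: 4.8.9
    - name: vm-migration-assistant         # VMA
      tag: 4.8.8
```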

## Network Topology

All four 25GbE NICs on each node are bonded into a single LACP aggregate and bridged. VLAN 111 is the native (untagged) VLAN, so the IP address is assigned directly to the bridge interface.

```mermaid
graph LR
    subgraph Node["Each Bare Metal Node"]
        NIC1["enp134s0f0np0<br/>25GbE"] --> BOND["bond0<br/>802.3ad LACP"]
        NIC2["enp134s0f0np1<br/>25GbE"] --> BOND
        NIC3["enp175s0f0np0<br/>25GbE"] --> BOND
        NIC4["enp175s0f0np1<br/>25GbE"] --> BOND
        BOND --> BR["br0<br/>Bridge<br/>10.25.233.x/24"]
    end
```
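
The bond and bridge are configured with systemd-networkd files (detailed in Network Design). For orientation, here is a rough netplan-style equivalent of the same layout, using the interface names from the diagram; the LACP rate, hash policy, and the sample address are assumptions for illustration.

```yaml
# Illustrative netplan equivalent of the bond + bridge layout. The POC uses
# systemd-networkd files directly; lacp-rate, transmit-hash-policy, and the
# sample address are assumptions.
network:
  version: 2
  ethernets:
    enp134s0f0np0: {}
    enp134s0f0np1: {}
    enp175s0f0np0: {}
    enp175s0f0np1: {}
  bonds:
    bond0:
      interfaces: [enp134s0f0np0, enp134s0f0np1, enp175s0f0np0, enp175s0f0np1]
      parameters:
        mode: 802.3ad                    # LACP across all four 25GbE NICs
        lacp-rate: fast                  # assumption
        transmit-hash-policy: layer3+4   # assumption
  bridges:
    br0:
      interfaces: [bond0]
      # VLAN 111 is native/untagged, so the node IP sits directly on br0
      addresses: [10.25.233.4/24]        # example: STG-WAHVP004
```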

See Network Design for detailed NIC configuration, systemd-networkd files, and LACP parameters.

## Storage Architecture

Storage is split between two systems:

  - Portworx -- Aggregates local SSDs (14x 7TB) and NVMe drives (4x 7TB) across all 3 nodes into a distributed storage pool for Kubernetes PVCs. Provides replication, snapshots, and live migration support for VMO workloads (see the StorageClass sketch after this list).

  - NetApp -- External file share for vehicle data. JBOD drives in vehicles capture raw data, which passes through an ETL pipeline before landing on a NetApp file share accessible to IVS applications.
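
A minimal sketch of how PVCs might consume the Portworx pool, assuming the standard Portworx CSI provisioner and a replication factor of 3 (one replica per node); the class name and parameter values are assumptions for this POC.

```yaml
# Sketch of a replicated Portworx StorageClass for PVC-backed VMO workloads.
# The class name and repl value are assumptions for this POC.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: px-replicated            # hypothetical name
provisioner: pxd.portworx.com    # Portworx CSI provisioner
parameters:
  repl: "3"                      # one replica on each of the 3 nodes
allowVolumeExpansion: true
```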

See Storage Architecture for Portworx pool configuration, drive layout, and the NetApp data pipeline.

## Deployment Topology

| Component | Type | IP Address | Role |
|-----------|------|------------|------|
| Palette Management Appliance | VM | VIP: 10.25.232.155, Node: 10.25.232.252 | Management plane, internal registry |
| STG-WAHVP004 | Bare Metal | 10.25.233.4 | Control plane + worker |
| STG-WAHVP005 | Bare Metal | 10.25.233.5 | Control plane + worker |
| STG-WAHVP006 | Bare Metal | 10.25.233.6 | Control plane + worker |

## Air-Gap Design

The entire deployment operates without external network connectivity:

  1. Palette ISO and content bundles are downloaded from Artifact Studio on an internet-connected machine
  2. Files are transferred to the air-gapped environment via USB or out-of-band management
  3. The PMA ISO installs the management appliance with a built-in container registry on port 30003
  4. Content bundles (.zst files) are uploaded to the PMA, populating the internal registry with all required container images
  5. Bare metal nodes pull all images from the PMA internal registry during cluster deployment (see the illustration after this list)
  6. No Harbor or Artifactory dependency is required for the POC -- the PMA internal registry serves all images
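
In practice this means every workload image reference resolves to the PMA registry endpoint rather than a public registry. A minimal illustration, assuming the registry is reachable at the PMA VIP on port 30003 (the repository path and tag are hypothetical):

```yaml
# Illustration only: an image pulled from the PMA internal registry.
# The repository path and tag are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: registry-smoke-test
spec:
  containers:
    - name: app
      image: 10.25.232.155:30003/spectro-images/nginx:1.27   # PMA VIP : registry port
```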

This eliminates the need for a separate registry infrastructure during the POC phase. Harbor integration can be introduced later for production scalability.