# Storage Architecture

The storage architecture combines two complementary systems: Portworx, which provides Kubernetes-native persistent volumes on each node's local SSDs and NVMe drives, and NetApp, which holds vehicle data delivered from JBOD hard drives through an ETL pipeline.
## Storage Overview
```mermaid
graph TB
    subgraph Node["Each Bare Metal Node"]
        SSD["14x 7TB SSD<br/>(~98TB raw)"]
        NVME["4x 7.68TB NVMe<br/>(~30.7TB raw)"]
        BOOT["1x 223.5GB<br/>Boot Drive (/dev/sda)"]
    end
    SSD --> PX["Portworx Storage Pool"]
    NVME --> PX
    PX --> PVC["Kubernetes PVCs<br/>VMO Disks, StatefulSets"]
    subgraph Vehicle["Vehicle Data Pipeline"]
        JBOD["JBOD Hard Drives<br/>(Vehicle Data)"]
        ETL["ETL Pipeline"]
        NETAPP["NetApp File Share"]
    end
    JBOD --> ETL --> NETAPP
    NETAPP -->|"NFS/CSI"| PVC
```
## Local Drive Layout (Per Node)
Each NX-8150-G7 node has the following drive configuration:
| Drive Type | Count | Size Each | Total Raw | Purpose |
|---|---|---|---|---|
| SSD | 14 | ~7 TB (reported as 6 TB; model MK003840GWXFL) | ~98 TB | Portworx storage pool |
| NVMe | 4 | ~7.68 TB | ~30.7 TB | Portworx storage pool |
| Boot Drive | 1 | 223.5 GB (/dev/sda) | 223.5 GB | OS installation target |
Cluster totals (3 nodes):
| Metric | Value |
|---|---|
| Total Raw SSD | ~294 TB |
| Total Raw NVMe | ~92 TB |
| Total Raw Storage | ~386 TB |
| Usable (with 2x replication) | ~193 TB |
| Usable (with 3x replication) | ~129 TB |
**Drive Wipe on Install**
All 7TB drives (SSDs and NVMe) are wiped using sgdisk --zap-all during both the install and reset stages of the appliance mode ISO. This is required because the drives contain metadata (partition tables, prior platform artifacts) from the previous deployment that would otherwise interfere with Portworx pool initialization.
## Portworx Configuration
Portworx aggregates the local SSDs and NVMe drives across all 3 nodes into a distributed storage pool.
### Pool Design
| Parameter | Value |
|---|---|
| Pool Type | Automatic (Portworx auto-discovers local drives) |
| Replication Factor | 2x (recommended for 3-node cluster) |
| Drive Types | SSD + NVMe (mixed pool or separate tiers) |
| Metadata Drive | Dedicated partition on fastest NVMe (minimum 64GB) |
| KVDB | Internal (embedded etcd) |
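With the Portworx Operator, this pool design maps roughly onto a StorageCluster resource. The sketch below is illustrative only: the cluster name, namespace, and image tag are placeholders, and the field names should be verified against the Portworx version actually deployed.

```yaml
# Illustrative StorageCluster sketch only -- name, namespace, and image tag
# are placeholders; verify fields against the deployed Portworx Operator docs.
apiVersion: core.libopenstorage.org/v1
kind: StorageCluster
metadata:
  name: px-cluster          # placeholder name
  namespace: portworx       # assumed namespace
spec:
  image: portworx/oci-monitor:3.1.0   # placeholder version tag
  kvdb:
    internal: true          # internal KVDB, per the pool design above
  storage:
    useAll: true            # auto-discover and use all local drives
```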
### Storage Classes
Portworx provides multiple storage classes for different workload requirements:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: portworx-sc-repl2
provisioner: pxd.portworx.com
parameters:
  repl: "2"
  io_profile: "auto"
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: Immediate
```

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: portworx-sc-repl3
provisioner: pxd.portworx.com
parameters:
  repl: "3"
  io_profile: "auto"
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: Immediate
```
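As a usage example, a claim against the 2x-replicated class could look like the following; the claim name and requested size are illustrative.

```yaml
# Example PVC bound to the 2x-replicated class; name and size are illustrative.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: portworx-sc-repl2
  resources:
    requests:
      storage: 100Gi
```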
### Drive Wipe Script
The install ISO includes a drive wipe step that runs during both initial install and reset stages. This clears any existing partition tables and prior platform metadata:
```bash
#!/bin/bash
# Wipe all 7TB drives (SSDs and NVMe).
# The boot drive (/dev/sda, 223.5GB) is skipped because it does not match
# the 7T size filter below.
for disk in $(lsblk -dno NAME,SIZE | awk '$2 == "7T" {print "/dev/"$1}'); do
    echo "Wiping $disk..."
    sgdisk --zap-all "$disk"
    wipefs -a "$disk"
    dd if=/dev/zero of="$disk" bs=1M count=100
done
```
**Destructive Operation**
The drive wipe script destroys all data on the 7TB drives. This is intentional for the POC -- the drives contain data from the prior platform deployment and must be completely cleared for Portworx to initialize clean storage pools.
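After the wipe stage completes, a quick spot check can confirm that no partition tables or filesystem signatures remain. This is a sketch only; it reuses the same size filter as the wipe script, which should be adjusted if lsblk reports the drives differently on this hardware.

```bash
# Verify that wiped drives carry no partition entries or filesystem signatures.
# Uses the same "7T" size filter as the wipe script (adjust if needed).
for disk in $(lsblk -dno NAME,SIZE | awk '$2 == "7T" {print "/dev/"$1}'); do
    echo "--- $disk ---"
    sgdisk --print "$disk"   # should list no partition entries
    wipefs "$disk"           # no output means no remaining signatures
done
```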
### Portworx Inter-Node Ports
All Portworx ports are inter-node only (same subnet, no firewall rules needed):
| Port | Protocol | Purpose |
|---|---|---|
| 9001 | TCP/REST | Management API |
| 9002 | UDP | Gossip protocol (node discovery) |
| 9003 | TCP | Data replication |
| 9012-9014 | TCP/gRPC | Node communication, namespace, diagnostics |
| 9018-9019 | TCP/gRPC | Internal KVDB peer + client |
| 9020-9022 | TCP/REST | SDK gateway + health monitor |
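Because these ports only need to be reachable between the three nodes, a simple sweep from any one node covers the TCP ports once Portworx is running. The peer IPs below are placeholders for the actual node addresses on the cluster subnet; UDP 9002 is not covered by this TCP check.

```bash
# Check Portworx TCP ports against peer nodes; replace the placeholder IPs
# with the real node addresses. UDP 9002 (gossip) needs a separate check.
PEERS="10.0.0.11 10.0.0.12"                               # placeholder peer IPs
PORTS="9001 9003 9012 9013 9014 9018 9019 9020 9021 9022"
for peer in $PEERS; do
    for port in $PORTS; do
        if nc -z -w 2 "$peer" "$port"; then
            echo "$peer:$port reachable"
        else
            echo "$peer:$port NOT reachable"
        fi
    done
done
```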
## NetApp Storage
NetApp provides file share storage for vehicle data that has been processed through an ETL pipeline.
### Vehicle Data Pipeline
```mermaid
graph LR
    V["Vehicle<br/>JBOD Drives"] -->|"Raw Data"| I["Ingest<br/>Station"]
    I -->|"Upload"| ETL["ETL Pipeline"]
    ETL -->|"Processed"| NAS["NetApp<br/>File Share"]
    NAS -->|"NFS Mount"| K8S["K8s Pods<br/>(IVS Apps)"]
```
### Data Flow
1. Vehicle JBOD -- Hard drives in vehicles capture raw sensor and telemetry data during operation
2. Ingest -- Drives are physically connected to ingest stations when vehicles return
3. ETL -- Raw data is transformed, filtered, and formatted through the ETL pipeline
4. NetApp -- Processed data lands on a NetApp file share for long-term storage and application access
5. K8s Consumption -- IVS applications running on the Palette-managed cluster access NetApp via NFS mount or CSI driver (see the sketch after this list)
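If the plain NFS option is chosen, the consumption side could look roughly like the static PersistentVolume below. This is a sketch only: the protocol decision, NetApp endpoint, export path, and capacity are all still TBD per the discovery table in the next section, so every value here is a placeholder.

```yaml
# Rough sketch of the NFS option only -- endpoint, export path, and size are
# placeholders pending the discovery items below.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: netapp-vehicle-data            # placeholder name
spec:
  capacity:
    storage: 10Ti                      # placeholder size
  accessModes:
    - ReadWriteMany
  nfs:
    server: netapp.example.internal    # placeholder endpoint
    path: /vehicle-data                # placeholder export path
```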
### NetApp Integration (Discovery Required)
The NetApp integration details are still being confirmed:
| Item | Status | Notes |
|---|---|---|
| Protocol | TBD | NFS mount vs. NetApp Trident CSI driver |
| Network access | TBD | Verify NetApp is reachable from VLAN 111 |
| Endpoint/IP | TBD | Shawn Kettenring to provide |
| Current platform usage | TBD | How IVS apps consume NetApp on current platform today |
| Permissions/exports | TBD | NFS export config for cluster subnet |
**Shawn Kettenring**
Shawn is the RDS-OneTech storage and infrastructure lead responsible for the NetApp environment. Contact him for all storage-related questions: shawn.kettenring@toyota.com, C: 502-642-3614.
## Boot Drive
The OS installs on /dev/sda (223.5GB) -- the smallest drive in each node. The appliance mode ISO targets this drive specifically.
| Parameter | Value |
|---|---|
| Install Target | /dev/sda |
| Size | 223.5 GB |
| Partitions | BIOS Boot (Legacy), OS, Recovery (A/B) |
| Boot Mode | Legacy (CSM) -- UEFI not viable on this hardware |
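Before install, the target can be sanity-checked by listing drives by size: the boot drive is the only one in the ~223.5 GB class, while everything else belongs to the Portworx pool. Output will vary slightly by node.

```bash
# List drives with size and model; /dev/sda (~223.5G) is the expected OS
# install target, and the 7TB-class drives belong to the Portworx pool.
lsblk -dno NAME,SIZE,TYPE,MODEL
```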
**A/B Partitions**
The appliance mode OS uses an A/B partition layout for zero-downtime upgrades. During an OS upgrade, the new image is written to the inactive partition; on reboot, the system switches to the updated partition. If the upgrade fails, it automatically rolls back to the previous partition.