Infrastructure

Industrial HPC Buildout

Centrally orchestrated compute clusters optimized for bandwidth-bound inference workloads.

System Topology

Tiered orchestration and compute layers designed for maximum inference efficiency.

Tier 1: Workstation
The Architect's Bench

Dual-use workstation. Day: Creative production. Night: Vast.ai compute node.

CPU: AMD Ryzen Threadripper 7960X
GPU: 2x NVIDIA RTX 6000 Blackwell Max-Q
Motherboard: ASUS Pro WS TRX50-SAGE WIFI
Memory: 192GB
Storage (Win): 4TB Samsung 990 PRO NVMe
Tier 2: Compute
Seed Rig Clusters

High-density 4x GPU nodes filling the "VRAM Vacuum." Phase 2 deployment: 40x RTX 6000 Blackwell (3.84TB VRAM) across 10 units; see the capacity check after the spec list below.

GPU: 4x NVIDIA RTX 6000 Blackwell Max-Q (96GB GDDR7 ECC)
CPU: AMD EPYC 9124 (16-Core)
Motherboard: Gigabyte MZ73-LM0
Power: 1x EVGA 1600W P+ (Primary) + 1x Seasonic 1000W (Secondary)
Thermal Load: ~5,000 BTU/hr
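
A quick capacity check on the figures above; this is pure arithmetic from the quoted specs, with illustrative variable names:

    # VRAM capacity check for the Seed Rig spec above.
    GPU_VRAM_GB = 96       # RTX 6000 Blackwell Max-Q, per the spec table
    GPUS_PER_RIG = 4
    PHASE2_RIGS = 10

    rig_vram_gb = GPU_VRAM_GB * GPUS_PER_RIG          # 384 GB per rig
    fleet_vram_tb = rig_vram_gb * PHASE2_RIGS / 1000  # 3.84 TB across 10 units

    print(f"Per-rig VRAM pool: {rig_vram_gb} GB")
    print(f"Phase 2 fleet VRAM: {fleet_vram_tb:.2f} TB (40 GPUs)")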

Technical Specifications

Rationale for component selection and financial yield.

01 GPU: 4x NVIDIA RTX 6000 Blackwell Max-Q (96GB GDDR7 ECC)
384GB unified VRAM pool; the 300W Max-Q TDP enables residential deployment.
02 CPU: AMD EPYC 9124 (16-Core)
128 PCIe Gen 5.0 lanes provide full x16 bandwidth to all four GPUs.
03 Motherboard: Gigabyte MZ73-LM0 (Rev 3.0) SP5
Server-grade board with IPMI remote management for headless fleet operation.
04 Memory: 256GB (8x32GB) DDR5-4800 ECC RDIMM
System RAM sized toward a 1:1 RAM-to-VRAM ratio for efficient model loading.
05 Storage: 1TB NVMe (OS) + 4TB NVMe (Model Cache)
High-capacity scratch disk for the HuggingFace model weight cache.
06 Power: 1x EVGA 1600W P+ (Primary) + 1x Seasonic 1000W (Secondary)
Dual-PSU topology covers the ~1,450W total system load with comfortable headroom (see the power budget sketch below).
07 Chassis: Phanteks Enthoo Pro 2 Server Edition
Supports dual PSUs and SSI-EEB motherboards with exceptional airflow.
08 Cooling: Grow Tent Enclosure + AC Infinity Cloudline S8
800+ CFM external exhaust; winter heat recovery offsets heating costs.
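
A minimal arithmetic check on the power and thermal figures in items 06 and 08; the only assumption beyond the quoted specs is the standard watt-to-BTU/hr conversion factor:

    # Power and thermal budget for one Seed Rig, from items 06 and 08 above.
    SYSTEM_LOAD_W = 1450            # ~1,450W continuous system load
    PSU_CAPACITY_W = 1600 + 1000    # combined dual-PSU rating
    W_TO_BTU_HR = 3.412             # standard conversion: 1 W = 3.412 BTU/hr

    headroom = 1 - SYSTEM_LOAD_W / PSU_CAPACITY_W
    thermal_btu_hr = SYSTEM_LOAD_W * W_TO_BTU_HR

    print(f"PSU headroom at full load: {headroom:.0%}")    # ~44%
    print(f"Heat rejected: {thermal_btu_hr:,.0f} BTU/hr")  # ~4,950, i.e. the ~5,000 BTU/hr quoted above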
CapEx (Hardware Cost): $34,424
Monthly Revenue Target: ~$2,025
OpEx Efficiency (Cloud Avoidance): -$2,400/yr
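
A simple-payback sketch from these three figures; it ignores power and bandwidth OpEx and utilization risk, so read it as an upper bound on yield, not a forecast:

    # Simple payback per Seed Rig, using only the figures quoted above.
    CAPEX_USD = 34_424              # hardware cost
    REVENUE_MO_USD = 2_025          # monthly revenue target
    CLOUD_AVOIDANCE_YR_USD = 2_400  # annual cloud spend avoided

    payback_months = CAPEX_USD / REVENUE_MO_USD
    annual_yield = REVENUE_MO_USD * 12 + CLOUD_AVOIDANCE_YR_USD

    print(f"Simple payback: {payback_months:.1f} months")  # ~17 months
    print(f"Gross annual yield incl. cloud avoidance: ${annual_yield:,}")  # $26,700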

Industrial Operations

Strategic migration from leased acceleration to asset ownership.

Phase 1
Pilot Batch

Specifications

Hardware: 3 Seed Rigs + 1 Architect's Bench
Location: Austintown, OH (Lab)
Power: NEMA 14-50 (240V/50A)

Objectives

Revenue Target: ~$6,075/mo
CapEx Requirement: $129,272
Market Entry Strategy: Low-Risk Validation
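
These objectives cross-check against the single-rig economics above; the workstation cost below is implied by the totals, not quoted anywhere in this plan:

    # Phase 1 cross-check against single-rig figures ($34,424 CapEx, $2,025/mo).
    RIG_CAPEX, RIG_REVENUE_MO, RIGS = 34_424, 2_025, 3
    PHASE1_CAPEX, PHASE1_REVENUE_MO = 129_272, 6_075

    assert RIGS * RIG_REVENUE_MO == PHASE1_REVENUE_MO  # 3 x $2,025 = $6,075
    bench_capex = PHASE1_CAPEX - RIGS * RIG_CAPEX      # remainder = Architect's Bench
    print(f"Implied workstation CapEx: ${bench_capex:,}")  # $26,000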
Phase 2
Leased Acceleration Platform

Specifications

Location: Austintown, OH
Power: 200A @ 480V 3φ
Capacity: 34 Rigs

Timeline

M1-M3: Fleet Ramp
M4-M12: Steady State
Power Efficiency Advantage: +50% Capacity
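
A rough check of what the 200A, 480V three-phase service supports; this assumes unity power factor and ignores NEC continuous-load derating and non-IT loads (fans, networking), so usable capacity is lower in practice:

    import math

    # Three-phase apparent power: S = sqrt(3) x V x I.
    SERVICE_KVA = math.sqrt(3) * 480 * 200 / 1000  # ~166 kVA
    RIG_KW, RIGS = 1.45, 34

    it_load_kw = RIG_KW * RIGS                     # ~49 kW of rig load
    print(f"Service capacity: {SERVICE_KVA:.0f} kVA")
    print(f"Fleet IT load: {it_load_kw:.1f} kW "
          f"({it_load_kw / SERVICE_KVA:.0%} of service)")  # ~30%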
Phase 3
Unlisted HQ (Owned)

Specifications

Power: 800A @ 480V 3φ
Capacity: 164 Rigs
Rate: ~$0.08/kWh

Objective

Rent Expense: $0.00
Equity: Owned Asset
Funding: Self-funded via Phase 2 revenue (asset ownership)
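
At the quoted ~$0.08/kWh rate, a back-of-envelope power bill for the full Phase 3 fleet, assuming 24/7 operation at the 1.45kW continuous per-rig load used throughout this plan:

    # Phase 3 monthly energy cost at the quoted industrial rate.
    RIG_KW, RIGS = 1.45, 164
    RATE_USD_KWH, HOURS_MO = 0.08, 24 * 30

    fleet_kw = RIG_KW * RIGS                       # ~238 kW continuous
    power_opex_mo = fleet_kw * HOURS_MO * RATE_USD_KWH
    print(f"Fleet load: {fleet_kw:.1f} kW")
    print(f"Estimated power OpEx: ${power_opex_mo:,.0f}/mo")  # ~$13,700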
Scalability
Megawatt Scale

1 MW Capacity

Seed Rigs: ~689 Units
RTX 6000 Blackwell GPUs: ~2,756 Units

Density Metric

Theoretical density based on 1.45kW continuous load per rig. Demonstrates infrastructure efficiency at industrial scale.
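
Both unit counts follow directly from that per-rig load:

    # Megawatt-scale density from the 1.45 kW continuous load per rig.
    BUDGET_KW, RIG_KW, GPUS_PER_RIG = 1_000, 1.45, 4

    rigs = int(BUDGET_KW / RIG_KW)   # floor: partial rigs don't count
    print(f"Seed Rigs per MW: ~{rigs}")            # ~689
    print(f"GPUs per MW: ~{rigs * GPUS_PER_RIG}")  # ~2,756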

Environment & Connectivity

A high-bandwidth network spine paired with passive, climate-driven cooling.

Network Architecture
Uplink: 2Gbps Symmetric Fiber
Backbone: 10GbE SFP+ Spine
Isolation: WireGuard / VLAN Tunnels
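
An illustrative bandwidth budget for these links; the even per-rig split is a simplification, since real inference traffic is bursty and the 10GbE spine also carries east-west traffic (model sync, monitoring) that never touches the uplink:

    # WAN uplink share per rig at Phase 1 (4 nodes) and Phase 2 (34 rigs) scale.
    UPLINK_MBPS, SPINE_MBPS = 2_000, 10_000

    for phase, rigs in (("Phase 1", 4), ("Phase 2", 34)):
        print(f"{phase}: {UPLINK_MBPS / rigs:,.0f} Mbps uplink per rig; "
              f"spine is {SPINE_MBPS // UPLINK_MBPS}x the uplink")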
Economizer Cooling

Ohio's climate provides ~5,000 free-cooling hours per year. Open-air rackmount chassis paired with industrial HVLS fans eliminate the need for traditional AC.

Target PUE: 1.1
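
PUE is total facility power divided by IT equipment power, so the 1.1 target implies roughly 10% overhead for fans and networking; the conventional-facility comparison below uses a general industry figure (PUE ~1.5), not a site measurement:

    # What the 1.1 PUE target implies at the 1.45 kW per-rig IT load.
    TARGET_PUE, TYPICAL_PUE, RIG_IT_KW = 1.1, 1.5, 1.45  # 1.5 = common AC-cooled figure

    overhead_w = RIG_IT_KW * (TARGET_PUE - 1) * 1000
    saved_w = RIG_IT_KW * (TYPICAL_PUE - TARGET_PUE) * 1000
    print(f"Overhead per rig at target: {overhead_w:.0f} W")       # ~145 W
    print(f"Saved vs. conventional cooling: {saved_w:.0f} W/rig")  # ~580 W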

Model lifecycle + scaling direction

How we iterate models today and how the industry’s infrastructure direction informs our roadmap.

Model lifecycle pipeline (fine-tune → evaluate → deploy)
Current

Applied AI wins come from iteration velocity and repeatability. Our pipeline is designed to take an open model, fine-tune it on domain data, evaluate, and deploy to production inference, keeping data governance and reproducibility first-class throughout. A minimal sketch of the loop follows the list below.

  • Base model selection (open-source / frontier).
  • Fine-tune + eval loop for domain fit (repeatable, auditable).
  • Deployment to production inference with monitoring and rollback.
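
As a concrete illustration of the loop, here is a minimal sketch built on Hugging Face transformers and peft; the base model name, data file, LoRA settings, and quality gate are placeholders, not our production configuration:

    # Fine-tune -> evaluate -> deploy, sketched with transformers + peft.
    # Everything named here (model, file paths, threshold) is illustrative.
    from datasets import load_dataset
    from peft import LoraConfig, get_peft_model
    from transformers import (AutoModelForCausalLM, AutoTokenizer,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    BASE = "meta-llama/Llama-3.1-8B"   # placeholder base model
    tokenizer = AutoTokenizer.from_pretrained(BASE)
    tokenizer.pad_token = tokenizer.pad_token or tokenizer.eos_token

    # Fine-tune with a LoRA adapter: small, auditable, and trivially reversible.
    model = get_peft_model(
        AutoModelForCausalLM.from_pretrained(BASE),
        LoraConfig(r=16, target_modules=["q_proj", "v_proj"]),
    )

    # Domain data with a held-out split for the evaluation gate.
    raw = load_dataset("json", data_files="domain_data.jsonl")["train"]
    splits = raw.train_test_split(test_size=0.1, seed=42)
    tok = lambda batch: tokenizer(batch["text"], truncation=True, max_length=1024)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="runs/ft-v1", num_train_epochs=1,
                               per_device_train_batch_size=2, seed=42),
        train_dataset=splits["train"].map(tok, batched=True),
        eval_dataset=splits["test"].map(tok, batched=True),
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    )
    trainer.train()

    # Evaluate, then gate: promote the adapter only if it clears the bar.
    if trainer.evaluate()["eval_loss"] < 1.5:        # placeholder quality gate
        model.save_pretrained("deploy/adapter-v1")   # versioned artifact for rollback

Gating on a held-out metric and shipping adapters as versioned artifacts is what keeps rollback cheap: reverting a bad deploy is a pointer change, not a retrain.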
Hardware roadmap (directional)
Forward-looking

We build with what is economical and deployable today, while aligning to the industry’s rack-scale direction. This is roadmap context, not a procurement commitment.

  • Current: Seed Rig clusters optimized for memory-heavy inference.
  • Next: deskside systems to accelerate dev/staging workflows when needed.
  • Future: rack-scale AI factory patterns (Rubin-era framing).
Sources / Further Reading
NVIDIA context

These links provide industry context and terminology used throughout the plan. They are not a statement of procurement or vendor dependency.

  • Inside the NVIDIA Rubin platform: six new chips, one AI supercomputer (NVIDIA)
  • DGX Spark and DGX Station: open-source frontier models (NVIDIA)
  • Equinix Private AI with NVIDIA DGX (NVIDIA)
  • RTX AI Garage: fine-tuning with Unsloth and DGX Spark (NVIDIA)