AlpenX - Unlock Your AI Potential
with Our Computing Power
Your Global Partner in High-Performance
Hardware and AI Innovation
As a global team with deep roots in Austria, Germany, Australia, China, and the USA, we bring our partners a global perspective and global resources.
Make industrial-grade AI and compute as accessible and dependable as utilities, so every builder in Europe can ship real products faster.
We deliver reliable AI servers and solutions with fast lead times, fair pricing, and steady support. By combining a global supply base with local execution, we help partners avoid vendor lock-in, protect project economics, and launch with confidence.
- Partner-first — Your launch date, not our vanity metrics, is the north star.
- Reliability over hype — We ship what works in production, with clear specs and benchmarks.
- Speed with responsibility — Quick delivery, transparent timelines, and accountable SLAs.
- Open & compatible — Fit into existing stacks with minimal change; no lock-in.
- Craft & simplicity — Clean architectures, tested configs, docs you can actually use.
- Integrity & compliance — Data privacy, safety, and honest communication come first.
- Efficiency & sustainability — Better TCO, power awareness, and lifecycle thinking.
One-Stop AI Solution


VA16 DeepSeek FP8+RAG
- Product: 4U 8-GPU Server
- CPU: X86/C86 32C or ARM 64C * 2
- Memory: 32G DDR4/5 RDIMM ECC * 16 (512GB)
- AI Acceleration Card: Hanbo VA16-128G * 8 (1TB)
- Metahuman: L2/L20/VA1&VG1000
- Power: 2.5KW
- Models: DeepSeek-R1 671B FP8: single stream at 12 tk/s, or 4 concurrent streams in real time; Qwen3-30B-A3B FP8: 16 streams at 20 tk/s each, or 64 concurrent streams in real time
USE CASE:
DeepSeek Full-Power Version: Experience blazing-fast, real-time inference with the flexibility of single-stream efficiency (12 tokens/sec) or the robust power of four-stream concurrent processing.
Qwen3-30B MoE: Our enterprise-grade solution scales effortlessly, delivering high-speed performance (20 tokens/sec per stream across 16 streams) or massive concurrent real-time inference for up to 64 streams. It is the ideal platform to power intelligent office applications, tailored to the specific scale and quality requirements of any business unit.
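To make the relationship between the per-stream rates and total throughput concrete, here is a minimal sketch. The rates and stream counts come from the spec list above; the linear-scaling assumption and the helper name `aggregate_throughput` are illustrative, since real serving stacks only approximate linear scaling under batching and KV-cache pressure.

```python
# Illustrative sketch: aggregate token throughput from per-stream rates,
# assuming streams scale linearly (a simplification of real serving behavior).

def aggregate_throughput(per_stream_tps: float, streams: int) -> float:
    """Total tokens/second across all concurrent streams."""
    return per_stream_tps * streams

# DeepSeek-R1 671B FP8 on the VA16 server: 12 tk/s on a single stream.
print(aggregate_throughput(12, 1))    # 12 tokens/s

# Qwen3-30B-A3B FP8: 20 tk/s per stream at 16 concurrent streams.
print(aggregate_throughput(20, 16))   # 320 tokens/s aggregate
```

The same arithmetic explains the trade-off in the spec: fewer streams give each user a faster interactive rate, while higher concurrency maximizes total tokens served.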
VA1L Qwen3-30B-A3B FP8+RAG
- Product: 2U / Tower 1/2/4/8/10 GPU Servers
- CPU: X86/C86 32C or ARM 64C * 1/2
- Memory: 32G DDR4/5 RDIMM ECC * 4 (128GB)
- AI Acceleration Card: Hanbo VA1L-64G * 1/2/4/8/10
- Metahuman: L2/L20/VA1&VG1000
- Power: <2KW
- Models: Qwen3-30B-A3B FP8: single stream at 20 tk/s, or 4-40 concurrent streams in real time
USE CASE:
Qwen3-30B MoE delivers flexible, high-performance inference, scaling from a single stream at 20 tokens/second to 40 concurrent real-time streams for massive parallel workloads. It is designed to efficiently power intelligent office applications for businesses of any size.
VA10S-128G AI Workstation
- Product: Workstations
- CPU: X86/ARM CPU * 1
- Memory: 8G DDR5 * 4 (32GB)
- AI Acceleration Card: Hanbo VA10S-128G
- Metahuman: L2/L20/VA1&VG1000
- Power: < 600 W
- Models: Qwen3-30B-A3B or Qwen3-8B/4B
USE CASE:
Enabling real-time understanding and generation of multimodal content directly on edge devices, accelerating digital transformation for industries like manufacturing and transportation.
VS1000-32G MEC

- Product: Edge Computing
- CPU: ARM SoC * 1
- Memory: 16GB
- AI Acceleration Card: Hanbo VE1M-32G
- Metahuman: L2/L20/VA1&VG1000
- Power: 60 W
- Models: Qwen3-30B-A3B or Qwen2.5-VL 7B
USE CASE:
Enabling real-time understanding and generation of multimodal content directly on edge devices, accelerating digital transformation for industries like manufacturing and transportation.
Computing Power - Server

AI Training and Inference server
- CPU: Dual Hygon 7470 (2.6GHz / 48 cores / 256MB L3 / 350W)
- GPU: C500 (PCIe Gen5)
- Memory: 32GB DDR4/5 RDIMM ECC ×16 (512GB total)
- Storage: 1TB NVMe U.2 3.5" RI SSD ×2, 8TB SATA 2.5" ×4
- Network Cards: Management NIC: 2-port 25GE optical interface NIC (OCP 3.0);
- Data NIC: 400Gb high-performance NIC (PCIe 5.0)
- Power Supply: 5400W AC & 240V high-voltage commercial power modules
- Chassis: 4U 8-GPU 19-inch standard server chassis

Edge server
- CPU: Dual Hygon 7390 or Dual Phytium S5000C, 64 cores
- GPU: VA16-128G ×8 (Total VRAM 1TB)
- Memory: 32GB DDR4/5 RDIMM ECC ×16 (512GB)
- Storage: 1TB NVMe U.2 3.5" SSD ×2; 8TB SATA 2.5" ×4
- Network Cards: Management NIC: 2-port 25GE optical interface NIC (OCP 3.0)
- Data NIC: 256Gb high-performance NIC (PCIe 5.0)
- Power Supply: 2500W AC & 240V high-voltage DC power modules
- Chassis: 4U 8-GPU 19-inch standard server chassis

Rendering server
- CPU: Dual Phytium S5000C (2.1GHz / 64 cores / 32MB / 350W)
- GPU: VG1000-64G ×16, 70W per card, 7.6 TFLOPS rendering performance
- Memory: 64GB DDR4/5 RDIMM ECC ×16 (1TB)
- Storage: 1.92TB NVMe U.2 3.5" RI SSD ×2; 8TB SATA 2.5" ×8
- Network Cards: 3 × dual-port 25G optical NICs (for business, with in-band management and storage)
- Power Supply: 2500W AC & 240V high-voltage DC modules, N+M redundancy
- Chassis: 4U 16-GPU 19-inch standard server chassis

GPU server
- CPU: Supports 2 × AMD EPYC™ 9004/9005 processors, totaling 192 cores
- GPU: RTX 2060/3060 ×16, dual-width full-height cards
- Memory: 64GB DDR4/5 RDIMM ECC ×16 (1TB)
- Storage: Supports 8 × 3.5"/2.5" SAS/SATA drives (optionally 2/4 × U.2 NVMe SSDs); supports 2 onboard M.2 slots
- Network Cards: Management NIC: 2-port 25GE optical NIC (OCP 3.0)
- Data NIC: 256Gb high-performance NIC (PCIe 5.0)
- Power Supply: Dual-level power system, each level with 4 platinum-grade CRPS modules (2000W/2700W), supporting 2+2 or 3+1 redundancy modes
- Chassis: 7U 16-GPU standard rack server
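The 2+2 and 3+1 redundancy modes above trade usable capacity for fault tolerance: only the active modules contribute to the power budget, while the spares cover module failures. A minimal sketch of that arithmetic follows; the 2700 W module rating is from the spec list, and the helper `psu_budget` is illustrative, not vendor tooling.

```python
# Illustrative sketch of N+M PSU redundancy arithmetic.

def psu_budget(module_watts: float, active: int, spares: int):
    """Return (usable_watts, installed_watts) for an N+M module bank:
    N active load-sharing modules plus M redundant spares."""
    usable = module_watts * active
    installed = module_watts * (active + spares)
    return usable, installed

# Four 2700 W CRPS modules per level:
print(psu_budget(2700, 2, 2))  # 2+2 mode: (5400, 10800)
print(psu_budget(2700, 3, 1))  # 3+1 mode: (8100, 10800)
```

In other words, the same four-module bank yields 5400 W usable with double-failure protection (2+2) or 8100 W usable with single-failure protection (3+1).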