Open source workloads on steroids

Revolutionary servers designed for modern software applications

Bamboo Arm Servers are architected to run applications based on modern open software architectures, such as containers. With highly balanced I/O and application processing capabilities, our PANDA-based servers are the right platform for solutions such as Kubernetes, Platform-as-a-Service, AI/ML, and Edge computing.



Legacy IT infrastructure isn’t architected for the way Kubernetes uses storage and network resources. Bamboo’s revolutionary approach to network and storage I/O addresses the unique requirements of distributed, containerized applications.

We deliver unmatched network, storage, and memory access throughput. By removing the usual bottlenecks, we achieve a throughput density no other Kubernetes platform can match.

Containers have their own unique system of port mappings, overlays, and bandwidth requirements that creates a host of interoperability challenges. Bamboo eliminates these configuration roadblocks by delivering full bandwidth to each and every application processor. Similarly, each application processor has high-performance direct access to local and onboard networked storage, enabling data locality and the management of large datasets without the costs and configuration woes of shared memory.

Edge Computing

Delivering a time-sensitive service requires low latency in both compute and access to critical data.

Edge computing places systems physically close to where massive amounts of data are generated, to support split-second computational operations on that data.

With high throughput density, ample data storage, low power consumption, and low heat output, a PANDA-based server is uniquely suited to edge computing.

Bamboo Servers deliver higher compute density with less air conditioning, electricity, and cabling than a traditionally architected server, making edge computing simple.


AI/ML Simulation Development

GPUs are superb for purely mathematical computations, but in a real-world AI/ML application the computations are continually interrupted while more data is fed into the GPU.

The PANDA architecture overcomes this bottleneck.

PANDA-based servers are better balanced to deliver I/O, so data can be fed to the application more effectively, making any PANDA server a better scale-up proposition for AI/ML centralized in the data center and a natural fit at the edge.



Whether you are charged for your data center footprint by space or by power consumption, a PANDA-based server reduces your costs: high I/O throughput and reduced heat output let a given workload run on as little as 20% of the power and 10% of the space of a traditional server.

Deployment is simple, with a modular design that scales from a single blade in 1U to a complete rack with minimal cabling.

Servers are built with diagnostic capabilities and customer-hot-swappable parts, so you don’t have to wait for an engineer to arrive.

We run all common open-systems-based software, so you can build service platforms at a much lower total operating cost.