Xelera Silva provides best-in-class throughput and latency for XGBoost, LightGBM and Random Forest inference by leveraging COTS datacenter-grade FPGA accelerators.
10x higher throughput than GPUs, 50x higher throughput than CPUs.
Microsecond query latency without jitter.
Works with standard machine learning frameworks and requires zero code changes.
Built to run out-of-the-box on COTS servers and datacenter FPGA hardware.
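To make the zero-code-change claim concrete, below is a minimal sketch of the kind of standard XGBoost workflow such an accelerator targets. The model, feature dimensions, and data are illustrative assumptions, not part of Silva's API; the point is that the inference call itself stays plain framework code.

```python
# Minimal sketch: a standard XGBoost train-and-predict workflow.
# Silva claims to accelerate exactly this kind of unmodified code;
# the synthetic data and model settings below are illustrative only.
import numpy as np
import xgboost as xgb

# Train a small model on synthetic data (stand-in for a production model).
X_train = np.random.rand(10_000, 32).astype(np.float32)
y_train = (X_train[:, 0] + X_train[:, 1] > 1.0).astype(np.int32)
model = xgb.XGBClassifier(n_estimators=200, max_depth=6)
model.fit(X_train, y_train)

# Standard inference call -- the step the accelerator speeds up.
X_query = np.random.rand(1, 32).astype(np.float32)
print(model.predict_proba(X_query))
```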
Gradient Boosting frameworks such as XGBoost, LightGBM and CatBoost, as well as Random Forest algorithms, are widely used in ransomware and DDoS detection systems, recommender systems and high-frequency trading systems.
Benchmark:
Test setup:
High-frequency traders use decision algorithms to automate trading instructions, and these automated decisions are increasingly made by XGBoost and LightGBM models. Low latency is key for these systems. Silva overcomes the latency disadvantage of machine learning algorithms: inference of XGBoost and LightGBM models is performed with a latency of a few microseconds. This enables customers to make better decisions and win speed races.
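For context, the sketch below shows one way to measure per-query (single-row) inference latency for a LightGBM model on a CPU as a baseline. Silva's microsecond figures refer to its FPGA path, which is not shown here; the model and data are illustrative assumptions.

```python
# Hedged sketch: measuring single-query inference latency on CPU.
# Model size, feature count, and data are illustrative only.
import time
import numpy as np
import lightgbm as lgb

X = np.random.rand(50_000, 64).astype(np.float32)
y = (X[:, :4].sum(axis=1) > 2.0).astype(np.int32)
model = lgb.LGBMClassifier(n_estimators=500, num_leaves=64)
model.fit(X, y)

# Time repeated single-row predictions, as an HFT-style query pattern.
query = np.random.rand(1, 64).astype(np.float32)
runs = 1_000
start = time.perf_counter()
for _ in range(runs):
    model.predict(query)
elapsed = time.perf_counter() - start
print(f"mean single-query latency: {elapsed / runs * 1e6:.1f} us")
```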
The network is the first line of defense in securing an enterprise IT system. Firewalls using Gradient Boosting Decision Trees inspect network flow data to detect suspicious patterns (e.g., a DDoS attack). Silva provides the inference speed needed so that this computationally intensive classification of network flows keeps up with the required bandwidth and does not slow down the network.
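As an illustration of the flow-classification step described above, the sketch below trains a GBDT on synthetic per-flow features and scores a batch of flows. The feature names, thresholds, and data are assumptions for illustration, not Silva's interface or a real detection model.

```python
# Illustrative sketch: GBDT-based classification of network flows.
# All features, labels, and thresholds below are synthetic assumptions.
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
n = 100_000
# Hypothetical per-flow features: packet rate, byte rate, duration, port fan-out.
flows = np.column_stack([
    rng.exponential(200.0, n),   # packets per second
    rng.exponential(1e5, n),     # bytes per second
    rng.exponential(5.0, n),     # flow duration in seconds
    rng.integers(1, 50, n),      # distinct destination ports
]).astype(np.float32)
# Toy label: a very high packet rate spread over many ports looks DDoS-like.
labels = ((flows[:, 0] > 600) & (flows[:, 3] > 20)).astype(np.int32)

clf = xgb.XGBClassifier(n_estimators=300, max_depth=8)
clf.fit(flows, labels)

# Score a batch of incoming flows -- the step an accelerator would offload.
batch = flows[:1024]
scores = clf.predict_proba(batch)[:, 1]      # probability a flow is an attack
suspicious = np.where(scores > 0.9)[0]
print(f"{suspicious.size} suspicious flows flagged")
```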