High-level async TCP/IP networking for Rust using DPDK for kernel-bypass packet I/O.
dpdk-net combines three technologies to provide high-performance networking:
- DPDK - Kernel-bypass packet I/O directly to/from the NIC
- smoltcp - User-space TCP/IP stack
- Async runtime - A Rust async runtime for task scheduling
This enables building network applications (HTTP servers, proxies, etc.) that bypass the kernel network stack entirely, achieving lower latency and higher throughput.
Benchmarks show roughly 2x the throughput and half the latency compared to an equivalent tokio-based server.
```
┌─────────────────────────────────────────────────────────────────────┐
│                          Application Layer                          │
│          (axum, tonic gRPC, hyper, TcpStream, TcpListener)          │
└─────────────────────────────────────────────────────────────────────┘
                                   │
                                   ▼
┌─────────────────────────────────────────────────────────────────────┐
│                           Framework Layer                           │
│         dpdk-net-util (DpdkApp, WorkerContext, HTTP client,         │
│                   axum serve, tonic serve/channel)                  │
└─────────────────────────────────────────────────────────────────────┘
                                   │
                                   ▼
┌─────────────────────────────────────────────────────────────────────┐
│                         Async Runtime Layer                         │
└─────────────────────────────────────────────────────────────────────┘
                                   │
                                   ▼
┌─────────────────────────────────────────────────────────────────────┐
│                        TCP/IP Stack (smoltcp)                       │
└─────────────────────────────────────────────────────────────────────┘
                                   │
                                   ▼
┌─────────────────────────────────────────────────────────────────────┐
│                       DPDK (kernel-bypass I/O)                      │
└─────────────────────────────────────────────────────────────────────┘
                                   │
                                   ▼
┌─────────────────────────────────────────────────────────────────────┐
│                            Hardware NIC                             │
└─────────────────────────────────────────────────────────────────────┘
```
- Async/await support - `TcpListener`, `TcpStream` with `futures_io::AsyncRead`/`AsyncWrite`; runtime-agnostic
- axum integration - Serve an axum `Router` directly on DPDK sockets (`dpdk-net-util` feature: `axum`)
- tonic gRPC - gRPC server and client over DPDK (`dpdk-net-util` feature: `tonic`)
- HTTP client - `DpdkHttpClient` for HTTP/1.1 and HTTP/2 requests (`dpdk-net-util`)
- Multi-queue scaling - RSS (Receive Side Scaling) distributes connections across CPU cores
- DpdkApp framework - Lcore-based application runner with per-queue smoltcp stacks
- CPU affinity - Worker threads pinned to cores for optimal cache locality
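The multi-queue scaling above relies on the NIC hashing each connection's flow tuple to a fixed RX queue, so every packet of a connection is handled by the same core and per-queue smoltcp stack. The sketch below illustrates that mapping with an ordinary hash; it is illustrative only — real NICs compute a Toeplitz hash in hardware, and `rss_queue` is a hypothetical helper, not part of dpdk-net's API.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
use std::net::Ipv4Addr;

/// Map a TCP 4-tuple to one of `num_queues` RX queues.
/// Deterministic, so all packets of one flow land on the same queue.
fn rss_queue(src: (Ipv4Addr, u16), dst: (Ipv4Addr, u16), num_queues: u64) -> u64 {
    let mut h = DefaultHasher::new();
    (src, dst).hash(&mut h);
    h.finish() % num_queues
}

fn main() {
    let client = (Ipv4Addr::new(10, 0, 0, 2), 49152);
    let server = (Ipv4Addr::new(10, 0, 0, 10), 8080);
    // The same flow always maps to the same queue (and thus the same core).
    assert_eq!(rss_queue(client, server, 4), rss_queue(client, server, 4));
    println!("flow -> queue {}", rss_queue(client, server, 4));
}
```

Because the mapping is deterministic per flow, no cross-core synchronization is needed on the data path: each worker owns its queue's connections outright.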
| Crate | Description |
|---|---|
| `dpdk-net` | Core library: DPDK wrappers, smoltcp integration, async TCP sockets |
| `dpdk-net-sys` | FFI bindings to the DPDK C library (generated via bindgen) |
| `dpdk-net-util` | `DpdkApp`, `WorkerContext`, HTTP client, `LocalExecutor`, axum `serve()`, tonic `serve()` + `DpdkGrpcChannel` |
| `dpdk-net-test` | Test harness, example servers, integration tests |
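To depend on the crates from an application, a `Cargo.toml` along these lines should work; the version number is hypothetical, while the `axum` and `tonic` feature names are the ones documented above.

```toml
[dependencies]
# Version is a placeholder; use the release you actually target.
dpdk-net = "0.1"
dpdk-net-util = { version = "0.1", features = ["axum", "tonic"] }
```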
- Architecture - Crate structure and implementation details
- Design - Design docs for DpdkApp, Axum, Tonic, HTTP Client
- Benchmarks - Performance comparison with tokio on Azure
- Limitations - Known limitations and constraints
- Linux with hugepages configured
- DPDK-compatible NIC (Intel, Mellanox, etc.) or virtual device for testing
- Root privileges (for DPDK memory and device access)
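The hugepage and NIC prerequisites are standard DPDK host setup, not specific to this project. A typical sequence looks like the following; the hugepage count and PCI address are examples you must adapt to your machine.

```shell
# Reserve 1024 x 2 MiB hugepages (2 GiB total; size your reservation to the workload)
echo 1024 | sudo tee /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

# Mount hugetlbfs if your distribution has not already done so
sudo mkdir -p /dev/hugepages
sudo mount -t hugetlbfs nodev /dev/hugepages

# Bind the NIC to a DPDK-compatible userspace driver (PCI address is an example)
sudo modprobe vfio-pci
sudo dpdk-devbind.py --bind=vfio-pci 0000:00:06.0
```

Note that a NIC bound to `vfio-pci` disappears from the kernel's view, so do this on a dedicated data-plane interface, not the one carrying your SSH session.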
Install DPDK from your package manager, or build it from source:

```sh
cmake -S . -B build
cmake --build build --target dpdk_configure
cmake --build build --target dpdk_build --parallel
sudo cmake --build build --target dpdk_install
```

```rust
use dpdk_net_util::{DpdkApp, WorkerContext};
use dpdk_net_util::axum::serve;
use dpdk_net::socket::TcpListener;
use axum::{Router, routing::get};
use smoltcp::wire::Ipv4Address;

fn main() {
    // Initialize EAL (e.g., via EalBuilder)
    // ...
    let app = Router::new().route("/", get(|| async { "Hello from DPDK!" }));
    DpdkApp::new()
        .eth_dev(0)
        .ip(Ipv4Address::new(10, 0, 0, 10))
        .gateway(Ipv4Address::new(10, 0, 0, 1))
        .run(move |ctx: WorkerContext| {
            let app = app.clone();
            async move {
                let listener = TcpListener::bind(&ctx.reactor, 8080, 4096, 4096).unwrap();
                serve(listener, app, std::future::pending::<()>()).await;
            }
        });
}
```

```rust
use dpdk_net_util::tonic::serve;
use dpdk_net::socket::TcpListener;

// Inside the DpdkApp::run() closure:
let greeter = GreeterServer::new(MyGreeter::default());
let routes = tonic::service::Routes::new(greeter);
let listener = TcpListener::bind(&ctx.reactor, 50051, 4096, 4096).unwrap();
serve(listener, routes, std::future::pending::<()>()).await;
```

This project is under active development. The core functionality works, but the API surface is evolving.
MIT