Axum is unique in that it doesn’t have its own bespoke middleware system and instead integrates with tower. This means the entire ecosystem of tower and tower-http middleware works seamlessly with axum.

Why tower?

While it’s not necessary to fully understand tower to write or use middleware with axum, having at least a basic understanding of tower’s concepts is recommended. Tower provides a standardized interface for composable middleware through the Service and Layer traits.
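
To make the Service/Layer model concrete, here is a dependency-free, synchronous sketch. The real tower traits are async (with poll_ready and futures), and Echo, Prefix, and PrefixLayer are invented here purely for illustration:

```rust
// Simplified, synchronous rendition of tower's core idea: a Service maps a
// request to a response, and a Layer wraps one Service in another.
trait Service<Request> {
    type Response;
    fn call(&mut self, req: Request) -> Self::Response;
}

trait Layer<S> {
    type Service;
    fn layer(&self, inner: S) -> Self::Service;
}

// A leaf service: echoes its input uppercased (stands in for a handler).
struct Echo;
impl Service<String> for Echo {
    type Response = String;
    fn call(&mut self, req: String) -> String {
        req.to_uppercase()
    }
}

// A middleware that prefixes every request before delegating to `inner`.
struct Prefix<S> { inner: S }
struct PrefixLayer;
impl<S> Layer<S> for PrefixLayer {
    type Service = Prefix<S>;
    fn layer(&self, inner: S) -> Prefix<S> {
        Prefix { inner }
    }
}
impl<S: Service<String, Response = String>> Service<String> for Prefix<S> {
    type Response = String;
    fn call(&mut self, req: String) -> String {
        self.inner.call(format!("hello {req}"))
    }
}

fn main() {
    // Composition: the layer wraps the inner service, producing a new service.
    let mut svc = PrefixLayer.layer(Echo);
    let out = svc.call("axum".to_string());
    assert_eq!(out, "HELLO AXUM");
    println!("{out}");
}
```

This is the whole contract: because middleware is just a service wrapping another service, any tower-compatible middleware composes with any other.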

Applying middleware

Axum allows you to add middleware at multiple levels:
  • Entire routers: Router::layer and Router::route_layer
  • Method routers: MethodRouter::layer and MethodRouter::route_layer
  • Individual handlers: Handler::layer

Basic layer application

use axum::{
    routing::get,
    Router,
};
use tower_http::trace::TraceLayer;

async fn handler() {}

let app = Router::new()
    .route("/", get(handler))
    .layer(TraceLayer::new_for_http());

Applying multiple middleware

Use tower::ServiceBuilder to apply multiple middleware efficiently:
use axum::{
    routing::get,
    Extension,
    Router,
};
use tower_http::trace::TraceLayer;
use tower::ServiceBuilder;

async fn handler() {}

#[derive(Clone)]
struct State {}

let app = Router::new()
    .route("/", get(handler))
    .layer(
        ServiceBuilder::new()
            .layer(TraceLayer::new_for_http())
            .layer(Extension(State {}))
    );

Commonly used tower-http middleware

The tower-http crate provides many production-ready middleware:

TraceLayer

High-level tracing and logging:
use tower_http::trace::TraceLayer;

let app = Router::new()
    .route("/", get(handler))
    .layer(TraceLayer::new_for_http());

CorsLayer

Cross-Origin Resource Sharing (CORS) handling:
use axum::{
    http::{HeaderValue, Method},
    routing::get,
    Router,
};
use tower_http::cors::CorsLayer;

async fn json_handler() {}

let app = Router::new()
    .route("/json", get(json_handler))
    .layer(
        CorsLayer::new()
            .allow_origin("http://localhost:3000".parse::<HeaderValue>().unwrap())
            .allow_methods([Method::GET])
    );

CompressionLayer

Automatic compression of responses:
use axum::{routing::post, Json, Router};
use serde_json::Value;
use tower::ServiceBuilder;
use tower_http::{
    compression::CompressionLayer,
    decompression::RequestDecompressionLayer,
};

let app = Router::new()
    .route("/", post(root))
    .layer(
        ServiceBuilder::new()
            .layer(RequestDecompressionLayer::new())
            .layer(CompressionLayer::new())
    );

async fn root(Json(value): Json<Value>) -> Json<Value> {
    Json(value)
}

TimeoutLayer

Time out requests after a duration:
use axum::{
    routing::get,
    error_handling::HandleErrorLayer,
    http::StatusCode,
    BoxError,
    Router,
};
use tower::{ServiceBuilder, timeout::TimeoutLayer};
use std::time::Duration;

async fn handler() {}

let app = Router::new()
    .route("/", get(handler))
    .layer(
        ServiceBuilder::new()
            // HandleErrorLayer must go above TimeoutLayer
            .layer(HandleErrorLayer::new(|_: BoxError| async {
                StatusCode::REQUEST_TIMEOUT
            }))
            .layer(TimeoutLayer::new(Duration::from_secs(10)))
    );

SetRequestIdLayer

Set and propagate request IDs:
use axum::{routing::get, Router};
use tower_http::request_id::{
    MakeRequestUuid,
    PropagateRequestIdLayer,
    SetRequestIdLayer,
};

async fn handler() {}

let app = Router::new()
    .route("/", get(handler))
    // With Router::layer the last-added layer runs first, so the ID is set
    // on the request before it is propagated to the response
    .layer(PropagateRequestIdLayer::x_request_id())
    .layer(SetRequestIdLayer::x_request_id(MakeRequestUuid));

Middleware ordering

Router::layer ordering

When you add middleware with Router::layer, all previously added routes are wrapped. Middleware executes from bottom to top:
use axum::{routing::get, Router};

async fn handler() {}

let app = Router::new()
    .route("/", get(handler))
    .layer(layer_one)    // Executes third
    .layer(layer_two)    // Executes second
    .layer(layer_three); // Executes first
Think of middleware as layers of an onion:
        requests
           |
           v
+----- layer_three -----+
| +---- layer_two ----+ |
| | +-- layer_one --+ | |
| | |               | | |
| | |    handler    | | |
| | |               | | |
| | +-- layer_one --+ | |
| +---- layer_two ----+ |
+----- layer_three -----+
           |
           v
        responses
Execution flow:
  1. layer_three receives the request
  2. Passes to layer_two
  3. Passes to layer_one
  4. Passes to handler (produces response)
  5. Response goes back through layer_one
  6. Then layer_two
  7. Finally layer_three
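
The onion ordering above can be made visible with a dependency-free sketch, again assuming a simplified synchronous Service (the real tower Service is async); HandlerSvc and Named are invented for illustration:

```rust
// Each layer records when it sees the request and when it sees the response,
// so the onion structure shows up in the log.
trait Service {
    fn call(&mut self, log: &mut Vec<String>);
}

// Stands in for the handler at the centre of the onion.
struct HandlerSvc;
impl Service for HandlerSvc {
    fn call(&mut self, log: &mut Vec<String>) {
        log.push("handler".into());
    }
}

// A named middleware: logs on the way in, delegates, logs on the way out.
struct Named<S> { name: &'static str, inner: S }
impl<S: Service> Service for Named<S> {
    fn call(&mut self, log: &mut Vec<String>) {
        log.push(format!("{} in", self.name));
        self.inner.call(log);
        log.push(format!("{} out", self.name));
    }
}

fn main() {
    // Router::layer wraps previously added routes, so layering one, two,
    // three in that order makes layer_three the outermost layer:
    let mut app = Named { name: "layer_three", inner:
        Named { name: "layer_two", inner:
            Named { name: "layer_one", inner: HandlerSvc } } };

    let mut log = Vec::new();
    app.call(&mut log);
    assert_eq!(log, [
        "layer_three in", "layer_two in", "layer_one in",
        "handler",
        "layer_one out", "layer_two out", "layer_three out",
    ]);
    println!("{log:?}");
}
```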

ServiceBuilder ordering

ServiceBuilder composes layers to execute top to bottom, which is more intuitive:
use tower::ServiceBuilder;
use axum::{routing::get, Router};

async fn handler() {}

let app = Router::new()
    .route("/", get(handler))
    .layer(
        ServiceBuilder::new()
            .layer(layer_one)   // Executes first
            .layer(layer_two)   // Executes second
            .layer(layer_three) // Executes third
    );
With ServiceBuilder:
  1. layer_one receives request first
  2. Then layer_two
  3. Then layer_three
  4. Then handler
  5. Response bubbles back up through layer_three, layer_two, layer_one
Recommendation: Use ServiceBuilder for better readability and mental model.

The layer method

The layer method is available on Router, MethodRouter, and Handler. It accepts any type implementing tower::Layer:
pub fn layer<L>(self, layer: L) -> Self
where
    L: Layer<Route> + Clone + Send + 'static,
    L::Service: Service<Request> + Clone + Send + 'static,

Applying to routers

use axum::Router;
use tower_http::trace::TraceLayer;

let app = Router::new()
    .route("/", get(handler))
    // Apply to all routes
    .layer(TraceLayer::new_for_http());

Applying to specific routes

let app = Router::new()
    .route("/admin", get(admin_handler))
    // route_layer wraps all routes added *before* it, and unlike layer it
    // only runs when a route matches, so unmatched requests skip it
    .route_layer(middleware::from_fn(require_auth))
    // Added after route_layer, so /public is not wrapped
    .route("/public", get(public_handler));

Applying to handlers

use axum::{handler::Handler, routing::get, Router};
use tower_http::trace::TraceLayer;

async fn handler() {}

let app = Router::new()
    .route("/", get(handler.layer(TraceLayer::new_for_http())));

Writing custom tower middleware

For maximum control, implement tower::Service and tower::Layer:
use axum::{
    response::Response,
    body::Body,
    extract::Request,
};
use futures_core::future::BoxFuture;
use tower::{Service, Layer};
use std::task::{Context, Poll};

#[derive(Clone)]
struct MyLayer;

impl<S> Layer<S> for MyLayer {
    type Service = MyMiddleware<S>;
    
    fn layer(&self, inner: S) -> Self::Service {
        MyMiddleware { inner }
    }
}

#[derive(Clone)]
struct MyMiddleware<S> {
    inner: S,
}

impl<S> Service<Request> for MyMiddleware<S>
where
    S: Service<Request, Response = Response> + Send + 'static,
    S::Future: Send + 'static,
{
    type Response = S::Response;
    type Error = S::Error;
    // BoxFuture is Pin<Box<dyn Future + Send + 'a>>
    type Future = BoxFuture<'static, Result<Self::Response, Self::Error>>;
    
    fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
        self.inner.poll_ready(cx)
    }
    
    fn call(&mut self, request: Request) -> Self::Future {
        let future = self.inner.call(request);
        Box::pin(async move {
            let response: Response = future.await?;
            Ok(response)
        })
    }
}
Use custom tower middleware when:
  • Middleware needs to be configurable via builder methods
  • You intend to publish middleware for others to use
  • You need maximum performance (avoid boxing futures)
Learn more: Building a middleware from scratch

Tower combinators

Tower provides utility combinators for simple transformations:

map_request

use axum::extract::Request;
use tower::ServiceBuilder;

let app = Router::new()
    .route("/", get(handler))
    .layer(
        ServiceBuilder::new()
            .map_request(|mut req: Request| {
                req.headers_mut().insert("x-custom", "value".parse().unwrap());
                req
            })
    );

map_response

use axum::response::Response;

let app = Router::new()
    .route("/", get(handler))
    .layer(
        ServiceBuilder::new()
            .map_response(|mut res: Response| {
                res.headers_mut().insert("x-custom", "value".parse().unwrap());
                res
            })
    );

then

Chain an async function after the service:
use axum::response::Response;

let app = Router::new()
    .route("/", get(handler))
    .layer(
        ServiceBuilder::new()
            .then(|response: Response| async move {
                // Process response asynchronously
                response
            })
    );

and_then

Chain a fallible async function:
use axum::{response::Response, BoxError};

let app = Router::new()
    .route("/", get(handler))
    .layer(
        ServiceBuilder::new()
            .and_then(|response: Response| async move {
                // Can return an error
                Ok::<_, BoxError>(response)
            })
    );

Wrapping the entire app

Apply middleware around your entire application by wrapping the Router (which implements Service):
use axum::{
    routing::get,
    Router,
};
use tower::ServiceBuilder;

async fn handler() {}

let app = Router::new().route("/", get(handler));

// Apply middleware around the entire app
let app = ServiceBuilder::new()
    .layer(some_middleware)
    .service(app);
This is useful for:
  • Middleware that needs to run before routing
  • URI rewriting middleware
  • Backpressure-sensitive middleware

Rewriting request URIs

Middleware added with Router::layer runs after routing. To rewrite URIs before routing, wrap the entire Router:
use tower::Layer;
use axum::{
    Router,
    ServiceExt,
    extract::Request,
};

fn rewrite_request_uri<B>(mut req: Request<B>) -> Request<B> {
    // Modify the URI via `req.uri_mut()` here
    req
}

let middleware = tower::util::MapRequestLayer::new(rewrite_request_uri);

let app = Router::new();

// Apply the layer around the whole Router
let app_with_middleware = middleware.layer(app);

let listener = tokio::net::TcpListener::bind("0.0.0.0:3000").await.unwrap();
axum::serve(listener, app_with_middleware.into_make_service()).await.unwrap();

Error handling

Axum’s error handling model requires handlers to always return a response. Middleware, however, can fail, so middleware errors must be converted into responses using HandleErrorLayer:
use axum::{
    routing::get,
    error_handling::HandleErrorLayer,
    http::StatusCode,
    BoxError,
    Router,
};
use tower::{ServiceBuilder, timeout::TimeoutLayer};
use std::time::Duration;

async fn handler() {}

let app = Router::new()
    .route("/", get(handler))
    .layer(
        ServiceBuilder::new()
            // HandleErrorLayer receives errors from layers below it
            .layer(HandleErrorLayer::new(|_: BoxError| async {
                StatusCode::REQUEST_TIMEOUT
            }))
            .layer(TimeoutLayer::new(Duration::from_secs(10)))
    );
Important: HandleErrorLayer must be placed above middleware that can produce errors.

Accessing state in tower layers

Pass state to custom tower layers through the layer’s constructor:
use axum::{
    Router,
    routing::get,
    extract::{State, Request},
};
use tower::{Layer, Service};
use std::task::{Context, Poll};

#[derive(Clone)]
struct AppState {}

#[derive(Clone)]
struct MyLayer {
    state: AppState,
}

impl<S> Layer<S> for MyLayer {
    type Service = MyService<S>;
    
    fn layer(&self, inner: S) -> Self::Service {
        MyService {
            inner,
            state: self.state.clone(),
        }
    }
}

#[derive(Clone)]
struct MyService<S> {
    inner: S,
    state: AppState,
}

impl<S, B> Service<Request<B>> for MyService<S>
where
    S: Service<Request<B>>,
{
    type Response = S::Response;
    type Error = S::Error;
    type Future = S::Future;
    
    fn poll_ready(&mut self, cx: &mut Context<'_>) -> Poll<Result<(), Self::Error>> {
        self.inner.poll_ready(cx)
    }
    
    fn call(&mut self, req: Request<B>) -> Self::Future {
        // Access self.state here
        self.inner.call(req)
    }
}

async fn handler(_: State<AppState>) {}

let state = AppState {};

let app = Router::new()
    .route("/", get(handler))
    .layer(MyLayer { state: state.clone() })
    .with_state(state);

Backpressure considerations

Axum assumes services are always ready and does not propagate backpressure. When routing to multiple services, axum:
  • Always returns Poll::Ready(Ok(())) from Service::poll_ready
  • Drives the inner service’s actual readiness inside the response future
Implications:
  • Avoid routing to backpressure-sensitive middleware
  • Use load shedding if needed
  • Apply backpressure-sensitive middleware around the entire app
  • Errors from poll_ready appear in the response future, not immediately
Note: Handlers created from async functions don’t care about backpressure and are always ready.
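
The load-shedding idea mentioned above can be sketched without dependencies, again assuming a simplified synchronous Service trait (real tower services are async and use poll_ready; Limited and LoadShed are invented for illustration). The shedding wrapper reports itself as always ready and fails fast when the inner service has no capacity, instead of queueing:

```rust
trait Service<Request> {
    type Response;
    fn ready(&mut self) -> bool;
    fn call(&mut self, req: Request) -> Self::Response;
}

// An inner service with fixed capacity. For simplicity, requests are never
// released, so capacity is exhausted after `capacity` calls.
struct Limited {
    in_flight: usize,
    capacity: usize,
}

impl Service<&'static str> for Limited {
    type Response = Result<&'static str, &'static str>;

    fn ready(&mut self) -> bool {
        self.in_flight < self.capacity
    }

    fn call(&mut self, req: &'static str) -> Self::Response {
        self.in_flight += 1;
        Ok(req)
    }
}

// The load-shedding wrapper: always ready, sheds when the inner isn't.
struct LoadShed<S> {
    inner: S,
}

impl<S, R> Service<R> for LoadShed<S>
where
    S: Service<R, Response = Result<R, &'static str>>,
{
    type Response = Result<R, &'static str>;

    fn ready(&mut self) -> bool {
        true
    }

    fn call(&mut self, req: R) -> Self::Response {
        if self.inner.ready() {
            self.inner.call(req)
        } else {
            Err("overloaded")
        }
    }
}

fn main() {
    let mut svc = LoadShed { inner: Limited { in_flight: 0, capacity: 1 } };
    assert_eq!(svc.call("first"), Ok("first"));        // capacity available
    assert_eq!(svc.call("second"), Err("overloaded")); // shed, not queued
    println!("ok");
}
```

In a real axum app you would reach for tower’s load-shed middleware (paired with HandleErrorLayer to turn its errors into responses) and apply it around the entire app rather than hand-rolling this.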

See also