Written by: Marlon Colca
Posted on 03 May 2025
Tags: python, logging, loguru
In async apps (FastAPI, asyncio workers, etc.), logging can get tricky.
If your logs are too fast — or your writes too slow — you risk blocking the event loop or even losing log lines.
In this post, we’ll set up non-blocking, thread-safe, async-friendly logging using Loguru.
If you log directly to disk in a tight async loop:
for i in range(10000):
    logger.info(f"Logging item {i}")
You’re doing blocking I/O on every iteration. Each call writes synchronously, so a slow disk stalls the event loop and delays every other coroutine waiting to run.
Loguru supports enqueue=True, which uses a background thread with a queue:
logger.add("app.log", enqueue=True)
That’s it. Now your logs won’t block your event loop.
from fastapi import FastAPI
from loguru import logger

app = FastAPI()
logger.remove()
logger.add("app.log", enqueue=True)

@app.get("/")
async def read_root():
    logger.info("Root endpoint hit")
    return {"message": "hello"}
No blocking, no lost logs — perfect for production.
You can still use your custom formatters:
logger.add("structured.json", format=json_formatter, enqueue=True)
It works exactly the same — just async-safe.
Even with async logging, logging inside hot loops or massive async tasks can flood your disk. Throttle, sample, or raise the minimum log level in those paths.
If you’re using FastAPI, asyncio, or any concurrent system — always add enqueue=True. It’s the easiest way to make your logs safe, fast, and non-blocking.
👉 Sending Logs Over the Network: Loguru + Syslog, HTTP, and More
Local log files are great, but what happens when you scale?