Quickly start a Docker-based Python code execution sandbox for local development (e.g. executing code generated by your AI agent while developing locally). Made possible by uv and PEP 723 – Inline script metadata (see the up-to-date PyPA specs page).
Step 1: Clone the repo or actually only need to have the following files:
├── code_exec_client.py
├── Dockerfile
├── entrypoint.sh
└── server.py
(No need to pip install or uv add anything: the server is built with just the Python standard library's socketserver.)
Step 2: Build a Docker image:
docker build -t fridge-code-exec-img .
Step 3: Start a container; tune the memory, CPUs, PID limit, port, etc. to your needs:
docker run --rm \
--name fridge \
--memory="512m" \
--cpus="0.5" \
--pids-limit=100 \
--read-only \
--tmpfs /tmp:exec,nosuid,nodev \
--tmpfs /home/appuser/.cache/uv:exec \
-p 8080:8080 \
fridge-code-exec-img
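Before wiring the sandbox into your app, you can sanity-check that the container is reachable. The helper below is a hypothetical port probe (the name is mine, not part of the repo); it only confirms something accepts TCP connections on the mapped port, not that the exec protocol works:

```python
import socket


def is_listening(host: str, port: int, timeout: float = 5.0) -> bool:
    # Try to open a TCP connection; any OSError (refused, timed out,
    # unresolvable host) means the sandbox is not reachable.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if is_listening("localhost", 8080):
    print("sandbox container is reachable")
```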
Step 4: Use the CodeExecutionClient from code_exec_client.py in your service functions, controllers, etc. Example client code:
import logging

from code_exec_client import (
    CodeExecutionClient,
    ExecutionResult,
    ServerConnectionError,
    ServerResponseError,
)

logging.basicConfig(
    level=logging.INFO, format="%(asctime)s - %(name)s - %(levelname)s - %(message)s"
)
logger = logging.getLogger(__name__)

HOST, PORT = "localhost", 8080

SCRIPT_TO_EXECUTE = """
# /// script
# requires-python = ">=3.12"
# dependencies = [
#   "youtube-transcript-api"
# ]
# ///
from youtube_transcript_api import YouTubeTranscriptApi


def fetch_transcript(url):
    video_id = url.split("v=")[1].split("&")[0] if "v=" in url else url.split("/")[-1]
    api = YouTubeTranscriptApi()
    transcript = api.get_transcript(video_id)
    return " ".join([f"{t['text']}" for t in transcript])


url = "https://www.youtube.com/watch?v=dQw4w9WgXcQ"
print(fetch_transcript(url))
"""
def main():
    try:
        with CodeExecutionClient(HOST, PORT) as client:
            result: ExecutionResult = client.execute(SCRIPT_TO_EXECUTE)
            print(result)
    except (ServerConnectionError, ServerResponseError):
        logger.exception("Execution failed")


if __name__ == "__main__":
    main()

The SCRIPT_TO_EXECUTE must have PEP 723 metadata at the top. I'm working towards automatically adding this metadata, so that a raw script from somewhere (e.g. LLM-generated) can be passed directly to the .execute method. For now I'm prompting the coder LLM like:
write python script to fetch the transcript of a youtube video given the URL. dont write code that requires user input. dont add docstrings and code comments. use f-strings for concatenation. when done writing the code, add a PEP 723 inline metadata on the top of the script indicating the packages to be installed with their versions; always use python >=3.12.
example PEP 723 block:
# /// script
# requires-python = ">=3.12"
# dependencies = [
# "numpy==2.3.5"
# ]
# ///
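Until that automation lands, a minimal sketch of the header-prepending step could look like the helper below. The function name and signature are hypothetical (not part of the repo), and it assumes you already know the dependency list from some other source:

```python
# Hypothetical helper: prepend a PEP 723 block to a raw (e.g. LLM-generated)
# script so it can be passed straight to CodeExecutionClient.execute.
def add_pep723_header(
    script: str,
    dependencies: list[str],
    requires_python: str = ">=3.12",
) -> str:
    if dependencies:
        deps = "\n".join(f'#   "{d}",' for d in dependencies)
        dep_block = f"# dependencies = [\n{deps}\n# ]\n"
    else:
        # An empty list must stay on one line: a blank non-comment line
        # would terminate the metadata block.
        dep_block = "# dependencies = []\n"
    return (
        "# /// script\n"
        f'# requires-python = "{requires_python}"\n'
        f"{dep_block}"
        "# ///\n" + script
    )
```

The harder part, of course, is deciding the dependency list in the first place; that is what the prompt above currently delegates to the coder LLM.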