This guide covers deploying a production-ready DeployStack Satellite with nsjail process isolation for secure multi-team environments on Debian 13. For development or single-team deployments, see the Quick Start guide.
When to use this guide:
Production deployments serving multiple teams
Enterprise environments with strict security requirements
Shared infrastructure where teams need complete isolation
Multi-tenant satellite deployments
For development or single-team usage, the Docker Compose setup is simpler and sufficient.
Production satellites provide enterprise-grade security through:
nsjail Process Isolation: Complete process separation per team with Linux namespaces and cgroup enforcement
Resource Limits: CPU, memory, and process limits per MCP server (virtual RAM unlimited via rlimit, 512MB physical RAM via cgroup when enabled, 60s CPU, 1000 processes)
Multi-Runtime Support: Node.js (npx) and Python (uvx) with runtime-aware isolation
Filesystem Jailing: Read-only system directories, isolated writable spaces per runtime
Non-Root Execution: Satellite runs as dedicated deploystack user
Audit Logging: Complete activity tracking with automatic rotation
```bash
# Install Python 3
sudo apt-get install -y python3 python3-pip

# Install UV (Python package manager)
curl -LsSf https://astral.sh/uv/install.sh | sh

# Verify installation
python3 --version  # Should show Python 3.x
uvx --version      # Should show uvx version
```
Python Runtime Support: The satellite automatically detects Python MCP servers and spawns them using uvx with runtime-aware isolation. Python and Node.js servers run in separate cache directories for complete isolation.
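As an illustration of what runtime-aware spawning means, the sketch below picks a launcher and a dedicated cache directory per runtime. This is not the satellite's actual code; the cache paths are hypothetical examples, while `UV_CACHE_DIR` and `npm_config_cache` are the standard environment variables that uv and npm honor.

```bash
# Illustrative sketch only -- the real detection happens inside the satellite.
# Cache paths below are hypothetical examples of per-runtime separation.
runtime="python"   # the satellite derives this from the MCP server's metadata
if [ "$runtime" = "python" ]; then
  # uv honors UV_CACHE_DIR, keeping Python packages out of the Node.js cache
  echo "UV_CACHE_DIR=/var/cache/mcp/uv uvx <package>"
else
  # npm/npx honor npm_config_cache for the same purpose
  echo "npm_config_cache=/var/cache/mcp/npm npx <package>"
fi
```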
nsjail provides the process isolation that enables secure multi-team satellite operation.
Why nsjail? nsjail uses Linux namespaces and cgroups to create completely isolated environments for each team’s MCP servers. This prevents teams from accessing each other’s data or interfering with other processes.
nsjail requires unprivileged user namespaces to be enabled at the kernel level.
```bash
# Create sysctl configuration
echo 'kernel.unprivileged_userns_clone=1' | sudo tee /etc/sysctl.d/99-deploystack-userns.conf

# Apply immediately
sudo sysctl -p /etc/sysctl.d/99-deploystack-userns.conf

# Verify setting
cat /proc/sys/kernel/unprivileged_userns_clone
# Should return: 1
```
Important: This kernel setting is required for nsjail to function. Without it, all MCP server spawns will fail. The setting persists across reboots via the sysctl configuration file.
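Beyond reading the sysctl value, you can verify that unprivileged user namespaces actually work by attempting to create one. This probe is a troubleshooting aid only, not part of the satellite:

```bash
# Try to create an unprivileged user namespace; prints OK or BLOCKED.
# A BLOCKED result means nsjail spawns will fail on this host.
if unshare --user --map-root-user true 2>/dev/null; then
  echo "userns: OK"
else
  echo "userns: BLOCKED"
fi
```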
Log Rotation: Logs rotate daily and retain 7 days of history by default. Adjust the rotate value in the logrotate configuration if you need longer retention.
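For reference, a logrotate policy matching these defaults might look like the following sketch. The file path and log location are assumptions; check your install for the actual configuration:

```
# /etc/logrotate.d/deploystack-satellite (sketch, paths assumed)
/var/log/deploystack-satellite/*.log {
    daily
    rotate 7
    compress
    delaycompress
    missingok
    notifempty
}
```

Raising `rotate 7` to, say, `rotate 30` keeps a month of history.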
```bash
# Create base directory for GitHub-based MCP server deployments
# Required for tmpfs isolation of GitHub repository installations
mkdir -p /opt/mcp-deployments
chown deploystack:deploystack /opt/mcp-deployments
chmod 755 /opt/mcp-deployments
```
Critical Requirement: This directory is required for GitHub-based MCP server installations. Without it, the satellite will fail to start in production mode with a clear error message:
```
❌ FATAL: GitHub deployment base directory does not exist: /opt/mcp-deployments
Fix: sudo mkdir -p /opt/mcp-deployments && sudo chown deploystack:deploystack /opt/mcp-deployments
```
Why this directory is needed: When users install MCP servers directly from GitHub repositories (e.g., github:owner/repo#ref), the satellite:
Downloads the GitHub tarball
Creates a tmpfs mount at /opt/mcp-deployments/{team_id}/{installation_id}
Extracts the code into the tmpfs mount (300MB size limit)
Builds and runs the MCP server in isolated memory
This approach provides secure, isolated execution for GitHub-sourced MCP servers without polluting the filesystem.
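The source string format can be sketched in shell; this parsing is illustrative only, the satellite performs the equivalent internally:

```bash
# Split a github:owner/repo#ref source string into its parts
src="github:owner/repo#ref"
spec="${src#github:}"   # strip the scheme -> owner/repo#ref
repo="${spec%%#*}"      # repository      -> owner/repo
ref="${spec#*#}"        # git ref         -> ref
echo "repo=$repo ref=$ref"
```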
Create the .env file with your production configuration.
Registration Token: You must generate this token from your DeployStack admin interface before proceeding. Navigate to Admin → Satellites → Pairing to generate a global satellite token.
```bash
# Create .env file
cat > .env << 'EOF'
# DeployStack Satellite Configuration

# Server Configuration
PORT=3001
NODE_ENV=production
LOG_LEVEL=info

# Backend Connection
DEPLOYSTACK_BACKEND_URL=https://cloud.deploystack.io
DEPLOYSTACK_BACKEND_POLLING_INTERVAL=60

# Satellite Public URL (REQUIRED for remote MCP client connections)
# This is the publicly accessible URL where MCP clients connect
# Used for OAuth 2.0 Protected Resource Metadata (RFC 9728)
# Example: https://satellite.example.com (no /mcp or /sse paths)
DEPLOYSTACK_SATELLITE_URL=https://satellite.example.com

# Satellite Identity (10-32 chars, lowercase a-z0-9-_ only)
DEPLOYSTACK_SATELLITE_NAME=prod-satellite-001

# Registration Token (from admin panel)
DEPLOYSTACK_REGISTRATION_TOKEN=deploystack_satellite_global_eyJhbGc...

# Status Display
DEPLOYSTACK_STATUS_SHOW_UPTIME=true
DEPLOYSTACK_STATUS_SHOW_VERSION=true
DEPLOYSTACK_STATUS_SHOW_MCP_DEBUG_ROUTE=false

# Event System
EVENT_BATCH_INTERVAL_MS=3000
EVENT_MAX_BATCH_SIZE=100

# nsjail Resource Limits
NSJAIL_MEMORY_LIMIT_MB=inf              # Virtual memory limit — "inf" required for Node.js WASM (undici reserves ~10GB virtual address space)
NSJAIL_CGROUP_MEM_MAX_BYTES=536870912   # Physical memory limit: 512MB (cgroup, only active with Delegate=yes in systemd unit)
NSJAIL_CPU_TIME_LIMIT_SECONDS=60        # CPU time limit
NSJAIL_MAX_PROCESSES=1000               # Process limit (rlimit)
NSJAIL_CGROUP_PIDS_MAX=1000             # Process limit (cgroup)
NSJAIL_RLIMIT_NOFILE=1024               # File descriptor limit
NSJAIL_RLIMIT_FSIZE=50                  # Max file size in MB
NSJAIL_TMPFS_SIZE=100M                  # Tmpfs size for /tmp

# Process Idle Timeout (seconds, 0 to disable)
MCP_PROCESS_IDLE_TIMEOUT_SECONDS=180
EOF

# Secure the environment file
chmod 600 .env
```
```bash
# Enable service for automatic startup
sudo systemctl enable deploystack-satellite

# Start the service
sudo systemctl start deploystack-satellite

# Check status
sudo systemctl status deploystack-satellite
```
```bash
# View service status
sudo systemctl status deploystack-satellite

# View live logs
sudo tail -f /var/log/deploystack-satellite/satellite.log

# Check for errors
sudo tail -f /var/log/deploystack-satellite/error.log
```
Virtual Memory: unlimited (rlimit_as = inf — required because Node.js v24 uses WASM internally which reserves ~10GB of virtual address space; this is virtual, not physical RAM)
Physical Memory: 512MB via cgroup (only active when Delegate=yes is set in the systemd unit — see below)
CPU Time: 60 seconds (enforced via rlimit_cpu)
Processes: 1000 (enforced via rlimit_nproc and cgroup pids.max, required for package managers like npm and uvx)
File Descriptors: 1024 (enforced via rlimit_nofile)
Maximum File Size: 50MB (enforced via rlimit_fsize)
tmpfs /tmp: 100MB (enforced via tmpfs mount)
Cgroup limits are auto-detected: The satellite automatically detects whether cgroup v2 is available and delegated. When running as a systemd service with Delegate=yes, physical memory (512MB) and PID limits are enforced via cgroup in addition to rlimits. Without Delegate=yes, the satellite falls back to rlimit-only mode — nsjail still runs safely with full namespace isolation. See the Enable Cgroup Limits section below to activate precise physical memory enforcement.
Primary Security = Namespace Isolation: The satellite’s security model relies on Linux namespaces (PID, Mount, User, IPC, UTS) to isolate MCP servers from each other and the host system. Resource limits (rlimits) provide secondary DoS protection. With user namespace active, all privilege escalation attacks (including setuid-based rlimit bypasses) are prevented.
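The 536870912 used for NSJAIL_CGROUP_MEM_MAX_BYTES in the configuration is simply 512MB expressed in bytes, since cgroup memory limits take plain byte values:

```bash
# cgroup memory.max takes plain bytes: 512MB = 512 * 1024 * 1024
echo $((512 * 1024 * 1024))   # 536870912
```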
```bash
# Allow only backend communication (satellite polls backend)
# No inbound rules needed - satellite uses outbound polling

# Optional: Allow local status checks
sudo ufw allow from 127.0.0.1 to any port 3001

# If you need external access to satellite (not recommended)
sudo ufw allow 3001/tcp
```
```bash
# View current token in .env (be careful - this is sensitive)
sudo -u deploystack grep REGISTRATION_TOKEN /opt/deploystack/deploystack/services/satellite/.env
```
```bash
# Create health check script
sudo tee /usr/local/bin/check-satellite-health > /dev/null << 'EOF'
#!/bin/bash
if systemctl is-active --quiet deploystack-satellite; then
  if curl -sf http://localhost:3001/api/status/backend > /dev/null; then
    echo "OK"
    exit 0
  else
    echo "WARN: Service running but not responding"
    exit 1
  fi
else
  echo "ERROR: Service not running"
  exit 2
fi
EOF
sudo chmod +x /usr/local/bin/check-satellite-health

# Test health check
sudo /usr/local/bin/check-satellite-health
```
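One way to run the check on a schedule is a cron entry; the five-minute interval and log destination here are suggestions only:

```
# /etc/cron.d/deploystack-satellite-health (hypothetical)
*/5 * * * * root /usr/local/bin/check-satellite-health >> /var/log/deploystack-satellite/health.log 2>&1
```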
```bash
# CPU and memory usage
top -p $(pgrep -f "deploystack-satellite")

# Detailed process information
sudo systemctl status deploystack-satellite

# Network connections
sudo ss -tn | grep :3001
```
```bash
# Backup persistent data and configuration
sudo tar czf /opt/backups/satellite-backup-$(date +%Y%m%d).tar.gz \
  /opt/deploystack/deploystack/services/satellite/.env \
  /opt/deploystack/deploystack/services/satellite/persistent_data
```
By default the satellite runs in rlimit-only mode. Adding Delegate=yes to the systemd unit gives the satellite ownership of its cgroup subtree, which activates precise physical memory (512MB) and PID enforcement per MCP process. No code changes are needed — the satellite auto-detects cgroup availability at startup.
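A minimal systemd drop-in that grants cgroup delegation might look like this sketch (assuming the unit is named deploystack-satellite.service):

```
# /etc/systemd/system/deploystack-satellite.service.d/cgroup.conf
[Service]
Delegate=yes
```

After creating the drop-in, run `sudo systemctl daemon-reload` followed by `sudo systemctl restart deploystack-satellite` to apply it.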
```
Cgroup v2 available at /sys/fs/cgroup/system.slice/deploystack-satellite.service — memory/PID limits will be enforced
```
If you see Cgroup v2 unavailable instead, verify that Delegate=yes is present in the service file and that you reloaded systemd. You can also check the active limits on a running MCP process:
```bash
# Find a running MCP process PID
ps aux | grep "npx.*mcp"

# Check its cgroup assignment (replace {pid} with actual PID)
cat /proc/{pid}/cgroup

# Check enforced limits
cat /sys/fs/cgroup/system.slice/deploystack-satellite.service/NSJAIL.*/memory.max
cat /sys/fs/cgroup/system.slice/deploystack-satellite.service/NSJAIL.*/pids.max
```
Cgroup limits are optional. The rlimit-only default provides strong security through namespace isolation and adequate DoS protection. Cgroup limits add precise physical memory enforcement per MCP process, which is useful in high-density multi-team environments where a single runaway process consuming all RAM would otherwise affect other teams.