Runs the Ollama runtime to host local LLMs for experimentation and integration with developer tools. Models are stored on the `ollama_models` volume so downloads persist across container updates.【F:devtools/compose.yml†L17-L45】
- **Image:** `ollama/ollama:latest`
- **Network:** `backend_net`
- **Volumes:** `ollama_models:/root/.ollama`
- **Healthcheck:** runs `ollama ps` against localhost every 30 seconds with a 120-second start period, allowing GPU initialization before the container is marked healthy.【F:devtools/compose.yml†L31-L41】
- **Restart policy:** `unless-stopped`
- **Exposure:** no host ports; reachable only on `backend_net`. Use Tailscale/SSH tunnels or a reverse proxy if remote access is required.
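
For orientation, here is a minimal sketch of what this service block could look like, reconstructed from the settings above. The healthcheck test command, the timeout/retry values, the GPU reservation, and the network/volume declarations are assumptions, not copied from the actual `devtools/compose.yml`:

```yaml
# Sketch of the ollama service, reconstructed from the documented settings.
# Fields marked "assumed" are illustrative and may differ from the real file.
services:
  ollama:
    image: ollama/ollama:latest
    restart: unless-stopped
    networks:
      - backend_net
    volumes:
      - ollama_models:/root/.ollama   # persists pulled models across updates
    healthcheck:
      test: ["CMD", "ollama", "ps"]   # queries the local API on localhost
      interval: 30s
      start_period: 120s              # grace period for GPU initialization
    # Assumed: GPU passthrough, since the healthcheck waits on GPU init.
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]

networks:
  backend_net:
    external: true   # assumed: shared with other backend services

volumes:
  ollama_models:
```

Note the absence of a `ports:` mapping: other containers on `backend_net` can reach the API at the service's default port 11434 (e.g. `http://ollama:11434`, assuming the service name shown here), while nothing is published on the host.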