A local AI research platform for DGX Spark: brain simulation, RL training, model surgery, and world building.
Neural Lab is a Python Flask + Socket.IO server running as a systemd user service on the DGX Spark. The UI is a single-page HTML/JS app served by the Clawboard hub.
```
┌────────────────────────────────────────────────────────────┐
│                    Your Phone / Browser                    │
│         http://100.109.173.109:8090/31-neural-lab/         │
│         (HTML + JS + Three.js + Socket.IO client)          │
└──────────────┬─────────────────────────────────┬───────────┘
               │ REST API                        │ WebSocket
               ▼                                 ▼
┌────────────────────────────────────────────────────────────┐
│               Neural Lab Backend (port 8103)               │
│  orchestrator.py: Flask + Flask-SocketIO                   │
│  ├── SwarmOrchestrator (brain agents, message bus)         │
│  ├── SimulationEngine (pymunk physics)                     │
│  ├── NeuralLabPlatform (registry, assets, extensions)      │
│  ├── RL Training (stable-baselines3 + PyTorch)             │
│  └── Model Workshop (safetensors inspection)               │
│  Service: neural-lab.service                               │
│  Python: /home/pmello/.openclaw/neural-lab-env/            │
└──────────────┬─────────────────────────────────┬───────────┘
               │ HTTP (Ollama API)               │ File I/O
               ▼                                 ▼
┌──────────────────────────┐   ┌──────────────────────────────┐
│  Ollama (port 11434)     │   │  Local Files                 │
│  qwen3.5:0.8b (agents)   │   │  ~/.openclaw/neural-lab/     │
│  qwen3.5:2b (prefrontal) │   │  ├── snapshots/              │
│  qwen3.5:4b (vision)     │   │  ├── memories-*.json         │
│                          │   │  ├── rl-models/              │
│  llama.cpp (port 18080)  │   │  ├── worlds/                 │
│  122B-A10B (narrator)    │   │  ├── extensions/             │
└──────────────────────────┘   │  └── logs/                   │
                               └──────────────────────────────┘
```
The UI polls /api/state every 1-2 seconds for agent/brain state.

Click 🧠 Start Brain. This launches 17 agents across 5 brain regions:
| Region | Agents | Model | Role |
|---|---|---|---|
| 🧠 Prefrontal Cortex | 3 | 2b | Planning, decisions, executive control |
| 👁️ Visual Cortex | 3 | 0.8b | Process camera input, scene understanding |
| 👂 Auditory Cortex | 3 | 0.8b | Process audio/speech (via Whisper) |
| 🧩 Hippocampus | 3 | 0.8b | Memory formation, pattern recognition |
| 💬 Broca's Area | 3 | 2b | Speech production, responses |
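The /api/state polling mentioned above can be sketched with the standard library alone. Note the assumed response shape: an "agents" list whose entries carry a "region" field is an illustration, not documented API.

```python
# Hedged sketch of the UI's polling loop: fetch /api/state every couple of
# seconds and summarize agents per brain region. The JSON shape is assumed.
import json
import time
import urllib.request
from collections import Counter

def count_by_region(state):
    # Pure helper: tally agents per brain region.
    return Counter(a["region"] for a in state.get("agents", []))

def poll(base="http://localhost:8103", interval=2.0):
    # Mirrors what the UI does: fetch state, summarize, repeat.
    while True:
        with urllib.request.urlopen(base + "/api/state") as resp:
            state = json.load(resp)
        print(dict(count_by_region(state)))
        time.sleep(interval)
```

Calling `poll()` against a running backend prints a region tally every two seconds; `count_by_region` is kept pure so it can be tested without the server.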
Use the 💬 Chat tab or the 🧠 Narrator widget. Your messages flow:

You → Wernicke's (comprehension) → Prefrontal (thinking) → Broca's (response) → You

The Narrator is separate: it observes the brain and explains what's happening.
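The hand-off order above can be sketched as a simple pipeline. This is illustrative only, not the actual SwarmOrchestrator code; the handlers are stand-ins for the LLM-backed agents.

```python
# Illustrative sketch of the chat message flow: each region transforms the
# message and passes it to the next. Handler bodies are placeholders.
PIPELINE = ["wernicke", "prefrontal", "broca"]

def route(message, handlers):
    # Pass the message through each region's handler in order.
    for region in PIPELINE:
        message = handlers[region](message)
    return message

handlers = {
    "wernicke":   lambda m: {"meaning": m},               # comprehension
    "prefrontal": lambda m: {"plan": m["meaning"]},       # thinking
    "broca":      lambda m: "Response to: " + m["plan"],  # speech production
}

print(route("hello", handlers))  # → Response to: hello
```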
Trained RL models are saved to ~/.openclaw/neural-lab/rl-models/.

Everything in Neural Lab is a plugin. The 🧩 Platform tab shows all registered components:
Drop a .py file into ~/.openclaw/neural-lab/extensions/:
```python
# my_extension.py
def register(registry):
    from platform import PluginEntry, PluginType
    registry.register(PluginEntry(
        id='env-my-world',
        name='My Custom World',
        type=PluginType.ENVIRONMENT,
        description='A custom environment',
        tags=['custom'],
    ))
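For intuition, here is a minimal sketch of what a registry like the one platform.py exposes might look like. PluginEntry, PluginType, and the field names come from the example above; everything else (the ASSET variant, the method names beyond register) is an assumption.

```python
# Minimal, self-contained sketch of a plugin registry. Field names mirror
# the extension example; the ASSET type and by_type() are illustrative.
from dataclasses import dataclass, field
from enum import Enum

class PluginType(Enum):
    ENVIRONMENT = "environment"
    ASSET = "asset"  # assumption: other plugin kinds exist

@dataclass
class PluginEntry:
    id: str
    name: str
    type: PluginType
    description: str = ""
    tags: list = field(default_factory=list)

class Registry:
    def __init__(self):
        self._plugins = {}

    def register(self, entry: PluginEntry):
        # Later registrations with the same id overwrite earlier ones.
        self._plugins[entry.id] = entry

    def by_type(self, ptype: PluginType):
        return [e for e in self._plugins.values() if e.type is ptype]
```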
```bash
# Model Workshop
workshop scan                   # List all 15+ models
workshop inspect gpt2           # See every layer & weight stats
workshop layers gpt2 attn      # Filter to attention layers
workshop explain whisper        # AI explains the architecture
workshop duplicate gpt2 my-exp  # Safe copy for editing
workshop python                 # Interactive Python + safetensors
```
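The layer filtering that `workshop layers gpt2 attn` performs can be sketched as a substring match over tensor names. The filter logic is an assumption about the tool's behavior, and the names below merely follow GPT-2's safetensors naming style:

```python
# Hypothetical sketch of `workshop layers <model> <pattern>`: keep only
# tensor names containing the pattern. Names are illustrative GPT-2-style.
def filter_layers(tensor_names, pattern):
    return [n for n in tensor_names if pattern in n]

names = [
    "h.0.attn.c_attn.weight",
    "h.0.attn.c_proj.weight",
    "h.0.mlp.c_fc.weight",
    "h.1.attn.c_attn.weight",
]
print(filter_layers(names, "attn"))
# → ['h.0.attn.c_attn.weight', 'h.0.attn.c_proj.weight', 'h.1.attn.c_attn.weight']
```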
| Endpoint | Method | Description |
|---|---|---|
| /api/state | GET | Full brain state (agents, messages, connections) |
| /api/start | POST | Start brain with mode & agents config |
| /api/ask | POST | Send a message to the brain |
| /api/narrate | POST | Get narrator summary |
| /api/sim/start | POST | Start simulation {environment, num_agents} |
| /api/sim/state | GET | Current sim state (agents, physics, objects) |
| /api/sim/world/add | POST | Add object {type, x, y} |
| /api/sim/world/save | POST | Save world layout {name} |
| /api/rl/start | POST | Start RL training {algorithm, environment, total_timesteps} |
| /api/rl/status | GET | Training progress (episodes, rewards, timesteps) |
| /api/workshop/scan | GET | Scan for model files |
| /api/workshop/inspect | POST | Inspect a model {path} |
| /api/platform | GET | Full platform state (registry, assets, extensions) |
| /api/platform/templates/<id>/apply | POST | Apply environment template |
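The API above can be driven with the standard library alone. The endpoint paths and payload field names come from the table; the base URL assumes the backend's default port 8103, and the concrete values ("PPO", "arena") are illustrative, not confirmed identifiers.

```python
# Hedged sketch of a tiny REST client for the Neural Lab backend.
import json
import urllib.request

BASE = "http://localhost:8103"  # backend port from the architecture diagram

def call(path, payload=None):
    # POST JSON when a payload is given, otherwise a plain GET.
    data = None if payload is None else json.dumps(payload).encode()
    req = urllib.request.Request(
        BASE + path, data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Payload fields as documented in the table; values are illustrative.
rl_request = {
    "algorithm": "PPO",          # assumption: a stable-baselines3 algorithm
    "environment": "arena",      # assumption: an environment id
    "total_timesteps": 100_000,
}

if __name__ == "__main__":
    call("/api/rl/start", rl_request)
    print(call("/api/rl/status"))  # episodes, rewards, timesteps
```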
```bash
# Check status
systemctl --user status neural-lab.service

# Restart after code changes
systemctl --user restart neural-lab.service

# View logs
journalctl --user -u neural-lab.service -f

# The service uses a Python venv with all dependencies:
#   /home/pmello/.openclaw/neural-lab-env/
#   pymunk, torch, stable-baselines3, gymnasium, pettingzoo,
#   safetensors, flask, flask-socketio, requests
```
```
~/.openclaw/workspace-main/tools/neural-lab/
├── orchestrator.py      # Main server (~2700 lines)
├── simulation.py        # Physics engine (pymunk)
├── rl_env.py            # Gymnasium RL environment wrapper
├── model_workshop.py    # Model inspection/surgery
├── platform.py          # Plugin registry, assets, extensions
├── ARCHITECTURE.md      # Full technical specification
└── RESEARCH.md          # Research directions

~/.openclaw/hub/31-neural-lab/
├── index.html           # Main UI (~3300 lines, 10 tabs)
├── guide.html           # This guide
└── lib/three.min.js     # Three.js (local copy)

~/.openclaw/neural-lab/
├── snapshots/           # Brain state snapshots
├── memories-*.json      # Agent memories per session
├── rl-models/           # Trained RL models (.zip)
├── worlds/              # Saved world layouts
├── extensions/          # Custom plugins (.py)
└── logs/                # Session logs

~/models/workshop/       # Model library (safetensors)
├── bert-tiny-4m/        # 4M params, fast experiments
├── gpt2-124m/           # 124M, classic transformer
├── whisper-tiny-39m/    # Audio transcription model
├── clip-vit-base-150m/  # Vision+text multimodal
├── tinyllama-1.1b/      # Llama architecture
└── ... (15 models total)
```
| Component | Requirement | DGX Spark |
|---|---|---|
| RAM | 16GB minimum | 128GB ✅ |
| GPU | Optional (CPU training works) | GB10 ✅ |
| Ollama | 0.8b model minimum | 0.8b, 2b, 4b ✅ |
| Python | 3.12+ | 3.14 ✅ |
| Storage | ~10GB for models | 1.9TB free ✅ |
Neural Lab v2.2, built on DGX Spark with ⚡ by the ARC platform