
🧬 Neural Lab Guide

A local AI research platform for DGX Spark — brain simulation, RL training, model surgery, and world building.

๐Ÿ—๏ธ Architecture Overview

How It Runs

Neural Lab is a Python Flask + Socket.IO server running as a systemd user service on the DGX Spark. The UI is a single-page HTML/JS app served by the Clawboard hub.

┌──────────────────────────────────────────────────────────┐
│  Your Phone / Browser                                    │
│  http://100.109.173.109:8090/31-neural-lab/              │
│  (HTML + JS + Three.js + Socket.IO client)               │
└────────────┬─────────────────────────────────┬───────────┘
             │ REST API                        │ WebSocket
             ▼                                 ▼
┌──────────────────────────────────────────────────────────┐
│  Neural Lab Backend (port 8103)                          │
│  orchestrator.py — Flask + Flask-SocketIO                │
│  ├── SwarmOrchestrator (brain agents, message bus)       │
│  ├── SimulationEngine (pymunk physics)                   │
│  ├── NeuralLabPlatform (registry, assets, extensions)    │
│  ├── RL Training (stable-baselines3 + PyTorch)           │
│  └── Model Workshop (safetensors inspection)             │
│  Service: neural-lab.service                             │
│  Python: /home/pmello/.openclaw/neural-lab-env/          │
└────────────┬─────────────────────────────────┬───────────┘
             │ HTTP (Ollama API)               │ File I/O
             ▼                                 ▼
┌──────────────────────────┐    ┌─────────────────────────────┐
│  Ollama (port 11434)     │    │  Local Files                │
│  qwen3.5:0.8b (agents)   │    │  ~/.openclaw/neural-lab/    │
│  qwen3.5:2b (prefrontal) │    │  ├── snapshots/             │
│  qwen3.5:4b (vision)     │    │  ├── memories-*.json        │
│                          │    │  ├── rl-models/             │
│  llama.cpp (port 18080)  │    │  ├── worlds/                │
│  122B-A10B (narrator)    │    │  ├── extensions/            │
└──────────────────────────┘    │  └── logs/                  │
                                └─────────────────────────────┘

Connection Flow

  1. Hub server (port 8090) serves the static HTML page
  2. Page loads and connects Socket.IO to port 8103 (the backend)
  3. UI polls /api/state every 1-2 seconds for agent/brain state
  4. Real-time events (messages, training progress) come via WebSocket
  5. All brain agents call Ollama (0.8b/2b) for thinking
  6. Camera/mic from your phone → backend API → Visual/Auditory Cortex agents
  7. Multiple devices connect to the same backend (shared brain)
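Step 3 of this flow can be sketched with a stdlib-only client. This is a minimal sketch, not the UI's actual code: the `localhost` base and port 8103 come from the diagram above, and a real client would also open a Socket.IO connection for the WebSocket events.

```python
# Minimal sketch of the /api/state polling step (assumed local base URL;
# the response shape is whatever the backend returns: agents, messages,
# connections).
import json
import urllib.request

BASE = "http://localhost:8103"  # assumed: backend on the same machine

def state_url(base=BASE):
    return base + "/api/state"

def get_state(base=BASE, timeout=5):
    """One poll of the brain state; the UI repeats this every 1-2 seconds."""
    with urllib.request.urlopen(state_url(base), timeout=timeout) as resp:
        return json.load(resp)
```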

🚀 Quick Start

1. Start the Brain

Click 🧠 Start Brain. This launches 17 agents across 5 brain regions:

Region               | Agents | Model | Role
🧠 Prefrontal Cortex | 3      | 2b    | Planning, decisions, executive control
👁️ Visual Cortex     | 3      | 0.8b  | Process camera input, scene understanding
👂 Auditory Cortex   | 3      | 0.8b  | Process audio/speech (via Whisper)
🧩 Hippocampus       | 3      | 0.8b  | Memory formation, pattern recognition
💬 Broca's Area      | 3      | 2b    | Speech production, responses

2. Chat with the Brain

Use the 💬 Chat tab or the 🧠 Narrator widget. Your messages flow:

You → Wernicke's (comprehension) → Prefrontal (thinking) → Broca's (response) → You

The Narrator is separate — it observes the brain and explains what's happening.
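A chat turn can also be driven from a script via `/api/ask`. A hedged sketch: the endpoint comes from the API reference, but the `message` field name and the response shape are assumptions, so check the backend before relying on them.

```python
# Hypothetical /api/ask client; the "message" field name is an assumption.
import json
import urllib.request

def ask_payload(text):
    """JSON body for POST /api/ask (field name assumed)."""
    return json.dumps({"message": text}).encode("utf-8")

def ask(text, base="http://localhost:8103"):
    req = urllib.request.Request(
        base + "/api/ask",
        data=ask_payload(text),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        return json.load(resp)
```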

3. Run a Simulation

  1. Go to 🏟️ Sim tab
  2. Select an environment (Hide & Seek, Foraging, Maze, Open Field)
  3. Click ▶ Start
  4. Watch agents move in the 3D arena
  5. Enable Brain-controlled to let the LLM drive agent decisions

4. Train with RL

  1. Expand 🧪 RL Training at the bottom of the Sim tab
  2. Pick an algorithm (PPO, A2C, DQN), environment, and timesteps
  3. Click 🚀 Train
  4. Watch the reward curve update in real time
  5. Trained models auto-save to ~/.openclaw/neural-lab/rl-models/
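Programmatically, the same run is a POST to `/api/rl/start` with `{algorithm, environment, total_timesteps}` (fields from the API reference). Per-episode rewards are noisy, so a trailing moving average is a common way to read the curve — the smoothing below is illustrative, not necessarily what the UI plots, and the default environment id is an assumption.

```python
# Training request body; field names are documented in the API reference,
# the "foraging" environment id is an assumption.
def rl_start_payload(algorithm="PPO", environment="foraging",
                     total_timesteps=100_000):
    return {"algorithm": algorithm, "environment": environment,
            "total_timesteps": total_timesteps}

# Trailing moving average to smooth a noisy per-episode reward series.
def moving_average(rewards, window=10):
    out = []
    for i in range(len(rewards)):
        lo = max(0, i - window + 1)
        chunk = rewards[lo:i + 1]
        out.append(sum(chunk) / len(chunk))
    return out
```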

5. Inspect Models

  1. Go to 🔬 Workshop tab — models auto-scan
  2. Click any model to see layers, shapes, weight statistics
  3. Click 🤖 AI Explain for a plain-language architecture summary
  4. Click 📋 Duplicate for Editing to make a safe copy

6. Build Worlds

  1. In ๐ŸŸ๏ธ Sim tab, expand ๐Ÿ—๏ธ World Builder
  2. Select a tool (Wall, Box, Food, Flag, Ramp)
  3. Click on the 3D arena to place objects
  4. Save your world for later, or load environment templates from the ๐Ÿงฉ Platform tab
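Each click in step 3 becomes an `/api/sim/world/add` call with `{type, x, y}` (from the API reference). A sketch that builds a layout client-side — the lower-case type ids are assumptions based on the tool names:

```python
# Object type ids assumed from the tool names (Wall, Box, Food, Flag, Ramp).
def make_object(kind, x, y):
    """One world object in the shape /api/sim/world/add expects."""
    assert kind in {"wall", "box", "food", "flag", "ramp"}  # assumed ids
    return {"type": kind, "x": x, "y": y}

def perimeter_walls(width, height):
    """Wall objects around a (width x height) rectangular arena."""
    objs = []
    for x in range(width + 1):
        objs.append(make_object("wall", x, 0))
        objs.append(make_object("wall", x, height))
    for y in range(1, height):
        objs.append(make_object("wall", 0, y))
        objs.append(make_object("wall", width, y))
    return objs
```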

🧩 Platform & Extensions

Everything in Neural Lab is a plugin. The 🧩 Platform tab shows all registered components.

Writing Extensions

Drop a .py file into ~/.openclaw/neural-lab/extensions/:

# my_extension.py
def register(registry):
    # `platform` here is Neural Lab's own platform.py (the plugin
    # registry), not the Python stdlib module of the same name
    from platform import PluginEntry, PluginType
    registry.register(PluginEntry(
        id='env-my-world',
        name='My Custom World',
        type=PluginType.ENVIRONMENT,
        description='A custom environment',
        tags=['custom'],
    ))
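Under the hood, a loader along these lines would pick that file up. This is a sketch of the pattern only — the real discovery logic lives in platform.py and may differ:

```python
# Sketch of extension discovery: import each .py file in a directory and
# call its register(registry) hook if it defines one.
import importlib.util
import pathlib

def load_extensions(ext_dir, registry):
    """Return the names of extensions whose register() hook was called."""
    loaded = []
    for path in sorted(pathlib.Path(ext_dir).glob("*.py")):
        spec = importlib.util.spec_from_file_location(path.stem, path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        if hasattr(module, "register"):
            module.register(registry)
            loaded.append(path.stem)
    return loaded
```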

💻 CLI Tools

# Model Workshop
workshop scan                   # List all 15+ models
workshop inspect gpt2           # See every layer & weight stats
workshop layers gpt2 attn       # Filter to attention layers
workshop explain whisper        # AI explains the architecture
workshop duplicate gpt2 my-exp  # Safe copy for editing
workshop python                 # Interactive Python + safetensors

📡 API Reference

Endpoint                           | Method | Description
/api/state                         | GET    | Full brain state (agents, messages, connections)
/api/start                         | POST   | Start brain with mode & agents config
/api/ask                           | POST   | Send a message to the brain
/api/narrate                       | POST   | Get narrator summary
/api/sim/start                     | POST   | Start simulation {environment, num_agents}
/api/sim/state                     | GET    | Current sim state (agents, physics, objects)
/api/sim/world/add                 | POST   | Add object {type, x, y}
/api/sim/world/save                | POST   | Save world layout {name}
/api/rl/start                      | POST   | Start RL training {algorithm, environment, total_timesteps}
/api/rl/status                     | GET    | Training progress (episodes, rewards, timesteps)
/api/workshop/scan                 | GET    | Scan for model files
/api/workshop/inspect              | POST   | Inspect a model {path}
/api/platform                      | GET    | Full platform state (registry, assets, extensions)
/api/platform/templates/<id>/apply | POST   | Apply environment template
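These endpoints compose into a small stdlib-only client. A sketch — the base URL is an assumption, and only the request fields documented in the table are used:

```python
# Thin stdlib wrapper over the REST endpoints listed above (sketch).
import json
import urllib.request

class NeuralLabClient:
    def __init__(self, base="http://localhost:8103"):  # assumed base URL
        self.base = base

    def _call(self, path, payload=None):
        # GET when payload is None, POST (JSON body) otherwise
        data = None if payload is None else json.dumps(payload).encode()
        req = urllib.request.Request(
            self.base + path, data=data,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req, timeout=60) as resp:
            return json.load(resp)

    def state(self):
        return self._call("/api/state")

    def start_rl(self, algorithm, environment, total_timesteps):
        return self._call("/api/rl/start", {
            "algorithm": algorithm,
            "environment": environment,
            "total_timesteps": total_timesteps,
        })

    def add_world_object(self, type_, x, y):
        return self._call("/api/sim/world/add",
                          {"type": type_, "x": x, "y": y})
```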

🔧 Service Management

# Check status
systemctl --user status neural-lab.service

# Restart after code changes
systemctl --user restart neural-lab.service

# View logs
journalctl --user -u neural-lab.service -f

# The service uses a Python venv with all dependencies:
# /home/pmello/.openclaw/neural-lab-env/
#   pymunk, torch, stable-baselines3, gymnasium, pettingzoo,
#   safetensors, flask, flask-socketio, requests
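A quick way to confirm that an interpreter actually has those packages is a find_spec sweep (a sketch; run it with the venv's Python, and note that stable-baselines3 and flask-socketio import with underscores):

```python
# Report which of the listed dependencies are missing from this interpreter.
import importlib.util

DEPS = ["pymunk", "torch", "stable_baselines3", "gymnasium",
        "pettingzoo", "safetensors", "flask", "flask_socketio", "requests"]

def missing(deps=DEPS):
    """Names from `deps` that cannot be found by the import machinery."""
    return [d for d in deps if importlib.util.find_spec(d) is None]
```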

📁 File Structure

~/.openclaw/workspace-main/tools/neural-lab/
├── orchestrator.py    # Main server (~2700 lines)
├── simulation.py      # Physics engine (pymunk)
├── rl_env.py          # Gymnasium RL environment wrapper
├── model_workshop.py  # Model inspection/surgery
├── platform.py        # Plugin registry, assets, extensions
├── ARCHITECTURE.md    # Full technical specification
└── RESEARCH.md        # Research directions

~/.openclaw/hub/31-neural-lab/
├── index.html         # Main UI (~3300 lines, 10 tabs)
├── guide.html         # This guide
└── lib/three.min.js   # Three.js (local copy)

~/.openclaw/neural-lab/
├── snapshots/         # Brain state snapshots
├── memories-*.json    # Agent memories per session
├── rl-models/         # Trained RL models (.zip)
├── worlds/            # Saved world layouts
├── extensions/        # Custom plugins (.py)
└── logs/              # Session logs

~/models/workshop/      # Model library (safetensors)
├── bert-tiny-4m/       # 4M params — fast experiments
├── gpt2-124m/          # 124M — classic transformer
├── whisper-tiny-39m/   # Audio transcription model
├── clip-vit-base-150m/ # Vision+text multimodal
├── tinyllama-1.1b/     # Llama architecture
└── ... (15 models total)

⚡ Hardware Requirements

Component | Requirement                   | DGX Spark
RAM       | 16GB minimum                  | 128GB ✓
GPU       | Optional (CPU training works) | GB10 ✓
Ollama    | 0.8b model minimum            | 0.8b, 2b, 4b ✓
Python    | 3.12+                         | 3.14 ✓
Storage   | ~10GB for models              | 1.9TB free ✓

Neural Lab v2.2 — Built on DGX Spark with ⚡ by the ARC platform