After completing this guide, you’ll have a running Spaceduck gateway with a chat model connected and ready to use.

Prerequisites

  • Bun v1.3 or later
  • A chat model — either a local server (llama.cpp, LM Studio) or a cloud API key (Bedrock, Gemini, OpenRouter)
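
If Bun is already installed, you can confirm it meets the minimum before proceeding. A small sketch, using `sort -V` for the version comparison:

```shell
# Confirm Bun meets the v1.3 minimum.
required="1.3.0"
current="$(bun --version 2>/dev/null || echo "0")"
# sort -V orders version strings; if $required sorts first, $current is new enough.
if [ "$current" != "0" ] && \
   [ "$(printf '%s\n%s\n' "$required" "$current" | sort -V | head -n1)" = "$required" ]; then
  echo "Bun $current OK"
else
  echo "Bun >= $required required (found: $current)"
fi
```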

Install and run

1. Clone and install

git clone https://github.com/maziarzamani/spaceduck.git
cd spaceduck
bun install

2. Install system dependencies

# SQLite with extension support (required for sqlite-vec)
brew install sqlite
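
The Homebrew formula is keg-only, so the macOS system copy of sqlite3 (which ships without runtime extension loading) can still shadow it. A quick check, assuming a standard Homebrew setup:

```shell
# Prefer Homebrew's sqlite over the macOS system copy, which cannot
# load extensions such as sqlite-vec at runtime.
BREW_SQLITE="$(brew --prefix sqlite 2>/dev/null || true)"
if [ -n "$BREW_SQLITE" ]; then
  export PATH="$BREW_SQLITE/bin:$PATH"
fi
command -v sqlite3 >/dev/null 2>&1 && sqlite3 --version || echo "sqlite3 not on PATH"
```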

3. Configure deployment settings

cp .env.example .env
The .env file controls deployment knobs only (port, log level). Provider settings are managed in the Settings UI.
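
For illustration, a minimal .env might look like the following. The variable names here are assumptions; .env.example lists the actual keys:

```shell
# Hypothetical deployment knobs -- consult .env.example for the real names.
PORT=3000
LOG_LEVEL=info
```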

4. Start the gateway

bun run dev
Open http://localhost:3000 in your browser.
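
To confirm the gateway is up without opening a browser, a simple reachability check works too (assumes the default port 3000):

```shell
# Sanity-check that the gateway is listening on the default port.
if curl -sSf http://localhost:3000 >/dev/null 2>&1; then
  msg="gateway reachable"
else
  msg="gateway not reachable yet (is bun run dev still starting?)"
fi
echo "$msg"
```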

5. Configure your chat model

Click the Settings icon in the sidebar, then go to Chat.
  1. Select a Provider from the dropdown (e.g., llama.cpp, Bedrock, Gemini)
  2. Enter the Base URL if using a local provider
  3. Set your API key if using a cloud provider
  4. Choose a Model
Changes to provider, model, and system prompt hot-apply immediately — no restart needed.
If you’re unsure which provider to start with, see the Model Providers overview for a comparison.

6. Send your first message

Go back to the chat view and type something. You should see a streaming response from your configured model.
If you see a response, Spaceduck is working. Your conversations are now being stored in SQLite with automatic fact extraction.

Optional: enable semantic recall

By default, Spaceduck uses keyword search (FTS5) for cross-conversation memory. To enable vector-based semantic recall:
  1. Go to Settings > Memory
  2. Toggle Semantic recall on
  3. Select an embedding provider and model
  4. Click Test to verify the connection
Semantic recall requires a separate embedding model. You can use the same provider as your chat model, or run a dedicated embedding server. See llama.cpp embeddings for a local setup.
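
As a sketch of the local route, llama.cpp's llama-server can serve embeddings. The model path below is a placeholder, and flag names vary across llama.cpp versions, so check `llama-server --help` for your build:

```shell
# Sketch: serve embeddings locally with llama.cpp (placeholder model path).
MODEL="./models/your-embedding-model.gguf"
if command -v llama-server >/dev/null 2>&1; then
  llama-server --embedding -m "$MODEL" --port 8081
else
  echo "llama-server not found; build or install llama.cpp first"
fi
```

Once it is running, point the embedding provider's Base URL in Settings > Memory at the server (here, http://localhost:8081) and click Test.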

Optional: install tools

bunx playwright install chromium
Enables web browsing and page interaction via the browser tool.
pip install marker-pdf   # requires Python 3.10+, PyTorch
When marker_single is on your PATH, the marker_scan tool is automatically registered. Upload PDFs via the paperclip button in the chat UI.
pip install openai-whisper   # requires Python 3.9+, ffmpeg
When whisper is on your PATH, the mic button appears in the chat UI. Hold to record, release to transcribe.
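
Since both tools are detected by their presence on PATH, you can check in advance what the gateway will pick up:

```shell
# List which optional tool binaries are discoverable on PATH.
for tool in marker_single whisper; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found ($(command -v "$tool"))"
  else
    echo "$tool: not on PATH"
  fi
done
```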

Choose your client

Spaceduck supports multiple clients — all connecting to the same gateway.
Client      | Best for                          | How to connect
Web UI      | Default experience                | Open http://localhost:3000
Desktop app | Native macOS/Linux/Windows window | See Desktop App
CLI         | Scripting and automation          | See CLI
WhatsApp    | Mobile access                     | Requires Baileys setup

What you now have

  • A running gateway at http://localhost:3000
  • A chat model connected and streaming responses
  • Persistent conversation history in SQLite
  • Automatic fact extraction after every turn
  • Keyword-based cross-conversation recall (or semantic recall if you enabled it)

Next steps