Embeddings

Set up once. Search becomes hybrid — keyword and semantic — automatically.

Why it matters

Without embeddings, pk search only finds notes containing your exact words. Your agent misses context that exists under a different title.

Without embeddings

Search "database latency" → nothing. Your note is titled "slow queries."

Agent assumes no decision was made.

With embeddings

Search "database latency" → "slow queries," "pg performance issues," "connection pool tuning."

Agent has the full picture.
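The difference comes down to vector similarity: an embedding model maps text to vectors, and related phrasings land close together even when they share no keywords. A toy sketch with hand-made three-dimensional vectors (real models like nomic-embed-text produce hundreds of dimensions; these numbers are purely illustrative):

```python
import math

def cosine(a, b):
    # Cosine similarity: dot product divided by the product of magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hand-made vectors standing in for real embeddings.
query        = [0.9, 0.1, 0.2]   # "database latency"
slow_queries = [0.8, 0.2, 0.3]   # "slow queries" — no shared words, similar meaning
lunch_menu   = [0.1, 0.9, 0.4]   # unrelated note

# The semantically related note scores far higher than the unrelated one.
assert cosine(query, slow_queries) > cosine(query, lunch_menu)
```

Keyword matching sees zero overlap between "database latency" and "slow queries"; the vectors still land close together, which is what lets the agent find the note.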

With embeddings configured, every pk search uses hybrid mode — BM25 keyword matching combined with vector similarity, merged automatically. No flags needed.
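The exact merge strategy pk uses isn't documented here; reciprocal rank fusion (RRF) is a common way to combine a BM25 ranking with a vector-similarity ranking, sketched below with hypothetical filenames:

```python
def rrf_merge(bm25_ranked, vector_ranked, k=60):
    """Reciprocal rank fusion: each document scores 1/(k + rank) in every
    list it appears in; results are sorted by total score. k=60 is the
    conventional default from the original RRF paper."""
    scores = {}
    for ranked in (bm25_ranked, vector_ranked):
        for rank, doc in enumerate(ranked, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# "slow-queries.md" has no keyword match, but its vector rank carries it up.
bm25   = ["db-outage.md", "latency-report.md"]
vector = ["slow-queries.md", "db-outage.md", "pg-tuning.md"]
merged = rrf_merge(bm25, vector)
# merged[0] == "db-outage.md" — present in both lists, so it scores highest
```

A document appearing in both lists accumulates two scores, which is why results that match on keywords and meaning rise to the top.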

Setup

Takes about two minutes. Runs locally — no API keys, no GPU required.

Step 1

Install Ollama

Download it from ollama.com, or on Linux:

install
$ curl -fsSL https://ollama.com/install.sh | sh
Step 2

Pull a model

ollama
$ ollama pull nomic-embed-text

~274 MB · runs on CPU · no GPU required

Step 3

Configure pk and index your notes

configure + index
$ pk config --embedding nomic-embed-text
$ pk index

Run pk index again after adding new notes — vectors aren't generated on pk new.
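Conceptually, an indexing pass embeds each note's text and stores the vector alongside it, so a note created after the last pass has no vector until you index again. A minimal sketch of that idea — `fake_embed` and the dict-based store are illustrative stand-ins, not pk's internals:

```python
def fake_embed(text):
    # Stand-in for a real embedding-model call.
    return [float(len(text)), float(text.count(" "))]

def index(notes, vectors):
    """Embed every note that doesn't yet have a stored vector."""
    for name, text in notes.items():
        if name not in vectors:
            vectors[name] = fake_embed(text)
    return vectors

notes = {"slow-queries.md": "pg latency spikes under load"}
vectors = index(notes, {})

notes["pool-tuning.md"] = "connection pool sizing notes"  # like pk new: no vector yet
vectors = index(notes, vectors)                           # like pk index: picks it up
```

Until the second `index` call runs, semantic search can't see the new note — hence the advice to re-run `pk index` after adding notes.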

Step 4

Search — hybrid is now automatic

search
$ pk search "slow database queries"

No flags. BM25 + vector results are merged automatically when embeddings are configured.

Recommended models

Model               Size     Notes
nomic-embed-text    274 MB   Best balance of quality and speed — start here
mxbai-embed-large   670 MB   Higher quality, slightly larger

Config reference

config
$ pk config --embedding nomic-embed-text
$ pk config --no-embedding
$ pk config --base-url http://my-host:11434

Config lives at ~/.pk/config.json and applies across all projects. See Config for more.
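The exact schema of ~/.pk/config.json isn't shown here; based on the flags above, it plausibly looks something like the following (field names are a guess inferred from the flag names, not pk's documented schema):

```json
{
  "embedding": "nomic-embed-text",
  "base_url": "http://localhost:11434"
}
```

Port 11434 is Ollama's default, matching the --base-url example above.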