Case Study · GL-001

Coherence Diagnostic Engine

An AI system built on one constraint: it can search, surface, and observe. It cannot decide.

AR-001
"Automation may observe, summarize, and suggest. Automation may not decide."

The Problem

Years of decision logs, daily journals, and pattern observations. Thousands of entries. The signal is in there, but the volume makes it invisible.

I needed a system that could search semantically, surface patterns I'd never notice, and help me prepare for diagnostic conversations. What I didn't need: a system that draws conclusions or automates judgment.

The Architecture

Everything runs on local infrastructure. No data leaves my network.

DGX Spark
NVIDIA Grace Blackwell. Local inference and embedding generation.
Synology NAS
Filesystem is source of truth. Markdown canon, schemas, project data.
ChromaDB
Vector database for semantic search across all artifacts.
Python + Streamlit
Interface layer. React rebuild on the roadmap.
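ChromaDB handles retrieval in the real system, but the core operation is simple: rank stored embeddings by cosine similarity to a query embedding. A minimal stdlib sketch, with toy 3-d vectors standing in for real embedding-model output (the filenames and vectors here are illustrative, not from the actual corpus):

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" — in the real system these come from the local
# embedding model on the DGX and are stored in ChromaDB.
corpus = {
    "2021-03-04.md": [0.9, 0.1, 0.0],
    "2023-11-20.md": [0.1, 0.9, 0.2],
}
query_vec = [0.8, 0.2, 0.1]

# Rank journal entries by similarity to the query — the heart of
# semantic search, independent of which vector store performs it.
ranked = sorted(corpus, key=lambda k: cosine(query_vec, corpus[k]), reverse=True)
```

Note the output is a ranking, not a conclusion: the system returns the nearest entries and stops there.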

What It Does

It searches semantically across every artifact, surfaces patterns across years of entries, and assembles context ahead of diagnostic conversations.

What It Doesn't Do

It doesn't summarize my thinking for me. It doesn't recommend actions. It doesn't decide what's important.

Every output is a surface. I decide what it means.
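One way to hold that line in code is to make the constraint structural rather than behavioral: validate every outgoing payload and reject anything that crosses from observation into judgment. A hypothetical sketch (the key names and function are illustrative assumptions, not the system's actual interface):

```python
# Keys that would turn a surface into a decision — illustrative list,
# not the real system's schema.
FORBIDDEN_KEYS = {"recommendation", "decision", "verdict", "action"}

def validate_surface(payload: dict) -> dict:
    """Enforce AR-001 at the output boundary: observations pass, judgments raise."""
    crossed = FORBIDDEN_KEYS & payload.keys()
    if crossed:
        raise ValueError(f"AR-001 violation: output contains {sorted(crossed)}")
    return payload

# An allowed output: excerpts and their sources, nothing more.
surface = validate_surface({
    "query": "sleep disruption before flare-ups",
    "excerpts": ["slept 4h, joint pain by noon"],
    "sources": ["2023-11-20.md"],
})
```

The point of the design is that the constraint lives at a single chokepoint, so no prompt or model change can quietly reintroduce automated judgment.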

Demo

[ Video coming soon ]

Why This Matters

The default assumption is that AI should optimize, recommend, and automate decisions. I think that's backwards for a lot of contexts.

Some domains require human judgment to remain human. Diagnosis is one of them. The value isn't in the answer. It's in the process of arriving at one.

Building tools that respect that distinction is the work.

Building something similar?

If you're thinking about AI governance, decision infrastructure, or human-in-the-loop systems, I'd welcome a conversation.