Project Overview: ai-appointment-setter

I decided to anchor this project around one operational rule: shared memory in Obsidian is the source of truth, and product work should stay lightweight and iteration-friendly. That decision shows up both in AGENTS.md (the explicit QMD memory workflow) and in the execution themes: stabilize tests first, then improve the user-facing product surface.

What We Built

  • A Node.js project with a clear implementation center in app/, supported by docs/ and skills/.
  • A polished UI pass across the core product surface, reflected in recent commits: “Create polished landing page with shadcn/ui components” and “Rebuild dashboard with professional shadcn/ui layout.”
  • Test reliability fixes in the webhook and integration flows, landed before and alongside the UI work, including repeated corrections to the ConversationEngine mocks.
  • An actively developed dashboard, with recent changes in app/src/app/dashboard/layout.tsx and app/src/app/dashboard/page.tsx.
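The mock corrections above usually come down to keeping a test double in sync with the engine's real interface. The sketch below is a hypothetical illustration, not the project's actual code: the `ConversationEngine` interface, its `handleMessage` signature, and `createMockEngine` are all assumed names for the sake of the example.

```typescript
// Hypothetical shape of the engine; the real ConversationEngine API may differ.
interface ConversationEngine {
  handleMessage(from: string, body: string): Promise<string>;
}

// A hand-rolled mock that records calls and returns a canned reply, so
// webhook tests can assert routing behavior without a real engine. Because
// the mock is typed against the interface, a signature change in the engine
// fails compilation instead of silently breaking tests at runtime.
function createMockEngine(
  reply = "ok"
): ConversationEngine & { calls: Array<[string, string]> } {
  const calls: Array<[string, string]> = [];
  return {
    calls,
    async handleMessage(from, body) {
      calls.push([from, body]); // record the call synchronously
      return reply;
    },
  };
}
```

Typing the mock against the interface is the design point: when the engine's surface drifts, the compiler flags every stale mock, which is exactly the class of brittleness the repeated mock corrections addressed.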

Why We Built It

  • We needed continuity across agent and human operators; the Obsidian rule prevents decision drift and preserves context between sessions.
  • We prioritized practical delivery over platform expansion: improve trust and usability in the existing app instead of adding new infrastructure.
  • Recent session focus confirms this direction: the work emphasized project health verification and quality checks rather than feature sprawl.
  • The sequencing in commits suggests an intentional strategy: fix brittle test surfaces, then invest in UX quality and rebrand alignment (“Rebrand from AI Appointment Setter to Dial AI”).

How It Works

  • Operational governance is explicit: before substantial work, we prefetch project memory from the canonical Obsidian location via the documented QMD script flow.
  • Day-to-day implementation happens inside app/, where API, middleware, tests, and dashboard UI live together as the active surface.
  • The repo root currently has no package scripts, so commands run from within app/ rather than through a monorepo-wide task runner.
  • The practical loop is: recover memory context, run health/quality checks, then ship scoped UI or reliability improvements with clear commit intent.