Project Overview: tweets-workspace


I decided to treat tweets-workspace as a data-driven tweet operations repo from day one: use local tweet history as the source of truth, generate practical draft options quickly, and avoid adding platform complexity until it is necessary.

What We Built

  • A focused workspace with clear operating intent in AGENTS.md: help Kristian produce high-signal tweets using evidence from data/tweets.ndjson, not generic advice.
  • A script-first surface in scripts/ (sync-tweets.sh, analyze.sql, backfill-tweets.sh, tweet.sh, tweet-with-screenshot.sh) that supports ingestion, analysis, and publishing workflows.
  • A baseline analytical context captured in the guide (Mar 2026 snapshot): 1240 total tweets, with explicit performance patterns (media, Cloudflare mentions, thread starters, and length effects) to steer drafting decisions.
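Since the evidence base is a line-delimited JSON file, headline totals like the tweet count are cheap to reproduce with standard tools. The sketch below uses a tiny sample file and hypothetical field names (`id`, `likes`, `has_media`) — the real schema of data/tweets.ndjson may differ:

```shell
# Build a small sample in the assumed one-JSON-object-per-line layout of
# data/tweets.ndjson (field names here are hypothetical, not the repo's schema).
cat > /tmp/tweets-sample.ndjson <<'EOF'
{"id":"1","text":"shipping a worker today","likes":12,"has_media":false}
{"id":"2","text":"Cloudflare tip: cache rules","likes":90,"has_media":true}
{"id":"3","text":"thread: 1/ build logs","likes":41,"has_media":false}
EOF

# NDJSON means one record per line, so counting lines counts tweets.
# grep -c '' counts lines and prints a bare number (no padding, unlike wc -l).
total=$(grep -c '' /tmp/tweets-sample.ndjson)
echo "total tweets: $total"
```

The same one-record-per-line property is what keeps sync-tweets.sh-style ingestion simple: new tweets can be appended without rewriting the file.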

Why We Built It

  • We need fast iteration on tweet drafting and publishing, so the repo is optimized for execution speed over framework overhead.
  • The key decision is to anchor drafting choices in observed outcomes from the existing dataset; this reduces guesswork and keeps every recommendation testable.
  • The inventory currently shows no recent session or commit trail, so this post establishes an operational baseline before iterative changes begin.

How It Works

  • The workflow centers on local data: pull/update tweet history, run analysis, then draft and ship using script entrypoints rather than app layers.
  • Guidance in AGENTS.md converts analysis into concrete writing defaults (for example, multiple draft options and structure choices tied to measured uplift signals).
  • Operationally, the active surface is small (AGENTS.md, data/, scripts/, plus config like typefully-api.json), which keeps maintenance low while preserving room to add automation after real usage feedback.
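The analysis-to-drafting loop above can be sketched with standard tools. This is a minimal illustration of the kind of signal the guide encodes (for instance, the media uplift pattern); the field names, values, and the quick-and-dirty awk extraction are assumptions, and the repo's actual analysis lives in analyze.sql:

```shell
# Sample records in the assumed data/tweets.ndjson layout
# ("likes" and "has_media" are hypothetical field names).
cat > /tmp/tweets-media.ndjson <<'EOF'
{"likes":12,"has_media":false}
{"likes":90,"has_media":true}
{"likes":40,"has_media":true}
{"likes":8,"has_media":false}
EOF

# Compare average likes for tweets with vs. without media.
# Crude field extraction: split each line on the "likes": key, then take the
# digits before the next comma. Real pipelines should use a JSON parser.
summary=$(awk -F'"likes":' '{
  split($2, a, ","); likes = a[1] + 0
  if ($0 ~ /"has_media":true/) { m += likes; mc++ } else { n += likes; nc++ }
}
END { printf "media avg likes: %d, no-media: %d\n", m / mc, n / nc }' \
  /tmp/tweets-media.ndjson)
echo "$summary"
```

A measured gap like this is what turns into a concrete drafting default ("prefer attaching media") rather than generic advice.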