Zeke
2025 · Archived
Research workstation that scores content by signal-to-noise ratio and surfaces citation-backed insights.
// GitHub
// Problem
AI and marketing content is 90% recycled hype. Thousands of podcasts, papers, and videos drop weekly, but most repeat the same talking points. Finding the two minutes that matter meant wading through hours of noise.
// Solution
A hype scoring algorithm that ranks content by substance: 40% keyword relevance, 30% highlight type (breaking changes beat generic quotes), 20% source authority, 10% freshness. Every insight links to source timestamps for instant verification.
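A minimal sketch of how those weights could combine into one score. The component names and the 0–1 normalization are assumptions for illustration; only the 40/30/20/10 split comes from the design above.

```ts
// Hypothetical weighted hype score. Each component is assumed to be
// normalized to 0..1 before weighting; only the weights come from the
// description above.
type ScoreInputs = {
  keywordRelevance: number; // 0..1 — match against the user's tracked topics
  highlightType: number;    // 0..1 — breaking changes rank above generic quotes
  sourceAuthority: number;  // 0..1 — per-source trust weighting
  freshness: number;        // 0..1 — decays with content age
};

const WEIGHTS = {
  keywordRelevance: 0.4,
  highlightType: 0.3,
  sourceAuthority: 0.2,
  freshness: 0.1,
} as const;

export function hypeScore(inputs: ScoreInputs): number {
  return (
    inputs.keywordRelevance * WEIGHTS.keywordRelevance +
    inputs.highlightType * WEIGHTS.highlightType +
    inputs.sourceAuthority * WEIGHTS.sourceAuthority +
    inputs.freshness * WEIGHTS.freshness
  );
}
```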
// What I Built
Full-stack platform forked from Midday's finance codebase. Engine layer ingests YouTube, arXiv, RSS, and blogs. Jobs layer (Trigger.dev) runs AI extraction. Dashboard surfaces prioritized insights with jump-links. Built speaker-aware podcast outlines, paper novelty detection, and 'Why it matters' briefs tied to user goals.
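Roughly how the three layers hand off work, sketched as types. The names are illustrative, not the actual codebase (which is forked from Midday).

```ts
// Illustrative shape of the engine → jobs → dashboard handoff.
type SourceKind = "youtube" | "arxiv" | "rss" | "blog";

interface RawItem {
  sourceKind: SourceKind;
  url: string;
  title: string;
  publishedAt: string; // ISO timestamp
  transcript?: string; // present for podcasts / videos
}

interface Insight {
  summary: string;
  whyItMatters: string; // brief tied to the user's stated goals
  score: number;        // output of the hype scoring step
  citation: { url: string; timestampSec?: number }; // jump-link target
}

// Engine layer: each source implements one adapter.
interface SourceAdapter {
  kind: SourceKind;
  fetchLatest(since: Date): Promise<RawItem[]>;
}

// Jobs layer: long-running extraction turns raw items into scored insights
// that the dashboard then surfaces in priority order.
type ExtractionJob = (item: RawItem) => Promise<Insight[]>;
```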
// Technologies
Next.js 15 + React 19
Dashboard with tRPC, Zustand state management, and Framer Motion workspace interactions.
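A hypothetical tRPC procedure of the kind the dashboard calls for prioritized insights; the router, procedure, and field names are assumptions.

```ts
import { initTRPC } from "@trpc/server";
import { z } from "zod";

const t = initTRPC.create();

// Placeholder store standing in for the Drizzle-backed query layer.
const fakeDb = {
  async listInsights(workspaceId: string, limit: number) {
    return [] as { summary: string; score: number; citationUrl: string }[];
  },
};

export const insightsRouter = t.router({
  listByWorkspace: t.procedure
    .input(
      z.object({
        workspaceId: z.string().uuid(),
        limit: z.number().min(1).max(100).default(20),
      })
    )
    // Highest hype score first; the real query also joins source metadata.
    .query(({ input }) => fakeDb.listInsights(input.workspaceId, input.limit)),
});

export type InsightsRouter = typeof insightsRouter;
```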
Trigger.dev
Background job orchestration for long-running AI extraction tasks.
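A sketch of what an extraction job looks like as a Trigger.dev v3 task; the task id, payload shape, and return value are assumptions.

```ts
import { task } from "@trigger.dev/sdk/v3";

export const extractHighlights = task({
  id: "extract-highlights",
  run: async (payload: { itemId: string; transcript: string }) => {
    // Long-running work (chunking, LLM extraction, scoring) lives here
    // instead of a request handler, so it can run for minutes and retry
    // without blocking the dashboard.
    const highlights: { text: string; timestampSec: number }[] = [];
    // ...extraction calls would populate `highlights`...
    return { itemId: payload.itemId, highlightCount: highlights.length };
  },
});
```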
Supabase
PostgreSQL with RLS for multi-tenant workspaces, plus Auth and Storage.
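The pattern RLS enables on the client, sketched with supabase-js; table and column names are assumed.

```ts
import { createClient } from "@supabase/supabase-js";

const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
);

export async function listMyInsights() {
  // Under RLS, this returns only rows belonging to the signed-in user's
  // workspace, even though no workspace filter appears in the query.
  const { data, error } = await supabase
    .from("insights")
    .select("summary, score, citation_url")
    .order("score", { ascending: false })
    .limit(20);
  if (error) throw error;
  return data;
}
```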
Drizzle ORM
Type-safe schema and queries, with JSONB columns for flexible scoring-schema iteration.
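An illustrative Drizzle table showing the JSONB scoring column; the actual schema in the forked codebase differs.

```ts
import { pgTable, uuid, text, real, jsonb, timestamp } from "drizzle-orm/pg-core";

export const insights = pgTable("insights", {
  id: uuid("id").primaryKey().defaultRandom(),
  workspaceId: uuid("workspace_id").notNull(),
  summary: text("summary").notNull(),
  score: real("score").notNull(),
  // Component breakdown kept as JSONB so the scoring formula can change
  // without a migration, e.g. { keywordRelevance: 0.8, freshness: 0.4 }.
  scoring: jsonb("scoring").$type<Record<string, number>>(),
  citationUrl: text("citation_url"),
  createdAt: timestamp("created_at").defaultNow().notNull(),
});
```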
OpenAI + Vercel AI SDK
Structured output extraction with confidence scores and citation mapping.
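A sketch of the structured-output call with the Vercel AI SDK. The schema fields mirror the description above (confidence, citation timestamp); the prompt and model choice are assumptions.

```ts
import { generateObject } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

const highlightSchema = z.object({
  highlights: z.array(
    z.object({
      text: z.string(),
      whyItMatters: z.string(),
      confidence: z.number().min(0).max(1),
      citation: z.object({
        url: z.string().url(),
        timestampSec: z.number().optional(), // jump-link into the source
      }),
    })
  ),
});

export async function extractStructuredHighlights(transcript: string, sourceUrl: string) {
  // generateObject validates the model output against the Zod schema,
  // so downstream scoring always receives well-typed highlights.
  const { object } = await generateObject({
    model: openai("gpt-4o-mini"),
    schema: highlightSchema,
    prompt:
      `Extract substantive, non-hype highlights from this transcript of ${sourceUrl}. ` +
      `Cite a timestamp for each.\n\n${transcript}`,
  });
  return object.highlights;
}
```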
// Lessons Learned
- 01 Building content intelligence is easier than building distribution. The product worked, but I had no desire to market another AI productivity SaaS.
- 02 Forking Midday saved weeks. Their multi-tenant patterns and component library were worth the awkward 'invoice' variable names.
- 03 Hype scoring is useful but hard to explain. Users got 'this podcast has 3 highlights' but not why scores differed by 0.13 points.
- 04 Two weeks validated I didn't want to ship this. Technical completion and market conviction are different things.