# Content pipeline
The content pipeline turns signals into publishable content across channels.
## Pipeline stages
Signals → Brief → Generate → Humanize → Queue → Approve → Publish
### 1. Brief
Claude reads up to 20 recent signals and writes a 1-paragraph summary with an editorial angle. The brief captures what’s happening and why it matters to the company.
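The brief step can be sketched as a prompt-assembly function. This is an illustrative sketch, not Pressroom's actual internals; the function name and signal fields (`title`, `summary`) are assumptions.

```python
# Hypothetical sketch of the brief step: assemble a prompt from recent signals.
# Field names ("title", "summary") are illustrative, not the real signal schema.

def build_brief_prompt(signals: list[dict], max_signals: int = 20) -> str:
    """Ask for a one-paragraph brief with an editorial angle, from up to
    max_signals recent signals."""
    recent = signals[:max_signals]  # cap at the 20 most recent, per the docs
    lines = [f"- {s['title']}: {s['summary']}" for s in recent]
    return (
        "Read the signals below and write a one-paragraph brief with an "
        "editorial angle: what is happening, and why it matters to the company.\n"
        + "\n".join(lines)
    )
```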
### 2. Generate
From the brief, Claude generates content for each configured channel:
| Channel | Format | Typical length |
|---|---|---|
| linkedin | Professional post with hook | 200-300 words |
| x_thread | Thread of short posts | 3-5 tweets |
| facebook | Social post | 150-250 words |
| blog | Long-form article with headers | 800-1500 words |
| release_email | Email with subject line | 300-500 words |
| newsletter | Digest format | 500-800 words |
| yt_script | Script with hook, sections, talking points | 1000-2000 words |
Generation is voice-aware. Claude reads the org’s voice profile (persona, audience, tone, brand keywords) and uses approved content as examples and spiked content as anti-patterns.
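Voice-aware prompt assembly might look like the sketch below. All names here are assumptions for illustration; the real prompt format is internal to Pressroom.

```python
# Illustrative sketch of voice-aware generation: the voice profile, approved
# examples, and spiked anti-patterns are folded into one prompt.

def build_generation_prompt(brief, voice, approved, spiked, channel):
    parts = [
        f"Channel: {channel}",
        f"Persona: {voice['persona']} | Audience: {voice['audience']} | Tone: {voice['tone']}",
        f"Brand keywords: {', '.join(voice['keywords'])}",
        "Write in the style of these approved examples:",
        *[f"  GOOD: {ex}" for ex in approved],   # positive few-shot examples
        "Avoid the patterns in these spiked examples:",
        *[f"  BAD: {ex}" for ex in spiked],      # anti-pattern examples
        f"Brief: {brief}",
    ]
    return "\n".join(parts)
```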
### 3. Humanize
The humanizer removes AI-generation patterns:
- Significance inflation (“pivotal”, “landmark”, “game-changing”)
- Vague attribution (“experts say”, “many believe”)
- AI vocabulary (delve, tapestry, paradigm, leverage, nuanced)
- Structural patterns (em-dash overuse, rule-of-three lists)
- Promotional language (“revolutionary”, “cutting-edge”)
It then adds voice-specific texture — opinions, acknowledged gaps, concrete examples.
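One pass of the humanizer, the vocabulary sweep, could be sketched as a substitution table. The word list and replacements below are illustrative, not Pressroom's actual rules.

```python
import re

# Minimal sketch of the vocabulary pass: swap AI-flavored words for plain ones.
# The mapping is an assumption for illustration, not the real rule set.
AI_VOCAB = {
    "delve into": "look at",
    "leverage": "use",
    "pivotal": "important",
    "game-changing": "significant",
    "cutting-edge": "new",
}

def humanize_vocab(text: str) -> str:
    for pattern, plain in AI_VOOCAB.items() if False else AI_VOCAB.items():
        text = re.sub(re.escape(pattern), plain, text, flags=re.IGNORECASE)
    return text
```
A real implementation would also handle the structural and attribution patterns listed above, which need more than word-level substitution.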
### 4. Queue
Generated content enters the queue with status `queued`. From here you can:
- Approve — move to `approved`, ready for publishing
- Spike — reject (spiked content becomes anti-pattern memory)
- Edit — modify the headline or body text
- Humanize — re-run the humanizer
- Schedule — set a future publish datetime
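The queue actions above imply a small status state machine. This is a sketch: `queued` and `approved` come from the docs, while the `spiked` and `published` status names are assumed spellings.

```python
# Sketch of the queue's status transitions. "queued" and "approved" follow the
# docs; "spiked" and "published" are assumed status names.
ALLOWED = {
    "queued": {"approved", "spiked"},
    "approved": {"published", "spiked"},
}

def transition(status: str, new_status: str) -> str:
    """Return the new status, or raise if the move is not allowed."""
    if new_status not in ALLOWED.get(status, set()):
        raise ValueError(f"cannot move {status} -> {new_status}")
    return new_status
```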
### 5. Publish
Approved content is published to connected platforms:
- LinkedIn — Via OAuth API
- Facebook — Via page token
- HubSpot — Via CMS API
- Medium — Via publishing API
- YouTube — Script upload with metadata
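Per-platform publishing is naturally a channel-to-publisher dispatch. The sketch below uses stub publishers; function names and the item shape are assumptions.

```python
# Hypothetical channel-to-publisher dispatch; the publishers here are stubs
# standing in for the real OAuth / token / CMS API calls.

def publish_linkedin(item):
    return f"linkedin:{item['id']}"

def publish_facebook(item):
    return f"facebook:{item['id']}"

PUBLISHERS = {
    "linkedin": publish_linkedin,
    "facebook": publish_facebook,
}

def publish(item):
    try:
        handler = PUBLISHERS[item["channel"]]
    except KeyError:
        raise ValueError(f"no publisher for channel {item['channel']!r}")
    return handler(item)
```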
## Running the pipeline
### Full pipeline (scout + generate)
```
POST /api/pipeline/scout      → pull signals
POST /api/pipeline/generate   → brief + generate + humanize
```
Or in one call via MCP:
```python
pressroom_full_pipeline(org_id=1, channels=["linkedin", "blog"])
```
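Driving the two HTTP endpoints directly might look like the sketch below. The base URL is an assumption, and the requests are only constructed here, not sent, so the sketch stays self-contained.

```python
import json

BASE_URL = "http://localhost:8000"  # assumed local deployment, not documented

def pipeline_requests(channels):
    """Build the (method, url, body) tuples for the two-step pipeline."""
    return [
        ("POST", f"{BASE_URL}/api/pipeline/scout", None),
        ("POST", f"{BASE_URL}/api/pipeline/generate",
         json.dumps({"channels": channels})),
    ]
```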
### Generate from a specific story
Instead of using the latest signals, generate from a curated story:
```
POST /api/stories/{story_id}/generate
{"channels": ["linkedin", "blog"]}
```
### Regenerate a single item
```
POST /api/pipeline/regenerate
{"content_id": 42, "channel": "linkedin"}
```
## Content memory
Pressroom learns from editorial decisions:
- Approved content is included as positive examples in future generation prompts
- Spiked content is included as anti-patterns (what to avoid)
- This creates a feedback loop where content quality improves over time
## Scheduling
Schedule content for auto-publish at a specific time:
```
POST /api/content/{id}/schedule
{"publish_at": "2025-01-15T09:00:00Z"}
```
The background scheduler checks for due content and publishes it automatically.
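The due-content check the scheduler runs could look like this sketch. The item shape (`status`, `publish_at` as an ISO-8601 string) is an assumption based on the endpoint above.

```python
from datetime import datetime, timezone

# Sketch of the scheduler's due check. Item shape is assumed:
# {"id": ..., "status": "approved", "publish_at": "2025-01-15T09:00:00Z"}

def due_items(items, now=None):
    """Return approved items whose publish_at is at or before `now`."""
    now = now or datetime.now(timezone.utc)
    return [
        it for it in items
        if it["status"] == "approved"
        and datetime.fromisoformat(it["publish_at"].replace("Z", "+00:00")) <= now
    ]
```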
## Email drafts
Content with channel `release_email` or `newsletter` can be composed into email drafts:
```
POST /api/email/drafts/compose
{"content_id": 42}
```
This creates a draft with subject line, HTML body, and text body ready for sending.
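Draft composition can be sketched as a small transform from a content item to the three draft fields. The field names (`headline`, `body`, and the draft keys) are assumptions for illustration.

```python
import html

# Illustrative draft composition; field names are assumptions, not the real
# content or draft schema.

def compose_draft(content):
    subject = content["headline"]
    text_body = content["body"]
    # Escape user-visible text before embedding it in HTML.
    html_body = (
        f"<h1>{html.escape(subject)}</h1>"
        f"<p>{html.escape(text_body)}</p>"
    )
    return {"subject": subject, "html_body": html_body, "text_body": text_body}
```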