From Reddit (or X) to Your Backlog: One Discovery Playbook

    A tactical, week-shaped playbook for turning community conversations into product signals, outreach opportunities, and shipped outcomes. This isn't theory — it's a repeatable pattern you can run every week, with specific instructions on where to configure everything in NotSolo.


    Why Community Discovery Works for Solo Founders

    Most startup advice says "talk to your users." But when you're pre-PMF with a handful of users (or none yet), the question is: where do you find them? Paid ads are expensive and imprecise. Cold email has low response rates. Content takes months to rank.

    Community platforms — Reddit, X, Facebook groups, niche forums — are where your potential users are already having conversations about the problems you solve. They're asking questions, sharing frustrations, recommending tools, and debating approaches. This is the richest source of market intelligence available to a solo founder, and it's free.

    The problem is time. Manually scanning subreddits, reading threads, drafting helpful replies, and tracking which conversations turned into leads takes hours per week — hours you don't have when you're also building, shipping, and supporting users. This is exactly the kind of work that benefits from AI agent support: high-volume scanning and classification done by machines, with strategic decisions and final drafts reviewed by you.

    NotSolo's approach to community discovery uses Scout (outreach agent) for the scanning and drafting layer, your review for quality control, and the full agent squad for downstream actions — product specs from Forge, content from Quill, lifecycle emails from Pulse, and strategic analysis from Atlas. This playbook shows how it all connects in practice.


    The Shape of the Week

    This playbook maps to a single weekly cycle. Each week, you test a specific discovery hypothesis and measure whether it generated meaningful engagement. Over multiple weeks, you learn which channels, topics, and approaches work for your ICP.

    • Monday: Set the hypothesis — e.g., "Engage in 5 relevant threads on r/SaaS and r/startups to generate 3 profile visits."
    • Tuesday–Thursday: Scout scans, classifies, and drafts. You review and approve 1–2x per day.
    • Thursday: Check Squad Chat for mid-week insights from Atlas and Scout.
    • Friday: Atlas reports engagement metrics. You decide: continue, adjust channels, or try a different approach.
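
    The weekly cycle above can be captured as a small data structure. This is a minimal sketch of the loop's shape, not NotSolo's internals; all names here are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class WeeklyHypothesis:
    """One discovery hypothesis: set on Monday, scored on Friday.

    Illustrative only -- NotSolo tracks this for you; this sketch
    just makes the test-and-measure loop concrete.
    """
    statement: str           # e.g. "Engage in 5 relevant threads on r/SaaS"
    channels: list           # subreddits / query sets under test this week
    target_engagements: int  # the measurable goal
    actual_engagements: int = 0

    def met_target(self) -> bool:
        return self.actual_engagements >= self.target_engagements

# Monday: set the hypothesis
week = WeeklyHypothesis(
    statement="Engage in 5 relevant threads to generate 3 profile visits",
    channels=["r/SaaS", "r/startups"],
    target_engagements=3,
)

# Friday: record results and decide
week.actual_engagements = 2
print(week.met_target())  # False -> adjust channels or approach next week
```

    The point of the structure is that every week produces a yes/no answer, which is what makes the Friday decision (continue, adjust, pivot) mechanical rather than vibes-based.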

    Step 1: Configure Your Discovery Channels

    Before Scout can scan, it needs to know where to look. This configuration lives in Dashboard → Settings → ICP & Platforms.

    Choosing subreddits

    Start with 2–5 subreddits where your ideal customers are active. The key is relevance — you want subreddits where people discuss the problem your product solves, not just the industry broadly. Here's how to think about it:

    • High relevance: Subreddits about the specific pain point (e.g., r/freelanceDesign if you build design tools, r/EmailMarketing if you build email tools).
    • Medium relevance: Broader founder/SaaS communities where your ICP hangs out (e.g., r/SaaS, r/startups, r/Entrepreneur).
    • Low relevance: Generic tech or business subreddits — high noise, low signal. Avoid these until you've dialed in the higher-relevance channels.

    Enter the subreddit names (without the r/ prefix) in the Subreddit list field. Scout will include these in every heartbeat scan.
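
    As a rough illustration of the normalization implied above (the r/ prefix is dropped before scanning), here is a sketch. The function is an assumption for illustration, not NotSolo's actual code:

```python
def normalize_subreddits(raw_names):
    """Strip the optional 'r/' prefix and deduplicate, preserving order.

    Hypothetical helper -- NotSolo handles this when you save the field;
    this just shows why entering "r/SaaS" and "SaaS" is the same thing.
    """
    seen, cleaned = set(), []
    for name in raw_names:
        sub = name.strip().removeprefix("r/")
        if sub and sub.lower() not in seen:
            seen.add(sub.lower())
            cleaned.append(sub)
    return cleaned

print(normalize_subreddits(["r/SaaS", "startups", "r/SaaS "]))
# ['SaaS', 'startups']
```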

    Setting up X search queries

    X (Twitter) works differently from Reddit — instead of communities, you search by keyword. Add phrases your ICP might use when expressing the problem you solve:

    • "looking for a tool that" + your category
    • "anyone tried" + competitor names
    • "frustrated with" + the pain point
    • Specific technical questions your product answers

    Enter these in the X search queries field. Be specific — vague queries generate noise. "Looking for a freelance invoicing tool" is much better than "invoicing."
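
    The four phrase patterns above expand mechanically once you know your category, competitors, and pain points. A sketch, with a hypothetical helper name and placeholder inputs:

```python
def build_queries(category, competitors, pain_points):
    """Expand the phrase patterns into concrete X search queries.

    Illustrative only -- you paste the resulting strings into the
    X search queries field yourself.
    """
    queries = [f'"looking for a tool that" {category}']
    queries += [f'"anyone tried" {name}' for name in competitors]
    queries += [f'"frustrated with" {pain}' for pain in pain_points]
    return queries

qs = build_queries(
    category="freelance invoicing",        # your category
    competitors=["FreshBooks"],            # placeholder competitor
    pain_points=["chasing late invoices"], # placeholder pain point
)
print(qs[1])  # "anyone tried" FreshBooks
```

    Quoting the shared phrase keeps the query specific while the variable part narrows it to your niche, which is exactly the specificity the paragraph above calls for.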

    Facebook groups (optional)

    If your ICP is active in Facebook groups — common for B2B tools targeting freelancers, agencies, coaches, or niche industries — you can add group URLs in the Facebook Group URLs field. This requires an Apify integration for scraping.


    Step 2: How Scout Scans and Classifies

    With heartbeats enabled in Dashboard → Agents, Scout runs its board-scan skill on each heartbeat interval. Here's what happens during a typical scan:

    1. Data collection: Scout reads recent posts from your configured subreddits using the Reddit API, and recent tweets matching your X search queries. For Facebook groups, it uses the Apify scraper. This is a read-only operation — Scout doesn't post anything during scanning.
    2. Relevance classification: Each post is evaluated against your ICP profile (defined in Settings → ICP & Platforms) and product description. Scout uses your connected LLM to classify posts as high, medium, or low relevance. High-relevance posts are ones where the poster is explicitly expressing a pain point your product addresses, or asking for a recommendation in your category.
    3. Task creation: For high-relevance posts, Scout creates a Kanban task with a descriptive title, a link to the original post, the relevance score, and a draft reply. The task lands in the Review column for your approval.
    4. Squad Chat summary: After each scan, Scout posts a summary in Squad Chat: how many posts were scanned, how many were classified as relevant, and how many tasks were created. This gives you a quick pulse on channel activity without opening the Kanban board.

    Why classification matters: Without classification, you'd get a task for every Reddit post — hundreds per day across a few subreddits. The relevance classification is what makes this scalable. Scout filters the noise so you only see the 3–10 posts per week that actually match your ICP. This is the difference between "AI that dumps data on you" and "AI that surfaces what matters."


    Step 3: Reviewing and Approving Outreach

    Open your Kanban board (Dashboard → Mission Queue) and check the Review column. For each Scout task:

    1. Read the original thread. Click the source URL in the task to see the full context. Is this a genuine conversation where your product is relevant, or is it a question that's been answered already?
    2. Review Scout's draft reply. The draft is in the task comments. Check for:
      • Is it helpful first, promotional second (or not at all)?
      • Does it accurately describe your product's capability?
      • Does it match the tone of the community? (Reddit hates obvious marketing.)
      • Would you be comfortable posting this with your personal account?
    3. Edit if needed. Most early drafts need tweaking — Scout is learning your voice. Edit the comment text directly in the task, then approve. Over time, as you make consistent edits, Scout's drafts will more closely match your style.
    4. Approve or reject. Move approved tasks to Done — Scout executes the action. Move rejected tasks back to Assigned with a comment explaining what to change.
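
    The approve/reject decision in step 4 amounts to a small state transition. A sketch with assumed field names (the real task objects live on your Kanban board):

```python
def review_task(task, approved, edited_reply=None, feedback=None):
    """Apply the reviewer's decision from the steps above.

    Approved tasks move to Done (Scout then posts the reply); rejected
    tasks go back to Assigned with a comment. Field names are
    illustrative assumptions.
    """
    if edited_reply is not None:
        task["draft_reply"] = edited_reply  # your edits teach Scout your voice
    if approved:
        task["status"] = "Done"
    else:
        task["status"] = "Assigned"
        task["comments"] = task.get("comments", []) + [feedback or "Needs changes"]
    return task

task = {"status": "Review", "draft_reply": "Check out my tool!"}
review_task(task, approved=False, feedback="Too promotional -- lead with help.")
print(task["status"])  # Assigned
```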

    The golden rule of community outreach: The best replies add value without mentioning your product at all. "Here's how I solved that problem..." works infinitely better than "Check out my tool at..." People click your profile when you're genuinely helpful. They ignore (or downvote) you when you're selling. Scout is trained on this principle, but your review is the quality gate.


    Step 4: From Signals to Product Insights

    Here's where the multi-agent system creates value that a single tool can't: as Scout engages in conversations, the other agents are watching and extracting higher-order insights.

    Forge spots product signals

    Forge (product agent) monitors the threads Scout engages with and looks for recurring patterns. If three different Reddit threads mention "version history" as a missing feature, Forge aggregates those signals and creates a spec task: "Add version history — 3 user signals from r/freelanceDesign." The task includes links to all source threads and a prioritized recommendation.

    This is the "signals to shipped" pipeline in action. Without agent coordination, you'd need to manually track feature requests across platforms and notice the pattern yourself. With Forge watching, patterns surface automatically.

    Quill creates content from engagement

    Quill (content agent) observes which topics generate strong engagement in your outreach. If a Reddit thread about "client revision workflows" gets 20 comments, Quill may propose a blog post: "Why Freelance Designers Need Version History (And How to Stop Emailing PNGs)." The post targets a long-tail SEO keyword and references the real pain points from the community conversation.

    This content loop is powerful because it's grounded in real demand — you're not guessing what to write about. You're writing about topics you've already validated through community engagement.

    Pulse follows up with leads

    When Scout's outreach generates signups — someone visits your site from a Reddit thread and creates an account — Pulse (customer success agent) detects the new user signal and can draft a personalized onboarding email. The email references how they found you ("I noticed you came from the thread about revision workflows..."), which dramatically improves open rates compared to generic onboarding sequences.
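
    The personalization described above is essentially templating on the source-thread context. A sketch; the wording and function name are illustrative, not Pulse's actual copy:

```python
def onboarding_email(user_name, source_thread_topic):
    """Sketch of the personalization Pulse applies: reference the
    conversation that brought the user in, instead of a generic opener.
    """
    return (
        f"Hi {user_name},\n\n"
        f"I noticed you came from the thread about {source_thread_topic} -- "
        "happy to help you get set up. What are you hoping to solve first?"
    )

print(onboarding_email("Sam", "revision workflows"))
```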


    Step 5: Closing the Loop on Friday

    At the end of the week, Atlas (strategy agent) compiles the discovery results into your weekly report. The report includes:

    • Outreach metrics: Threads engaged, communities covered, and engagement quality (replies, upvotes, profile visits if trackable).
    • Lead pipeline: Any new leads created from outreach, their status, and relevance scores.
    • Product signals: Feature requests or patterns Forge identified from community conversations.
    • Content opportunities: Topics Quill flagged for potential blog posts or content pieces.
    • Channel comparison: If you're running outreach across multiple platforms, Atlas shows which channels generated the most engagement.

    Compare these results against your cycle hypothesis. Did you hit your engagement target? Which subreddits or search queries produced the best results? Were there any surprises — topics you didn't expect to resonate?

    Making the decision

    Based on the results, decide the next step for your discovery channel:

    • Double down: "r/SaaS generated 5 relevant threads and 2 profile visits — increase target to 10 threads next week."
    • Adjust: "r/startups had low relevance — replace with r/indiehackers. Keep r/SaaS."
    • Pivot: "Reddit generated 0 leads after 3 weeks — switch to content-first strategy and use SEO tracking instead."
    • Expand: "Reddit is working well — add Facebook group discovery via Apify as a second channel."

    Record the decision in your cycle and create the next week's hypothesis. This creates a continuous improvement loop: each week's discovery efforts are more targeted than the last because they're informed by real data.


    Running This Playbook for Multiple Weeks

    The first week is calibration — you're learning which channels produce signal and which produce noise. The second week is refinement — you've narrowed your subreddit list and adjusted your X queries. By week 3–4, you have a well-tuned discovery machine that reliably surfaces relevant conversations with minimal manual effort.

    At this point, the value compounds. You've built a presence in communities where your ICP lives. People start recognizing your helpful comments. Your profile visits become organic. Quill has published content based on real community pain points, which ranks for long-tail keywords. The discovery playbook has become a growth flywheel — and it's running on 5–10 minutes of daily review time.


    Related Guides & Docs