
Wiki SaaS for Engineering Teams

How a Solo Wiki Founder Turns Self-Host Installs Into a Hacker News Citation Graph

Generated by Diffmode's 576-vector synthesis engine

Same $5,640 Stripe number, third Monday running. Your last five paying teams came from Hacker News and r/engineeringmanagers. The 89 monthly self-host installs are the asset you have not touched.

The short version

  • You are stuck at $5,640 MRR because 41 of 62 paying teams arrived through Hacker News, dev.to, and r/engineeringmanagers — and every other channel you tried (Google Ads, LinkedIn DMs, a $1,200 Pragmatic Engineer sponsorship) burned cash and produced flat ROI.

  • Skip the next ad test. Turn the stale-doc detector — already running inside 89 monthly self-host installs you do not monetize — into a quarterly public benchmark report distributed through the exact channel that has converted every time.

  • Diffmode weighed your $400/mo marketing budget, 18 weekly hours, and solo-team constraints against 576 documented growth mechanisms and surfaced one pair an engineer-founder can run alone without buying a single click.

Run synthesis on your numbers

Get the plan synthesised for your product.

Diffmode pairs your specific budget, team, and stage against 576 documented growth mechanisms — and ships back a plan only your business could run.

Start my plan

Plan in your inbox within one business day. No credit card.

The tactic

What to actually run

The Docs-Rot Census

How the solo founder of an engineering-wiki SaaS turns 89 monthly self-host installs into a recurring industry report only one vendor in the category is legally allowed to publish

Here is the move. The stale-doc detector you already shipped — the one that flags wiki pages whose linked code paths have not been touched in 90 days — becomes a public opt-in census. Every self-host installer sees one 90-word consent prompt on first run. The ones who accept send three anonymized numbers into a Postgres table on the infrastructure you already pay for: page count, stale-page percentage, repo language. Nothing identifying. Ninety days later you publish the State of Engineering Docs Rot — one headline statistic, three charts, the raw CSV. dev.to long-form on a Tuesday morning. Show HN two hours after. Cross-post the methodology to r/engineeringmanagers as a discussion. No CTA button anywhere. The Census is the marketing. The numbers do the work.
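The three-number emit described above fits in a few lines. This is an illustrative client-side sketch, not the product's actual telemetry code: the field names (`page_count`, `stale_pct`, `repo_language`) and the consent-gate shape are assumptions.

```python
# Client-side sketch of the opt-in emit. Field names and the consent gate
# are illustrative assumptions, not the product's actual schema.
import json

ALLOWED_FIELDS = {"page_count", "stale_pct", "repo_language"}

def build_census_row(page_count, stale_pct, repo_language, consented):
    """Return the anonymized JSON row to send, or None if the user opted out.

    Only three coarse fields leave the install: no page contents, URLs,
    team identifiers, or IP addresses, matching the consent prompt's promise.
    """
    if not consented:
        return None  # default is off; nothing is sent
    row = {
        "page_count": int(page_count),             # a single integer
        "stale_pct": int(round(stale_pct)),        # a single integer, 0-100
        "repo_language": str(repo_language)[:32],  # one short string
    }
    assert set(row) <= ALLOWED_FIELDS  # nothing identifying can sneak in
    return json.dumps(row)
```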

Look at where your 62 paying teams actually came from. 41 arrived through Hacker News, dev.to, and r/engineeringmanagers — the surfaces staff engineers and EMs at small SaaS startups already live on. Google Ads cost you $640 for one conversion against Atlassian's retargeting wall. The $1,200 Pragmatic Engineer sponsorship produced two paid teams. The structural reason the Census beats both: Confluence and Notion cannot publish rot benchmarks against their own products without their legal team killing the post inside an hour. Outline, Slab, and Slite do not have a git-linked detector to instrument in the first place. You are the only vendor whose positioning — docs that do not lie — is reinforced by publishing the bad news. The smallness is the moat. No ad spend required to add a fourth channel. No agency.

Diffmode surfaced the pair that converts the 89 unmonetized self-host installs sitting on your existing $90/mo tool stack into a dataset competitors cannot replicate. Plain English: you publish the rot numbers nobody else can publish, distributed through the channel that has already converted 41 of 62 teams. The page hands you the consent prompt copy, the dev.to skeleton, the Show HN intro comment, the four-hour comment-response rule, and the kill criteria. Then you ship. One quarter, one statistic, one CSV. The Q2 report cites Q1. The Q3 report cites Q2. By Q4 the term docs rot is appearing in HN comment threads you did not start — and the only page that cleanly defines the term is yours.

Expected Results

180–340 dev.to reactions + 18–34 self-host opt-ins in Week 1; 1–4 paying customers in Month 1

By Month 3 the second Census release re-cites the first, the term docs rot starts appearing in unrelated HN threads, and 6–11 new paid teams per month become attributable to first-touch on a Census post or a downstream citation — Month 1 is for seeding the format, not closing revenue.

Budget Required

$0 net new spend

Postgres, Plausible ($9/mo), Mixpanel, and Sentry are already covered by the existing $90/mo tool budget; Datawrapper's free tier renders the charts; dev.to, Hacker News, and r/engineeringmanagers are free surfaces. The founder's 6–8 weekly writing hours are the only unbilled cost.

Time to Signal

14 days

Self-host opt-in rate measured against the 20–38% target band by end of Week 2; dev.to launch-post reactions and HN front-page surface visible inside 96 hours; first downstream Census citation typically lands by Week 8 if the headline statistic carries.

Why this combination wins

Stuck at $5,640 MRR for six months. Last five paying teams came from Hacker News and r/engineeringmanagers. Google Ads, LinkedIn DMs, and a $1,200 Pragmatic Engineer slot all produced flat ROI. The 89 monthly self-host installs do not monetize.
Free-tool data aggregation alone gives you a gated PDF report nobody upvotes on HN. Public-metrics transparency alone gives you a Baremetrics-style page only your own founder peers read. Together they produce category-defining numbers your incumbents are legally barred from publishing.

Tools You'll Need

  • Postgres + 60-line ingestion script. Purpose: stores opt-in anonymized rot metrics (page count, stale-page percentage, median age of flagged docs, repo language) from self-host installs in a table on infrastructure you already run. Cost: free (runs on existing infra). Setup: 4–6 hours, one-time.
  • dev.to. Purpose: publishes the launch long-form and every quarterly Census release on the founder's strongest historical channel, the same surface that brought Marc Rouleau in via the ADR-templates post. Cost: free. Setup: 0 minutes (account exists).
  • Datawrapper (free tier). Purpose: renders the rot-age distribution histogram, the percent-stale-by-team-size chart, and the language-cohort breakdown at publication quality without a design contractor. Cost: free up to 10K monthly chart views. Setup: 20 minutes per chart.
  • Plausible Analytics. Purpose: tracks Census landing-page visits segmented by referrer (dev.to, Hacker News, r/engineeringmanagers, direct) so the founder can tie trial signups back to a specific thread. Cost: $9/mo (already in the tool budget). Setup: 5 minutes.
  • Hacker News Show HN. Purpose: surfaces the Census to the highest-intent engineering-leadership audience in a single submission; one front-page surface delivers 8,000 to 25,000 visits. Cost: free. Setup: 5 minutes per quarterly release.

Week 1: Day-by-Day Plan

1
Ship the consent prompt and the ingestion pipeline so a self-host install can emit one anonymized row into your Postgres table
~3 hours
  • Write the 90-word opt-in consent screen titled Help us publish the first public rot benchmark, with three checkboxes — page count, stale-page percentage, repo language — and a hard No thanks button defaulting to off if the user just hits Enter.
  • Build the 60-line ingestion script that writes consenting installs into a new Postgres table on your existing infrastructure — no new tool, no new bill.
  • Stand up the public landing page at /docs-rot-census on the marketing site with a placeholder Q2 2026 report drops [date] block; the full report content lands Day 5.
  • Wire Plausible to segment /docs-rot-census visits by referrer — dev.to, Hacker News, Reddit, direct — so the launch traffic is attributable from minute one.

A self-host install on a clean machine shows the consent prompt once; accepting it writes one anonymized row to Postgres, and Plausible tracks /docs-rot-census visits by referrer.
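The 60-line ingestion script is mostly validation. A minimal server-side sketch under stated assumptions: the `census_rows` table and its column names are hypothetical, and the real script would hand the returned query and parameters to a Postgres driver such as psycopg. Building and validating the row is the testable part shown here.

```python
# Server-side sketch of the ingestion script. Assumed schema: a census_rows
# table with page_count, stale_pct, repo_language, received_on columns.
from datetime import date

INSERT_SQL = (
    "INSERT INTO census_rows (page_count, stale_pct, repo_language, received_on) "
    "VALUES (%s, %s, %s, %s)"
)

def ingest(row: dict):
    """Reject anything outside the three consented fields, then build the query."""
    if set(row) != {"page_count", "stale_pct", "repo_language"}:
        raise ValueError("unexpected fields: refuse rather than store")
    if not 0 <= int(row["stale_pct"]) <= 100:
        raise ValueError("stale_pct out of range")
    params = (
        int(row["page_count"]),
        int(row["stale_pct"]),
        str(row["repo_language"])[:32],  # one short string, truncated defensively
        date.today(),                    # aggregation date only; nothing identifying
    )
    return INSERT_SQL, params
```

Rejecting unknown fields outright, rather than silently dropping them, keeps the consent promise auditable: nothing the prompt did not name can ever reach the table.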

2
Pull the first 90 days of stale-doc data and lock down one headline statistic that carries the report
~3 hours
  • Pull 90 days of stale-doc detector output from your own production install plus 12 to 30 consenting beta self-host users contacted directly — even a small sample is defensible if you publish the bias honestly.
  • Pick the one headline statistic likely to make an engineer say huh, really — candidates include percent of wikis with at least one 180-day-stale page, median age of the oldest stale page, or percent of stale pages pointing at code modified inside the last 30 days.
  • Build three Datawrapper charts: rot-age distribution histogram, percent-flagged-stale by team size, language-cohort breakdown.

One headline statistic is locked in writing, three charts are exported as PNGs, and the dataset CSV is staged for publish alongside the long-form.
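Each headline-statistic candidate above is a single aggregation over the opt-in rows. A sketch of the first candidate (percent of wikis with at least one 180-day-stale page), assuming each consenting install reports the age of its oldest stale page under a hypothetical `max_stale_age_days` field:

```python
# Sketch of one headline-statistic candidate. The max_stale_age_days field
# is a hypothetical per-install aggregate, not the product's actual schema.

def pct_wikis_with_180d_stale(installs, threshold_days=180):
    """Percent of consenting installs whose oldest stale page is at least
    threshold_days old, rounded to the nearest integer."""
    if not installs:
        return 0  # no consenting installs yet; nothing to report
    hits = sum(1 for i in installs if i["max_stale_age_days"] >= threshold_days)
    return round(100 * hits / len(installs))
```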

3
Draft the dev.to launch long-form at 1,400 to 1,800 words with the methodology described honestly
~3 hours
  • Draft the post structured as hook (the headline statistic), methodology (200 words on what the detector measures and how the consent flow works), three Datawrapper charts inline, four implications for engineering teams (not for vendor selection), raw-data CSV link, and one closing sentence acknowledging the product made the measurement possible.
  • Hand the draft to the part-time technical writer (4 hrs/week) for a tone pass — cut any sentence that reads like vendor positioning rather than research.
  • Schedule the post for Day 4 at 8am ET, the Tuesday-morning window that historically performs best on dev.to.

Post is in dev.to draft state, charts render correctly in preview, the raw CSV is uploaded to the marketing site, and the writer has signed off on tone.

4
Launch the Census across dev.to, Hacker News, r/engineeringmanagers, and the Pragmatic Engineer comment surface in a single coordinated window
~2 hours
  • Publish the dev.to post at 8am ET; the first social-proof reactions land before HN voters see the link.
  • At 10am ET, submit to Hacker News as Show HN: We measured docs rot across N engineering wikis. Here is what we found — the body is four sentences linking to the dev.to article and the raw CSV.
  • Post to r/engineeringmanagers as a Discussion, not a link drop — frame it as: We tried to measure how fast engineering wikis go stale. Here is the data. Would love a methodology sanity-check.
  • Drop one non-promotional, data-only comment on the next Pragmatic Engineer Friday issue thread citing one statistic — same audience the $1,200 sponsorship reached, now reached for free with a referenceable artifact.

All four surfaces are live, and the founder is responding to every HN comment inside 15 minutes — engineers reward authors who defend their methodology publicly and fast.

5
Read the 24-hour signal, calendar the next four quarterly releases, and write the keep/cut note for Q2
~1 hour
  • Pull dev.to reactions, HN points, r/engineeringmanagers upvotes, and Plausible /docs-rot-census visits for the previous 24 hours.
  • Compare against the PMF-signal band — 180 to 340 reactions, one HN ≥50 points in four hours, 18 to 34 install opt-ins.
  • If at the floor: rewrite the headline statistic for Q2, not the whole tactic. If above ceiling: copy the same angle deeper.
  • Calendar the next four quarterly releases on the founder calendar — the report's value is its predictability, not novelty per drop.

The founder has read the signal, written a one-paragraph keep/cut note for Q2 in the founder notebook, and the next four Census release dates are blocked on the calendar.
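The Day 5 keep/cut read is mechanical enough to write down. An editorial sketch encoding the bands from this page (floor: under 90 reactions and under 25 HN points; ceiling: above 340 reactions and 34 opt-ins); the return strings are labels for the founder notebook, not product logic.

```python
# Editorial sketch of the Day 5 keep/cut decision rule. Thresholds come
# from this page's signal bands; the labels are illustrative.

def week1_verdict(dev_to_reactions, best_hn_points, install_opt_ins):
    """Map Week-1 numbers onto the one-paragraph keep/cut note."""
    if dev_to_reactions < 90 and best_hn_points < 25:
        return "rewrite the headline statistic for Q2"
    if dev_to_reactions > 340 and install_opt_ins > 34:
        return "copy the same angle deeper"
    return "hold the format; tune the statistic"
```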

Templates

Self-Host Opt-In Consent Prompt (90 words)
Use on the self-host installer first-run screen — shown exactly once, defaults to off if the user hits Enter without ticking any box. The consent copy is where the dataset moat actually compounds; vague copy here kills the opt-in rate, which is why the kill criterion is opt-in below 10 percent by end of Week 2.

Help us publish the first public rot benchmark

We run a quarterly report called the Docs-Rot Census — an industry-wide measurement of how fast engineering wikis go stale, what kinds of stacks rot fastest, and how team size correlates with documentation decay. If you opt in below, your install will send three anonymized data points to our aggregation table after each daily scan:

[ ] Total page count (a single integer)
[ ] Percent of pages flagged stale (a single integer)
[ ] Primary repo language (one short string)

No page contents. No URLs. No team identifiers. No IP address. Raw CSV of the aggregated dataset is published with every quarterly Census release at /docs-rot-census.

[ Opt in to the Census ]   [ No thanks ]

You can change this anytime in Settings → Privacy. Default if you skip this screen: off.

dev.to Census Long-Form Skeleton (1,400–1,800 words)
Use when drafting the launch post on Day 3 and every subsequent quarterly Census release. The structure is fixed across releases — the consistency is what makes the archive valuable to an engineer evaluating vendors, and the published methodology is what survives the inevitable HN methodology-grilling.

Title: We measured how fast engineering wikis go stale. Here is what [N] installs showed us.

[HOOK — 2 sentences] In [N] engineering wikis we instrumented over the last [PERIOD], [HEADLINE_STATISTIC — e.g. 47% had at least one page that has not been touched in 180 days while the code it documents was modified last week]. That is the lie cohort. This post is the methodology and the raw data.

[METHODOLOGY — 200 words] We run an open-source detector that flags a wiki page as stale when the git paths it references have been modified after the page's last-edit timestamp. [N] teams opted into anonymous aggregation — page count, percent flagged stale, median age of flagged docs, primary repo language. Nothing identifying. Sample skews toward [SAMPLE_BIAS — e.g. 5–40 engineer teams using markdown-native wikis]. The raw CSV is at the bottom of the post.

[FINDING 1 — chart + 3 sentences] [Chart: rot-age distribution histogram] [Three sentences interpreting the chart, including one counter-intuitive read — e.g. older wikis do not rot faster per page; they rot the same per page, but they have more pages.]

[FINDING 2 — chart + 3 sentences] [Chart: percent-flagged-stale by team size]

[FINDING 3 — chart + 3 sentences] [Chart: language-cohort breakdown]

[FOUR IMPLICATIONS — bullet list, for engineering teams not vendor selection]
- [Implication 1: about reading rot signals before they bite]
- [Implication 2: about ADR-template habits]
- [Implication 3: about the onboarding cost of rotted docs]
- [Implication 4: about why search quality is a lagging indicator of rot]

[CLOSING — 2 sentences] This data was collected by [PRODUCT_NAME]'s open-source self-host build. The methodology is in the README; the raw CSV is below. The Q[N+1] cut publishes on [DATE].

[Raw data CSV link]
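The detector rule the methodology section states, a page is stale when any git path it references was modified after the page's own last edit, fits in a few lines. A sketch under assumptions: pages are presumed to store their linked git paths and a last-edit Unix timestamp, the function names are illustrative, and `last_commit_ts` shells out to a git binary assumed to be on PATH.

```python
# Illustrative sketch of the staleness rule from the Census methodology.
import subprocess

def last_commit_ts(repo_dir, path):
    """Unix timestamp of the newest commit touching `path`; 0 if untracked."""
    out = subprocess.run(
        ["git", "-C", repo_dir, "log", "-1", "--format=%ct", "--", path],
        capture_output=True, text=True,
    ).stdout.strip()
    return int(out) if out else 0

def is_stale(code_commit_timestamps, page_last_edit_ts):
    """A page is stale when any code path it references was modified after
    the page's own last edit."""
    return any(ts > page_last_edit_ts for ts in code_commit_timestamps)
```

In production each page's linked paths would run through `last_commit_ts` and the results feed `is_stale` against the page's own edit time.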

Show HN Submission + First Author Comment
Use when posting any Census release to Hacker News at 10am ET on launch day — exactly two hours after the dev.to post goes live, so HN voters land on a thread that already has 30+ dev.to reactions as social proof in the comments. The first comment goes within 60 seconds of submission; HN's ranking algorithm weights early author engagement and the comment shape doubles the thread-engagement ratio.

Title field: Show HN: Docs-Rot Census — we measured stale-doc rates across [N] engineering wikis

URL field: [direct dev.to article URL — NOT the marketing site]

First-comment-by-author (post within 60 seconds of submission):

---
Author here. Two notes on methodology before the comments roll in:

(1) The sample is self-selected — teams opted in during a self-host install. That biases the dataset toward teams who already care about docs quality, which probably understates rot in the broader population. Raw CSV is linked at the bottom of the article if you want to slice it differently.

(2) Yes, we make a wiki product. Yes, we built the detector. The question I expect: is this just a marketing report. My answer: the headline number ([HEADLINE_STATISTIC]) is bad news for our category as much as for any incumbent. If it were a marketing report we would have gated the data. The CSV is free; the methodology is in the README.

Happy to discuss methodology, sampling bias, or anything in the data that looks weird.
---

Week 1 Checkpoint

By end of Week 1, the Census instrument is live, the launch artifact is in front of the audience that already drives 41 of 62 paying teams, and the 14-day signal tells you whether the headline carries or whether the Q2 release needs a different statistic.

  • 180–340 dev.to reactions on the launch post AND at least one of two HN submission attempts reaching ≥50 points within four hours
  • 18–34 self-host installs (of the ~22 weekly install flow) have opted into the Census telemetry, producing a defensible Q2 dataset without any new acquisition work
  • First downstream citation of the term docs rot in an unrelated HN comment thread or newsletter by end of Week 8 — the leading indicator that the category vocabulary is starting to anchor on the Census

When to pivot

If launch-post dev.to reactions stay below 90 (half the low band) AND both HN Show HN attempts die under 25 points within 24 hours, rewrite the headline statistic before the Q2 release — do not pivot away from the tactic on Week-1 numbers alone, because the Census is a brand-compounding move whose value lands by Month 3 through Census-on-Census citation, not single-launch revenue. If the self-host opt-in rate stays below 10 percent by end of Week 2, the consent prompt is broken — rewrite the copy before Q2 ships.

Weeks 2+: Scaling Schedule

  • Week 2. Focus: defend the methodology in public and pitch the dataset to three engineering newsletters that did not receive launch outreach. Tasks: reply within 15 minutes to every HN, dev.to, and r/engineeringmanagers comment that questions sampling or interpretation (engineers reward authors who defend data publicly, and those threads become future SEO surface); pitch the dataset, not the product, to Console.dev, Software Lead Weekly, and Pointer.io with the single sentence "raw CSV is public, no gate"; cut a 12-tweet thread from the dev.to post, one chart per tweet with the headline statistic as tweet 1, using the same format that produced the founder's 800K-impression doc-tooling thread. Time: 6 hours total.
  • Week 3 onward: available on Pro.

Read before you ship

Caveats

Plan the writing block as a recurring calendar entry, not as catch-up work. The Census needs 6 to 8 hours of weekly writing time on top of the 18 hours/week the founder already allocates to growth — and a quarterly week that spikes to 14 hours when the Census drops. Most of those hours land on weekends when OSS-issue triage is quiet. If the OSS contributor backlog spikes — and as the founder of a BUSL-licensed wiki you are also the maintainer that 14-engineer self-host installs ping when something breaks — the quarterly cadence is the first thing to slip. A missed quarter is what kills the format's credibility.

Budget ceiling: the existing $90/mo tool stack (Mixpanel, Plausible, Sentry, GitHub Actions overage) plus the $400/mo marketing envelope leaves zero room for a paid amplification layer. The Census deliberately costs nothing in paid spend so it fits inside the runway-protection limit. Do not pay to boost the dev.to post; do not buy a Pragmatic Engineer slot for the Census drop. The $1,200 sponsorship in the founder-input history produced two paid teams at flat ROI — a free organic comment citing one Census number on the same audience is the better extraction of that surface.

Skill gap: ad campaigns is the Limited skill in the founder-input table. Do not try to fix that with this tactic. If the Q1 Census underperforms the reaction band, the answer is to rewrite the headline statistic for Q2 — not to run a paid retargeting test on the article URL. The audience pattern-matches retargeting to vendor-speak, which is exactly the anti-trigger this audience scrolls past.

Audience reachability: the tactic depends on Hacker News, dev.to, and r/engineeringmanagers staying live channels for the staff-engineer and EM buyer segment. If your customer mix shifts toward 100-plus-engineer enterprise procurement — for example a SAML-tier wiki sale to a 200-engineer fintech via inbound — the audience surface fragments and the Census loses its specificity to the founder's existing channel mix. The kill criterion (zero downstream citations by end of Week 8) is the formal signal that the founder has moved out of the pair Diffmode synthesized for, not that the tactic is broken in the abstract.

Closest analogue

Case study: A Byte of Coding (Alex) — solo daily programming newsletter at $15K/year against 3,009 subscribers

Alex runs A Byte of Coding, a daily curated programming newsletter that crossed 3,009 subscribers and produced roughly $15,000 in revenue inside a single year — solo, no agency, no paid acquisition. Revenue mix: newsletter ad slots at $3 cost-per-click (~$300 per ad), occasional sponsored technical posts at $3,000 to $5,000 each, and a B2D marketing consulting line he added later. The numbers are public in his Indie Hackers write-up. The pricing rule Alex shares — start at $100 per 1,000 subscribers and raise $50 per sponsor until one balks — is the kind of operator math the engineering-wiki founder reading this page recognizes as actually-from-the-spreadsheet rather than blog-post-theory.

The fingerprint match is not the vertical — Alex sells curated programming articles, you sell an engineering-wiki SaaS. The match is the operator seat: solo technical founder, the audience already lives in the channel, zero paid acquisition, the moat is the dataset Alex publishes alongside the offer. Alex shares his click-data publicly when he pitches sponsors — exactly the move the Census prescribes for self-host opt-in benchmarks. His best-performing sponsored article landed on the first page of Hacker News, and he used that single data point to anchor every subsequent sales pitch, exactly the way Reliability-Log-style cadence anchors every later citation back to the first Census release. The cold-email reply rate Alex publishes — roughly 25 percent response, with 20 percent of responders converting, for a 4 percent overall cold-convert rate — is the kind of receipt-level transparency that turns a newsletter into a citable industry artifact instead of a marketing surface.

Alex broke through his own plateau (from ~1,500 to ~3,009 subscribers, and from incidental income to $15K/year) by treating the newsletter the same way the Census treats self-host installs: the data he was already collecting became the product he sold. He ran the equivalent of this play himself at the exact bootstrapped-solo MRR band the reader of this page is sitting at — and his pricing breakdown, sales-followup discipline (he repeats ALWAYS FOLLOW UP three times), and click-data-as-receipts approach all map cleanly onto the Docs-Rot Census mechanism. The artifact is on abyteofcoding.com if you want to verify the cadence before you commit to instrumenting your own self-host build.

Source: https://abyteofcoding.com

Failure modes

Anti-patterns

Do not gate the Census raw CSV behind a lead-capture form. The audience pattern-matches gated B2B benchmark PDFs to vendor marketing — every gated report on the internet has trained their gut to scroll past. Gating the CSV is what kills HN, dev.to, and r/engineeringmanagers distribution at once.

Do not pitch the wiki product inside the Census long-form. The closing sentence acknowledges the product made the measurement possible — that is the entire promotional surface. No Sign up free button. No Try the cloud version line. Promotional copy in the body trips the anti-trigger for corporate marketing in technical contexts, and the Show HN thread gets buried by an engineering crowd that values restraint.

Do not run the Census monthly. Quarterly is the floor. The audience pattern-matches frequent founder updates to AI-generated content farms; the value lives in the rarity of a cleanly-published benchmark. If a quarter passes without a meaningfully different dataset, skip — better to publish nothing than ship thin filler that erodes the format before the second release.

Do not argue with HN methodology critics in-thread. The four-hour rule says respond with numbers, not opinions. If a commenter says your sampling biases toward teams that already care about docs quality, the answer is to publish that critique in next quarter's Census as a methodology footnote — not to defend the sample in the thread. Defending in-thread reads as marketer-voice; updating the next release reads as operator-voice.

Do not buy retargeting on the Census post. The Atlassian-pixel-saturated audience that killed the $640 Google Ads test will also kill any paid amplification on the Census. Free dev.to and HN distribution is category-defining; paid contaminates the trust the format is built on.

Run it against your numbers

Get a tailored plan for your business by tomorrow.

Run Diffmode against your specific budget, team, and stage. Anton emails a tailored plan within one business day — written for the constraints only your business has.

Start my plan

Free to start. No credit card.