17 questions before you write a line of code
Published on Feb 15, 2026
Getting Real is a framework for building products without wasting time on things that don’t matter. It assumes you’ve already decided what to build. This is what I run before that.
If your idea fits the Signal-to-Artifact pattern - raw public data converted into a shareable artifact that replaces a manual workflow - run the five-question pre-filter first. It takes ten minutes and kills most ideas before you need this checklist.
The 17-Point Framework
Demand
1. Revenue proof exists. Someone is already making money from something like this. Not “people would pay” - people are paying, right now, for something adjacent enough to count as proof.
2. Complaints are specific and findable. Search G2, Reddit, and Hacker News. The pain should be documented, vocal, and specific. “I spend 2-4 hours reading competitor reviews manually before quarterly planning” passes. “Companies want better insights” fails.
3. The paying customer is obvious. You can name a job title, a trigger event, and the moment of acute need. No buyer fragmentation. If you’re unsure who pulls out the card, the idea isn’t ready.
Distribution
4. The output is a showable artifact with a built-in content loop. Can the core output - a report, a score, a ranking, a URL - be shared without requiring login? Does sharing it create demand for more? If the value only exists inside a dashboard, distribution requires separate work you haven’t planned for.
5. Someone who saw it two weeks ago can find it again. This is a proxy for memorability and SEO potential. If you can’t describe the product in a search query someone would actually type, you have a discovery problem.
Revenue
6. A zero-friction entry channel exists. There’s a pay-per-use, freemium, or impulse-purchase option that gets first dollars in without requiring a sales call or annual commitment. The path from “I just found this” to “I just paid for this” should be under five minutes.
7. Price × volume math works and the buyer is clear. Pick the realistic path: 150 subscribers at $100/mo, or 30 at $500/mo. Run the numbers. B2B vs B2C is decided - who pulls out the card is named upfront, not figured out after launch.
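“Run the numbers” can be a literal two-minute exercise. A minimal sketch, using the illustrative subscriber counts and prices from the item above (substitute your own):

```python
# Sanity-check the price x volume paths from item 7.
# The two paths below are the illustrative figures from the checklist,
# not targets - plug in your own realistic numbers.

def mrr(subscribers: int, price_per_month: float) -> float:
    """Monthly recurring revenue for one pricing path."""
    return subscribers * price_per_month

paths = {
    "low-price / high-volume": mrr(150, 100),  # 150 subscribers at $100/mo
    "high-price / low-volume": mrr(30, 500),   # 30 subscribers at $500/mo
}

for name, revenue in paths.items():
    print(f"{name}: ${revenue:,.0f}/mo")
```

Both illustrative paths land at the same monthly revenue; the point of the exercise is deciding which buyer you can actually name.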
Build
8. One person can build the core MVP in one week, with AI-assisted coding. Not a polished product - a working version that produces real output. If it takes longer, the scope is wrong.
9. You can personally judge whether the output is good. You have enough domain familiarity to look at the output and know immediately if it’s right or broken. If you need an expert to evaluate quality, you’ll be flying blind during iteration.
10. The “50% better” gap is specific and nameable. Not “more intelligent” or “deeper analysis.” Name the specific incumbent weakness and the specific way you close it. 50% better means a category shift - what Linear did to Jira - not a speed or price improvement on the same thing. If someone who uses the alternative would see your output and say “I could never get this from my current tool,” you pass.
Cold Start
11. Supply can be pre-populated without waiting for signups. For data products: generate the first 100 outputs before launch. For marketplaces: you can seed one side. If the product is useless until it has users, the cold start is a structural problem.
12. The product inserts into an existing habit. No behavior change required. Someone is already doing this manually - weekly, monthly, as part of a workflow. You’re replacing a step in that workflow, not creating a new one.
Durability
13. It gets harder to copy over time. Data moat, brand trust, historical baselines, network effects. Name the specific mechanism. “We’ll have more data” is not a moat unless you explain why your data accumulates and competitors’ doesn’t.
14. General-purpose AI tools cannot replicate the core output. Run your core query through Claude Deep Research and Perplexity before scoring this. Don’t imagine - go look. If the gap is formatting, not substance, you fail. Durable differentiation requires proprietary data access (scraping-gated sources, first-party operational data) or continuous monitoring with historical baselines - not better synthesis of publicly indexable content.
Implementation
15. The data you need is actually accessible at the depth you need. Build a Feature × Data Field × Source × Coverage % table for your core features. Run a real test - inspect actual output, not documentation claims. If any core feature has under 60% coverage, it fails or must be de-scoped. This step has killed more ideas than any other.
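The Feature × Data Field × Source × Coverage table from item 15 can be as simple as a list of tuples plus the 60% cutoff. A sketch, where the features, fields, sources, and percentages are hypothetical placeholders - only the threshold comes from the checklist:

```python
# Hypothetical coverage table for item 15. Replace the rows with measured
# coverage from a real test of your actual sources, not documentation claims.

COVERAGE_THRESHOLD = 0.60  # the cutoff named in the checklist

# (feature, data field, source, measured coverage)
rows = [
    ("review summary",  "review_text", "G2 scrape",    0.85),
    ("pricing compare", "list_price",  "vendor sites", 0.55),
    ("sentiment trend", "review_date", "G2 scrape",    0.90),
]

for feature, field, source, coverage in rows:
    verdict = "OK" if coverage >= COVERAGE_THRESHOLD else "FAIL / de-scope"
    print(f"{feature:16} {field:12} {source:12} {coverage:5.0%}  {verdict}")
```

Any core feature that prints FAIL either kills the idea or gets de-scoped before you build.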
16. Pipeline complexity is bounded and unit economics work. Count your external dependencies (APIs, scrapers, LLM services, databases). More than four is a warning sign - each one is a failure point and a maintenance burden. Separately: run the pipeline for 10 real outputs and measure actual costs against your lowest price tier. COGS should be under 40% of revenue.
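The unit-economics half of item 16 is a short calculation once you have real cost measurements. A sketch with hypothetical dollar figures - the 40% ceiling is the checklist's, everything else is placeholder:

```python
# Item 16 unit-economics check: total the measured per-output costs from a
# run of 10 real outputs, then compare against your lowest price tier.
# All dollar amounts here are hypothetical placeholders.

per_output_costs = {
    "llm_tokens": 0.42,  # LLM API spend per output
    "scraping":   0.10,  # proxy / scraping service
    "hosting":    0.05,  # amortized compute and storage
}

cost_per_output = sum(per_output_costs.values())
lowest_tier_price = 2.00  # the cheapest thing a customer can buy
cogs_ratio = cost_per_output / lowest_tier_price

print(f"COGS per output: ${cost_per_output:.2f} ({cogs_ratio:.0%} of lowest tier)")
print("PASS" if cogs_ratio < 0.40 else "FAIL - margins too thin")
```

Count the keys in `per_output_costs` while you're at it: each external dependency in that dict is also one of the failure points the complexity half of the item warns about.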
Pre-Revenue Commitment
17. Total spend before first revenue is under $500 and under three weeks. For a solo bootstrapped founder, this is the budget. List every cost. If you can’t ship something real within these constraints, the scope is wrong or the idea is wrong.
Scoring
- 15-17: Strong conviction - build immediately
- 11-14: Viable with known gaps - build if gaps are addressable in the first two weeks
- 7-10: Multiple structural issues - needs reframing before committing
- Below 7: Kill it
The bands matter less than the specific items you fail. A fail on #3 (paying customer obvious) or #14 (AI replication) is usually fatal. A fail on #5 (memorability) is usually fixable.
How to score honestly
The failure mode of any checklist is motivated reasoning. You want the idea to pass, so you find a way to make each item pass.
Three patterns to watch for:
- Adjacent revenue isn’t direct revenue. “Revenue proof exists” fails if the evidence is that a different kind of product in a vaguely related market makes money. It passes when someone is paying for something close enough that your product is a direct substitute or improvement.
- Vague differentiation isn’t differentiation. “More intelligent” and “deeper analysis” don’t name a gap. The 50% better claim passes when you can finish this sentence with something specific: “Anyone using [incumbent] cannot get [specific thing] - and we can.”
- Future moats aren’t current moats. “We’ll build a data moat over time” fails. A moat that doesn’t exist at launch is a hope. Score what exists today.
After your first pass, do an adversarial review: for every item you marked pass, write one sentence of specific evidence. If the sentence is vague, downgrade it. If you can’t write the sentence, it fails.
This framework is specific to solo founders building bootstrapped SaaS products. It draws on work by Samuel Rondot and Rob Walling, iterated through real evaluation sessions.