FoundersProcess · Jun 2025

UAT for Founders: How to Test Your Product Before Launch

Most founders I've worked with think about testing too late, test the wrong things, or skip it entirely and cross their fingers.

The result is usually the same: they launch, real users immediately find problems that should have been caught before launch, and the early impression of the product — which is the hardest impression to change — is one of roughness and unreliability.

User Acceptance Testing (UAT) doesn't require a QA team, expensive tools, or a formal engineering background. It requires a systematic approach, a clear set of scenarios, and people willing to use your product honestly. Here's how to do it properly.

What UAT Is and Isn't

UAT is the final phase of testing before launch. Its purpose is to verify that the product does what it's supposed to do, from the perspective of a real user trying to accomplish a real goal.

It is not a bug hunt by your development team. Developers test their own code throughout the build — unit tests, integration tests, QA checks. UAT happens after all of that, with people who didn't build the product, using it the way actual users will.

It is also not the same as user research. User research helps you decide what to build. UAT verifies whether what you built works.

The distinction matters because they require different participants and different approaches. Your developers can't run your UAT — they know too much about how the system works and will unconsciously avoid the edge cases. Your target users are ideal UAT participants, but they need to be testing against defined scenarios, not just "have a play and tell me what you think."

Start With a Test Plan

Before you put anyone in front of your product, write down what you're testing.

A test plan doesn't need to be complex. For each feature area, you need:

Scenario: What is the user trying to do? Written from the user's perspective, not the system's. "A new user creates an account, verifies their email, and completes the onboarding flow" — not "test the authentication system."

Expected result: What should happen if the product is working correctly? Be specific. Not "the user should be able to log in" but "after entering valid credentials and clicking Sign In, the user should see their dashboard within 3 seconds."

Test data: What information does the tester need? If you're testing a payment flow, they need test card numbers. If you're testing a search, they need to know what's in the system to search for. Don't make testers guess.

Pass/Fail/Partial/Blocked criteria: How does the tester record the result? I use four categories: Pass (worked as expected), Fail (didn't work), Partial (worked but not completely or not well), Blocked (couldn't test because something earlier broke). This structure makes the data immediately usable — you can sort by result and prioritise fixes without reading through everything.
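The four fields above can live in any tool, but it helps to treat each test case as one structured record. A minimal sketch in Python — the field names and the example case are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass
from enum import Enum

class Result(Enum):
    PASS = "Pass"          # worked as expected
    FAIL = "Fail"          # didn't work
    PARTIAL = "Partial"    # worked, but not completely or not well
    BLOCKED = "Blocked"    # couldn't test because something earlier broke
    NOT_RUN = "Not run"

@dataclass
class TestCase:
    feature_area: str
    scenario: str          # written from the user's perspective
    expected_result: str   # specific and observable, with numbers where possible
    test_data: str         # everything the tester needs — don't make them guess
    result: Result = Result.NOT_RUN
    notes: str = ""

# Hypothetical example case
signup = TestCase(
    feature_area="Onboarding",
    scenario="A new user creates an account, verifies their email, "
             "and completes the onboarding flow",
    expected_result="After entering valid credentials and clicking Sign In, "
                    "the user sees their dashboard within 3 seconds",
    test_data="A fresh email inbox the tester controls",
)
```

Because the result is an enum rather than free text, sorting and counting by outcome later is trivial.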

When I built the UAT system for SwissPay, I had 170+ test cases across 18 feature areas. That sounds like a lot, but payment products genuinely have many flows to test — onboarding, KYC, payment initiation, receipt generation, transaction history, error states, settings. If you're testing a simpler product, you might have 30–50 test cases. The point is to cover every meaningful path a user might take.

Who Should Test

Not your developers. They know the happy path and their brains will take it automatically.

Not just you. You know the product too well and you'll rationalise rough edges that users will stumble over.

Ideal testers:

  • People who match your target user profile — they'll naturally do the things real users do
  • People who are slightly non-technical — they'll find friction that technically-minded people overlook
  • People willing to be honest — you want to know what breaks, not what works

How many? For an early-stage product, 5–10 testers will surface the majority of critical issues. Beyond that, returns diminish quickly. Nielsen's research on usability testing suggests that five testers find roughly 85% of usability problems — a useful benchmark even for functional testing.

Brief them properly. Tell testers what you're testing and why it matters. Give them the test cases in a format they can follow. Tell them to report everything that feels wrong, even if they're not sure it's a bug. Some of the most valuable UAT feedback comes from testers who say "I wasn't sure if it was supposed to work like this, but it felt weird."

How to Capture Results

You need a structured way to capture results that makes the data usable. The most common UAT failure isn't in the testing — it's in the reporting. Testers send emails, WhatsApp messages, Loom videos, and verbal feedback that all have to be manually consolidated into something engineering can act on.

The tool doesn't matter much. I've used Google Forms with Apps Script (which gave me automated aggregation and a daily summary email to engineering), simple Notion databases, and Airtable. What matters is that every tester uses the same format for every test case, so you end up with comparable data across testers and features.

At minimum, capture per test case:

  • Tester name
  • Date and device (browser, OS, mobile/desktop)
  • Feature area
  • Test scenario
  • Pass/Fail/Partial/Blocked
  • Notes (what specifically happened, error messages, screenshots)
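One way to keep every tester on the same format is to fix the schema in code and reject anything that doesn't match it. A sketch in Python — the filename, field names, and example row are hypothetical:

```python
import csv

# Fixed column order: every tester's submission maps onto the same schema
FIELDS = ["tester", "date", "device", "feature_area",
          "scenario", "result", "notes"]

def append_result(path, row):
    """Append one test-case result, rejecting rows whose fields don't match the schema."""
    if set(row) != set(FIELDS):
        raise ValueError(f"Row must have exactly these fields: {FIELDS}")
    with open(path, "a", newline="") as f:
        csv.DictWriter(f, fieldnames=FIELDS).writerow(row)

# Hypothetical submission
append_result("uat_results.csv", {
    "tester": "Maya",
    "date": "2025-06-03",
    "device": "Safari / iOS 17 / mobile",
    "feature_area": "Payments",
    "scenario": "Send a minimum-amount payment",
    "result": "Partial",
    "notes": "Worked, but confirmation took ~8s",
})
```

The same validation idea applies whether the backend is a Google Form, Notion, or Airtable: the point is that no result enters the dataset in a shape you can't aggregate.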

If you can, add screenshot capture. A bug described in text is less useful than a screenshot of it. For complex interaction bugs, short Loom videos are worth the extra effort.

What to Test Beyond the Happy Path

Most products test the happy path adequately — the flow works when everything goes right. What kills early-stage products is the unhappy path — what happens when things go wrong.

Error states. What does the user see when they enter invalid data? When the network drops mid-flow? When a required field is empty? Error messages that say "Error: 400" mean nothing to a user. Error states that aren't designed leave users stranded.

Empty states. What does the user see when they first log in and there's nothing there yet? A blank screen with no guidance is one of the most common causes of new user drop-off. The empty state needs to either give them a next action or explain what they're looking at.

Mobile. If your product has any mobile users, test on actual mobile devices — not just browser dev tools. The experience is often significantly different, and things that work on desktop routinely break on mobile.

Edge cases in your core flow. For SwissPay, this meant testing payment flows with amounts at the limits (very small, maximum allowed), with different currency combinations, with network interruptions mid-transaction. These aren't exotic scenarios — they're things real users will hit.
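Boundary amounts like these can be enumerated mechanically rather than picked ad hoc — classic boundary-value analysis. A minimal sketch (the limits are hypothetical, not SwissPay's actual values):

```python
def boundary_amounts(min_amount, max_amount):
    """Boundary-value amounts, each worth a dedicated test case."""
    return [
        min_amount,        # smallest allowed: should succeed
        min_amount - 1,    # just below the floor: should be rejected cleanly
        max_amount,        # largest allowed: should succeed
        max_amount + 1,    # just above the ceiling: should be rejected cleanly
    ]

# Hypothetical limits: 1 minor unit minimum, 1,000,000 maximum
cases = boundary_amounts(1, 1_000_000)
```

The rejection cases matter as much as the success cases — they exercise exactly the error states described above.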

Permissions and access. If your product has different user roles, test each role explicitly. Role-based access bugs are embarrassing and sometimes have security implications.

What to Do With the Results

By the end of UAT, you should have a clear picture of:

  • Pass rate by feature area (which parts of the product are solid and which aren't)
  • A prioritised list of failures and their severity
  • A decision about launch readiness

On severity: not all failures are equal. A typo on a confirmation screen and a payment that fails silently are both Fails. But the payment failure is a launch blocker and the typo is a nice-to-fix. Define your launch criteria before you review results, not after — the bar is easier to set honestly before the data is in front of you.

A pass rate above ~85% on critical user flows, with no launch-blocking issues unresolved, is a reasonable bar for a controlled beta with real users. Full public launch typically warrants a higher bar.
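That readiness decision reduces to a small calculation. A sketch under stated assumptions — the result structure and the `blocker` flag are hypothetical, and the 85% threshold matches the bar above:

```python
def launch_ready(results, threshold=0.85):
    """results: list of dicts with 'feature_area', 'result', and 'blocker' keys.
    Returns (ready, pass_rate_by_area)."""
    by_area = {}
    for r in results:
        area = by_area.setdefault(r["feature_area"], {"pass": 0, "total": 0})
        area["total"] += 1
        if r["result"] == "Pass":
            area["pass"] += 1
    rates = {a: c["pass"] / c["total"] for a, c in by_area.items()}
    # Any non-passing result on a blocking case vetoes launch outright
    has_blocker = any(r["result"] != "Pass" and r.get("blocker") for r in results)
    ready = not has_blocker and all(rate >= threshold for rate in rates.values())
    return ready, rates

# Hypothetical results: one silent payment failure, one cosmetic issue
results = [
    {"feature_area": "Payments", "result": "Pass", "blocker": True},
    {"feature_area": "Payments", "result": "Fail", "blocker": True},
    {"feature_area": "Settings", "result": "Partial", "blocker": False},
]
ready, rates = launch_ready(results)
```

Note that the blocker check runs before the pass-rate check: a single blocking failure means not ready, however good the overall numbers look.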

The Broader Point

UAT isn't just about catching bugs before launch. It's about building the habit of systematic verification before you ship.

Products that feel polished to users aren't necessarily built better than products that feel rough. They're tested better. The edge cases were found before users hit them. The error states were designed. The empty states were thought through.

This is achievable without a QA team and without expensive tooling. It requires taking the time to define test cases clearly, recruiting the right testers, and using the results systematically.

The hour you spend writing test cases before launch is usually worth more than the week you spend firefighting after it.