Technical Process · Dec 2025

Building a UAT System with Google Forms and Apps Script

When I joined Winnov8 as Technical Product Manager for SwissPay, one of the first things I needed to figure out was how we were going to test the product before launch.

SwissPay was a mobile-first tap-to-pay payment solution targeting African markets. The product covered onboarding, KYC, payment flows, SoftPOS integration, FX conversion, transaction history, settings, and more. Getting any of it wrong in a live environment — especially around payments — wasn't an option.

We had a small team, no dedicated QA function, and no budget for enterprise test management tools. What we did have was a beta group of real users willing to test, a tight deadline, and a product that had to work.

So I built a UAT system using Google Forms and Apps Script. It handled 170+ test cases across 18 features. Here's exactly how I did it.

Why Google Forms?

The honest answer is: because everyone already has it.

Beta testers don't want to learn new tools. They're doing you a favour. The less friction between them and the testing task, the more responses you'll get and the better the data will be.

Google Forms is universally familiar, works on any device, and captures responses in a Google Sheet automatically. That last part is important — once responses are in a Sheet, you can do almost anything with Apps Script.

There are real limitations: you can't attach screenshots natively, branching logic is basic, and it isn't designed for this use case. But for a lean team moving fast, the low barrier to entry outweighed what a purpose-built tool would have added.

Structuring the Test Cases

Each test case needed to capture:

  • Feature area (e.g. Onboarding, Payments, Transaction History)
  • Test scenario (what the tester is trying to do)
  • Expected result (what should happen)
  • Actual result (what did happen — pass/fail/partial)
  • Notes (anything unexpected, error messages, device details)
  • Tester name and date

I organised the 18 feature areas into separate sections within a single Google Form. Within each section, test cases were individual questions with a structured format: scenario description at the top, then a multiple-choice question for Pass / Fail / Partial / Blocked, followed by an optional text field for notes.
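That per-test-case structure can be sketched as data. The function and field names below are illustrative, not the original script; in Apps Script, a loop over these specs would call the corresponding FormApp methods (a page-break or section header, a multiple-choice item, a paragraph-text item) to build the form.

```javascript
// Illustrative sketch: the three-part question group used for each test case.
// In Apps Script, FormApp would consume each spec to create the real items.
const RESULT_CHOICES = ["Pass", "Fail", "Partial", "Blocked"];

function testCaseQuestions(featureArea, scenario) {
  return [
    // Scenario description shown above the answer fields
    { type: "sectionHeader", title: `${featureArea}: ${scenario}` },
    // Forced classification, so every response is immediately filterable
    { type: "multipleChoice", title: "Result", choices: RESULT_CHOICES, required: true },
    // Free text only as a supplement, never as the primary answer
    { type: "paragraphText", title: "Notes (errors, device details)", required: false },
  ];
}
```

For example, `testCaseQuestions("Payments", "Tap-to-pay initiation")` yields the three items for that single test case.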

The structured format was deliberate. If you leave testers with an open text box and "tell us what happened," you get wildly inconsistent responses. Forcing the Pass/Fail/Partial/Blocked classification first means the data is immediately usable — you can filter on it, aggregate it, and prioritise fixes without reading every single note.

Automating the Response Pipeline with Apps Script

Raw form responses in a Google Sheet are useful but not immediately actionable. I wanted:

  1. A live dashboard showing pass rates by feature area
  2. Automatic flagging of failures and blocked tests
  3. A summary email I could send to the engineering team at the end of each testing day

All of this runs on Apps Script — JavaScript that runs server-side in Google's infrastructure, triggered by form submission.

Aggregating Results

When a form response comes in, a trigger function fires. It reads the new row, identifies the feature area and result classification, and updates a summary sheet.
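The parsing step of that trigger can be sketched as a pure function. The names and column order here are assumptions for illustration; in Apps Script, the row would arrive as `e.values` on the form-submit event object, and this function would be called from the installed trigger.

```javascript
// Illustrative sketch (not the original script): turn a raw response row
// into a structured record. Assumed column order: timestamp, tester name,
// result classification, notes.
function parseResponse(values, featureArea) {
  const [timestamp, tester, result, notes] = values;
  return { timestamp, tester, featureArea, result, notes: notes || "" };
}
```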

The summary sheet tracks total tests, passes, failures, partials, and blocked counts per feature area. Pass rate is a simple formula: =passes/total.
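The aggregation itself is straightforward. A minimal sketch, with illustrative names (in Apps Script the result would be written back to the summary sheet rather than returned):

```javascript
// Illustrative sketch: tally parsed responses into per-feature counts
// plus a pass rate, mirroring the summary sheet's columns.
function summarise(responses) {
  const summary = {};
  for (const { featureArea, result } of responses) {
    const s = summary[featureArea] ||
      (summary[featureArea] = { total: 0, Pass: 0, Fail: 0, Partial: 0, Blocked: 0 });
    s.total += 1;
    s[result] += 1;
  }
  for (const s of Object.values(summary)) {
    // Same calculation as the sheet formula =passes/total
    s.passRate = s.total ? s.Pass / s.total : 0;
  }
  return summary;
}
```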

Flagging Failures

Any test that comes in as Fail or Blocked triggers an automatic highlight — the row turns red in the responses sheet. I also log these to a separate "Issues" sheet that the engineering team has direct access to.

Engineering could see new failures as they came in during the day, rather than waiting for a report. This sped up the fix cycle significantly.
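The flagging rule is a one-line decision plus a record for the Issues sheet. A sketch under the same assumed record shape as above; in Apps Script, the caller would also colour the row (for example via `range.setBackground(...)`) and append the record to the Issues sheet:

```javascript
// Illustrative sketch: decide whether a response needs engineering
// attention, and if so, build the row for the Issues sheet.
function toIssue(response) {
  if (response.result !== "Fail" && response.result !== "Blocked") return null;
  return {
    feature: response.featureArea,
    result: response.result,
    notes: response.notes,
    reportedBy: response.tester,
  };
}
```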

Daily Summary Email

At the end of each testing day, a time-based trigger fires a summary email to the team. It pulls the current state of the summary sheet and formats it.

Simple, but it meant the engineering team woke up each morning with a clear picture of where things stood.
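The formatting step can be sketched as a pure function over the summary produced by the aggregation step. Names are illustrative; in Apps Script, `MailApp.sendEmail` would deliver the resulting plain-text body from the time-based trigger:

```javascript
// Illustrative sketch: format per-feature counts into the daily
// summary email body.
function formatSummary(summary) {
  const lines = ["UAT summary for today:", ""];
  for (const [feature, s] of Object.entries(summary)) {
    const rate = Math.round((s.Pass / s.total) * 100);
    lines.push(`${feature}: ${s.Pass}/${s.total} passed (${rate}%), ` +
               `${s.Fail} failed, ${s.Blocked} blocked`);
  }
  return lines.join("\n");
}
```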

What the 170+ Test Cases Covered

Across 18 feature areas, the test cases covered:

  • Onboarding: Registration flow, email verification, phone number capture, error handling for duplicate accounts
  • KYC: Document upload, ID verification states, rejection flows, retry logic
  • Payments: Tap-to-pay initiation, NFC detection, payment confirmation, receipt generation
  • SoftPOS: Terminal detection, connection stability, transaction completion
  • FX: Currency selection, rate display, conversion accuracy, rate refresh
  • Transaction History: List view, filtering, individual transaction detail, date ranges
  • Settings: Profile updates, notification preferences, linked accounts, security settings
  • Error states: Network failures, timeout handling, retry flows, error messaging

For each area, I wrote the expected result explicitly. This matters more than it sounds. "Payment should go through" is a bad test case. "After tapping the device on the terminal, the user should see a success screen within 3 seconds with a transaction reference number" is a test case someone can actually verify.

What I'd Do Differently

Screenshots. The biggest gap in this system was screenshot capture. Testers had to describe bugs in text, which is inefficient and often incomplete. If I were building this again, I'd add a Google Drive folder per tester and include a file upload field in the form — or use a tool like Loom for video capture of failures.

Device and OS capture. Payment behaviour varies significantly across Android versions and device manufacturers. I added a free-text device field late in the process, but it should have been a structured dropdown from the start.

Severity classification. Pass/Fail/Partial/Blocked is good, but it doesn't capture severity. A typo on a confirmation screen and a payment failing silently both count as Fail. Adding a severity field (Critical / High / Medium / Low) would have made prioritisation faster.

The Result

The UAT system ran for three weeks before our beta launch. By the end, we had a clear pass rate per feature area, a prioritised issues list for engineering, and confidence in the areas that mattered most.

We caught payment edge cases that would have been embarrassing in a live environment. We identified UX friction in the onboarding flow before real users hit it. And we did it with no specialist tooling and minimal overhead.

The system isn't glamorous. But it worked — and it cost nothing to run.

If you're approaching a product launch and need a way to structure testing without a QA team, this approach is worth considering. The fundamentals — structured test cases, automated aggregation, daily visibility for engineering — transfer to almost any product context.

The Google Forms template and the Apps Script are available on request.