How a broken Cypress config at 2am turned into a product. This is the unfiltered version.
If you've maintained a Cypress or Selenium suite, you know the feeling. Tests that passed yesterday break today because someone moved a button. A CSS class changed. A modal got refactored. The site works fine — the tests don't.
The culprit is always the same: IDs change, classes get renamed, the DOM restructures. The site still works. The tests don't care.
Writing a Cypress test for a multi-step form takes longer than building the form. Most teams give up and test the happy path only.
Every UI change means updating tests. Developers avoid the test suite. Eventually nobody trusts it.
So why keep doing it? Because the alternative is finding out your checkout flow is broken when a customer emails you about it.
We started experimenting with AI and natural language. Instead of writing selectors and page objects, we described tests in plain English: "Navigate to the signup page. Enter a name. Click Submit."
AI interpreted the instructions and drove a real browser. No selectors. No framework lock-in. And when someone moved a button or renamed a label, the tests kept passing — because they described intent, not implementation.
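To make the idea concrete, here is a minimal sketch of what "interpret plain English, produce browser actions" looks like in shape. Everything here is illustrative: a real system hands each sentence to an LLM, while this sketch uses keyword rules, and the `Step` structure and function names are assumptions, not the product's actual API.

```python
# Hypothetical sketch: plain-English instructions -> structured browser actions.
# A real implementation would use an LLM per sentence; keyword rules here
# only demonstrate the shape of the output.
import re
from dataclasses import dataclass

@dataclass
class Step:
    action: str  # "navigate", "fill", or "click"
    target: str  # URL fragment, field description, or button label

def interpret(instructions: str) -> list[Step]:
    """Split instructions into sentences and map each to a browser action."""
    steps = []
    for sentence in re.split(r"\.\s*", instructions):
        s = sentence.strip()
        if not s:
            continue
        lower = s.lower()
        if lower.startswith("navigate to"):
            steps.append(Step("navigate", s[len("navigate to"):].strip()))
        elif lower.startswith("enter"):
            steps.append(Step("fill", s[len("enter"):].strip()))
        elif lower.startswith("click"):
            steps.append(Step("click", s[len("click"):].strip()))
    return steps

plan = interpret("Navigate to the signup page. Enter a name. Click Submit.")
for step in plan:
    print(step.action, "->", step.target)
```

Note what's absent: no selectors anywhere. The steps describe intent ("the signup page", "Submit"), which is why a moved button or renamed class doesn't break them.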
It was good. Fast to write, resilient to UI changes. We started replacing our Cypress suite with natural language tests and never looked back.
Then we scaled it up. Added a harness to run batches. Pointed AI at dozens of test scenarios running on a schedule. It worked great — until the invoice arrived.
AI costs at scale are brutal. Every test run meant AI tokens. Every scheduled check, every retry, every re-run. The per-execution cost made it impractical for continuous monitoring. We needed a different approach.
The solution: AI runs once. It interprets your instructions, drives the browser, and records every interaction as structured steps. After that, replays execute those recorded steps without invoking AI. No per-run AI token charges.
AI interprets your natural language instructions and drives a real browser. Every click, every form fill, every navigation is recorded.
Subsequent runs replay the recorded steps through a resilience framework. No AI involvement means predictable, low-cost execution.
If a replay fails because the UI changed, AI re-engages to adapt and re-records for future runs. You don't touch anything.
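The three phases above can be sketched as a single loop. The step format, the `execute` callback, and the `re_record` hook are assumptions for illustration; the point is the control flow: replays are AI-free until something breaks.

```python
# Hypothetical sketch of the record-once / replay-many loop.
# Step format and hook names are illustrative, not the product's real API.

def replay(recorded_steps, execute, re_record):
    """Run recorded steps without AI. If a step fails (the UI changed),
    fall back to re_record, which re-engages AI and returns fresh steps
    to persist for future runs."""
    for i, step in enumerate(recorded_steps):
        try:
            execute(step)  # cheap, deterministic, no AI tokens
        except Exception:
            # UI drifted: re-interpret the original intent with AI,
            # re-record from the failed step onward, keep the new steps.
            return re_record(recorded_steps, failed_at=i)
    return recorded_steps  # unchanged: replay stayed AI-free

# Toy usage: the second step's target broke, so the fallback "adapts" it.
steps = [{"action": "navigate", "target": "/signup"},
         {"action": "click", "target": "#old-submit"}]

def execute(step):
    if step["target"] == "#old-submit":
        raise RuntimeError("element not found")

def re_record(steps, failed_at):
    fixed = dict(steps[failed_at], target="button:Submit")
    return steps[:failed_at] + [fixed] + steps[failed_at + 1:]

steps = replay(steps, execute, re_record)
```

After the one repair run, `steps` contains the adapted target, and every subsequent replay goes through the cheap path again with no AI involvement.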
We built this for ourselves. Internal tooling to solve our own testing problem. But then other people saw it and asked if they could use it.
So we made a decision: build it as a product. Package it up, add multi-tenancy, build a dashboard, add scheduling and alerting. Make it something a team can adopt without reading our source code.
We added uptime monitoring and Lighthouse audits because they belong in the same workflow. If you're already checking whether your site works, you should also know if it's up and if it's fast. One platform, one dashboard, one alerting system.
We're in early beta. Here's what we're working on and what's coming.
Custom JS functions, better variable handling, and more control over test execution flow.
Trigger tests from your terminal or CI/CD pipeline. Connect to AI coding tools like Claude Code and Cursor.
Auto-generation is experimental and improving. Better crawling, smarter test scenario detection, fewer false positives.
And whatever you tell us we're missing. The roadmap shifts based on what real users actually need.
We're in early beta and we're looking for partners — people who want to help build the best website testing tool, not just use one.
You get founding-member pricing and direct access to the team. Tell us the good, the bad, and the ugly. Especially the ugly — that's the stuff that makes the product better.
The vision is a single platform for everything your website needs: testing, monitoring, performance, and more. But right now, we're focused on making what exists as good as it can be. Join us.