How I Set Up Automated Testing for a React Native App with Maestro
- Part 1 – Why I Chose Maestro and How I Got Started
- Part 2 – Setting Up Tests on macOS Runners
- Part 3 – Reporting, Collaboration, and Lessons Learned
In the first two posts, I explained why we chose Maestro and how I set up the test environments for both Android and iOS. In this final part, I’ll show how I built the reporting and collaboration system around Maestro tests, how it fits into our daily workflow, and what I learned through the process.
Building a Complete Automated Testing Structure
The testing itself is only half the story. For automation to be effective, the results need to be visible and actionable. Otherwise, QA teams won’t know what failed, and developers won’t have enough context to fix issues quickly.
Here’s the structure I implemented:
GitHub Action → Maestro → S3 → GitHub Action → Slack + GitHub Issues
- Maestro runs the actual tests.
- S3 stores screenshots and logs when failures occur.
- GitHub Actions orchestrates the pipeline and generates reports.
- Slack + GitHub Issues distribute results so the whole team stays in the loop.
This flow ensures that failures are never hidden—they’re documented, traceable, and easy to reproduce.
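For context, each Maestro test is just a YAML flow file. Here is a minimal sketch of what a login flow might look like; the app ID, element texts, and credentials are placeholders for illustration, not our actual scripts:

```yaml
# flows/login.yaml: a hypothetical login flow (placeholder app ID and texts)
appId: com.example.myapp
---
- launchApp
- tapOn: "Email"
- inputText: "qa@example.com"
- tapOn: "Password"
- inputText: "secret123"
- tapOn: "Log In"
# Fails the flow if the home screen never appears
- assertVisible: "Welcome"
# Saves a screenshot artifact; on failure, this kind of file is what gets uploaded to S3
- takeScreenshot: login-success
```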
Reporting Results in Practice
The reporting workflow looks like this:
- Build artifacts (`.apk` and `.app`) for Android and iOS.
- Trigger the GitHub Action whenever new server code is pushed.
- Run Maestro tests across emulators and simulators.
- Export results: debug logs, screenshots, and test summaries.
- Upload artifacts to S3 with public links.
- Create GitHub Issues automatically for failed tests, including S3 links and screenshots.
- Generate an HTML report for a human-friendly summary.
- Post results to Slack, so QA and devs get notified instantly.
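Stitched together, the failure path of the workflow looks roughly like this. This is a simplified sketch: the bucket name, secrets, and paths are placeholders, and the real pipeline has more steps (building the apps, booting emulators and simulators, generating the HTML report):

```yaml
# .github/workflows/maestro-tests.yml: simplified sketch, not our full workflow
name: Maestro E2E
on: push  # in our setup, this fires when new server code is pushed

jobs:
  e2e:
    runs-on: macos-latest  # one macOS runner covers both the iOS simulator and the Android emulator
    steps:
      - uses: actions/checkout@v4

      # ...build the .apk/.app and boot the emulator/simulator (covered in Part 2)...

      - name: Run Maestro tests
        id: maestro
        continue-on-error: true  # keep going so artifacts still get uploaded on failure
        run: maestro test .maestro/ --debug-output maestro-debug

      - name: Upload logs and screenshots to S3
        if: steps.maestro.outcome == 'failure'
        run: aws s3 cp maestro-debug "s3://my-qa-bucket/runs/${{ github.run_id }}/" --recursive

      - name: Open a GitHub Issue for the failure
        if: steps.maestro.outcome == 'failure'
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          gh issue create \
            --title "Maestro failure on ${{ github.sha }}" \
            --body "Artifacts: https://my-qa-bucket.s3.amazonaws.com/runs/${{ github.run_id }}/"
```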
For example, when a test fails, the Slack message looks something like:
- ❌ Test: Login Flow
- Screenshot: [S3 link]
- Logs: [S3 link]
- GitHub Issue: [link]
This way, no one needs to dig through CI logs manually—everything is one click away.
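The Slack message itself can be posted from the same workflow via an incoming webhook; a minimal sketch, assuming a `SLACK_WEBHOOK_URL` secret and the placeholder S3 bucket from above:

```yaml
      # Workflow step sketch: post the failure summary to Slack via an incoming webhook
      - name: Notify Slack
        if: steps.maestro.outcome == 'failure'
        run: |
          curl -X POST -H 'Content-type: application/json' \
            --data '{"text":"❌ Test: Login Flow\nScreenshot: https://my-qa-bucket.s3.amazonaws.com/runs/${{ github.run_id }}/login-success.png\nLogs: https://my-qa-bucket.s3.amazonaws.com/runs/${{ github.run_id }}/\nGitHub Issue: see the auto-created issue in the repo"}' \
            "${{ secrets.SLACK_WEBHOOK_URL }}"
```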
Collaboration Benefits
Automated testing is not just about saving developer time. It’s also about improving collaboration between teams:
- QA doesn’t have to repeat the same manual tests for the old client. Instead, they can focus on exploratory testing and edge cases.
- Developers get immediate context (logs + screenshots) when something breaks, making debugging faster.
- Managers/PMs can track test coverage and failures via GitHub Issues without asking developers for updates.
In other words, the automation system became a shared source of truth for the entire team.
Challenges and Limitations
Of course, setting this up wasn’t without challenges:
- Performance constraints: Our GitHub Actions runners were not powerful enough for parallel testing, so everything runs sequentially, which increases total pipeline times.
- iOS quirks: Accessibility grouping on iOS required a custom Babel plugin as a workaround (as I covered in Part 1).
- Maintenance overhead: As the app grows, test scripts also grow, which means they need continuous refactoring and review.
- Updating scripts for each release: Whenever a new version ships, old scripts may fail because of changed text or added features, so the QA team has to continually update and maintain the scripts to keep tests reliable (one mitigation is sketched below).
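One mitigation that helps with the changed-text problem: match elements by test ID instead of display copy, so wording changes between releases don't break the flow. A sketch, assuming the React Native component sets `testID="login_button"` (which Maestro can match as an `id`):

```yaml
# Brittle: breaks whenever the button copy changes between releases
- tapOn: "Log In"

# More stable: matches React Native's testID prop, independent of display text
- tapOn:
    id: "login_button"
```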
Despite these challenges, the benefits far outweigh the drawbacks. Even this limited automation setup has already reduced repetitive manual testing by a huge margin.
What I Want to Add Next
This project is still evolving, and there’s a lot more I’d like to explore:
- Parallel testing with stronger runners or Maestro Cloud.
- RTL (Right-to-Left) language support for full internationalization testing.
- Deeper integration with analytics—for example, tracking failure trends over time.
- Better reporting dashboards, perhaps combining Maestro outputs with visualization tools.
Automation should always grow with the app itself, and I see this system as a foundation to build upon.
Lessons Learned
Looking back, here are the key takeaways from this journey:
- Start small, then iterate – don’t try to automate everything at once. Begin with the most repetitive flows.
- Make results visible – automation is only useful if the whole team can access the outputs easily.
- Collaborate early with QA – test coverage is most effective when QA and devs design it together.
- Don’t underestimate infra – weak machines or missing setups can become bottlenecks.
Setting up Maestro for our React Native app was not just a technical exercise; it changed how our team thinks about testing. Instead of dreading repetitive checks, we now see automated testing as a safety net that lets us move faster and release more confidently.
👉 That’s the end of this series! If you’ve tried Maestro or built a similar pipeline, I’d love to hear how you approached reporting, collaboration, or scaling tests further.