# GitHub Actions
Add human QA testing to your CI/CD pipeline with two specialized actions.
| Action | Purpose |
|---|---|
| Issue Tester Action | Auto-validate GitHub issues |
| QA Test Action | Test any URL on demand |
Setup: Add RUNHUMAN_API_KEY to your repository secrets (get your key) and RUNHUMAN_TESTING_URL to your repository variables.
## Issue Tester Action
Automatically test GitHub issues with human QA and video recordings.
This action:

- detects issues linked to merged PRs
- uses AI to analyze whether each issue is human-testable (skipping code refactors, documentation, etc.)
- extracts test URLs and generates test instructions from the issue body
- runs human QA tests
- manages the issue lifecycle, reopening failed issues and updating labels automatically
### Quick Setup
After your deploy workflow completes, this workflow automatically tests any issues linked to the merged PR and provides video recordings to validate the fixes.
```yaml
# .github/workflows/test-issues.yml
name: Test Linked Issues

on:
  workflow_run:
    workflows: [CI] # Change to your deploy workflow name
    types: [completed]
    branches: [main]

jobs:
  test-issues:
    if: github.event.workflow_run.conclusion == 'success'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.workflow_run.head_sha }}
      - uses: volter-ai/runhuman-issue-tester-action@0.0.6
        with:
          api-key: ${{ secrets.RUNHUMAN_API_KEY }}
          test-url: ${{ vars.RUNHUMAN_TESTING_URL }}
```
### Configuration
| Input | Required | Default | Description |
|---|---|---|---|
| api-key | Yes | - | Runhuman API key (starts with qa_live_) |
| github-token | No | github.token | GitHub token for API access |
| issue-number | No | - | Test a specific issue (bypasses PR detection) |
| test-url | No | - | Base URL for testing (AI appends paths from issues) |
| qa-label | No | qa-test | Label that marks issues for testing |
| auto-detect | No | true | Let AI decide which issues are testable |
| target-duration-minutes | No | 5 | Target test duration (1-60 minutes) |
| reopen-on-failure | No | true | Reopen issue if test fails |
| failure-label | No | qa-failed | Label added when test fails |
| remove-failure-label-on-success | No | true | Remove failure label on pass |
| issue-pattern | No | - | Custom regex for issue numbers in commits |
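Only `api-key` is required; the rest can be combined as needed. A minimal sketch of a step using several optional inputs from the table above (the label names and duration are illustrative, not defaults):

```yaml
- uses: volter-ai/runhuman-issue-tester-action@0.0.6
  with:
    api-key: ${{ secrets.RUNHUMAN_API_KEY }}
    test-url: ${{ vars.RUNHUMAN_TESTING_URL }}
    qa-label: needs-human-qa        # custom label that marks issues for testing
    auto-detect: false              # don't let AI decide which issues are testable
    target-duration-minutes: 10     # target test duration per issue
    failure-label: human-qa-failed  # label added when a test fails
```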
### Action Outputs
| Output | Description |
|---|---|
| tested-issues | JSON array of tested issue numbers |
| passed-issues | JSON array of passed issue numbers |
| failed-issues | JSON array of failed issue numbers |
| skipped-issues | JSON array of skipped issue numbers |
| total-cost-usd | Total cost of all tests in USD |
| results | Full results object as JSON |
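These outputs can be read by later steps once the action step has an `id`. A minimal sketch (the step id `issue-qa` and the summary step are illustrative):

```yaml
- uses: volter-ai/runhuman-issue-tester-action@0.0.6
  id: issue-qa
  with:
    api-key: ${{ secrets.RUNHUMAN_API_KEY }}
    test-url: ${{ vars.RUNHUMAN_TESTING_URL }}

- name: Summarize results
  run: |
    echo "Tested issues: ${{ steps.issue-qa.outputs.tested-issues }}"
    echo "Failed issues: ${{ steps.issue-qa.outputs.failed-issues }}"
    echo "Total cost (USD): ${{ steps.issue-qa.outputs.total-cost-usd }}"
```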
### Workflow Triggers

#### After CI/Deploy (Recommended)
Wait for your deploy workflow to complete, then test:
```yaml
on:
  workflow_run:
    workflows: [CI] # Change to your deploy workflow name
    types: [completed]
    branches: [main]

jobs:
  test-issues:
    if: github.event.workflow_run.conclusion == 'success'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.workflow_run.head_sha }}
      - uses: volter-ai/runhuman-issue-tester-action@0.0.6
        with:
          api-key: ${{ secrets.RUNHUMAN_API_KEY }}
          test-url: ${{ vars.RUNHUMAN_TESTING_URL }}
```
#### On PR Merge
Test immediately when a PR is merged:
```yaml
on:
  pull_request:
    types: [closed]
    branches: [main]

jobs:
  test-issues:
    if: github.event.pull_request.merged == true
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: volter-ai/runhuman-issue-tester-action@0.0.6
        with:
          api-key: ${{ secrets.RUNHUMAN_API_KEY }}
```
#### Manual Testing
Test any issue on demand:
```yaml
name: Manual Issue Test

on:
  workflow_dispatch:
    inputs:
      issue-number:
        description: 'Issue number to test'
        required: true
        type: number
      test-url:
        description: 'Test URL (optional)'
        required: false
        type: string

jobs:
  test-issue:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: volter-ai/runhuman-issue-tester-action@0.0.6
        with:
          api-key: ${{ secrets.RUNHUMAN_API_KEY }}
          issue-number: ${{ inputs.issue-number }}
          test-url: ${{ inputs.test-url }}
```
### Issue Detection
The action finds linked issues from two sources:
#### PR Closing References
Issues linked in the PR description or sidebar:
- “Closes #123”
- “Fixes #456”
#### Commit Message Keywords

Issues referenced in commit messages:

- `fix #123`, `fixes #123`, `fixed #123`
- `close #123`, `closes #123`, `closed #123`
- `resolve #123`, `resolves #123`, `resolved #123`
#### Custom Patterns

Use `issue-pattern` for project-specific references:
```yaml
- uses: volter-ai/runhuman-issue-tester-action@0.0.6
  with:
    api-key: ${{ secrets.RUNHUMAN_API_KEY }}
    issue-pattern: 'PROJ-(\d+)' # Match "PROJ-123" style references
```
### Writing Testable Issues
Include a test URL and clear reproduction steps:
```markdown
## Bug Description

The checkout button is unresponsive on Safari.

## Test URL

https://staging.myapp.com/checkout

## Steps to Reproduce

1. Add items to cart
2. Go to checkout
3. Click "Place Order"
4. Nothing happens

## Expected Behavior

Order should be submitted and confirmation shown.
```
### Test Results

On Pass: Results are posted as a comment, the issue stays closed, and the `qa-failed` label is removed.

On Fail: Detailed findings are posted with screenshots/video, the issue is reopened, and the `qa-failed` label is added.
## QA Test Action
Test any URL with human testers. You provide the URL, test instructions, and what data to extract.
### Quick Setup
```yaml
# .github/workflows/qa-test.yml
name: QA Test

on: [pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: volter-ai/runhuman-qa-test-action@v0.0.1
        with:
          api-key: ${{ secrets.RUNHUMAN_API_KEY }}
          url: https://staging.myapp.com
          description: Verify the homepage loads and navigation works
          output-schema: |
            {
              "pageLoads": { "type": "boolean", "description": "Page loads correctly" },
              "navigationWorks": { "type": "boolean", "description": "Navigation works" }
            }
```
### Configuration
| Input | Required | Default | Description |
|---|---|---|---|
| api-key | Yes | - | Your Runhuman API key |
| url | Yes | - | URL to test (must be publicly accessible) |
| description | Yes | - | Instructions for the tester |
| output-schema | Yes | - | JSON schema for structured results |
| target-duration-minutes | No | 5 | Time limit for the tester (1-60 minutes) |
| allow-duration-extension | No | false | Allow tester to request more time |
| max-extension-minutes | No | - | Maximum extension minutes allowed |
| fail-on-error | No | true | Fail workflow if test fails |
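The optional inputs can be combined. A sketch for a longer, extendable test that should not fail the workflow (the URL and values are illustrative):

```yaml
- uses: volter-ai/runhuman-qa-test-action@v0.0.1
  with:
    api-key: ${{ secrets.RUNHUMAN_API_KEY }}
    url: https://staging.myapp.com/dashboard
    description: Explore the dashboard and report anything that looks broken
    target-duration-minutes: 15     # time limit for the tester
    allow-duration-extension: true  # tester may request more time
    max-extension-minutes: 10       # cap on additional minutes
    fail-on-error: false            # don't fail the workflow if the test fails
    output-schema: |
      {
        "dashboardWorks": { "type": "boolean", "description": "Dashboard works correctly" }
      }
```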
### Using Output Schema
Define structured data you want returned:
```yaml
- uses: volter-ai/runhuman-qa-test-action@v0.0.1
  with:
    api-key: ${{ secrets.RUNHUMAN_API_KEY }}
    url: https://staging.myapp.com/checkout
    description: Test the checkout flow
    output-schema: |
      {
        "checkoutWorks": {
          "type": "boolean",
          "description": "Does checkout complete?"
        },
        "issues": {
          "type": "array",
          "description": "Any issues found"
        }
      }
```
### Action Outputs
Use outputs in subsequent steps:
```yaml
- name: Run QA Test
  id: qa
  uses: volter-ai/runhuman-qa-test-action@v0.0.1
  with:
    api-key: ${{ secrets.RUNHUMAN_API_KEY }}
    url: https://staging.myapp.com
    description: Test the application
    output-schema: |
      {
        "appWorks": { "type": "boolean", "description": "Application works correctly" }
      }

- name: Report Results
  run: |
    echo "Status: ${{ steps.qa.outputs.status }}"
    echo "Cost: ${{ steps.qa.outputs.cost-usd }}"
    echo "Data: ${{ steps.qa.outputs.data }}"
```
| Output | Description |
|---|---|
| status | Job status (completed, error, incomplete, abandoned) |
| success | Whether the test passed (true/false as string) |
| result | Full result object as JSON string |
| explanation | Tester’s findings |
| data | Extracted structured data (JSON) |
| cost-usd | Cost in USD |
| duration-seconds | Test duration |
| job-id | Job ID for reference |
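Because `data` is returned as a JSON string, later steps can parse individual fields with the `fromJSON` expression function. A minimal sketch, assuming the step id `qa` and the `appWorks` field from the schema above:

```yaml
- name: Fail if human QA reports the app broken
  if: ${{ fromJSON(steps.qa.outputs.data).appWorks == false }}
  run: |
    echo "Human QA findings: ${{ steps.qa.outputs.explanation }}"
    exit 1
```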
### Advanced Usage
Parallel Tests - Test multiple flows simultaneously:
```yaml
jobs:
  test-login:
    runs-on: ubuntu-latest
    steps:
      - uses: volter-ai/runhuman-qa-test-action@v0.0.1
        with:
          api-key: ${{ secrets.RUNHUMAN_API_KEY }}
          url: https://staging.myapp.com/login
          description: Test login flow
          output-schema: |
            {
              "loginWorks": { "type": "boolean", "description": "Login works correctly" }
            }

  test-checkout:
    runs-on: ubuntu-latest
    steps:
      - uses: volter-ai/runhuman-qa-test-action@v0.0.1
        with:
          api-key: ${{ secrets.RUNHUMAN_API_KEY }}
          url: https://staging.myapp.com
          description: Test checkout flow
          target-duration-minutes: 10
          output-schema: |
            {
              "checkoutWorks": { "type": "boolean", "description": "Checkout works correctly" }
            }
```
Conditional Testing - Only run tests when relevant files change:
```yaml
on:
  pull_request:
    paths:
      - 'src/checkout/**'
      - 'src/payment/**'
```
Handling Failures - Continue workflow even if test fails:
```yaml
- uses: volter-ai/runhuman-qa-test-action@v0.0.1
  continue-on-error: true
  with:
    api-key: ${{ secrets.RUNHUMAN_API_KEY }}
    url: https://staging.myapp.com
    description: Test the application
    output-schema: |
      {
        "appWorks": { "type": "boolean", "description": "Application works correctly" }
      }
```
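With `continue-on-error` set, a later step can still react to the result through the step's `outcome` property (built into GitHub Actions), provided the action step has an `id` (assumed here to be `qa`):

```yaml
- name: Warn on QA failure without failing the build
  if: ${{ steps.qa.outcome == 'failure' }}
  run: echo "::warning::Human QA test failed. Findings: ${{ steps.qa.outputs.explanation }}"
```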
## Troubleshooting

### QA Test Action
| Problem | Solution |
|---|---|
| Action times out | Increase target-duration-minutes |
| URL not accessible | Ensure URL is publicly accessible |
| Test not starting | Verify API key is set correctly in secrets |
| High costs | Run tests only on specific branches or paths |
### Issue Tester Action
| Problem | Solution |
|---|---|
| Issue not being tested | Check the issue has qa-test label or auto-detect is enabled |
| Test URL not found | Add test-url input or include “Test URL:” in issue body |
| Issue not detected | Use closing keywords (“Fixes #123”) in PR description |
| Authentication failed | Verify API key starts with qa_live_ and is set in secrets |
## Next Steps
| Topic | Link |
|---|---|
| More testing recipes and patterns | Cookbook |
| Full technical specification | Reference |
| Direct API integration | REST API |