GitHub Actions

Add human QA testing to your CI/CD pipeline with two specialized actions.

| Action | Purpose |
| --- | --- |
| Issue Tester Action | Auto-validate GitHub issues |
| QA Test Action | Test any URL on demand |

Setup: Add RUNHUMAN_API_KEY to your repository secrets (get your key) and RUNHUMAN_TESTING_URL to your repository variables.


Issue Tester Action

Automatically test GitHub issues with human QA and video recordings.

This action detects issues linked to merged PRs, uses AI to decide whether each issue is human-testable (skipping code refactors, documentation changes, and the like), extracts test URLs and generates test instructions from the issue body, and runs human QA tests. It then manages the issue lifecycle, reopening failed issues and updating labels automatically.

Quick Setup

After your deploy workflow completes, this workflow automatically tests any issues linked to the merged PR and provides video recordings to validate the fixes.

# .github/workflows/test-issues.yml
name: Test Linked Issues

on:
  workflow_run:
    workflows: [CI]  # Change to your deploy workflow name
    types: [completed]
    branches: [main]

jobs:
  test-issues:
    if: github.event.workflow_run.conclusion == 'success'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.workflow_run.head_sha }}
      - uses: volter-ai/runhuman-issue-tester-action@0.0.6
        with:
          api-key: ${{ secrets.RUNHUMAN_API_KEY }}
          test-url: ${{ vars.RUNHUMAN_TESTING_URL }}

Configuration

| Input | Required | Default | Description |
| --- | --- | --- | --- |
| api-key | Yes | - | Runhuman API key (starts with qa_live_) |
| github-token | No | github.token | GitHub token for API access |
| issue-number | No | - | Test a specific issue (bypasses PR detection) |
| test-url | No | - | Base URL for testing (AI appends paths from issues) |
| qa-label | No | qa-test | Label that marks issues for testing |
| auto-detect | No | true | Let AI decide which issues are testable |
| target-duration-minutes | No | 5 | Target test duration (1-60 minutes) |
| reopen-on-failure | No | true | Reopen issue if test fails |
| failure-label | No | qa-failed | Label added when test fails |
| remove-failure-label-on-success | No | true | Remove failure label on pass |
| issue-pattern | No | - | Custom regex for issue numbers in commits |
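
For example, a step that only tests issues carrying the qa-test label, with AI auto-detection turned off and a longer target duration, might look like the sketch below (the input values are illustrative; the inputs themselves come from the table above):

- uses: volter-ai/runhuman-issue-tester-action@0.0.6
  with:
    api-key: ${{ secrets.RUNHUMAN_API_KEY }}
    test-url: ${{ vars.RUNHUMAN_TESTING_URL }}
    qa-label: qa-test            # only issues with this label are tested
    auto-detect: false           # skip the AI testability check
    target-duration-minutes: 10  # allow a longer test session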

Action Outputs

| Output | Description |
| --- | --- |
| tested-issues | JSON array of tested issue numbers |
| passed-issues | JSON array of passed issue numbers |
| failed-issues | JSON array of failed issue numbers |
| skipped-issues | JSON array of skipped issue numbers |
| total-cost-usd | Total cost of all tests in USD |
| results | Full results object as JSON |
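
Later steps can consume these outputs; a minimal sketch, assuming the action step is given id: qa:

- uses: volter-ai/runhuman-issue-tester-action@0.0.6
  id: qa
  with:
    api-key: ${{ secrets.RUNHUMAN_API_KEY }}
    test-url: ${{ vars.RUNHUMAN_TESTING_URL }}

- name: Summarize QA results
  run: |
    echo "Failed issues: ${{ steps.qa.outputs.failed-issues }}"
    echo "Skipped issues: ${{ steps.qa.outputs.skipped-issues }}"
    echo "Total cost (USD): ${{ steps.qa.outputs.total-cost-usd }}"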

Workflow Triggers

After CI/Deploy (Recommended)

Wait for your deploy workflow to complete, then test:

on:
  workflow_run:
    workflows: [CI]  # Change to your deploy workflow name
    types: [completed]
    branches: [main]

jobs:
  test-issues:
    if: github.event.workflow_run.conclusion == 'success'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.workflow_run.head_sha }}
      - uses: volter-ai/runhuman-issue-tester-action@0.0.6
        with:
          api-key: ${{ secrets.RUNHUMAN_API_KEY }}
          test-url: ${{ vars.RUNHUMAN_TESTING_URL }}

On PR Merge

Test immediately when a PR is merged:

on:
  pull_request:
    types: [closed]
    branches: [main]

jobs:
  test-issues:
    if: github.event.pull_request.merged == true
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: volter-ai/runhuman-issue-tester-action@0.0.6
        with:
          api-key: ${{ secrets.RUNHUMAN_API_KEY }}

Manual Testing

Test any issue on demand:

name: Manual Issue Test

on:
  workflow_dispatch:
    inputs:
      issue-number:
        description: 'Issue number to test'
        required: true
        type: number
      test-url:
        description: 'Test URL (optional)'
        required: false
        type: string

jobs:
  test-issue:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: volter-ai/runhuman-issue-tester-action@0.0.6
        with:
          api-key: ${{ secrets.RUNHUMAN_API_KEY }}
          issue-number: ${{ inputs.issue-number }}
          test-url: ${{ inputs.test-url }}

Issue Detection

The action finds linked issues from two sources:

PR Closing References

Issues linked in the PR description or sidebar:

  • “Closes #123”
  • “Fixes #456”

Commit Message Keywords

Issues referenced in commit messages:

  • fix #123, fixes #123, fixed #123
  • close #123, closes #123, closed #123
  • resolve #123, resolves #123, resolved #123

Custom Patterns

Use issue-pattern for project-specific references:

- uses: volter-ai/runhuman-issue-tester-action@0.0.6
  with:
    api-key: ${{ secrets.RUNHUMAN_API_KEY }}
    issue-pattern: 'PROJ-(\d+)'  # Match "PROJ-123" style references

Writing Testable Issues

Include a test URL and clear reproduction steps:

## Bug Description
The checkout button is unresponsive on Safari.

## Test URL
https://staging.myapp.com/checkout

## Steps to Reproduce
1. Add items to cart
2. Go to checkout
3. Click "Place Order"
4. Nothing happens

## Expected Behavior
Order should be submitted and confirmation shown.

Test Results

On Pass: results are posted as a comment, the issue stays closed, and the qa-failed label is removed.

On Fail: detailed findings are posted with screenshots and video, the issue is reopened, and the qa-failed label is added.
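
This lifecycle can be tuned through the inputs from the configuration table; a sketch with a custom failure label (the needs-qa-fix value is purely illustrative):

- uses: volter-ai/runhuman-issue-tester-action@0.0.6
  with:
    api-key: ${{ secrets.RUNHUMAN_API_KEY }}
    test-url: ${{ vars.RUNHUMAN_TESTING_URL }}
    reopen-on-failure: true                # reopen the issue when a test fails (default)
    failure-label: needs-qa-fix            # use a custom label instead of qa-failed
    remove-failure-label-on-success: true  # clear the label once a retest passes (default)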


QA Test Action

Test any URL with human testers. You provide the URL, test instructions, and what data to extract.

Quick Setup

# .github/workflows/qa-test.yml
name: QA Test
on: [pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: volter-ai/runhuman-qa-test-action@v0.0.1
        with:
          api-key: ${{ secrets.RUNHUMAN_API_KEY }}
          url: https://staging.myapp.com
          description: Verify the homepage loads and navigation works
          output-schema: |
            {
              "pageLoads": { "type": "boolean", "description": "Page loads correctly" },
              "navigationWorks": { "type": "boolean", "description": "Navigation works" }
            }

Configuration

| Input | Required | Default | Description |
| --- | --- | --- | --- |
| api-key | Yes | - | Your Runhuman API key |
| url | Yes | - | URL to test (must be publicly accessible) |
| description | Yes | - | Instructions for the tester |
| output-schema | Yes | - | JSON schema for structured results |
| target-duration-minutes | No | 5 | Time limit for the tester (1-60 minutes) |
| allow-duration-extension | No | false | Allow tester to request more time |
| max-extension-minutes | No | - | Maximum extension minutes allowed |
| fail-on-error | No | true | Fail workflow if test fails |
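
For longer flows, the duration inputs can be combined; a sketch (the description and values here are illustrative):

- uses: volter-ai/runhuman-qa-test-action@v0.0.1
  with:
    api-key: ${{ secrets.RUNHUMAN_API_KEY }}
    url: https://staging.myapp.com/checkout
    description: Complete a full checkout using test data
    target-duration-minutes: 10      # aim for a 10-minute session
    allow-duration-extension: true   # the tester may request more time
    max-extension-minutes: 5         # capped at 5 extra minutes
    fail-on-error: false             # do not fail the workflow if the test fails
    output-schema: |
      {
        "checkoutWorks": { "type": "boolean", "description": "Checkout completes successfully" }
      }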

Using Output Schema

Define structured data you want returned:

- uses: volter-ai/runhuman-qa-test-action@v0.0.1
  with:
    api-key: ${{ secrets.RUNHUMAN_API_KEY }}
    url: https://staging.myapp.com/checkout
    description: Test the checkout flow
    output-schema: |
      {
        "checkoutWorks": {
          "type": "boolean",
          "description": "Does checkout complete?"
        },
        "issues": {
          "type": "array",
          "description": "Any issues found"
        }
      }

Action Outputs

Use outputs in subsequent steps:

- name: Run QA Test
  id: qa
  uses: volter-ai/runhuman-qa-test-action@v0.0.1
  with:
    api-key: ${{ secrets.RUNHUMAN_API_KEY }}
    url: https://staging.myapp.com
    description: Test the application
    output-schema: |
      {
        "appWorks": { "type": "boolean", "description": "Application works correctly" }
      }

- name: Report Results
  run: |
    echo "Status: ${{ steps.qa.outputs.status }}"
    echo "Cost: ${{ steps.qa.outputs.cost-usd }}"
    echo "Data: ${{ steps.qa.outputs.data }}"

| Output | Description |
| --- | --- |
| status | Job status (completed, error, incomplete, abandoned) |
| success | Whether the test passed (true/false as string) |
| result | Full result object as JSON string |
| explanation | Tester’s findings |
| data | Extracted structured data (JSON) |
| cost-usd | Cost in USD |
| duration-seconds | Test duration |
| job-id | Job ID for reference |
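
Because success is returned as a string, conditions should compare against 'true'; for example (a sketch, assuming the test step above has id: qa):

- name: Report failure
  if: steps.qa.outputs.success != 'true'
  run: |
    echo "QA job ${{ steps.qa.outputs.job-id }} did not pass"
    echo "Tester findings: ${{ steps.qa.outputs.explanation }}"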

Advanced Usage

Parallel Tests - Test multiple flows simultaneously:

jobs:
  test-login:
    runs-on: ubuntu-latest
    steps:
      - uses: volter-ai/runhuman-qa-test-action@v0.0.1
        with:
          api-key: ${{ secrets.RUNHUMAN_API_KEY }}
          url: https://staging.myapp.com/login
          description: Test login flow
          output-schema: |
            {
              "loginWorks": { "type": "boolean", "description": "Login works correctly" }
            }

  test-checkout:
    runs-on: ubuntu-latest
    steps:
      - uses: volter-ai/runhuman-qa-test-action@v0.0.1
        with:
          api-key: ${{ secrets.RUNHUMAN_API_KEY }}
          url: https://staging.myapp.com/checkout
          description: Test checkout flow
          target-duration-minutes: 10
          output-schema: |
            {
              "checkoutWorks": { "type": "boolean", "description": "Checkout works correctly" }
            }

Conditional Testing - Only run tests when relevant files change:

on:
  pull_request:
    paths:
      - 'src/checkout/**'
      - 'src/payment/**'

Handling Failures - Continue workflow even if test fails:

- uses: volter-ai/runhuman-qa-test-action@v0.0.1
  continue-on-error: true
  with:
    api-key: ${{ secrets.RUNHUMAN_API_KEY }}
    url: https://staging.myapp.com
    description: Test the application
    output-schema: |
      {
        "appWorks": { "type": "boolean", "description": "Application works correctly" }
      }

Troubleshooting

QA Test Action

| Problem | Solution |
| --- | --- |
| Action times out | Increase target-duration-minutes |
| URL not accessible | Ensure URL is publicly accessible |
| Test not starting | Verify API key is set correctly in secrets |
| High costs | Run tests only on specific branches or paths |

Issue Tester Action

| Problem | Solution |
| --- | --- |
| Issue not being tested | Check the issue has the qa-test label or auto-detect is enabled |
| Test URL not found | Add the test-url input or include “Test URL:” in the issue body |
| Issue not detected | Use closing keywords (“Fixes #123”) in the PR description |
| Authentication failed | Verify the API key starts with qa_live_ and is set in secrets |

Next Steps

| Topic | Link |
| --- | --- |
| More testing recipes and patterns | Cookbook |
| Full technical specification | Reference |
| Direct API integration | REST API |