
Cookbook

Step-by-step guides for common Runhuman use cases.


Issue Testing Automation

Automatically verify that issues are fixed when PRs are merged or commits that fix issues are pushed.

How It Works

  1. A PR with “Fixes #123” in the description is merged or a commit that closes an issue is pushed to main
  2. The action analyzes the linked issue to generate test instructions
  3. A human tester verifies the fix on your deployment URL
  4. Results are posted as a comment on the issue
  5. If the test fails, the issue is reopened

Setup

Add these secrets and variables to your repository:

| Name | Type | Description |
|------|------|-------------|
| RUNHUMAN_API_KEY | Secret | Your API key |
| RUNHUMAN_TESTING_URL | Variable | Your staging/preview URL |

Create the workflow file:

# .github/workflows/test-issues.yml
name: Test Linked Issues

on:
  workflow_run:
    workflows: [CI]  # Replace with your deploy workflow name
    types: [completed]
    branches: [main]

concurrency:
  group: test-issues-${{ github.event.workflow_run.head_sha }}
  cancel-in-progress: true

jobs:
  test-issues:
    if: github.event.workflow_run.conclusion == 'success'
    runs-on: ubuntu-latest
    steps:
      - uses: volter-ai/runhuman-action@v1
        with:
          url: ${{ vars.RUNHUMAN_TESTING_URL }}
          # workflow_run has no pull_request_number field; linked PRs are in the pull_requests array
          pr-numbers: '[${{ github.event.workflow_run.pull_requests[0].number }}]'
          api-key: ${{ secrets.RUNHUMAN_API_KEY }}
          on-success-add-labels: '["qa:passed"]'
          on-failure-add-labels: '["qa:failed"]'
          fail-on-failure: true

Configuration

See the GitHub Actions documentation for full configuration options including:

  • Label management (add/remove labels on success, failure, timeout)
  • Workflow control (fail-on-error, fail-on-failure, fail-on-timeout)
  • Test configuration (target-duration-minutes, screen-size, output-schema)

Writing Testable Issues

Include a test URL and clear reproduction steps:

## Bug Description
The checkout button is unresponsive on Safari.

## Test URL
https://staging.myapp.com/checkout

## Steps to Reproduce
1. Add items to cart
2. Go to checkout
3. Click "Place Order"
4. Nothing happens

## Expected Behavior
Order should be submitted and confirmation shown.
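An issue structured like the one above maps directly onto the REST API request format used later in this cookbook. A minimal sketch of that mapping; `buildRunRequest` is a hypothetical helper (not part of any SDK), and the output schema here is illustrative:

```javascript
// Sketch: turn a testable issue into a Runhuman run request.
// buildRunRequest is a hypothetical helper, not part of any SDK.
function buildRunRequest(issue, apiKey) {
  return {
    endpoint: 'https://runhuman.com/api/run',
    options: {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${apiKey}`,
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({
        url: issue.testUrl,
        // Reproduction steps and expected behavior become the tester's instructions
        description: `${issue.steps}\n\nExpected: ${issue.expected}`,
        outputSchema: {
          reproduced: { type: 'boolean', description: 'Could the bug be reproduced?' },
          notes: { type: 'string', description: 'What actually happened' }
        }
      })
    }
  };
}

const req = buildRunRequest({
  testUrl: 'https://staging.myapp.com/checkout',
  steps: '1. Add items to cart\n2. Go to checkout\n3. Click "Place Order"',
  expected: 'Order is submitted and a confirmation is shown.'
}, 'rh_example_key');
```

The request object can then be passed to `fetch(req.endpoint, req.options)`.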

Bulk Issue Testing

Test all open issues in your repository on a schedule or on-demand.

How It Works

  1. Workflow fetches all open issues with the qa-test label
  2. Each issue is tested in parallel using matrix strategy
  3. Results are posted as comments on each issue
  4. Failed issues get reopened and labeled

Setup

# .github/workflows/test-all-issues.yml
name: Test All Open Issues

on:
  # Runs daily at 9 AM UTC; workflow_dispatch also allows manual runs (recommended).
  # Keep only `schedule` for scheduled-only runs, or only `workflow_dispatch` for manual-only.
  schedule:
    - cron: '0 9 * * *'
  workflow_dispatch:

jobs:
  find-issues:
    runs-on: ubuntu-latest
    outputs:
      issues: ${{ steps.get-issues.outputs.issues }}
      count: ${{ steps.get-issues.outputs.count }}
    steps:
      - name: Get open issues with qa-test label
        id: get-issues
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          issues=$(gh issue list \
            --repo ${{ github.repository }} \
            --label "qa-test" \
            --state open \
            --json number \
            --jq '[.[].number]')
          echo "issues=$issues" >> $GITHUB_OUTPUT
          echo "count=$(echo $issues | jq length)" >> $GITHUB_OUTPUT

  test-issue:
    needs: find-issues
    if: needs.find-issues.outputs.count != '0'
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      max-parallel: 3  # Limit concurrent tests to control costs
      matrix:
        issue: ${{ fromJson(needs.find-issues.outputs.issues) }}
    steps:
      - uses: volter-ai/runhuman-action@v1
        with:
          url: ${{ vars.RUNHUMAN_TESTING_URL }}
          issue-numbers: '[${{ matrix.issue }}]'
          api-key: ${{ secrets.RUNHUMAN_API_KEY }}
          on-failure-add-labels: '["qa:failed"]'

Cost Considerations

  • Each test costs ~$0.32-0.54 (3-5 minutes)
  • 10 issues = ~$3-5 per run
  • Use max-parallel to control concurrent spending
  • Consider running less frequently for large issue counts
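The bullets above can be sanity-checked with a quick back-of-envelope calculation; the per-test dollar range is the one quoted above:

```javascript
// Rough cost range for a bulk run, using the ~$0.32-0.54 per-test figures above.
function estimateRunCost(issueCount, perTestLow = 0.32, perTestHigh = 0.54) {
  return {
    low: +(issueCount * perTestLow).toFixed(2),
    high: +(issueCount * perTestHigh).toFixed(2)
  };
}

// 10 labeled issues ≈ $3.20-$5.40 per run, matching the estimate above.
const tenIssues = estimateRunCost(10);
```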

Preview Deployment Testing

Test Vercel, Netlify, or other preview deployments automatically.

Vercel

name: Test Vercel Preview
on:
  deployment_status

jobs:
  test:
    if: github.event.deployment_status.state == 'success'
    runs-on: ubuntu-latest
    steps:
      - uses: volter-ai/runhuman-action@v1
        with:
          api-key: ${{ secrets.RUNHUMAN_API_KEY }}
          url: ${{ github.event.deployment_status.target_url }}
          description: Test the preview deployment
          output-schema: |
            {
              "pageLoads": { "type": "boolean", "description": "Page loads correctly?" },
              "noErrors": { "type": "boolean", "description": "No console errors?" }
            }

Netlify

name: Test Netlify Preview
on:
  pull_request

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - name: Wait for Netlify
        uses: jakepartusch/wait-for-netlify-action@v1.4
        id: netlify
        with:
          site_name: your-site-name
          max_timeout: 300

      - uses: volter-ai/runhuman-action@v1
        with:
          api-key: ${{ secrets.RUNHUMAN_API_KEY }}
          url: ${{ steps.netlify.outputs.url }}
          description: Test the Netlify preview

Custom Preview URLs

If your preview URLs follow a pattern:

url: https://pr-${{ github.event.pull_request.number }}.preview.myapp.com
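The same pattern can be computed outside the workflow, e.g. when kicking off a test from a script. A one-line sketch; `preview.myapp.com` is the placeholder domain from above, not a real endpoint:

```javascript
// Build a preview URL from a PR number, following the pattern above.
// The preview.myapp.com domain is a placeholder; substitute your own.
function previewUrl(prNumber) {
  return `https://pr-${prNumber}.preview.myapp.com`;
}
```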

Visual Regression Testing

Catch UI bugs before they reach production.

Basic Visual Check

const result = await fetch('https://runhuman.com/api/run', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${API_KEY}`,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    url: 'https://staging.myapp.com/product/123',
    description: 'Check for visual issues: broken images, layout problems, text overflow, color contrast issues',
    outputSchema: {
      imagesLoad: { type: 'boolean', description: 'All images load correctly?' },
      layoutCorrect: { type: 'boolean', description: 'Layout looks correct, no overflow?' },
      textReadable: { type: 'boolean', description: 'All text is readable?' },
      visualIssues: { type: 'string', description: 'Describe any visual problems found' }
    }
  })
});
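Once the run completes, the structured output can be checked programmatically. A sketch, assuming the response exposes the schema fields as a flat object (verify the actual response shape against the REST API reference):

```javascript
// Collect the boolean checks that failed from a structured result object.
// The result shape here is an assumption based on the outputSchema above.
function failedChecks(output) {
  return Object.entries(output)
    .filter(([, value]) => value === false)
    .map(([key]) => key);
}

const example = {
  imagesLoad: true,
  layoutCorrect: false,
  textReadable: true,
  visualIssues: 'Hero image overflows its container on narrow viewports'
};
// failedChecks(example) → ['layoutCorrect']
```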

Mobile Responsiveness

const result = await fetch('https://runhuman.com/api/run', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${API_KEY}`,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    url: 'https://staging.myapp.com',
    description: 'Test on mobile: check navigation menu, forms, buttons. Look for overflow, tiny text, unreachable elements.',
    outputSchema: {
      navigationWorks: { type: 'boolean', description: 'Mobile nav opens and closes correctly?' },
      formsUsable: { type: 'boolean', description: 'Forms are usable on mobile?' },
      mobileIssues: { type: 'array', description: 'List of mobile-specific issues' }
    }
  })
});

Multi-Step Flow Testing

Test complex user journeys that span multiple pages.

Checkout Flow

const result = await fetch('https://runhuman.com/api/run', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${API_KEY}`,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    url: 'https://staging.myapp.com/products',
    description: `
      1. Browse products and add one to cart
      2. Go to cart and verify the item is there
      3. Proceed to checkout
      4. Fill shipping information
      5. Select payment method
      6. Verify order summary shows correct total
      7. Do not submit the final order
    `,
    targetDurationMinutes: 10,
    outputSchema: {
      addToCartWorks: { type: 'boolean', description: 'Product added to cart successfully?' },
      cartShowsItem: { type: 'boolean', description: 'Cart displays the added item?' },
      checkoutLoads: { type: 'boolean', description: 'Checkout page loads?' },
      shippingFormWorks: { type: 'boolean', description: 'Shipping form accepts input?' },
      totalCorrect: { type: 'boolean', description: 'Order total looks correct?' },
      issues: { type: 'array', description: 'Any issues encountered' }
    }
  })
});

User Onboarding

const result = await fetch('https://runhuman.com/api/run', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${API_KEY}`,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    url: 'https://staging.myapp.com/signup',
    description: `
      1. Create account with email test-${Date.now()}@example.com
      2. Complete the onboarding wizard
      3. Set up profile with sample data
      4. Verify you reach the dashboard
    `,
    targetDurationMinutes: 8,
    outputSchema: {
      signupWorks: { type: 'boolean', description: 'Account created successfully?' },
      onboardingCompletes: { type: 'boolean', description: 'Onboarding wizard completes?' },
      profileSaves: { type: 'boolean', description: 'Profile changes save?' },
      dashboardReached: { type: 'boolean', description: 'User reaches dashboard?' },
      confusingSteps: { type: 'array', description: 'Any confusing or unclear steps' }
    }
  })
});

Authentication Flows

const result = await fetch('https://runhuman.com/api/run', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${API_KEY}`,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    url: 'https://staging.myapp.com/login',
    description: `
      Test authentication:
      1. Try login with valid credentials (test@example.com / demo123)
      2. Verify redirect to dashboard
      3. Log out
      4. Try login with wrong password
      5. Verify error message is shown
      6. Try forgot password link
    `,
    targetDurationMinutes: 8,
    outputSchema: {
      loginWorks: { type: 'boolean', description: 'Valid login succeeds?' },
      logoutWorks: { type: 'boolean', description: 'Logout works?' },
      errorShown: { type: 'boolean', description: 'Error shown for wrong password?' },
      errorMessage: { type: 'string', description: 'What error message is displayed?' },
      forgotPasswordWorks: { type: 'boolean', description: 'Forgot password link works?' }
    }
  })
});

Scheduled Daily Testing with Templates

Run comprehensive daily tests at a specific time with reusable templates, detailed checklists, and full video/event review.

Use Case

You want to run the same comprehensive test every day at 5 PM (or any specific time) to catch regressions early. The test should use a detailed checklist that testers fill out, and you should be able to review video recordings and activity logs in the dashboard.

How It Works

  1. Create a reusable template with a detailed output schema (checkboxes for each verification item)
  2. Set up a GitHub Actions workflow with cron scheduling
  3. Review test results in the dashboard: watch the video recording and see all captured events

Step 1: Create a Template with Detailed Output Schema

Templates let you reuse test configurations. Create one with detailed checkbox-based output:

# Install the CLI if you haven't already
npm install -g runhuman

# Login with your API key
runhuman login

# Create a template with detailed output schema
runhuman templates create "Daily Smoke Test" \
  --project proj_abc123 \
  -d "Comprehensive daily test of core functionality" \
  --duration 600 \
  --schema ./daily-test-schema.json

Example schema file (daily-test-schema.json):

{
  "homePageLoads": {
    "type": "boolean",
    "description": "Home page loads without errors"
  },
  "navigationWorks": {
    "type": "boolean",
    "description": "All navigation links work correctly"
  },
  "loginFlowWorks": {
    "type": "boolean",
    "description": "Can log in with test credentials"
  },
  "dashboardDisplays": {
    "type": "boolean",
    "description": "Dashboard displays correctly after login"
  },
  "searchFunctional": {
    "type": "boolean",
    "description": "Search feature returns results"
  },
  "checkoutWorks": {
    "type": "boolean",
    "description": "Checkout flow completes successfully"
  },
  "mobileResponsive": {
    "type": "boolean",
    "description": "Site is responsive on mobile screen sizes"
  },
  "noConsoleErrors": {
    "type": "boolean",
    "description": "No JavaScript console errors"
  },
  "issues": {
    "type": "array",
    "items": { "type": "string" },
    "description": "List any issues encountered"
  },
  "additionalNotes": {
    "type": "string",
    "description": "Any other observations or feedback"
  }
}
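Before creating the template, it can be worth linting the schema file locally. A minimal sketch; the rule set is an assumption (it only checks that every field declares a `type` and a `description`, mirroring the shape above), not an official validator:

```javascript
// Minimal lint for an output schema object: every field needs a
// "type" and a "description". This mirrors the shape used above;
// it is not an official validator.
function lintSchema(schema) {
  const problems = [];
  for (const [field, def] of Object.entries(schema)) {
    if (typeof def.type !== 'string') problems.push(`${field}: missing type`);
    if (typeof def.description !== 'string') problems.push(`${field}: missing description`);
  }
  return problems;
}

const ok = lintSchema({
  homePageLoads: { type: 'boolean', description: 'Home page loads without errors' }
});
const bad = lintSchema({ issues: { type: 'array' } });
// ok → [], bad → ['issues: missing description']
```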

Save the template ID from the output - you’ll need it for the GitHub Action:

Created template: tmpl_daily_smoke_test_xyz

Step 2: Schedule with GitHub Actions

Create a workflow file that runs daily at 5 PM UTC (adjust timezone as needed):

# .github/workflows/daily-qa-test.yml
name: Daily QA Test

on:
  schedule:
    # Runs at 5 PM UTC every day (cron format: minute hour day month weekday)
    # For 5 PM EST (10 PM UTC), use: '0 22 * * *'
    # For 5 PM PST (1 AM UTC next day), use: '0 1 * * *'
    - cron: '0 17 * * *'
  # Also allow manual triggering for testing
  workflow_dispatch:

jobs:
  daily-test:
    runs-on: ubuntu-latest
    steps:
      - name: Run Daily QA Test
        uses: volter-ai/runhuman-action@v1
        with:
          api-key: ${{ secrets.RUNHUMAN_API_KEY }}
          url: ${{ vars.RUNHUMAN_TESTING_URL }}
          template: tmpl_daily_smoke_test_xyz  # Use your template ID

      - name: Comment on Failure
        if: failure()
        run: |
          echo "Daily QA test failed! Check the dashboard for details."

Cron Syntax Reference:

┌───────────── minute (0-59)
│ ┌───────────── hour (0-23)
│ │ ┌───────────── day of month (1-31)
│ │ │ ┌───────────── month (1-12)
│ │ │ │ ┌───────────── day of week (0-6, Sunday to Saturday)
│ │ │ │ │
0 17 * * *  # 5 PM UTC daily

Common Schedules:

| Time | Cron Expression | Description |
|------|-----------------|-------------|
| 5 PM UTC daily | `0 17 * * *` | Every day at 5 PM UTC |
| 9 AM UTC weekdays | `0 9 * * 1-5` | Monday-Friday at 9 AM UTC |
| Every 6 hours | `0 */6 * * *` | 12 AM, 6 AM, 12 PM, 6 PM UTC |
| Twice daily | `0 9,17 * * *` | 9 AM and 5 PM UTC |

Note: GitHub Actions runs on UTC time. Convert your local time to UTC:

  • EST: Add 5 hours (5 PM EST = 10 PM UTC = 0 22 * * *)
  • PST: Add 8 hours (5 PM PST = 1 AM UTC next day = 0 1 * * *)
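The conversions above generalize to any whole-hour offset. A small helper (a sketch; it assumes the minute-zero daily schedules used in this guide):

```javascript
// Cron expression for a daily run at a given local hour, where
// utcOffset is the zone's offset from UTC in hours (EST = -5, PST = -8).
function dailyCronForLocalHour(localHour, utcOffset) {
  const utcHour = ((localHour - utcOffset) % 24 + 24) % 24; // wrap past midnight
  return `0 ${utcHour} * * *`;
}

// 5 PM EST → '0 22 * * *'; 5 PM PST → '0 1 * * *' (next day UTC)
```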

Required Secrets and Variables:

Add these to your repository settings:

| Name | Type | Description |
|------|------|-------------|
| RUNHUMAN_API_KEY | Secret | Your API key from the dashboard |
| RUNHUMAN_TESTING_URL | Variable | Your staging/production URL to test |

Step 3: Review Results in Dashboard

After the scheduled test runs:

  1. Open the Dashboard

  2. View Test Details

    • Click on the job to see full details
    • Watch the video: See exactly what the tester did, recorded from their screen
    • Review the checklist: See which items passed/failed based on your output schema
    • Check the events: See all browser interactions, clicks, navigation, console logs
  3. Dashboard Features

    • Video playback: Scrub through the recording to see specific moments
    • Event timeline: See timestamps for every action taken
    • Console logs: Review any JavaScript errors or warnings
    • Network activity: See API calls and their responses
    • Screenshots: View captured screenshots at key moments
  4. Results Structure

The test results will show your schema fields as checkboxes:

✅ homePageLoads: true
✅ navigationWorks: true
✅ loginFlowWorks: true
✅ dashboardDisplays: true
❌ searchFunctional: false
✅ checkoutWorks: true
✅ mobileResponsive: true
❌ noConsoleErrors: false

Issues found:
- Search returns 500 error for special characters
- Console error: "Cannot read property 'map' of undefined" in ProductList.tsx

Additional notes:
- Login is slower than usual (3-4 seconds)
- Mobile menu animation is laggy on iPhone 12
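A result like the one above reduces easily to a pass/fail summary, e.g. for a notification message. A sketch, assuming the boolean schema fields arrive as a flat object:

```javascript
// Summarize boolean checklist fields into passed/failed counts.
// Non-boolean fields (issues, additionalNotes) are ignored.
function summarize(results) {
  const bools = Object.values(results).filter(v => typeof v === 'boolean');
  const passed = bools.filter(Boolean).length;
  return { passed, failed: bools.length - passed };
}

const daily = {
  homePageLoads: true, navigationWorks: true, loginFlowWorks: true,
  dashboardDisplays: true, searchFunctional: false, checkoutWorks: true,
  mobileResponsive: true, noConsoleErrors: false,
  issues: ['Search returns 500 error for special characters']
};
// summarize(daily) → { passed: 6, failed: 2 }
```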

Full Workflow Example

1. Create the template:

runhuman templates create "Production Health Check" \
  --project proj_abc123 \
  -d "Daily comprehensive test of production environment" \
  --duration 600 \
  --schema ./health-check-schema.json

2. Add the workflow file (.github/workflows/daily-health-check.yml):

name: Production Health Check

on:
  schedule:
    - cron: '0 17 * * *'  # 5 PM UTC
  workflow_dispatch:

jobs:
  health-check:
    runs-on: ubuntu-latest
    steps:
      - uses: volter-ai/runhuman-action@v1
        with:
          api-key: ${{ secrets.RUNHUMAN_API_KEY }}
          url: https://myapp.com
          template: tmpl_health_check_abc

3. Wait for scheduled run (or trigger manually):

# Manually trigger the workflow for testing
gh workflow run daily-health-check.yml

4. Review in dashboard: open the job in the dashboard to watch the video recording and review the checklist results (see Step 3 above).


Tips and Best Practices

Template Design:

  • Keep output schemas focused (8-12 items maximum)
  • Use boolean fields for yes/no checks
  • Include an issues array for detailed problem descriptions
  • Add an additionalNotes string field for tester observations

Scheduling:

  • Run during low-traffic periods to avoid affecting real users
  • Consider timezone differences when scheduling
  • Use workflow_dispatch to allow manual triggering for testing

Cost Management:

  • Each 10-minute test costs approximately $0.54-0.90
  • Daily tests = ~$16-27/month
  • Use shorter durations for simple smoke tests (5 minutes = $0.27-0.45)

Notifications:

  • Set up Slack/email notifications for test failures
  • Use GitHub Actions’ built-in notifications
  • Consider creating GitHub issues automatically for failed tests

Video Review:

  • Videos are essential for debugging visual issues
  • Scrub to specific timestamps using the event timeline
  • Share video links with your team for collaborative debugging

Next Steps

| Topic | Link |
|-------|------|
| Full technical specification | Reference |
| REST API integration | REST API |
| CI/CD integration | GitHub Actions |