# Sensor SDK
Embed the Runhuman sensor in your mobile or web app to capture network requests, console errors, and crashes during human QA testing. The captured telemetry is automatically included in the AI analysis, producing richer and more accurate bug reports.
The sensor has zero overhead when no test is running: it stays dormant and only activates for the duration of a test.
> **Enterprise Pro:** The Sensor SDK is available on the Enterprise Pro plan.
## Installation

```sh
npm install @runhuman/sensor
```
The sensor works with React Native (0.70+) and web apps. React (18+) is required for the provider component.
## Quick Start

Wrap your app with `<RunhumanProvider>`. That's it; no separate initialization is needed:

```tsx
import { RunhumanProvider } from '@runhuman/sensor';

export default function App() {
  return (
    <RunhumanProvider apiKey="rh_your_api_key">
      <YourApp />
    </RunhumanProvider>
  );
}
```
When a Runhuman test runs against your app, the sensor activates automatically and starts capturing.
## Configuration

Pass options as props to `<RunhumanProvider>`:

```tsx
<RunhumanProvider
  apiKey="rh_your_api_key"
  debug={true}
  platform="react-native"
>
  <YourApp />
</RunhumanProvider>
```
| Prop | Type | Default | Description |
|---|---|---|---|
| `apiKey` | `string` | required | Your Runhuman API key (`rh_...`) |
| `position` | `'top-left' \| 'top-right' \| 'bottom-left' \| 'bottom-right'` | `'bottom-right'` | Corner for the overlay UI |
| `baseUrl` | `string` | Production URL | Runhuman API server URL |
| `jobId` | `string` | — | If known upfront, start polling for this job immediately |
| `platform` | `'react-native' \| 'web' \| 'ios' \| 'android'` | `'react-native'` | Target platform |
| `flushIntervalMs` | `number` | `5000` | How often to send event batches (ms) |
| `pollIntervalMs` | `number` | `10000` | How often to check for active jobs (ms) |
| `maxBufferSize` | `number` | `1000` | Max buffered events before the oldest are dropped |
| `enableDeepLinks` | `boolean` | `true` | Auto-activate via URL scheme |
| `debug` | `boolean` | `false` | Log all events to the console |
## Activation Methods
The sensor needs to know which job to capture telemetry for. There are three ways to activate it.
### Default (Provider)

Wrap your app with `<RunhumanProvider>` and the sensor activates automatically when a test runs. No extra configuration is needed.
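Under the hood, auto-activation amounts to periodically asking the backend whether a test is running against this app (at the `pollIntervalMs` cadence). A hedged sketch of one polling step; the function shapes here are illustrative, not the SDK's internals:

```typescript
// Hypothetical sketch of one iteration of the provider's polling loop:
// ask the backend whether a test is active, and activate the sensor
// when a job appears. checkForJob stands in for a Runhuman API call.
type PollResult = { activeJobId: string | null };

async function pollOnce(
  checkForJob: () => Promise<PollResult>,
  activate: (jobId: string) => void,
): Promise<boolean> {
  const { activeJobId } = await checkForJob();
  if (activeJobId !== null) {
    activate(activeJobId);
    return true; // found a job; the caller can stop polling
  }
  return false; // no active test yet; poll again after pollIntervalMs
}
```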
### Deep Links

For apps with custom URL schemes, the sensor can auto-activate via deep link for a smoother experience. Deep links are enabled by default (`enableDeepLinks: true`).

To use deep links, configure your app's URL scheme in your React Native project (e.g., in `app.json` for Expo, or `Info.plist` / `AndroidManifest.xml` for bare React Native).
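Conceptually, deep-link activation just extracts a job ID from the incoming URL. A sketch of that parsing step; the `myapp` scheme and the `jobId` parameter name are assumptions for illustration, not documented values:

```typescript
// Illustrative deep-link parser. The scheme ("myapp") and query
// parameter name ("jobId") are hypothetical; use whatever URL
// scheme your app actually registers.
function parseJobIdFromDeepLink(url: string): string | null {
  try {
    // The WHATWG URL parser handles custom schemes like myapp://
    const parsed = new URL(url);
    return parsed.searchParams.get('jobId');
  } catch {
    return null; // not a valid URL; ignore it
  }
}
```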
### Programmatic

For custom flows, activate the sensor directly:

```ts
const sensor = Runhuman.getInstance();
sensor.activate('job-id-here');
```

To end a session:

```ts
await sensor.deactivate();
```
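When driving activation yourself, it is easy to leak a session if the code between `activate` and `deactivate` throws. A hedged sketch of a wrapper that guarantees deactivation; `SensorLike` and `withSensorSession` are illustrative helpers, not SDK exports:

```typescript
// Hypothetical helper: run work inside an activated sensor session,
// guaranteeing deactivate() runs even if the work throws.
// SensorLike mirrors the activate/deactivate calls shown above.
interface SensorLike {
  activate(jobId: string): void;
  deactivate(): Promise<void>;
}

async function withSensorSession<T>(
  sensor: SensorLike,
  jobId: string,
  work: () => Promise<T>,
): Promise<T> {
  sensor.activate(jobId);
  try {
    return await work();
  } finally {
    await sensor.deactivate(); // always end the session and flush
  }
}
```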
## Provider Component

`<RunhumanProvider>` wraps your app, handles the sensor lifecycle, and provides the activation UI and status indicator.

```tsx
import { RunhumanProvider } from '@runhuman/sensor';

<RunhumanProvider apiKey="rh_your_api_key" position="bottom-right">
  <App />
</RunhumanProvider>
```
What it shows in each state:
| State | UI |
|---|---|
| Idle (no active test) | Floating code entry panel (dismissible to a small “RH” button) |
| Active (capturing) | Small green pulsing dot |
| Ending (flushing events) | Amber dot |
The overlay is invisible when minimized — it won’t interfere with your app’s UI during normal use.
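If you build a custom indicator instead of the overlay, the state-to-UI mapping above reduces to a small lookup. The colors and labels below approximate the table; they are not SDK exports:

```typescript
// Illustrative mapping of overlay states to indicator styling,
// following the table above. Colors/labels are approximations.
type SensorUiState = 'idle' | 'active' | 'ending';

function indicatorFor(state: SensorUiState): { color: string; label: string } {
  switch (state) {
    case 'idle':
      return { color: 'gray', label: 'Enter test code' };
    case 'active':
      return { color: 'green', label: 'Capturing' };
    case 'ending':
      return { color: 'amber', label: 'Flushing events' };
  }
}
```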
## React Hook: `useSensorState`

Build custom activation UI instead of using the overlay:

```tsx
import { View, Text } from 'react-native';
import { useSensorState } from '@runhuman/sensor';

function MyStatusBar() {
  const { state, activeJobId, sessionId } = useSensorState();
  if (state === 'idle') return null;
  return (
    // styles.statusBar is assumed to be defined elsewhere in your app
    <View style={styles.statusBar}>
      <Text>Sensor: {state}</Text>
      {activeJobId && <Text>Job: {activeJobId}</Text>}
    </View>
  );
}
```
`SensorState` fields:

| Field | Type | Description |
|---|---|---|
| `state` | `'idle' \| 'polling' \| 'active' \| 'ending'` | Current sensor state |
| `activeJobId` | `string \| null` | The job being tested |
| `sessionId` | `string \| null` | Current telemetry session ID |
The hook is safe to use before `Runhuman.init()` is called; it returns the `idle` state until the sensor is initialized.
## What Gets Captured
When active, the sensor captures:
| Category | Details |
|---|---|
| Network requests | URLs, HTTP methods, status codes, timing, response sizes |
| Console errors | Error and warning messages |
| JavaScript crashes | Unhandled exceptions with stack traces |
| UI events | Taps, navigation, form interactions |
All telemetry is session-scoped — the sensor only captures while a test is active. No data is collected or sent when idle.
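Network capture of the kind described above is typically done by wrapping the app's `fetch` so each request's URL, method, status, and timing are recorded as it completes. This sketch shows the idea only; it is not the SDK's actual implementation, and `FetchLike`/`NetworkRecord` are illustrative names:

```typescript
// Illustrative fetch wrapper for network telemetry. Not the SDK's
// implementation; FetchLike is a simplified stand-in for fetch.
type NetworkRecord = { url: string; method: string; status: number; durationMs: number };
type FetchLike = (url: string, init?: { method?: string }) => Promise<{ status: number }>;

function instrumentFetch(baseFetch: FetchLike, record: (r: NetworkRecord) => void): FetchLike {
  return async (url, init) => {
    const started = Date.now();
    const response = await baseFetch(url, init); // forward the request unchanged
    record({
      url,
      method: init?.method ?? 'GET',
      status: response.status,
      durationMs: Date.now() - started,
    });
    return response;
  };
}
```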
Events are buffered locally and flushed to the Runhuman API in batches (every 5 seconds by default). If the network is unavailable, events queue locally until connectivity resumes.
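The batching-with-offline-queue behavior can be sketched as a flush step that re-queues its batch when the send fails, so nothing is lost between attempts. The function below is an illustration of that pattern under those assumptions, not the SDK's internals:

```typescript
// Illustrative batch flush with an offline queue: take everything
// queued, try to send it, and put it back if the network fails.
// `send` stands in for a POST to the Runhuman API.
async function flushBatch<E>(
  queue: E[],
  send: (batch: E[]) => Promise<void>,
): Promise<boolean> {
  if (queue.length === 0) return true;
  const batch = queue.splice(0, queue.length); // drain the queue
  try {
    await send(batch);
    return true;
  } catch {
    queue.unshift(...batch); // send failed: requeue for the next attempt
    return false;
  }
}
```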
## Debug Mode

Enable debug logging during development to verify the sensor is working:

```tsx
<RunhumanProvider apiKey="rh_your_api_key" debug>
  <YourApp />
</RunhumanProvider>
```
With debug enabled, the sensor logs every captured event to the console, prefixed with `[Runhuman]`. This is useful for:
- Verifying interceptors are installed correctly
- Checking that events are being captured
- Debugging activation issues
Turn off debug mode in production builds.
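The debug-logging behavior amounts to a prefixed, flag-gated console echo. A sketch of that idea; `makeDebugLogger` is a hypothetical helper, not an SDK export, and gating on something like `__DEV__` is one way to keep it out of production builds:

```typescript
// Illustrative flag-gated debug logger with the documented
// [Runhuman] prefix. Hypothetical helper, not an SDK export.
function makeDebugLogger(enabled: boolean) {
  return (eventType: string, detail: unknown): string | null => {
    if (!enabled) return null; // debug off: log nothing
    const line = `[Runhuman] ${eventType}: ${JSON.stringify(detail)}`;
    console.log(line);
    return line;
  };
}
```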
## Full Example

A complete React Native app with the sensor integrated:

```tsx
import React from 'react';
import { View, Text, StyleSheet } from 'react-native';
import { RunhumanProvider, useSensorState } from '@runhuman/sensor';

function StatusBanner() {
  const { state } = useSensorState();
  if (state !== 'active') return null;
  return (
    <View style={styles.banner}>
      <Text style={styles.bannerText}>QA test in progress</Text>
    </View>
  );
}

export default function App() {
  return (
    <RunhumanProvider apiKey="rh_your_api_key" debug={__DEV__} position="bottom-right">
      <StatusBanner />
      <View style={styles.app}>
        <Text>My App</Text>
      </View>
    </RunhumanProvider>
  );
}

const styles = StyleSheet.create({
  banner: {
    backgroundColor: '#22c55e',
    padding: 8,
    alignItems: 'center',
  },
  bannerText: {
    color: '#fff',
    fontWeight: '600',
    fontSize: 12,
  },
  app: {
    flex: 1,
    justifyContent: 'center',
    alignItems: 'center',
  },
});
```
## Next Steps
| Want to… | Read |
|---|---|
| Get your API key | Setup |
| Create tests via the API | REST API |
| Use with AI coding agents | Agent Skills |
| Look up technical details | Reference |