When your API goes down at 2 AM, you do not want to find out from an angry customer email the next morning. Synthetic monitoring solves this problem by continuously testing your services from the outside, catching failures before real users are affected. It is one of the most effective ways to maintain API reliability, and understanding how it works is the first step toward building a robust monitoring strategy.
This guide covers what synthetic monitoring is, how it works under the hood, the different types of synthetic tests, and where this approach falls short. If you are already familiar with the basics and want to compare tools, jump to our guide to the best synthetic monitoring tools.
What Is Synthetic Monitoring?
Synthetic monitoring is a proactive testing method that uses automated, scripted requests to check the availability and performance of your websites, APIs, and applications at regular intervals. The word "synthetic" means these tests are artificial -- they are generated by a monitoring service rather than by real users. The monitoring platform sends requests from one or more external locations, measures the response, and alerts you when something goes wrong.
Unlike real user monitoring (RUM), which passively collects data from actual visitors, synthetic monitoring actively generates traffic on a fixed schedule. This means you get performance data 24 hours a day, 7 days a week, regardless of whether anyone is using your application. A synthetic monitor running every minute will detect an outage within 60 seconds, even if it happens at 3 AM on a holiday when your traffic is zero.
The concept is straightforward: instead of waiting for problems to affect users, you create automated tests that continuously verify your service is working. When a test fails or response time exceeds a threshold, the system sends an alert so your team can respond before the impact spreads.
How Synthetic Monitoring Works Step by Step
Understanding the mechanics of synthetic monitoring helps you configure it correctly and interpret results accurately. Here is how a typical synthetic monitoring workflow operates:
- Define the test. You specify what to check: an HTTP endpoint, a multi-step API transaction, or a browser-based user flow. You configure the URL, expected status code, request headers, authentication credentials, and any assertions (for example, "response body must contain status: ok").
- Select check locations. You choose one or more geographic locations from which the monitoring service will execute the test. Running from multiple regions helps you detect location-specific issues like CDN misconfigurations or DNS propagation failures.
- Set the check interval. You define how often the test runs -- every 30 seconds, 1 minute, 5 minutes, or another interval. Shorter intervals detect problems faster but cost more and generate more data.
- The monitoring service executes the test. At each interval, the service sends the configured request from the selected locations. It records timing data: DNS resolution, TCP connection, TLS handshake, time to first byte (TTFB), and total response time.
- Results are evaluated against thresholds. The service compares the response against your assertions. Did the endpoint return HTTP 200? Was the response time under 500ms? Did the body contain the expected content? If any assertion fails, the check is marked as failed.
- Alerts fire on failure. When a check fails, the monitoring platform sends notifications through your configured channels: email, Slack, SMS, PagerDuty, webhooks, or other integrations. Most platforms support escalation policies and on-call schedules.
- Historical data is stored. Every check result is logged, creating a historical record of availability and performance. This data feeds into dashboards, SLA reports, and trend analysis.
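The evaluation step above (step 5) is easy to make concrete. The sketch below is a minimal illustration of how a check definition with assertions might be compared against a response; it is not any particular vendor's implementation, and the endpoint URL and field names are hypothetical.

```python
def evaluate_check(check, status, elapsed_ms, body):
    """Compare one response against a check's assertions.

    Returns (passed, failures) so the caller can log results or alert.
    """
    failures = []
    if status != check["expected_status"]:
        failures.append(f"status {status} != {check['expected_status']}")
    if elapsed_ms > check["max_response_ms"]:
        failures.append(f"{elapsed_ms}ms over {check['max_response_ms']}ms budget")
    if check.get("body_contains") and check["body_contains"] not in body:
        failures.append(f"body missing {check['body_contains']!r}")
    return (not failures, failures)

# Hypothetical check definition, mirroring the workflow described above.
check = {
    "url": "https://api.example.com/health",
    "expected_status": 200,
    "max_response_ms": 500,
    "body_contains": "status: ok",
}

passed, failures = evaluate_check(check, 200, 142, '{"status: ok"}')
```

Real platforms layer retries, multi-location confirmation, and alert routing on top, but the core of every check is this comparison of observed response against declared expectations.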
Types of Synthetic Monitoring
Not all synthetic tests are the same. Test types vary significantly in complexity and scope, and choosing the right type depends on what you need to validate.
HTTP Ping Checks
The simplest form of synthetic monitoring. An HTTP ping check sends a GET or HEAD request to a URL and verifies that the server responds with an expected status code (usually 200). It measures basic availability and response time. This is sufficient for most API health checks and is the type of monitoring most teams start with.
Use case: Confirming that your API endpoints, landing pages, and health check routes are reachable and responding.
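An HTTP ping check needs nothing beyond the standard library. The sketch below spins up a throwaway local server to stand in for your API, so the example runs anywhere; in practice you would point `ping_check` at your real endpoint and run it on a schedule.

```python
import http.server
import threading
import time
import urllib.request

# Stand-in server so the example is self-contained; replace with your real URL.
class HealthHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # keep the example quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), HealthHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/health"

def ping_check(url, expected_status=200, timeout=5.0):
    """Send one GET and report (up, status, elapsed_ms)."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            elapsed_ms = (time.monotonic() - start) * 1000
            return (resp.status == expected_status, resp.status, elapsed_ms)
    except Exception:
        # Timeouts, connection refusals, and DNS errors all count as down.
        return (False, None, (time.monotonic() - start) * 1000)

up, status, elapsed_ms = ping_check(url)
server.shutdown()
```

Monitoring services do essentially this, plus finer-grained timing (DNS, TCP, TLS, TTFB) and execution from multiple regions.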
Browser Script Checks
Browser-based synthetic checks use a headless browser (typically Chromium) to load a full web page and execute JavaScript. They measure frontend performance metrics like page load time, largest contentful paint (LCP), cumulative layout shift (CLS), and time to interactive. These tests can also validate that specific elements render correctly on the page.
Use case: Validating that your web application loads correctly and meets Core Web Vitals thresholds from multiple locations.
API Transaction Checks
API transaction checks go beyond single-request pings. They execute a sequence of API calls that represent a business workflow: authenticate, create a resource, read it back, update it, and delete it. Each step validates the response before proceeding to the next. If any step fails, the entire transaction is marked as failed.
Use case: Validating that a complete API workflow (signup, payment, data retrieval) functions end to end.
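One way to structure a transaction check is as an ordered list of steps that share a context, where any failed assertion aborts the run and fails the whole check. This is a hedged sketch of that pattern: the step functions below simulate an API with an in-memory dictionary, whereas a real check would issue HTTP calls at each step.

```python
def run_transaction(steps):
    """Execute steps in order, threading a shared context through them.

    Each step is a (name, fn) pair; fn raises AssertionError to fail.
    Returns (passed, failed_step) so the workflow is pass/fail as a unit.
    """
    ctx = {}
    for name, fn in steps:
        try:
            fn(ctx)
        except AssertionError:
            return (False, name)
    return (True, None)

# Simulated create/read/delete workflow (stand-ins for real API calls).
store = {}

def create(ctx):
    store["42"] = {"name": "widget"}
    ctx["resource_id"] = "42"   # later steps reuse the created ID

def read_back(ctx):
    resource = store.get(ctx["resource_id"])
    assert resource is not None and resource["name"] == "widget"

def delete(ctx):
    store.pop(ctx["resource_id"])
    assert ctx["resource_id"] not in store

passed, failed_step = run_transaction([
    ("create", create), ("read_back", read_back), ("delete", delete),
])
```

The shared context is what distinguishes a transaction from a bundle of independent pings: the ID returned by the create step is what the read and delete steps exercise.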
Multi-Step Browser Checks
The most complex type of synthetic monitoring. Multi-step browser checks script an entire user journey: navigate to a page, fill in a form, click a button, wait for a response, navigate to another page, and verify the result. These are typically written using frameworks like Playwright or Puppeteer and executed in headless browsers at each check interval.
Use case: Validating critical user flows like checkout, account creation, or search functionality where multiple pages and interactions are involved.
Synthetic Transaction Monitoring Explained
Synthetic transaction monitoring deserves special attention because it represents the most valuable form of synthetic testing for business-critical APIs. While a simple ping tells you whether an endpoint is alive, a synthetic transaction tells you whether your service actually works.
Consider an e-commerce API. A ping check on /api/health might return 200 even when the payment processing endpoint is broken. A synthetic transaction that executes the full purchase flow -- add to cart, apply discount code, submit payment, verify confirmation -- will catch that failure because it tests the actual business logic, not just the server's ability to respond.
Synthetic transaction monitoring is particularly valuable for:
- Payment flows -- Verifying that the complete payment pipeline works, from cart to confirmation, including third-party payment gateway integration.
- Authentication chains -- Testing OAuth flows, token refresh, and session management across multiple API calls.
- Data pipeline validation -- Confirming that data written through one endpoint is correctly retrievable through another, validating consistency across your service.
- Third-party integration health -- When your workflow depends on external APIs (shipping calculators, tax services, email providers), transaction monitoring catches failures in those dependencies.
The trade-off is complexity. Synthetic transactions require more setup, more maintenance (scripts break when APIs change), and more compute resources than simple pings. But for critical business workflows, the investment pays for itself the first time it catches a failure that a simple health check would have missed.
Limitations of Synthetic Monitoring
Synthetic monitoring is powerful, but it is not a complete solution on its own. Understanding its limitations helps you build a monitoring strategy that covers all the gaps.
- Only tests predefined paths. Synthetic monitors only check what you explicitly configure. If you have 500 API endpoints and only monitor 20, failures in the other 480 go undetected. You must continuously update your test suite as your API evolves.
- Does not reflect real user conditions. Synthetic tests run from datacenter environments with fast, stable connections. They cannot replicate the slow 3G connection a user has on a crowded train, the outdated browser on a corporate machine, or the unusual request patterns generated by real user behavior.
- No behavioral data. Synthetic monitoring tells you whether your service works but not how users actually interact with it. It cannot reveal which endpoints receive the most traffic, which workflows users abandon, or which error messages users encounter most frequently.
- Script maintenance burden. Complex synthetic tests (multi-step transactions, browser scripts) break when your application changes. A redesigned checkout page, a renamed API field, or a changed authentication flow will cause test failures that require script updates. This maintenance cost scales with the number of tests.
- False positives from network issues. Synthetic checks can fail due to transient network problems between the monitoring service and your server. Most platforms mitigate this with retry logic and multi-location confirmation, but false alerts still occur.
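The multi-location confirmation mentioned in the last point is simple to express: treat the endpoint as down only when a quorum of probe locations agree, so a transient network blip at one probe does not page anyone. A minimal sketch of that rule, with illustrative region names:

```python
def confirmed_failure(results_by_location, quorum=2):
    """Alert only if at least `quorum` locations report a failed check.

    results_by_location maps a location name to True (check passed)
    or False (check failed).
    """
    failures = sum(1 for passed in results_by_location.values() if not passed)
    return failures >= quorum

# One probe failing is treated as network noise, not an outage.
noise = confirmed_failure({"us-east": True, "eu-west": False, "ap-south": True})

# Two independent probes failing is a confirmed incident.
outage = confirmed_failure({"us-east": False, "eu-west": False, "ap-south": True})
```

The quorum is a tunable trade-off: a higher threshold suppresses more false alerts but delays detection of genuinely regional outages.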
Alternatives and Complementary Approaches
Synthetic monitoring works best when combined with other monitoring approaches that fill in its blind spots.
Internal Health Checks
Instead of relying solely on external synthetic tests, you can build health check endpoints into your application that report on internal subsystem status: database connectivity, cache availability, queue depth, memory usage. Tools like Nurbak Watch can monitor these internal health endpoints from multiple global regions, combining the benefits of internal awareness with external validation. For a detailed walkthrough, see our endpoint monitoring guide.
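An internal health endpoint is more useful to an external synthetic check if it reports per-subsystem status in its body rather than a bare 200. A minimal sketch of the aggregation logic; the subsystem probes here are stubs standing in for real database, cache, and queue connectivity checks.

```python
import json

def build_health_report(probes):
    """Run each subsystem probe and aggregate into one report.

    probes maps a subsystem name to a zero-arg callable returning
    True/False. Overall status is "ok" only if every subsystem passes.
    """
    results = {name: ("ok" if probe() else "failed")
               for name, probe in probes.items()}
    overall = "ok" if all(v == "ok" for v in results.values()) else "degraded"
    return {"status": overall, "subsystems": results}

# Stub probes; real ones would open a connection or run a cheap query.
report = build_health_report({
    "database": lambda: True,
    "cache": lambda: True,
    "queue": lambda: False,   # simulate a backed-up queue
})
payload = json.dumps(report)  # what the /health endpoint would return
```

An external monitor can then assert on the body ("status must be ok"), catching a degraded dependency even while the HTTP status stays 200.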
Real User Monitoring (RUM)
RUM collects performance data from actual users, capturing the full range of devices, browsers, networks, and geographic conditions that synthetic tests cannot replicate. Where synthetic monitoring is proactive and controlled, RUM is passive and representative. Most mature teams use both. For a detailed comparison, read our guide on synthetic monitoring vs real user monitoring.
Application Performance Monitoring (APM)
APM tools instrument your application code to trace requests through your backend, identify slow database queries, measure function execution time, and map service dependencies. APM provides the "why" behind performance problems that synthetic monitoring detects. If a synthetic check shows increased latency, APM traces help you pinpoint whether the bottleneck is in your code, your database, or a third-party service.
Log-Based Monitoring
Structured logging combined with log aggregation tools (ELK stack, Grafana Loki, Datadog Logs) lets you detect errors and anomalies from within your application. Log-based alerts can catch issues that synthetic tests miss, like intermittent errors that only affect a fraction of requests or business logic failures that still return HTTP 200.
Getting Started with Synthetic Monitoring
If you are new to synthetic monitoring, start with the simplest approach and expand as your needs grow:
- Identify your critical endpoints. List the 5 to 10 API endpoints or pages that, if they go down, would have the biggest impact on your business. Your health check route, authentication endpoint, and primary data retrieval endpoints are good starting points.
- Set up basic HTTP checks. Configure a monitoring tool to send GET requests to each critical endpoint every 1 to 5 minutes. Set assertions for expected status codes and maximum response times.
- Enable multi-region checks. Run your checks from at least 2 to 3 geographic locations to catch region-specific outages and reduce false positives.
- Configure alerts. Route failure notifications to Slack, email, or your on-call system. Set up escalation policies so failures that persist beyond a few minutes reach the right people.
- Add transaction tests gradually. Once your basic checks are stable, add multi-step synthetic transactions for your most critical business workflows: authentication, payment, and core data operations.
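The escalation idea in step 4, alerting only when a failure persists, can be implemented as a small piece of state per check: count consecutive failures and fire once a threshold is crossed. A sketch of that pattern (the class name and threshold are illustrative, not a specific tool's API):

```python
class FailureGate:
    """Suppress alerts until a check fails `threshold` times in a row."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.consecutive = 0

    def record(self, passed):
        """Record one check result; return True when an alert should fire."""
        if passed:
            self.consecutive = 0   # any success resets the streak
            return False
        self.consecutive += 1
        return self.consecutive == self.threshold  # fire exactly once

gate = FailureGate(threshold=3)
alerts = [gate.record(passed) for passed in [True, False, False, False, False]]
```

With a 1-minute interval and a threshold of 3, this pages only for outages lasting roughly three minutes or more, trading a little detection latency for far fewer false alarms.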
For a comparison of the best tools available to implement this approach, check our synthetic monitoring tools guide.
Frequently Asked Questions
What is synthetic monitoring in simple terms?
Synthetic monitoring is a method of testing your website or API by sending automated, scripted requests at regular intervals from external locations. Instead of waiting for real users to encounter a problem, synthetic monitors proactively check whether your service is available and performing within acceptable thresholds. Think of it as a robot that visits your site every few minutes and reports back on what it finds.
What is the difference between synthetic monitoring and synthetic transaction monitoring?
Basic synthetic monitoring typically involves single-step checks like pinging an endpoint to confirm it returns HTTP 200. Synthetic transaction monitoring goes further by scripting multi-step workflows -- for example, logging in, adding an item to a cart, and completing checkout. Transaction monitoring validates that an entire business process works end to end, not just that individual endpoints are reachable.
What are the limitations of synthetic monitoring?
Synthetic monitoring only tests predefined paths from predefined locations, so it cannot capture the full range of real-world conditions your users experience. It misses issues that only appear on specific devices, browsers, or network conditions. It also generates no data about actual user behavior, so it cannot tell you which pages are slow for real visitors or which workflows users actually follow.
How often should synthetic monitoring checks run?
For production APIs and critical endpoints, 1-minute check intervals are recommended. This keeps worst-case detection time to roughly a minute (plus the failing check's timeout). For staging environments or non-critical services, 5-minute intervals are usually sufficient. Some enterprise tools offer 30-second or even 10-second intervals for mission-critical services, though this increases cost.
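The arithmetic behind interval choice is worth making explicit: with an interval of T seconds and a per-request timeout of t seconds, the worst case is that an outage begins just after a passing check, so detection takes about T + t. A quick sketch, assuming these simple definitions:

```python
def worst_case_detection_seconds(interval_s, timeout_s):
    """Worst case: the outage starts right after a passing check, so you
    wait one full interval plus the failing check's timeout."""
    return interval_s + timeout_s

def checks_per_month(interval_s, locations):
    """Rough monthly check volume (30-day month) for cost estimates."""
    return (30 * 24 * 3600 // interval_s) * locations

one_min = worst_case_detection_seconds(60, 10)    # 70 seconds
five_min = worst_case_detection_seconds(300, 10)  # 310 seconds
volume = checks_per_month(60, 3)                  # 129,600 checks/month
```

The volume figure is why interval choice affects cost: moving the same three-location monitor from 5-minute to 1-minute checks multiplies the monthly check count by five.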