7-day risk-free trial

Performance & Uptime Monitoring for APIs and Websites

Detect outages, prove SLA violations, debug infrastructure issues, and optimize performance. Set up 24/7 monitoring with alerts to Slack, PagerDuty, and OpsGenie.

Integrated alerting
All HTTP methods supported
Custom headers & request body

Everything you need to monitor API performance

From one-off latency checks to continuous uptime monitoring with multi-channel alerting.

24/7 Uptime Monitoring

Monitor your endpoints around the clock and receive instant alerts via email, Slack, OpsGenie, or PagerDuty the moment issues arise.

Detailed Timing Breakdown

Break requests into DNS lookup, TCP connection, TLS handshake, TTFB, and content transfer. Pinpoint bottlenecks instantly.

Custom HTTP Requests

Test all HTTP methods with custom headers and request body — ideal for authenticated APIs and private endpoints.

Shareable Performance Reports

Generate public report links for any monitored endpoint — covering 24h, 7d, 30d, or 90d windows. Share latency trends and uptime data with clients or stakeholders, no login required.

For developers

Build on Top of Your Monitoring Data

Access latency and uptime data programmatically via REST API, or query it directly from AI tools via MCP.

REST API

Query your projects, metrics, and alerts programmatically. Authenticate with a Bearer token and pipe monitoring data into your own dashboards, scripts, or CI pipelines.

# Fetch latency metrics
curl -H "Authorization: Bearer $TOKEN" \
  https://www.latencytest.me/api/v1/projects
View API docs
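For example, a script or CI step might fetch the project list and log the HTTP status before piping the data onward. This is a minimal sketch only: it assumes $TOKEN holds a valid Bearer token for your account, and projects.json is just a local output file.

```shell
# Minimal sketch: call the REST API from a script or CI step.
# Assumes $TOKEN holds a valid Bearer token.
API="https://www.latencytest.me/api/v1/projects"

# -w '%{http_code}' prints only the status code; the JSON body is
# written to projects.json. '|| true' keeps a transient network error
# from aborting the surrounding script.
status=$(curl -s -o projects.json -w '%{http_code}' \
  -H "Authorization: Bearer $TOKEN" \
  "$API") || true

echo "GET $API -> HTTP ${status:-000}"
```

From there, the body in projects.json can be fed to jq, a dashboard, or any other tooling.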

MCP Server

Query your monitoring data directly from Claude, Cursor, or any MCP-compatible AI tool. Ask natural language questions about endpoint performance without leaving your editor.

# claude_desktop_config.json
"latencytest": {
  "command": "npx latencytest-mcp"
}
View MCP server

How It Works

Performance insights in under 10 seconds.

Enter Your Endpoint URL

Paste any API endpoint or website URL. Choose your HTTP method and optionally add authentication headers or a JSON request body.

https://api.example.com/v1/users
Method: GET
Authorization: Bearer …

Get a Per-Phase Latency Waterfall

See each stage of the request lifecycle broken out individually. Instantly identify whether latency comes from DNS, TLS negotiation, or your application.

DNS Lookup: 12 ms
TCP Connection: 28 ms
TLS Handshake: 45 ms
Server Processing: 118 ms
Content Transfer: 23 ms

Monitor Uptime & Get Alerts

With a Pro plan, schedule automated checks and configure smart alerts. Get notified the moment your endpoint goes down or latency exceeds your threshold.

api.example.com/v1/users: Healthy
Uptime: 99.9% · Avg latency: 143 ms · Last check: 2 min ago

Share Performance Reports

Generate a public report link for any monitored endpoint. Choose a 24h, 7d, 30d, or 90d window and share latency trends with clients or teammates — no account required to view.

Shareable report link: latencytest.me/reports/31f119…f770de
Windows: 24h · 7d · 30d · 90d

Anyone with the link can view — no login needed.

Built for Engineering Teams

From solo devs debugging slow APIs to SRE teams managing SLAs.

API Performance Debugging

Measure response times across all request phases. Identify whether slowdowns originate in DNS resolution, TLS negotiation, or your application code — without guessing.

CDN & Infrastructure Validation

Verify CDN effectiveness by comparing content transfer times before and after integration. Ensure your infrastructure changes actually improve performance.

SRE & On-Call Alerting

Wire up PagerDuty or OpsGenie alerts for latency threshold breaches or consecutive failures. Auto-resolving incidents reduce alert fatigue for on-call teams.

SLA Compliance Tracking

Track 6 months of historical latency and uptime data. Generate evidence for SLA compliance reports and detect performance trends before they become incidents.

Trusted by Developers & SRE Teams

Used daily by engineers who care about API performance.

The waterfall breakdown immediately showed our DNS resolution was taking 180ms — three times our TCP connection time. Switched providers and cut total latency by 40%.
Eric Briggs
Senior Backend Engineer · Fintech startup
We run a latency check after every deploy. The PagerDuty integration means our on-call rotation gets notified before users even notice degradation.
Natalie Sexton
SRE Lead · Series B SaaS
Found a TLS misconfiguration adding 200ms of handshake overhead that had been invisible for months. Per-phase breakdowns are invaluable for this kind of debugging.
Lauren Carter
Platform Engineer · Developer tools company
Simple Pricing
$79/year (7-day risk-free trial)
$225 lifetime (pay once, use forever)

Comprehensive Monitoring

15 endpoints/URLs

Monitor performance and uptime for multiple endpoints

Advanced Features

Support for all HTTP methods, custom headers, and request body data

Up to 788,400 Requests / month

Generous quota for your monitoring needs

6 Month History

Extended data retention for trend analysis

Alert Integrations

Email
PagerDuty
OpsGenie
Slack

Analytics

Detailed Metrics

Comprehensive breakdowns of response times, latencies, and status codes

Flexible Reporting

View trends and patterns with hourly to monthly reporting

Need a different integration?

We're constantly adding new integrations. Let us know what you need at [email protected]

Understanding HTTP Latency Metrics

latencytest.me breaks HTTP requests into five distinct phases so you can identify exactly where delays occur. All HTTP methods are supported — including GET, POST, PUT, PATCH, and DELETE — with custom headers such as Content-Type and Authorization.

Phase · Duration
DNS Lookup: 12 ms
TCP Connection: 28 ms
TLS Handshake: 45 ms
Server Processing: 118 ms
Content Transfer: 23 ms
Total: 226 ms

DNS Lookup

DNS Lookup represents the time taken to resolve the domain name to an IP address — the first step in any HTTP request. High DNS latency typically indicates a slow DNS provider. Compare providers at dnsperf.com and consider switching to a fast provider such as Cloudflare.

TCP Connection

TCP Connection is the time for the three-way handshake (SYN → SYN-ACK → ACK). High TCP time often indicates network congestion or geographic distance from the server. Our servers are located in Europe — endpoints closer to Europe will show lower TCP latency.

TLS Handshake

TLS Handshake measures secure connection setup for HTTPS requests (not shown for HTTP). High TLS latency commonly indicates SSL configuration issues. Inspect cipher performance with openssl speed and review the MaxClients value in your web server configuration.

Server Processing (TTFB)

Server Processing is the time your server takes to process the request and begin sending a response — also known as Time to First Byte (TTFB). This includes application code execution, database queries, and response generation. High TTFB indicates application-level bottlenecks; use a profiler to identify the cause.

Content Transfer

Content Transfer is the time to download the full response body. It depends on response size and geographic distance. Since our servers are in Europe, endpoints further away show higher latency here. CDN services such as Cloudflare or Amazon CloudFront reduce this significantly by serving content from edge locations.

Total Latency Time

The sum of all phases above represents the total HTTP request/response cycle time — what your users actually experience as the response time of your application. Optimizing each phase compounds into significant overall improvements.
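As a local cross-check, curl can approximate the same five phases with its --write-out timers. A sketch, using the placeholder endpoint from earlier on this page; note that curl reports cumulative timestamps, so each value includes all of the phases before it.

```shell
# Print curl's per-phase timers for a single request. Values are
# cumulative seconds since the start of the request, not per-phase
# durations (e.g. time_connect includes the DNS lookup).
FORMAT='DNS lookup:      %{time_namelookup}s
TCP connect:     %{time_connect}s
TLS handshake:   %{time_appconnect}s
First byte:      %{time_starttransfer}s
Total:           %{time_total}s
'
# '|| true' so an unreachable placeholder host does not abort the script.
curl -s -o /dev/null -w "$FORMAT" https://api.example.com/v1/users || true
```

Subtracting each timer from the next recovers per-phase durations comparable to the breakdown above.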

Frequently Asked Questions

Is the latency test free to use?

Yes — the one-off latency test on this page is completely free. Measure DNS, TCP, TLS, TTFB, and content transfer times for any URL instantly. However, to set up continuous uptime monitoring with alerts, you'll need to create an account.

Can I monitor endpoint uptime automatically?

Yes, with a Pro plan. Automated monitoring checks your endpoints on a schedule and sends alerts via email, Slack, PagerDuty, or OpsGenie when they go down or exceed latency thresholds.

What HTTP methods and features are supported?

All standard HTTP methods are supported: GET, POST, PUT, PATCH, and DELETE. You can set custom headers (useful for Authorization tokens or Content-Type), provide a JSON request body, and configure redirect-following behavior.
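As an illustration, here is the equivalent request expressed as a curl command. A sketch only: the URL, $TOKEN, and the JSON payload are placeholders, and -L mirrors the configurable redirect-following behavior.

```shell
# Sketch of a configurable check: a POST with custom headers and a
# JSON request body. URL, $TOKEN, and the payload are placeholders.
URL="https://api.example.com/v1/users"

curl -s -X POST "$URL" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"name": "test"}' \
  -L \
  -o /dev/null || true   # '|| true': the placeholder host may not resolve
```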

How are alerts triggered and resolved?

Alerts fire when a metric (e.g. total latency, DNS time, or consecutive errors) exceeds your configured threshold within a time window. They automatically resolve — no manual intervention needed — when metrics return to normal.

Where are your monitoring servers located?

Our servers are currently located in Europe. Endpoints closer to Europe will show lower TCP and content transfer latencies. Account for this baseline when interpreting results for Asia-Pacific or Americas-hosted services.

Can I share performance reports with clients or stakeholders?

Yes — Pro plan users can generate a public report link for any monitored endpoint. Reports show latency trends and uptime data across 24h, 7d, 30d, or 90d windows. Anyone with the link can view the report without logging in, making it easy to share proof-of-performance with clients or your team.

Start Monitoring Your APIs Today

Get 24/7 uptime monitoring, per-phase latency alerts, and 6 months of historical data.