A Histogram is a type of metric that:
· Tracks the count of observations
· Tracks the sum of all observations
· Groups observations into buckets (ranges)
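The three pieces can be illustrated with a minimal pure-Java accumulator. This is a conceptual sketch only, not how the Prometheus client is implemented; the class name `MiniHistogram` is invented for illustration:

```java
public class MiniHistogram {
    final double[] upperBounds;   // bucket boundaries, ascending, last one is +Inf
    final long[] bucketCounts;    // cumulative count per bucket
    long count;                   // total number of observations
    double sum;                   // sum of all observed values

    MiniHistogram(double... upperBounds) {
        this.upperBounds = upperBounds;
        this.bucketCounts = new long[upperBounds.length];
    }

    void observe(double value) {
        count++;
        sum += value;
        // Every bucket whose upper bound is >= the value is incremented,
        // which is what makes histogram buckets cumulative.
        for (int i = 0; i < upperBounds.length; i++) {
            if (value <= upperBounds[i]) {
                bucketCounts[i]++;
            }
        }
    }

    public static void main(String[] args) {
        MiniHistogram h = new MiniHistogram(0.1, 0.5, 1, Double.POSITIVE_INFINITY);
        h.observe(0.2);
        h.observe(0.7);
        System.out.println("count=" + h.count + " sum=" + h.sum);
    }
}
```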
Default Buckets Example
If you don’t specify custom buckets, the Java client uses these default bucket boundaries (in seconds):
[0.005, 0.01, 0.025, 0.05, 0.075, 0.1, 0.25, 0.5, 0.75, 1, 2.5, 5, 7.5, 10, +Inf]
HistogramDefaultBucketDemo.java
package com.sample.app;

import io.prometheus.client.Histogram;
import io.prometheus.client.exporter.HTTPServer;
import io.prometheus.client.hotspot.DefaultExports;

import java.io.IOException;
import java.util.Random;

public class HistogramDefaultBucketDemo {

    // Define the Histogram metric with the default buckets
    static final Histogram requestDuration = Histogram.build()
            .name("http_request_duration_seconds")
            .help("Duration of HTTP requests in seconds")
            .register();

    public static void main(String[] args) throws IOException {
        // Start Prometheus metrics HTTP server on port 8080
        HTTPServer server = new HTTPServer(8080);

        // Optional: expose JVM metrics (GC, memory, threads)
        // DefaultExports.initialize();

        // Simulate continuous request processing
        HistogramDefaultBucketDemo example = new HistogramDefaultBucketDemo();
        Random rand = new Random();
        while (true) {
            example.simulateRequest(rand.nextInt(400) + 100); // simulate 100–500 ms processing
            try {
                Thread.sleep(1000); // simulate 1 request per second
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    // Simulates handling a request and records its duration in the Histogram
    public void simulateRequest(long processingTimeMillis) {
        Histogram.Timer timer = requestDuration.startTimer();
        try {
            Thread.sleep(processingTimeMillis); // simulate processing time
            System.out.println("Processed request in " + processingTimeMillis + " ms");
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } finally {
            timer.observeDuration(); // records the elapsed time in seconds
        }
    }
}
Run the application; you should see messages like the following.
Processed request in 173 ms
Processed request in 279 ms
Open the URL ‘http://localhost:8080/metrics’ in a browser; you can observe that the observations are counted into the appropriate buckets.
Let’s understand the histogram output for the two samples below.
Processed request in 173 ms
Processed request in 279 ms
Histogram Output
http_request_duration_seconds_bucket{le="0.005",} 0.0
http_request_duration_seconds_bucket{le="0.01",} 0.0
http_request_duration_seconds_bucket{le="0.025",} 0.0
http_request_duration_seconds_bucket{le="0.05",} 0.0
http_request_duration_seconds_bucket{le="0.075",} 0.0
http_request_duration_seconds_bucket{le="0.1",} 0.0
http_request_duration_seconds_bucket{le="0.25",} 1.0
http_request_duration_seconds_bucket{le="0.5",} 2.0
http_request_duration_seconds_bucket{le="0.75",} 2.0
http_request_duration_seconds_bucket{le="1.0",} 2.0
http_request_duration_seconds_bucket{le="2.5",} 2.0
http_request_duration_seconds_bucket{le="5.0",} 2.0
http_request_duration_seconds_bucket{le="7.5",} 2.0
http_request_duration_seconds_bucket{le="10.0",} 2.0
http_request_duration_seconds_bucket{le="+Inf",} 2.0
http_request_duration_seconds_count 2.0
http_request_duration_seconds_sum 0.458356792
http_request_duration_seconds_bucket{le="0.25"} 1.0
The line above states that 1 request took ≤ 0.25 seconds.
Now let’s explain each bucket based on the two durations (0.173 s and 0.279 s):

Bucket (le) | Count | Explanation
0.005 to 0.1 | 0.0 | No requests were that fast (i.e., ≤ 100 ms)
0.25 | 1.0 | 1 request (173 ms) is ≤ 250 ms
0.5 | 2.0 | Both requests (173 ms and 279 ms) are ≤ 500 ms
0.75 to +Inf | 2.0 | All higher buckets also show 2.0 because both requests are ≤ 750 ms, 1 s, etc.
In summary, histogram buckets are cumulative: each higher bucket includes all observations counted in the lower buckets.
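To make the cumulative behavior concrete, here is a small standalone Java sketch (no Prometheus client required, class name invented for illustration) that distributes the two sample durations over the default bucket boundaries. The client may compute this differently internally, but the values exposed at scrape time always behave this way:

```java
public class CumulativeBucketSketch {

    // Returns the cumulative per-bucket counts for the given observations:
    // each observation increments every bucket whose upper bound is >= the value.
    static long[] cumulativeCounts(double[] upperBounds, double[] observations) {
        long[] counts = new long[upperBounds.length];
        for (double obs : observations) {
            for (int i = 0; i < upperBounds.length; i++) {
                if (obs <= upperBounds[i]) {
                    counts[i]++;
                }
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        double[] upperBounds = {0.005, 0.01, 0.025, 0.05, 0.075, 0.1, 0.25,
                0.5, 0.75, 1, 2.5, 5, 7.5, 10, Double.POSITIVE_INFINITY};
        double[] observations = {0.173, 0.279}; // the two sample durations in seconds

        long[] counts = cumulativeCounts(upperBounds, observations);
        for (int i = 0; i < upperBounds.length; i++) {
            System.out.printf("le=%s -> %d%n", upperBounds[i], counts[i]);
        }
    }
}
```

Running it reproduces the pattern from the table: 0 for every bucket up to le="0.1", 1 at le="0.25", and 2 from le="0.5" through le="+Inf".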
Custom Buckets Example
You can specify your own bucket sizes:
Histogram requestDuration = Histogram.build()
        .name("http_request_duration_seconds")
        .help("Duration of HTTP requests in seconds")
        .buckets(0.1, 0.2, 0.5, 1, 2, 5)
        .register();
The complete working application is below.
HistogramCustomBucketDemo.java
package com.sample.app;

import java.io.IOException;
import java.util.Random;

import io.prometheus.client.Histogram;
import io.prometheus.client.exporter.HTTPServer;

public class HistogramCustomBucketDemo {

    // Define the Histogram metric with custom bucket boundaries (in seconds)
    static final Histogram requestDuration = Histogram.build()
            .name("http_request_duration_seconds")
            .help("Duration of HTTP requests in seconds")
            .buckets(0.1, 0.2, 0.5, 1, 2, 5)
            .register();

    public static void main(String[] args) throws IOException {
        // Start Prometheus metrics HTTP server on port 8080
        HTTPServer server = new HTTPServer(8080);

        // Optional: expose JVM metrics (GC, memory, threads)
        // DefaultExports.initialize();

        // Simulate continuous request processing
        HistogramCustomBucketDemo example = new HistogramCustomBucketDemo();
        Random rand = new Random();
        while (true) {
            example.simulateRequest(rand.nextInt(5000) + 100); // simulate 100–5100 ms processing
            try {
                Thread.sleep(1000); // simulate 1 request per second
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    // Simulates handling a request and records its duration in the Histogram
    public void simulateRequest(long processingTimeMillis) {
        Histogram.Timer timer = requestDuration.startTimer();
        try {
            Thread.sleep(processingTimeMillis); // simulate processing time
            System.out.println("Processed request in " + processingTimeMillis + " ms");
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } finally {
            timer.observeDuration(); // records the elapsed time in seconds
        }
    }
}
Run the above application and open the URL ‘http://localhost:8080/metrics’ in a browser; you will see similar statistics, now grouped by the custom bucket boundaries.
From the output, you can observe that the request durations are now split across the custom buckets (0.1, 0.2, 0.5, 1, 2, and 5 seconds).
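Choosing good custom buckets matters mainly because quantiles are estimated from them: PromQL’s histogram_quantile() interpolates linearly inside the bucket that contains the target rank. The following simplified Java sketch illustrates the idea; the bucket counts are hypothetical, the class name is invented, and this is not the Prometheus server’s actual implementation (for instance, it omits the special handling of the +Inf bucket):

```java
public class QuantileSketch {

    // Simplified linear-interpolation quantile estimate from cumulative bucket
    // counts, assuming observations are spread evenly within each bucket.
    static double estimateQuantile(double q, double[] upperBounds, long[] cumulativeCounts) {
        long total = cumulativeCounts[cumulativeCounts.length - 1];
        double rank = q * total; // the "rank-th" observation we are looking for
        for (int i = 0; i < upperBounds.length; i++) {
            if (cumulativeCounts[i] >= rank) {
                double lower = (i == 0) ? 0.0 : upperBounds[i - 1];
                long countBelow = (i == 0) ? 0 : cumulativeCounts[i - 1];
                long inBucket = cumulativeCounts[i] - countBelow;
                if (inBucket == 0) {
                    return lower;
                }
                // Interpolate linearly inside the bucket containing the target rank.
                return lower + (upperBounds[i] - lower) * (rank - countBelow) / inBucket;
            }
        }
        return upperBounds[upperBounds.length - 1];
    }

    public static void main(String[] args) {
        double[] bounds = {0.1, 0.2, 0.5, 1, 2, 5}; // the custom buckets above
        long[] counts = {0, 1, 4, 7, 9, 10};        // hypothetical scrape of 10 requests
        System.out.println("estimated p90 = " + estimateQuantile(0.9, bounds, counts));
    }
}
```

The narrower your buckets are around the latencies you care about, the less this interpolation has to guess, which is the main reason to tune bucket boundaries to your expected request durations.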