Many companies with production cloud environments use the Prometheus open-source project as a part of their monitoring system. Prometheus is a good, low-cost way to get started, as long as you have the development resources available for implementation and instrumentation.
A typical Prometheus environment consists of integrations that expose application metrics of four types: counters, gauges, summaries, and histograms. A central Prometheus server pulls from each endpoint and aggregates the results. The Prometheus Expression Browser then allows you to view the collected data in graphs, or to create triggered automation events.
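To make the scrape model concrete, here is a rough sketch (not from the original post) of the plain-text exposition format a Prometheus server sees when it pulls an instrumented endpoint; the metric names, labels, and values are made up for illustration:

```python
def format_metric(name, mtype, help_text, value, labels=None):
    """Render one metric in the text exposition format Prometheus scrapes.

    Each metric is preceded by HELP and TYPE comment lines; labels, if
    any, are rendered as a sorted {key="value"} set after the name.
    """
    label_str = ""
    if labels:
        pairs = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
        label_str = "{" + pairs + "}"
    return (f"# HELP {name} {help_text}\n"
            f"# TYPE {name} {mtype}\n"
            f"{name}{label_str} {value}\n")

# A counter (monotonically increasing) and a gauge (free to go up or down):
print(format_metric("http_requests_total", "counter",
                    "Total HTTP requests served.", 1027,
                    {"method": "post", "code": "200"}))
print(format_metric("process_open_fds", "gauge",
                    "Number of open file descriptors.", 42))
```

In a real application you would use a Prometheus client library to maintain these values and serve them over HTTP, rather than formatting the text by hand.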
Our customers have been asking for direct integration of Prometheus into SignalFx to help with their metric consolidation efforts.
Getting set up
No changes are required on the SignalFx platform to accept Prometheus data; we treat it like any other source of metric time series. The configuration changes required on the Prometheus side are minimal: only three lines across two configuration files need updates.
Configuration Update 1
Add a remote endpoint using the SignalFx Metricproxy:
Configure Prometheus remote storage to send metric data to a proxy. To do this, you’ll need to specify the port to bind to. An example config:
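The example config itself was not preserved here. As a sketch only (the field names follow the open-source SignalFx Metricproxy conventions, and the port and token are placeholders; check the project's README for the exact schema), a minimal proxy config that listens for Prometheus writes and forwards to SignalFx might look like:

```json
{
  "ListenFrom": [
    {
      "Type": "prometheus",
      "ListenAddr": "0.0.0.0:12003"
    }
  ],
  "ForwardTo": [
    {
      "Type": "signalfx",
      "DefaultAuthToken": "YOUR_SIGNALFX_API_TOKEN"
    }
  ]
}
```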
If you want something different than the default endpoint of “/write”, you can specify it with “ListenPath”. An alternative example config:
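Again as a hedged sketch (same placeholder port and hypothetical `/receive` path; verify field names against the Metricproxy documentation), the listener entry with a custom path might look like:

```json
{
  "ListenFrom": [
    {
      "Type": "prometheus",
      "ListenAddr": "0.0.0.0:12003",
      "ListenPath": "/receive"
    }
  ]
}
```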
Configuration Update 2
Add a <remote_write> section to your Prometheus configuration so that scraped samples are also sent to SignalFx. The documentation below comes from Prometheus.io.
write_relabel_configs applies relabeling to samples before they are sent to the remote endpoint. Write relabeling is applied after external labels, and can be used to limit which samples are sent.
# The URL of the endpoint to send samples to.
url: <string>
# Timeout for requests to the remote write endpoint.
[ remote_timeout: <duration> | default = 30s ]
# List of remote write relabel configurations.
write_relabel_configs:
  [ - <relabel_config> ... ]
# Sets the `Authorization` header on every remote write request with
# the configured bearer token. It is mutually exclusive with `bearer_token_file`.
[ bearer_token: <string> ]
# Sets the `Authorization` header on every remote write request with the bearer token
# read from the configured file. It is mutually exclusive with `bearer_token`.
[ bearer_token_file: /path/to/bearer/token/file ]
# Configures the remote write request's TLS settings.
tls_config:
  [ <tls_config> ]
# Optional proxy URL.
[ proxy_url: <string> ]
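Putting the options above together, a minimal remote_write section might look like the following sketch; the host, port, and relabel rule are placeholders, chosen to match whatever address and path you configured the proxy to listen on:

```yaml
remote_write:
  - url: "http://localhost:12003/write"
    remote_timeout: 30s
    # Optional: drop samples you don't want forwarded, e.g. Go runtime metrics.
    write_relabel_configs:
      - source_labels: [__name__]
        regex: "go_.*"
        action: drop
```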
With these two configuration updates, your Prometheus data collection will now be mirrored to your SignalFx account for use with SignalFlow streaming analytics and smart alerting. You'll also gain the benefits of long-term data retention and easy user management, giving your teams a consistent view into their applications.
The next steps are up to you: continue to use both SignalFx and Prometheus, or standardize on the more configurable open-source collectd agent for better resolution and lower latency for your metric data.
In many cases, the metrics you collect from Prometheus are just one part of your wider infrastructure, services, and applications landscape. An easy next step is to consolidate your AWS Cloudwatch and GCP Stackdriver metrics into SignalFx for a more complete view of your overall environment.
Prometheus is just one of several new integrations we’ve added to SignalFx in the last month – and we’ll continue to add more native integrations as new technologies emerge and our customers ask for them. We’re happy to discuss your monitoring, alerting, and automation needs. You can reach us at [email protected]