Commit d061c784e0
* Added prometheus client and p2p metrics
* Avoid running the adapter if the metrics are disabled
* Fix visibility issue
* Fix invalid p2p.Message sent to adapters: the middlewares (adapters) must receive the complete message to avoid problems, and the main Handler must get the values from the middlewares; also added tests and comments for the metrics package
* Added a logrus hook collector, used to collect counters of log messages; the main purpose of this metric is to know how many warnings and errors the system is getting (see the sketch after this list)
* Add the hook when registering the prometheus service
* Update bazel builds
* Fix emit tests and remove unused imports
* gazelle --fix
* Remove unused logger
* Move the prometheus package to the shared directory
* Better metric names and fixed metric paths
* Improve metric tests and start using promauto
* Added initial prometheus documentation
* Fix tests
* Fix type differences between go get and bazel
* Fix service test
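As a rough illustration of that logrus hook collector, here is a minimal sketch assuming the sirupsen/logrus and prometheus/client_golang libraries; the package, type, and metric names are illustrative and not the actual logrus_collector.go implementation:

package metrics

import (
    "github.com/prometheus/client_golang/prometheus"
    "github.com/prometheus/client_golang/prometheus/promauto"
    "github.com/sirupsen/logrus"
)

// logMessagesTotal counts log entries by level so that warning and error
// rates can be graphed and alerted on.
var logMessagesTotal = promauto.NewCounterVec(prometheus.CounterOpts{
    Name: "log_entries_total",
    Help: "Total number of log messages, partitioned by level.",
}, []string{"level"})

// countingHook is a logrus hook that increments the counter for every entry.
type countingHook struct{}

func (countingHook) Levels() []logrus.Level { return logrus.AllLevels }

func (countingHook) Fire(entry *logrus.Entry) error {
    logMessagesTotal.WithLabelValues(entry.Level.String()).Inc()
    return nil
}

// Register the hook once, for example when the prometheus service starts:
//   logrus.AddHook(countingHook{})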
How to monitor with Prometheus
Prerequisites:
- Prometheus (install it to scrape metrics and start monitoring)
- (optional) Grafana (for better graphs)
- (optional) Set up Prometheus + Grafana
Start scraping services
To start scraping with Prometheus, create or edit the Prometheus config file and add all the services you want to scrape, like this:
global:
  scrape_interval: 15s # By default, scrape targets every 15 seconds.

  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
    monitor: 'codelab-monitor'

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9090']
+  - job_name: 'beacon-chain'
+    static_configs:
+      - targets: ['localhost:8080']
After creating or updating the Prometheus config file, run it:
$ prometheus --config.file=your-prometheus-file.yml
Now you can add the Prometheus server as a data source in Grafana and start building your dashboards.
How to add additional metrics
The prometheus service exports the metrics registered with the DefaultRegisterer, so you only need to register your metrics through the prometheus or promauto libraries. To learn more, see the Prometheus Go application guide.
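For example, here is a minimal sketch assuming the github.com/prometheus/client_golang packages; the package, function, and metric names are illustrative only:

package example

import (
    "github.com/prometheus/client_golang/prometheus"
    "github.com/prometheus/client_golang/prometheus/promauto"
)

// itemsProcessed is an illustrative counter. promauto registers it with
// prometheus.DefaultRegisterer at package initialization.
var itemsProcessed = promauto.NewCounter(prometheus.CounterOpts{
    Name: "example_items_processed_total",
    Help: "Total number of items processed (example metric).",
})

func processItem() {
    // ... do the actual work, then record it.
    itemsProcessed.Inc()
}

Because promauto registers the counter with the default registerer, the prometheus service exposes it on the metrics endpoint without any additional wiring.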