Error reading Prometheus: an error occurred within the plugin

@Geek-Joey

When I use Grafana to configure the Prometheus data source, the Prometheus service itself is normal: I can visit the UI at http://localhost:9090 and all the targets are up.

  • The error info is:
 Error reading Prometheus: Post "http://localhost:9090/api/v1/query": dial tcp 127.0.0.1:9090: connect: connection refused
  • The prometheus.yml is:
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: "prometheus"
    static_configs:
      - targets: ["localhost:9090"]

  # Push Gateway
  - job_name: "pushgateway"
    static_configs:
      - targets: ["localhost:9091"]
        labels:
          instance: pushgateway

  # Node Exporter
  - job_name: "node exporter"
    static_configs:
      - targets: ["localhost:9100"]

  # Kafka Exporter
  - job_name: "kafka exporter"
    static_configs:
      - targets: ["localhost:9308"]
  • The Grafana service, which is deployed with Docker, is normal; I kept the default configuration (/etc/grafana/grafana.ini).
  • When I configured Grafana with a different Prometheus address, it worked successfully.

  • Environment info:

  • Grafana version: v8.4.3
  • Prometheus version: 2.33.5
  • OS: macOS (M1 Mac)

Can you give me some advice? I would be very grateful.

@DasSkelett

Grafana and Prometheus are both deployed using Docker?

The localhost of one container is not the localhost of another container, even if you published the port to the host – you can’t reach the Prometheus container or the host using localhost from the Grafana container. You need to use the IP address of the Prometheus container, or the hostname if you are using Docker Compose.

If you need further help, please post your Docker configs (docker-compose.yml or docker run commands).
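For illustration, a minimal docker-compose.yml sketch (hypothetical, not necessarily your setup) – on the shared Compose network each service is reachable by its service name, so the Grafana data source URL would be http://prometheus:9090 instead of http://localhost:9090:

# Both containers join the default Compose network; service names act as hostnames.
version: "3"
services:
  prometheus:
    image: prom/prometheus
    ports:
      - "9090:9090"

  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"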


@Geek-Joey

Grafana and Prometheus are both deployed using Docker?

The localhost of one container is not the localhost of another container, even if you published the port to the host – you can’t reach the Prometheus container or the host using localhost from the Grafana container. You need to use the IP address of the Prometheus container, or the hostname if you are using Docker Compose.

If you need further help, please post your Docker configs (docker-compose.yml or docker run commands).

Thank you for your reply. I configured the IP of the Prometheus container in Grafana, and it worked!
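For reference, a running container's IP address can be read with docker inspect (the container name prometheus here is hypothetical; substitute your own):

docker inspect -f '{{ range .NetworkSettings.Networks }}{{ .IPAddress }}{{ end }}' prometheus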

@AhmadS7

I had the same problem. I found the host IP address in the C:\ProgramData\DockerDesktop\tmp folder and then set the Prometheus URL in Grafana to that IP.

@PhakornKiong

As an alternative, you can use http://host.docker.internal:9090 to reach the host from inside Docker.
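Note that host.docker.internal resolves out of the box only on Docker Desktop (macOS/Windows); on Linux you usually have to map it yourself, e.g. with Docker 20.10+:

# host-gateway makes host.docker.internal point at the host from inside the container
docker run -d -p 3000:3000 \
  --add-host=host.docker.internal:host-gateway \
  grafana/grafana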


@miraccan00

@geetha7447

As an alternative, you can use http://host.docker.internal:9090 to reach the host from inside Docker.

This worked for me. Thank you!

@anasbn3issa

@ivanzito

I solved my problem by inspecting the Docker network: I used the gateway address I found there. I'm using the bridge network.
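For example, the gateway of the default bridge network (commonly 172.17.0.1) can be read like this and then used as the Prometheus host in Grafana:

docker network inspect bridge -f '{{ (index .IPAM.Config 0).Gateway }}'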

@mhassaankhokhar

I am working with Kubernetes and deployed both Grafana and Prometheus via Helm charts, but I am still getting the same error.
Grafana app version: 8.5.5
Prometheus version: v2.41.0
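The same principle applies in Kubernetes: point Grafana at the Prometheus Service's cluster DNS name, not localhost. A hypothetical data source provisioning file, assuming the chart created a Service named prometheus-server in the monitoring namespace (check the actual name with kubectl get svc):

apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    # DNS name pattern: <service>.<namespace>.svc.cluster.local
    url: http://prometheus-server.monitoring.svc.cluster.local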

Prometheus is known for its metric scraping, graphing, and alerting capabilities.
Combined with Grafana it can be a very powerful tool to visualize IT system and router statistics and to spot trends in a system over time.

I have created a container for scraping metrics from Mikrotik RouterOS devices.
The sources are at:
https://github.com/elico/mikrotik-exporter-container
The exporter is based on the repository at:
https://github.com/nshttpd/mikrotik-exporter

An example of spinning up the container:

# Create a bridge for containers and give the router an address on it
/interface/bridge/add name=dockers
/ip/address/add address=172.17.0.254/24 interface=dockers

# Create a virtual ethernet interface for the container and attach it to the bridge
/interface/veth/add name=veth5 address=172.17.0.5/24 gateway=172.17.0.254
/interface/bridge/port add bridge=dockers interface=veth5

# Allow the container to reach the RouterOS API (TCP 8728)
/ip/firewall/filter/add chain=input protocol=tcp dst-port=8728 connection-state=new src-address=172.17.0.5 action=accept place-before=0

# Point the container subsystem at Docker Hub and set a temp dir for image pulls
/container/config/set registry-url=https://registry-1.docker.io tmpdir=disk1/pull

# Exporter environment: which router to scrape and with which credentials
/container/envs/add name=prom_exporter_envs key=TZ value="Asia/Jerusalem"
/container/envs/add name=prom_exporter_envs key=ROUTER_NAME value="RB4011"
/container/envs/add name=prom_exporter_envs key=ROUTER_ADDRESS value="172.17.0.254"
/container/envs/add name=prom_exporter_envs key=USER value="prometheus"
/container/envs/add name=prom_exporter_envs key=PASSWORD value="changeme"
/container/envs/add name=prom_exporter_envs key=PORT value="8728"

# Enable optional metric collectors (any non-empty value enables a feature)
/container/envs/add name=prom_exporter_envs key=CONNTRACK_METRICS value="1"
/container/envs/add name=prom_exporter_envs key=DHCP_METRICS value="1"
/container/envs/add name=prom_exporter_envs key=DHCPL_METRICS value="1"
/container/envs/add name=prom_exporter_envs key=FIRMWARE_METRICS value="1"
/container/envs/add name=prom_exporter_envs key=HEALTH_METRICS value="1"
/container/envs/add name=prom_exporter_envs key=ROUTES_METRICS value="1"
/container/envs/add name=prom_exporter_envs key=NETWATCH_METRICS value="1"

# Create the container itself and start it on boot
/container/add dns=172.17.0.254 remote-image=elicro/mikrotik-exporter:latest interface=veth5 root-dir=disk1/mt-exporter envlist=prom_exporter_envs start-on-boot=yes

And don't forget to create the prometheus user and change the password accordingly:

/user group add name=prometheus policy=api,read,winbox
/user add name=prometheus group=prometheus password=changeme

The exporter listens on TCP port 9436, so if you want to access/scrape it, you need to use the target:
172.17.0.5:9436
i.e.:
http://172.17.0.5:9436/metrics
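A matching scrape job on the Prometheus side might look like this (the job name is arbitrary; the target is the veth address from the example above):

scrape_configs:
  - job_name: "mikrotik"
    static_configs:
      - targets: ["172.17.0.5:9436"]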

The container is based on Alpine Linux to allow the option of debugging in a shell if needed.

The exporter is written to handle two scenarios:
exporting a single system or more than one system.
It can be a very powerful tool if used properly.

The container ships a default config file at /config/config.yaml:
https://github.com/elico/mikrotik-expor … onfig.yaml
If you wish to use your own customized config.yaml, to export more than one system or for any other reason, you can mount a directory for that (see the mount example below).
The mount directory inside the container is /config,
and it should contain two files:
/config/config.yaml
/config/custom

The file /config/custom
instructs the container to start with the plain configuration as-is, rather than transforming the default config.yaml, which is really a single-node template.
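On RouterOS such a mount can be declared like this (a sketch; the mount name and src path are hypothetical), and is then attached by adding mounts=exporter_config to the /container/add command:

/container/mounts/add name=exporter_config src=disk1/mt-exporter-config dst=/config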

An example of scraping multiple RouterOS devices via a single config file can be seen here:
https://github.com/nshttpd/mikrotik-exp … g.test.yml

By default, all special metrics are disabled unless enabled via an environment variable.
The start script with all the available variables can be seen at:
https://github.com/elico/mikrotik-expor … r/start.sh

The exporter's features can be seen in the sources' config file:
https://github.com/nshttpd/mikrotik-exp … fig.go#L13
To enable a specific metric you need to set the corresponding variable to any non-empty value, e.g.:

/container/envs/add name=prom_exporter_envs key=CONNTRACK_METRICS value="1"

The key name has the form <FEATURE>_METRICS, with the feature name capitalized.
For example, the feature name for wireless interface metrics is listed at:
https://github.com/nshttpd/mikrotik-exp … fig.go#L28
and is: wlanif
So to enable wireless interface metrics exporting you would need to define the following key:

/container/envs/add name=prom_exporter_envs key=WLANIF_METRICS value="1"

Instead of "1" you can use any non-empty string you want.

Just pay attention that this and other Prometheus exporters do not support authentication; as long as they are not firewalled with strict rules, anyone can read these metrics, so be aware of what you are running.
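As a sketch (the chain and addresses depend on your topology; 192.168.88.10 is a hypothetical Prometheus server), access to the exporter port could be restricted like this:

# Allow scrapes only from the Prometheus server, drop everything else to port 9436
/ip/firewall/filter/add chain=forward protocol=tcp dst-port=9436 src-address=192.168.88.10 action=accept place-before=0
/ip/firewall/filter/add chain=forward protocol=tcp dst-port=9436 action=drop place-before=1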

I have not tried to run Prometheus itself on top of a RouterOS device, but I assume that is possible as well.

I installed Prometheus on an Amazon Linux 2 instance, and here are the configurations I use in the user data:

cat << EOF > /etc/systemd/system/prometheus.service 
[Unit] 
Description=Prometheus Server 
Documentation=https://prometheus.io/docs/introduction/overview/ 
Wants=network-online.target
After=network-online.target

[Service] 
User=prometheus 
Restart=on-failure 

# Change this line if you installed Prometheus
# under a different path or user
ExecStart=/home/prometheus/prometheus/prometheus --config.file=/home/prometheus/prometheus/prometheus.yml --storage.tsdb.path=/app/prometheus/data

[Install] 
WantedBy=multi-user.target 
EOF

cat << EOF > /home/prometheus/prometheus/prometheus.yml 
# my global config 
global: 
  scrape_interval:     15s # Set the scrape interval to every 15 seconds. Default is every 1 minute. 
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute. 
  # scrape_timeout is set to the global default (10s). 

# Alertmanager configuration 
alerting: 
  alertmanagers: 
  - static_configs: 
    - targets: 
      # - alertmanager:9093 

# Load rules once and periodically evaluate them according to the global evaluation_interval. 
rule_files: 
  # - "first_rules.yml" 
  # - "second_rules.yml" 

# A scrape configuration containing exactly one endpoint to scrape: 
# Here it's Prometheus itself. 
scrape_configs: 
  # The job name is added as a label job=<job_name> to any timeseries scraped from this config. 
  - job_name: 'prometheus' 

    # metrics_path defaults to '/metrics' 
    # scheme defaults to 'http'. 

    static_configs: 
    - targets: ['localhost:9090'] 
  - job_name: 'node_prometheus' 

    # metrics_path defaults to '/metrics' 
    # scheme defaults to 'http'. 

    static_configs: 
    - targets: ['localhost:9100'] 
  - job_name: 'grafana' 

    # metrics_path defaults to '/metrics' 
    # scheme defaults to 'http'. 

    static_configs: 
# put the Grafana ALB target here
    - targets: ['${grafana_dns}'] 

  - job_name: 'sqs_exporter' 
    scrape_interval: 30s 
    scrape_timeout: 30s 
    static_configs: 
    - targets: ['localhost:9434'] 

  - job_name: 'cloudwatch_exporter' 
    scrape_interval: 5m 
    scrape_timeout: 60s 
    static_configs: 
    - targets: ['localhost:9106'] 

  - job_name: '_metrics' 
    metric_relabel_configs: 
    relabel_configs: 
     - source_labels: 
       - __meta_ec2_platform 
       action: keep 
       regex: .*windows.* 
     - action: labelmap 
       regex: __meta_ec2_tag_(.*) 
       replacement: $1 
    ec2_sd_configs: 
      - region: eu-west-1 
        port: 9543 

  - job_name: 'cadvisor' 
    static_configs: 
    - targets: ['localhost:8080'] 

  - job_name: 'elasticbeanstalk_exporter' 
    static_configs: 
    - targets: ['localhost:9552'] 

EOF



systemctl daemon-reload 
systemctl enable prometheus
systemctl start prometheus

When I check whether Prometheus is running, I get the following:

[ec2-user@ip-10-193-192-49 ~]$  sudo systemctl status prometheus
● prometheus.service - Prometheus Server
   Loaded: loaded (/etc/systemd/system/prometheus.service; enabled; vendor preset: disabled)
   Active: failed (Result: start-limit) since Mon 2019-12-02 11:12:33 UTC; 4s ago
     Docs: https://prometheus.io/docs/introduction/overview/
  Process: 22507 ExecStart=/home/prometheus/prometheus/prometheus --config.file=/home/prometheus/prometheus/prometheus.yml --storage.tsdb.path=/app/prometheus/data (code=exited, status=2)
 Main PID: 22507 (code=exited, status=2)

Dec 02 11:12:33 ip-10-193-192-49.service.app systemd[1]: Unit prometheus.service entered failed state.
Dec 02 11:12:33 ip-10-193-192-49.service.app systemd[1]: prometheus.service failed.
Dec 02 11:12:33 ip-10-193-192-49.service.app systemd[1]: prometheus.service holdoff time over, scheduling restart.
Dec 02 11:12:33 ip-10-193-192-49.service.app systemd[1]: start request repeated too quickly for prometheus.service
Dec 02 11:12:33 ip-10-193-192-49.service.app systemd[1]: Failed to start Prometheus Server.
Dec 02 11:12:33 ip-10-193-192-49.service.app systemd[1]: Unit prometheus.service entered failed state.
Dec 02 11:12:33 ip-10-193-192-49.service.app systemd[1]: prometheus.service failed.
[ec2-user@ip-10-193-192-49 ~]$

I installed Prometheus version 2.14.0. Any help, please?

I commented out the Restart=on-failure line in /etc/systemd/system/prometheus.service, and then ran:

systemctl daemon-reload 
systemctl status prometheus

And this is what I got:

Dec 02 12:57:52 ip-10-193-192-58.service.app systemd[1]: start request repeated too quickly for prometheus.service
Dec 02 12:57:52 ip-10-193-192-58.service.app systemd[1]: Failed to start Prometheus Server.
Dec 02 12:57:52 ip-10-193-192-58.service.app systemd[1]: Unit prometheus.service entered failed state.
Dec 02 12:57:52 ip-10-193-192-58.service.app systemd[1]: prometheus.service failed.
Dec 02 12:58:03 ip-10-193-192-58.service.app systemd[1]: Started Prometheus Server.
Dec 02 12:58:03 ip-10-193-192-58.service.app systemd[1]: Starting Prometheus Server...
Dec 02 12:58:03 ip-10-193-192-58.service.app prometheus[23391]: level=info ts=2019-12-02T12:58:03.686Z caller=main.go:296 msg="no time or size retention was set so
Dec 02 12:58:03 ip-10-193-192-58.service.app prometheus[23391]: level=info ts=2019-12-02T12:58:03.687Z caller=main.go:332 msg="Starting Prometheus" version="(versio
Dec 02 12:58:03 ip-10-193-192-58.service.app prometheus[23391]: level=info ts=2019-12-02T12:58:03.687Z caller=main.go:333 build_context="(go=go1.13.4, user=root@df2
Dec 02 12:58:03 ip-10-193-192-58.service.app prometheus[23391]: level=info ts=2019-12-02T12:58:03.687Z caller=main.go:334 host_details="(Linux 4.14.77-81.59.amzn2.x
Dec 02 12:58:03 ip-10-193-192-58.service.app prometheus[23391]: level=info ts=2019-12-02T12:58:03.687Z caller=main.go:335 fd_limits="(soft=1024, hard=4096)"
Dec 02 12:58:03 ip-10-193-192-58.service.app prometheus[23391]: level=info ts=2019-12-02T12:58:03.687Z caller=main.go:336 vm_limits="(soft=unlimited, hard=unlimited
Dec 02 12:58:03 ip-10-193-192-58.service.app prometheus[23391]: level=error ts=2019-12-02T12:58:03.692Z caller=query_logger.go:85 component=activeQueryTracker msg="
Dec 02 12:58:03 ip-10-193-192-58.service.app systemd[1]: prometheus.service: main process exited, code=exited, status=2/INVALIDARGUMENT
Dec 02 12:58:03 ip-10-193-192-58.service.app systemd[1]: Unit prometheus.service entered failed state.
Dec 02 12:58:03 ip-10-193-192-58.service.app systemd[1]: prometheus.service failed.

2 answers

Best answer

I had the same problem. The issue is that the permissions on /data/prometheus must be set to the prometheus user and group.

So the solution is: sudo chown -R prometheus:prometheus /data/prometheus/

In your case that path is actually /app/prometheus/data.
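Applied to the setup from the question (paths taken from the unit file above), a minimal sketch would be:

# Create the data directory and hand it to the prometheus user, then restart
sudo mkdir -p /app/prometheus/data
sudo chown -R prometheus:prometheus /app/prometheus
sudo systemctl restart prometheus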


8

Pixa
Dec 3, 2019 at 17:03

I had the same error; in my case it was bad indentation. Please check the indentation in your prometheus.yml.

Also, for remote machines, http:// in front of the IP address is not supported in the targets field.

Always start with a vanilla/basic config.
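Indentation and other syntax problems can be caught before starting the service with promtool, which ships with Prometheus (the path matches the install location from the question):

/home/prometheus/prometheus/promtool check config /home/prometheus/prometheus/prometheus.yml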


2

cross_handle
Feb 9, 2021 at 07:27
