Error failed to determine the health of the cluster

Error: # /usr/local/elasticsearch/bin/elasticsearch-setup-passwords auto Failed to determine the health of the cluster running at http://10.0.0.1:9200 Unexpected response code [503] from calling GE...

👋 Hello visitors.

This issue is closed because the original problem was not a fault in the elasticsearch-setup-passwords tool. The reporter's cluster had not formed correctly (master_not_discovered_exception), and the tool cannot configure passwords on a cluster that has no elected master node, because no changes can be made to the data on such a cluster. The cluster problem needs to be resolved before the setup-passwords tool can be used.

If you have landed on this issue because you have the same situation, commenting here won’t help. But here’s what you can do…

Firstly, if you are using Elasticsearch 8 or higher, you should use elasticsearch-reset-password instead of elasticsearch-setup-passwords. It has similar error conditions, so the resolution steps below may still be relevant.

If you use elasticsearch-setup-passwords and have the error message:

Failed to determine the health of the cluster running at <<your node's URL>>
Unexpected response code [503] from calling GET <<your node's URL>>

Then your cluster was not able to form correctly, and you need to try and work out why.
It will not be due to password problems (the nodes in a cluster do not use passwords to authenticate with each other), and you cannot fix the problem by running the elasticsearch-setup-passwords tool (or the newer elasticsearch-reset-password tool).
Instead, you should first inspect your Elasticsearch logs. If you don’t know how to find your log files, then check the information in the Elasticsearch docs based on your platform and the type of Elasticsearch package you installed. Then follow the troubleshooting guidance in the Elasticsearch docs, and if, after reading and following those docs, you cannot resolve the problem, then ask in the Elasticsearch discussion forums.

Alternatively, if you have the error message:

Your cluster health is currently RED.

Then your cluster has formed correctly, but is not healthy. Again, you should check your Elasticsearch logs, and then follow the troubleshooting guides.

The message:

Connection failure to: <<your node's URL>>/_security/_authenticate?pretty failed: Connection refused

means that the tool cannot find a node to connect to. There are 2 likely causes:

  1. Your node isn’t actually running. Check your Elasticsearch logs to see if you have a running node, or if it was shutdown for some reason.
  2. Your node is running, but the URL that the tool picked isn’t correct. That can happen sometimes if you have an unusual networking setup. You can use the -u option to the tool to specify the correct URL.

Finally, the message:

Failed to authenticate user 'elastic' against <<your node's URL>> 

means that the tool is trying to use the bootstrap password, but the password does not work. Typically this is because you have already run elasticsearch-setup-passwords on this cluster, and the bootstrap password is no longer in use. You cannot use elasticsearch-setup-passwords to reset the cluster's passwords after they have been set; this tool is only for setting up the passwords for the first time. In Elasticsearch 8 (and later), you can use elasticsearch-reset-password to reset an existing password that you have lost or forgotten.
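The error cases above can be summarized in code. The following is an illustrative triage helper only; the message substrings are taken from this write-up, not from any official Elasticsearch API:

```python
# Illustrative triage helper for elasticsearch-setup-passwords output,
# based on the cases described above. Not an official tool.

def triage(tool_output: str) -> str:
    """Map tool output to the likely cause and next step."""
    if "Failed to determine the health of the cluster" in tool_output:
        return "Cluster did not form (e.g. no elected master); check the Elasticsearch logs."
    if "currently RED" in tool_output:
        return "Cluster formed but is unhealthy; check logs and the troubleshooting guides."
    if "Connection refused" in tool_output:
        return "No node reachable; verify the node is running or pass the correct URL with -u."
    if "Failed to authenticate user 'elastic'" in tool_output:
        return "Bootstrap password already consumed; use elasticsearch-reset-password (8.x+)."
    return "Unrecognized output; inspect the Elasticsearch logs."
```

The order of the checks mirrors the order the messages are discussed above.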

I am using Elasticsearch and Kibana. I am not sure of the status of my Elasticsearch cluster (red, yellow, or green), but it seems I need the token generated by Elasticsearch, as in the screenshot. When I run bin/elasticsearch-create-enrollment-token --scope kibana from the correct directory, it fails with ERROR: Failed to determine the health of the cluster.


4 Answers

I ran into the same problem and simply redid the process: I unzipped the ES and Kibana zip files again and ran bin/elasticsearch in the newly created directory. Look for the message enclosed in a formatted box that contains both the password for the elastic user and the enrollment token for Kibana (the token is valid for only 30 minutes). This message appears only once, the first time Elasticsearch starts.

I then ran bin/kibana for Kibana and configured it in the browser, and everything worked from there. Hope this helps!


2

Shirley Ow
1 Jul 2022 at 11:44

According to Ioannis Kakavas on discuss.elastic.co, "CLI tools extending BaseRunAsSuperuserCommand should only connect to the local node". When I run a local node, it works. But when I run the elasticsearch container in a cluster, it does not. The solution was to run the elasticsearch-reset-password and elasticsearch-create-enrollment-token scripts respectively, like this (inside the elasticsearch container):

/usr/share/elasticsearch/bin/elasticsearch-reset-password -i -u elastic --url https://localhost:9200

/usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana --url https://localhost:9200


2

Davi Carrano
12 Jul 2022 at 23:40

The enrollment token is printed in the terminal itself. You just need to scroll up through the installation output until you find it.

The reason for the ERROR: Failed to determine the health of the cluster error is that Elastic is not yet set up, and running this command is like calling a function without defining it.


0

Kshitij Agarwal
12 Apr 2022 at 18:03

Two possible solutions:

  • Make sure you have enough disk space.
  • Your VPN may be causing the problem.


0

Chiel
11 Jul 2022 at 15:28

Overview

An Elasticsearch cluster consists of a number of servers (nodes) working together as one. Clustering is a technology which enables Elasticsearch to scale up to hundreds of nodes that together are able to store many terabytes of data and respond coherently to large numbers of requests at the same time.

Search or indexing requests will usually be load-balanced across the Elasticsearch data nodes, and the node that receives the request will relay requests to other nodes as necessary and coordinate the response back to the user.

Notes and good things to know

The key elements to clustering are:

Cluster State – Refers to information about which indices are in the cluster, their data mappings and other information that must be shared between all the nodes to ensure that all operations across the cluster are coherent.

Master Node – Each cluster must elect a single master node responsible for coordinating the cluster and ensuring that each node contains an up-to-date copy of the cluster state.

Cluster Formation – Elasticsearch requires a set of configurations to determine how the cluster is formed, which nodes can join the cluster, and how the nodes collectively elect a master node responsible for controlling the cluster state. These configurations are usually held in the elasticsearch.yml config file, environment variables on the node, or within the cluster state.

Node Roles – In small clusters it is common for all nodes to fill all roles; all nodes can store data, become master nodes, or process ingestion pipelines. However, as the cluster grows, it is common to allocate specific roles to specific nodes in order to simplify configuration and to make operation more efficient. In particular, it is common to define a limited number of dedicated master nodes.

Replication – Data may be replicated across a number of data nodes. This means that if one node goes down, data is not lost. It also means that a search request can be dealt with by more than one node.

Common problems

Many Elasticsearch problems are caused by operations which place an excessive burden on the cluster because they require an excessive amount of information to be held and transmitted between the nodes as part of the cluster state. For example:

  • Shards too small
  • Too many fields (field explosion)

Problems may also be caused by inadequate configurations causing situations where the Elasticsearch cluster is unable to safely elect a Master node. This situation is discussed further in: 

  • Master node not discovered
  • Split brain problem

Backups

Because Elasticsearch is a clustered technology, it is not sufficient to have backups of each node’s data directory. This is because the backups will have been made at different times and so there may not be complete coherency between them. As such, the only way to backup an Elasticsearch cluster is through the use of snapshots, which contain the full picture of an index at any one time.
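Snapshots are driven through the HTTP snapshot API. The following sketch only builds the request (it does not send it); the node URL and the repository name backup_repo are assumptions, and a snapshot repository must already be registered on the cluster:

```python
# Sketch: build a PUT /_snapshot/<repo>/<snapshot> request for the
# Elasticsearch snapshot API. The repository ("backup_repo"), snapshot name,
# and node URL are illustrative assumptions.
import json
from urllib.request import Request

def create_snapshot_request(node_url, repo, snapshot, indices="*"):
    body = json.dumps({"indices": indices, "include_global_state": True})
    return Request(
        url=f"{node_url}/_snapshot/{repo}/{snapshot}?wait_for_completion=false",
        data=body.encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )

req = create_snapshot_request("http://localhost:9200", "backup_repo", "snap-1")
```

Sending the request (for example with urllib.request.urlopen) triggers the snapshot; the same repository can then be used to restore the full picture of each index.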

Cluster resilience

When designing an Elasticsearch cluster, it is important to think about cluster resilience. In particular – what happens when a single node goes down? And for larger clusters where several nodes may share common services such as a network or power supply – what happens if that network or power supply goes down? This is where it is useful to ensure that the master eligible nodes are spread across availability zones, and to use shard allocation awareness to ensure that shards are spread across different racks or availability zones in your data center.

In this tutorial, we will look at the steps and queries required to check the Elasticsearch cluster health status. You may have noticed a frontend app that fetches data from an Elasticsearch cluster suddenly showing no data and going completely blank. Although the issue could be anywhere, for the purposes of this tutorial we will look at it from the Elasticsearch side, where data might be unavailable because the cluster went down and is now unable to service any further requests. More on this in the Elasticsearch Cluster Service official documentation.

Another possibility is that the cluster is up and running fine, but data is missing from the Elasticsearch cluster or is unavailable due to accidental deletion. Here we are looking at things from the cluster perspective, so we will set aside the other possibilities and take them up in later articles.

To understand this topic, you first need to understand what Elasticsearch is, how it works, how to talk to it, and which queries to use to check the cluster status. We will cover each of these in the sections below.

What is Elasticsearch

Elasticsearch is a free, distributed, open-source search and analytics engine built on top of the Apache Lucene library. It is developed in Java and is currently the most popular analytics engine in use. Clients are available for many languages, such as C#, PHP, and Python.

Elasticsearch takes the database idea of saving data to the next level: it saves everything as documents, collections of which are known as indices. Each index can be further subdivided into smaller units called shards, where each shard can act as a fully independent index. A document saved in an index can therefore be distributed across the cluster, and each index, with its multiple shards, can be distributed among the different nodes of the cluster.
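The way a document is assigned to a shard can be sketched as hashing its routing value (the document ID by default) modulo the number of primary shards. Elasticsearch itself uses a murmur3 hash; the md5-based version below is purely an illustration of the idea:

```python
# Sketch of document-to-shard routing. Elasticsearch uses a murmur3 hash of
# the routing value (the document _id by default) modulo the number of
# primary shards; hashlib.md5 stands in here purely for illustration.
import hashlib

def route_to_shard(doc_id: str, num_primary_shards: int) -> int:
    digest = hashlib.md5(doc_id.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % num_primary_shards
```

This modulo scheme is also why the number of primary shards of an index cannot be changed after the index is created: changing it would re-route existing documents to different shards.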

How to Talk to Elasticsearch

Over time there have been many developments in this area, but only a few are supported and in current use. They are listed below.

  • HTTP Client: the most common way to connect and talk to Elasticsearch.
  • Native Client: another method used by some developers to talk to Elasticsearch.
  • Other Clients: it is always possible to write your own plugin based on your current environment and use it to run your Elasticsearch queries.

How to Check Elasticsearch Cluster Health Status in Linux Using 3 Easy Steps

Check Elasticsearch Cluster Health Status


Step 1: Check Elasticsearch Version

You can verify the Elasticsearch version first by running a curl -XGET 'http://localhost:9200' query from the command line as shown below. This simply checks that Elasticsearch queries run without any issue. By default Elasticsearch runs on port 9200, hence we use this port in our query. As you can see from the output below, the current Elasticsearch version is 6.6.1.

[root@localhost ~]# curl -XGET 'http://localhost:9200'
{
  "name" : "books-data",
  "cluster_name" : "books-data-cluster",
  "cluster_uuid" : "i2mphs3gSVO8NqZqkpF6SQ",
  "version" : {
    "number" : "6.6.1",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "a9861f4",
    "build_date" : "2019-01-24T11:27:09.439740Z",
    "build_snapshot" : false,
    "lucene_version" : "7.6.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}

Step 2: Check Elasticsearch Cluster Health Status

Next, check the Elasticsearch cluster health status by running a curl http://localhost:9200/_cluster/health?pretty query. It should show something like the output below, including the cluster name, status, number of nodes, active shards, active shards percentage, and so on. Here the status is green, which means your Elasticsearch cluster is up and running fine. If you see a yellow or red status, you need to investigate the root cause further.

Sometimes not all shards are initialized, or some of them are corrupted; the status will then show yellow or red depending on which shards are affected. Yellow means that at least one replica shard is unassigned while all primary shards are active; red means that at least one primary shard is unassigned. You can check How to Delete Elasticsearch Unassigned Shards in 4 Easy Steps to learn more about this issue.

[root@localhost ~]# curl http://localhost:9200/_cluster/health?pretty
{
  "cluster_name" : "books-data-cluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 0,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
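The health check above is easy to script. The following is a minimal sketch that interprets the cluster-health JSON; it deliberately takes the JSON as a string, so it works regardless of whether you fetched it with curl, urllib, or a client library:

```python
# Sketch: interpret the output of GET /_cluster/health. Fetch the JSON
# however you like; this function only inspects it.
import json

def summarize_health(health_json: str) -> str:
    health = json.loads(health_json)
    status = health.get("status", "unknown")
    if status == "green":
        return "green: all primary and replica shards are allocated"
    if status == "yellow":
        return f"yellow: {health.get('unassigned_shards', 0)} unassigned shard(s), replicas affected"
    if status == "red":
        return "red: at least one primary shard is unassigned"
    return f"unknown status: {status}"
```

For example, feeding it the output of the curl command above would report the green case.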

Step 3: Restart Elasticsearch Cluster Service

You can also restart the Elasticsearch service to check whether it resolves the cluster issue. There are multiple ways to restart the service, depending on how you run it. If you are using SysV init, use the service elasticsearch restart command to restart the service.

[root@localhost ~]# service elasticsearch restart

If you are using systemd, use the systemctl restart elasticsearch command to restart the service.

[root@localhost ~]# systemctl restart elasticsearch

If you are using Supervisord, use the supervisorctl restart elasticsearch command to restart the service.

[root@localhost ~]# supervisorctl restart elasticsearch


Troubleshooting Gitaly and Gitaly Cluster (FREE SELF)

Refer to the information below when troubleshooting Gitaly and Gitaly Cluster.

Troubleshoot Gitaly

The following sections provide possible solutions to Gitaly errors.

See also Gitaly timeout settings,
and our advice on parsing the gitaly/current file.

Check versions when using standalone Gitaly servers

When using standalone Gitaly servers, you must make sure they are the same version
as GitLab to ensure full compatibility:

  1. On the top bar, select Main menu > Admin on your GitLab instance.
  2. On the left sidebar, select Overview > Gitaly Servers.
  3. Confirm all Gitaly servers indicate that they are up to date.

Find storage resource details

You can run the following commands in a Rails console
to determine the available and used space on a Gitaly storage:

Gitlab::GitalyClient::ServerService.new("default").storage_disk_statistics
# For Gitaly Cluster
Gitlab::GitalyClient::ServerService.new("<storage name>").disk_statistics

Use gitaly-debug

The gitaly-debug command provides "production debugging" tools for Gitaly and Git
performance. It is intended to help production engineers and support
engineers investigate Gitaly performance problems.

To see the help page of gitaly-debug for a list of supported sub-commands, run:

Commits, pushes, and clones return a 401

remote: GitLab: 401 Unauthorized

You need to sync your gitlab-secrets.json file with your GitLab
application nodes.

500 and fetching folder content errors on repository pages

Fetching folder content, and in some cases 500, errors indicate
connectivity problems between GitLab and Gitaly.
Consult the client-side gRPC logs
for details.

Client side gRPC logs

Gitaly uses the gRPC RPC framework. The Ruby gRPC
client has its own log file which may contain useful information when
you are seeing Gitaly errors. You can control the log level of the
gRPC client with the GRPC_LOG_LEVEL environment variable. The
default level is WARN.

You can run a gRPC trace with:

sudo GRPC_TRACE=all GRPC_VERBOSITY=DEBUG gitlab-rake gitlab:gitaly:check

If this command fails with a failed to connect to all addresses error,
check for an SSL or TLS problem:

/opt/gitlab/embedded/bin/openssl s_client -connect <gitaly-ipaddress>:<port> -verify_return_error

Check whether Verify return code field indicates a
known Omnibus GitLab configuration problem.

If openssl succeeds but gitlab-rake gitlab:gitaly:check fails,
check certificate requirements for Gitaly.

Server side gRPC logs

gRPC tracing can also be enabled in Gitaly itself with the GODEBUG=http2debug
environment variable. To set this in an Omnibus GitLab install:

  1. Add the following to your gitlab.rb file:

    gitaly['env'] = {
      "GODEBUG=http2debug" => "2"
    }
  2. Reconfigure GitLab.

Correlating Git processes with RPCs

Sometimes you need to find out which Gitaly RPC created a particular Git process.

One method for doing this is by using DEBUG logging. However, this needs to be enabled
ahead of time and the logs produced are quite verbose.

A lightweight method for doing this correlation is by inspecting the environment
of the Git process (using its PID) and looking at the CORRELATION_ID variable:

PID=<Git process ID>
sudo cat /proc/$PID/environ | tr '\0' '\n' | grep ^CORRELATION_ID=

This method isn’t reliable for git cat-file processes, because Gitaly
internally pools and re-uses those across RPCs.
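The same lookup can be done without tr. The sketch below parses the /proc/<pid>/environ format, which is a sequence of NUL-terminated KEY=VALUE entries; reading the file itself requires the same privileges as the shell version:

```python
# Sketch: extract CORRELATION_ID from a /proc/<pid>/environ blob
# (NUL-separated KEY=VALUE entries).
def correlation_id(environ_blob):
    for entry in environ_blob.split(b"\0"):
        if entry.startswith(b"CORRELATION_ID="):
            return entry.split(b"=", 1)[1].decode()
    return None

# Usage (requires root, like the shell version):
#   with open(f"/proc/{pid}/environ", "rb") as f:
#       print(correlation_id(f.read()))
```

The same caveat applies: for pooled git cat-file processes the environment may not reflect the RPC currently being served.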

Observing gitaly-ruby traffic

gitaly-ruby is an internal implementation detail of Gitaly,
so there's not much visibility into what goes on inside
gitaly-ruby processes.

If you have Prometheus set up to scrape your Gitaly process, you can see
request rates and error codes for individual RPCs in gitaly-ruby by
querying grpc_client_handled_total.

All gRPC calls made by gitaly-ruby itself are internal calls from the main Gitaly process to one of its gitaly-ruby
sidecars.

Assuming your grpc_client_handled_total counter only observes Gitaly,
the following query shows you which RPCs are (most likely) internally
implemented as calls to gitaly-ruby:

sum(rate(grpc_client_handled_total[5m])) by (grpc_method) > 0

Repository changes fail with a 401 Unauthorized error

If you run Gitaly on its own server and notice these conditions:

  • Users can successfully clone and fetch repositories by using both SSH and HTTPS.
  • Users can’t push to repositories, or receive a 401 Unauthorized message when attempting to
    make changes to them in the web UI.

Gitaly may be failing to authenticate with the Gitaly client because it has the
wrong secrets file.

Confirm the following are all true:

  • When any user performs a git push to any repository on this Gitaly server, it
    fails with a 401 Unauthorized error:

    remote: GitLab: 401 Unauthorized
    To <REMOTE_URL>
    ! [remote rejected] branch-name -> branch-name (pre-receive hook declined)
    error: failed to push some refs to '<REMOTE_URL>'
  • When any user adds or modifies a file from the repository using the GitLab
    UI, it immediately fails with a red 401 Unauthorized banner.

  • Creating a new project and initializing it with a README
    successfully creates the project but doesn’t create the README.

  • When tailing the logs
    on a Gitaly client and reproducing the error, you get 401 errors
    when reaching the /api/v4/internal/allowed endpoint:

    # api_json.log
    {
      "time": "2019-07-18T00:30:14.967Z",
      "severity": "INFO",
      "duration": 0.57,
      "db": 0,
      "view": 0.57,
      "status": 401,
      "method": "POST",
      "path": "/api/v4/internal/allowed",
      "params": [
        {
          "key": "action",
          "value": "git-receive-pack"
        },
        {
          "key": "changes",
          "value": "REDACTED"
        },
        {
          "key": "gl_repository",
          "value": "REDACTED"
        },
        {
          "key": "project",
          "value": "/path/to/project.git"
        },
        {
          "key": "protocol",
          "value": "web"
        },
        {
          "key": "env",
          "value": "{\"GIT_ALTERNATE_OBJECT_DIRECTORIES\":[],\"GIT_ALTERNATE_OBJECT_DIRECTORIES_RELATIVE\":[],\"GIT_OBJECT_DIRECTORY\":null,\"GIT_OBJECT_DIRECTORY_RELATIVE\":null}"
        },
        {
          "key": "user_id",
          "value": "2"
        },
        {
          "key": "secret_token",
          "value": "[FILTERED]"
        }
      ],
      "host": "gitlab.example.com",
      "ip": "REDACTED",
      "ua": "Ruby",
      "route": "/api/:version/internal/allowed",
      "queue_duration": 4.24,
      "gitaly_calls": 0,
      "gitaly_duration": 0,
      "correlation_id": "XPUZqTukaP3"
    }
    
    # nginx_access.log
    [IP] - - [18/Jul/2019:00:30:14 +0000] "POST /api/v4/internal/allowed HTTP/1.1" 401 30 "" "Ruby"

To fix this problem, confirm that your gitlab-secrets.json file
on the Gitaly server matches the one on Gitaly client. If it doesn’t match,
update the secrets file on the Gitaly server to match the Gitaly client, then
reconfigure.

If you’ve confirmed that your gitlab-secrets.json file is the same on all Gitaly servers and clients,
the application might be fetching this secret from a different file. Your Gitaly server’s
config.toml file indicates the secrets file in use.
If that setting is missing, GitLab defaults to using .gitlab_shell_secret under
/opt/gitlab/embedded/service/gitlab-rails/.gitlab_shell_secret.
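To confirm the gitlab-secrets.json files really match across nodes, one approach (a sketch, not an official GitLab tool) is to compare checksums rather than eyeballing the JSON; the /etc/gitlab path shown in the usage comment is the usual Omnibus location and may differ in your install:

```python
# Sketch: fingerprint a secrets file so copies on different nodes can be
# compared. Run on each node and compare the hex digests.
import hashlib

def fingerprint(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Example (path assumed; adjust to your install):
#   fingerprint("/etc/gitlab/gitlab-secrets.json")
```

If the digests differ, copy the Gitaly client's file to the Gitaly server and reconfigure, as described above.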

Repository pushes fail

When attempting git push, you can see:

  • 401 Unauthorized errors.

  • The following in server logs:

    {
      ...
      "exception.class":"JWT::VerificationError",
      "exception.message":"Signature verification raised",
      ...
    }

This error occurs when the GitLab server has been upgraded to GitLab 15.5 or later but Gitaly has not yet been upgraded.

From GitLab 15.5, GitLab authenticates with GitLab Shell using a JWT token instead of a shared secret.
You should follow the recommendations on upgrading external Gitaly and upgrade Gitaly before the GitLab
server.

Repository pushes fail with a deny updating a hidden ref error

Due to a change
introduced in GitLab 13.12, Gitaly has read-only, internal GitLab references that users are not
permitted to update. If you attempt to update internal references with git push --mirror, Git
returns the rejection error, deny updating a hidden ref.

The following references are read-only:

  • refs/environments/
  • refs/keep-around/
  • refs/merge-requests/
  • refs/pipelines/

To mirror-push branches and tags only, and avoid attempting to mirror-push protected refs, run:

git push origin +refs/heads/*:refs/heads/* +refs/tags/*:refs/tags/*

Any other namespaces that the administrator wants to push can be included there as well via additional patterns.

Command line tools cannot connect to Gitaly

gRPC cannot reach your Gitaly server if:

  • You can’t connect to a Gitaly server with command-line tools.
  • Certain actions result in a 14: Connect Failed error message.

Verify you can reach Gitaly by using TCP:

sudo gitlab-rake gitlab:tcp_check[GITALY_SERVER_IP,GITALY_LISTEN_PORT]

If the TCP connection:

  • Fails, check your network settings and your firewall rules.
  • Succeeds, your networking and firewall rules are correct.

If you use proxy servers in your command line environment such as Bash, these can interfere with
your gRPC traffic.

If you use Bash or a compatible command line environment, run the following commands to determine
whether you have proxy servers configured:

echo $http_proxy
echo $https_proxy

If either of these variables have a value, your Gitaly CLI connections may be getting routed through
a proxy which cannot connect to Gitaly.

To remove the proxy setting, run the following commands (depending on which variables had values):

unset http_proxy
unset https_proxy

Permission denied errors appearing in Gitaly or Praefect logs when accessing repositories

You might see the following in Gitaly and Praefect logs:

{
  ...
  "error":"rpc error: code = PermissionDenied desc = permission denied",
  "grpc.code":"PermissionDenied",
  "grpc.meta.client_name":"gitlab-web",
  "grpc.request.fullMethod":"/gitaly.ServerService/ServerInfo",
  "level":"warning",
  "msg":"finished unary call with code PermissionDenied",
  ...
}

This is a gRPC call
error response code.

If this error occurs, even though
the Gitaly auth tokens are set up correctly,
it’s likely that the Gitaly servers are experiencing
clock drift.

Ensure the Gitaly clients and servers are synchronized, and use an NTP time
server to keep them synchronized.

Gitaly not listening on new address after reconfiguring

When updating the gitaly['listen_addr'] or gitaly['prometheus_listen_addr'] values, Gitaly may
continue to listen on the old address after a sudo gitlab-ctl reconfigure.

When this occurs, run sudo gitlab-ctl restart to resolve the issue. This should no longer be
necessary because this issue is resolved.

Permission denied errors appearing in Gitaly logs when accessing repositories from a standalone Gitaly node

If this error occurs even though file permissions are correct, it’s likely that the Gitaly node is
experiencing clock drift.

Ensure that the GitLab and Gitaly nodes are synchronized and use an NTP time
server to keep them synchronized if possible.

Health check warnings

The following warning in /var/log/gitlab/praefect/current can be ignored.

"error":"full method name not found: /grpc.health.v1.Health/Check",
"msg":"error when looking up method info"

File not found errors

The following errors in /var/log/gitlab/gitaly/current can be ignored.
They are caused by the GitLab Rails application checking for specific files
that do not exist in a repository.

"error":"not found: .gitlab/route-map.yml"
"error":"not found: Dockerfile"
"error":"not found: .gitlab-ci.yml"

Git pushes are slow when Dynatrace is enabled

Dynatrace can cause the /opt/gitlab/embedded/bin/gitaly-hooks reference transaction hook,
to take several seconds to start up and shut down. gitaly-hooks is executed twice when users
push, which causes a significant delay.

If Git pushes are too slow when Dynatrace is enabled, disable Dynatrace.

Troubleshoot Praefect (Gitaly Cluster)

The following sections provide possible solutions to Gitaly Cluster errors.

Check cluster health

Introduced in GitLab 14.5.

The check Praefect sub-command runs a series of checks to determine the health of the Gitaly Cluster.

gitlab-ctl praefect check

The following sections describe the checks that are run.

Praefect migrations

Database migrations must be up to date for Praefect to work correctly, so this check verifies that the Praefect migrations are up to date.

If this check fails:

  1. See the schema_migrations table in the database to see which migrations have run.
  2. Run praefect sql-migrate to bring the migrations up to date.

Node connectivity and disk access

Checks if Praefect can reach all of its Gitaly nodes, and if each Gitaly node has read and write access to all of its storages.

If this check fails:

  1. Confirm the network addresses and tokens are set up correctly:
    • In the Praefect configuration.
    • In each Gitaly node’s configuration.
  2. On the Gitaly nodes, check that the gitaly process is being run as the git user. There might be a permissions issue that is preventing Gitaly from
    accessing its storage directories.
  3. Confirm that there are no issues with the network that connects Praefect to Gitaly nodes.

Database read and write access

Checks if Praefect can read from and write to the database.

If this check fails:

  1. See if the Praefect database is in recovery mode. In recovery mode, tables may be read only. To check, run:

    select pg_is_in_recovery()
  2. Confirm that the user that Praefect uses to connect to PostgreSQL has read and write access to the database.

  3. See if the database has been placed into read-only mode. To check, run:

    show default_transaction_read_only

Inaccessible repositories

Checks how many repositories are inaccessible because they are missing a primary assignment, or their primary is unavailable.

If this check fails:

  1. See if any Gitaly nodes are down. Run praefect ping-nodes to check.
  2. Check if there is a high load on the Praefect database. If the Praefect database is slow to respond, health checks can fail to persist
    to the database, leading Praefect to think nodes are unhealthy.

Check clock synchronization

Introduced in GitLab 14.8.

Authentication between Praefect and the Gitaly servers requires the server times to be
in sync so the token check succeeds.

This check helps identify the root cause of permission denied
errors being logged by Praefect.

For offline environments where access to public pool.ntp.org servers is not possible, the Praefect check sub-command fails this
check with an error message similar to:

checking with NTP service at  and allowed clock drift 60000ms [correlation_id: <XXX>]
Failed (fatal) error: gitaly node at tcp://[gitlab.example-instance.com]:8075: rpc error: code = DeadlineExceeded desc = context deadline exceeded

To resolve this issue, set an environment variable on all Praefect servers to point to an accessible internal NTP server. For example:

export NTP_HOST=ntp.example.com

Praefect errors in logs

If you receive an error, check /var/log/gitlab/gitlab-rails/production.log.

Here are common errors and potential causes:

  • 500 response code
    • ActionView::Template::Error (7:permission denied)

      • praefect['auth_token'] and gitlab_rails['gitaly_token'] do not match on the GitLab server.
    • Unable to save project. Error: 7:permission denied

      • Secret token in praefect['storage_nodes'] on GitLab server does not match the
        value in gitaly['auth_token'] on one or more Gitaly servers.
  • 503 response code
    • GRPC::Unavailable (14:failed to connect to all addresses)

      • GitLab was unable to reach Praefect.
    • GRPC::Unavailable (14:all SubCons are in TransientFailure...)

      • Praefect cannot reach one or more of its child Gitaly nodes. Try running
        the Praefect connection checker to diagnose.

Praefect database experiencing high CPU load

Some common reasons for the Praefect database to experience elevated CPU usage include:

  • Prometheus metrics scrapes running an expensive query. If you have GitLab 14.2
    or above, set praefect['separate_database_metrics'] = true in gitlab.rb.
  • Read distribution caching is disabled, increasing the number of queries made to the
    database when user traffic is high. Ensure read distribution caching is enabled.
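For the metrics case, the fix is a one-line setting in /etc/gitlab/gitlab.rb followed by a reconfigure. The sketch below writes the fragment to a scratch file rather than the real config:

```shell
# Write the setting to a scratch file; on a real node this line goes into
# /etc/gitlab/gitlab.rb, followed by `sudo gitlab-ctl reconfigure`.
cat <<'EOF' > gitlab.rb.fragment
praefect['separate_database_metrics'] = true
EOF
cat gitlab.rb.fragment
```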

Determine primary Gitaly node

To determine the primary node of a repository:

  • In GitLab 14.6 and later, use the praefect metadata subcommand.

  • In GitLab 13.12 to GitLab 14.5 with repository-specific primaries,
    use the gitlab:praefect:replicas Rake task.

  • With legacy election strategies in GitLab 13.12 and earlier, the primary was the same for all repositories in a virtual storage.
    To determine the current primary Gitaly node for a specific virtual storage:

    • Use the Shard Primary Election Grafana chart on the
      Gitlab Omnibus - Praefect dashboard.
      This is recommended.

    • If you do not have Grafana set up, use the following command on each host of each
      Praefect node:

      curl localhost:9652/metrics | grep gitaly_praefect_primaries
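The gauge value identifies the primary: the storage whose gitaly_praefect_primaries series reports 1 is the elected primary for that virtual storage. A sketch over sample scrape output (the label names shown are illustrative; check them against your own scrape):

```shell
# Sample gitaly_praefect_primaries series, as might be scraped from Praefect.
cat <<'EOF' > metrics_sample.txt
gitaly_praefect_primaries{gitaly_storage="gitaly-1",virtual_storage="default"} 1
gitaly_praefect_primaries{gitaly_storage="gitaly-2",virtual_storage="default"} 0
gitaly_praefect_primaries{gitaly_storage="gitaly-3",virtual_storage="default"} 0
EOF

# The series with value 1 names the current primary.
awk '$2 == 1 { print $1 }' metrics_sample.txt
```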

View repository metadata

Introduced in GitLab 14.6.

Gitaly Cluster maintains a metadata database about the repositories stored on the cluster. Use the praefect metadata subcommand
to inspect the metadata for troubleshooting.

You can retrieve a repository’s metadata by its Praefect-assigned repository ID:

sudo /opt/gitlab/embedded/bin/praefect -config /var/opt/gitlab/praefect/config.toml metadata -repository-id <repository-id>

You can also retrieve a repository’s metadata by its virtual storage and relative path:

sudo /opt/gitlab/embedded/bin/praefect -config /var/opt/gitlab/praefect/config.toml metadata -virtual-storage <virtual-storage> -relative-path <relative-path>

Examples

To retrieve the metadata for a repository with a Praefect-assigned repository ID of 1:

sudo /opt/gitlab/embedded/bin/praefect -config /var/opt/gitlab/praefect/config.toml metadata -repository-id 1

To retrieve the metadata for a repository with virtual storage default and relative path @hashed/b1/7e/b17ef6d19c7a5b1ee83b907c595526dcb1eb06db8227d650d5dda0a9f4ce8cd9.git:

sudo /opt/gitlab/embedded/bin/praefect -config /var/opt/gitlab/praefect/config.toml metadata -virtual-storage default -relative-path @hashed/b1/7e/b17ef6d19c7a5b1ee83b907c595526dcb1eb06db8227d650d5dda0a9f4ce8cd9.git

Either of these examples retrieves the following metadata for an example repository:

Repository ID: 54771
Virtual Storage: "default"
Relative Path: "@hashed/b1/7e/b17ef6d19c7a5b1ee83b907c595526dcb1eb06db8227d650d5dda0a9f4ce8cd9.git"
Replica Path: "@hashed/b1/7e/b17ef6d19c7a5b1ee83b907c595526dcb1eb06db8227d650d5dda0a9f4ce8cd9.git"
Primary: "gitaly-1"
Generation: 1
Replicas:
- Storage: "gitaly-1"
  Assigned: true
  Generation: 1, fully up to date
  Healthy: true
  Valid Primary: true
  Verified At: 2021-04-01 10:04:20 +0000 UTC
- Storage: "gitaly-2"
  Assigned: true
  Generation: 0, behind by 1 changes
  Healthy: true
  Valid Primary: false
  Verified At: unverified
- Storage: "gitaly-3"
  Assigned: true
  Generation: replica not yet created
  Healthy: false
  Valid Primary: false
  Verified At: unverified
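Output like the above can be post-processed to spot lagging replicas. The sketch below saves a trimmed copy of the sample output and prints every storage whose generation is not fully up to date:

```shell
# Trimmed copy of the sample metadata output above.
cat <<'EOF' > metadata.txt
- Storage: "gitaly-1"
  Generation: 1, fully up to date
- Storage: "gitaly-2"
  Generation: 0, behind by 1 changes
- Storage: "gitaly-3"
  Generation: replica not yet created
EOF

# Print storages whose replica is behind or missing.
awk '/Storage:/ { storage = $3 }
     /Generation:/ && !/fully up to date/ { print storage }' metadata.txt
```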

Available metadata

The metadata retrieved by praefect metadata includes the following fields:

  • Repository ID: Permanent unique ID assigned to the repository by Praefect. This is different from the ID GitLab uses for repositories.
  • Virtual Storage: Name of the virtual storage the repository is stored in.
  • Relative Path: The repository’s path in the virtual storage.
  • Replica Path: Where on the Gitaly node’s disk the repository’s replicas are stored.
  • Primary: Current primary of the repository.
  • Generation: Used by Praefect to track repository changes. Each write to the repository increments the repository’s generation.
  • Replicas: A list of replicas that exist or are expected to exist.

For each replica, the following metadata is available:

  • Storage: Name of the Gitaly storage that contains the replica.
  • Assigned: Indicates whether the replica is expected to exist in the storage. Can be false if a Gitaly node is removed from the cluster, or if the storage contains an extra copy after the repository’s replication factor was decreased.
  • Generation: Latest confirmed generation of the replica. It indicates:
    • The replica is fully up to date if the generation matches the repository’s generation.
    • The replica is outdated if the replica’s generation is less than the repository’s generation.
    • replica not yet created is displayed if the replica does not yet exist on the storage.
  • Healthy: Indicates whether the Gitaly node hosting this replica is considered healthy by the consensus of Praefect nodes.
  • Valid Primary: Indicates whether the replica is fit to serve as the primary node. If the repository’s primary is not a valid primary, a failover occurs on the next write to the repository if another replica is a valid primary. A replica is a valid primary if:
    • It is stored on a healthy Gitaly node.
    • It is fully up to date.
    • It is not targeted by a pending deletion job from decreasing the replication factor.
    • It is assigned.
  • Verified At: Indicates the last successful verification of the replica by the verification worker. If the replica has not yet been verified, unverified is displayed in place of the last successful verification time. Introduced in GitLab 15.0.

Command fails with ‘repository not found’

If the supplied value for -virtual-storage is incorrect, the command returns the following error:

get metadata: rpc error: code = NotFound desc = repository not found

The documented examples specify -virtual-storage default. Check the Praefect server setting praefect['virtual_storages'] in /etc/gitlab/gitlab.rb.
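The valid values for -virtual-storage are the keys of praefect['virtual_storages']. A sketch that extracts the first key from a sample config fragment (the fragment stands in for /etc/gitlab/gitlab.rb; the grep pattern is illustrative and assumes lowercase storage names):

```shell
# Sample fragment standing in for /etc/gitlab/gitlab.rb on the Praefect node.
cat <<'EOF' > gitlab.rb
praefect['virtual_storages'] = {
  'default' => {
    'nodes' => {
      'gitaly-1' => { 'address' => 'tcp://10.0.0.1:8075' }
    }
  }
}
EOF

# The top-level keys of virtual_storages are the valid -virtual-storage values.
grep -A1 "praefect\['virtual_storages'\]" gitlab.rb | grep -o "'[a-z-]*'" | head -1
```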

Check that repositories are in sync

In some cases, the Praefect database can get out of sync with the underlying Gitaly nodes. To check that
a given repository is fully synced on all nodes, run the gitlab:praefect:replicas Rake task,
which checksums the repository on all Gitaly nodes.

The Praefect dataloss command only checks the state of the repository in the Praefect database, and cannot
be relied on to detect sync problems in this scenario.

Relation does not exist errors

By default, the Praefect database tables are created automatically by the gitlab-ctl reconfigure task.

However, the tables are not created, and Praefect raises errors that relations do not
exist, if either:

  • The gitlab-ctl reconfigure command isn’t executed.
  • There are errors during the execution.

For example:

  • ERROR: relation "node_status" does not exist at character 13

  • ERROR: relation "replication_queue_lock" does not exist at character 40

  • This error:

    {"level":"error","msg":"Error updating node: pq: relation \"node_status\" does not exist","pid":210882,"praefectName":"gitlab1x4m:0.0.0.0:2305","time":"2021-04-01T19:26:19.473Z","virtual_storage":"praefect-cluster-1"}

To solve this, run the database schema migration using the sql-migrate sub-command of
the praefect command:

$ sudo /opt/gitlab/embedded/bin/praefect -config /var/opt/gitlab/praefect/config.toml sql-migrate
praefect sql-migrate: OK (applied 21 migrations)

Requests fail with ‘repository scoped: invalid Repository’ errors

This error indicates that the virtual storage name used in the
Praefect configuration does not match the storage name used in
the git_data_dirs setting for GitLab.

Resolve this by matching the virtual storage names used in Praefect and GitLab configuration.
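A sketch of that consistency check over two sample config fragments (the storage name default and the grep patterns are illustrative; real configurations may define several storages):

```shell
# Storage name as GitLab sees it (git_data_dirs key) ...
cat <<'EOF' > gitlab_rails.rb
git_data_dirs({
  "default" => { "gitaly_address" => "tcp://praefect.internal:2305" }
})
EOF
# ... and as Praefect serves it (virtual storage name).
cat <<'EOF' > praefect.rb
praefect['virtual_storages'] = { 'default' => { 'nodes' => {} } }
EOF

rails_name=$(grep -o '"[a-z-]*" =>' gitlab_rails.rb | head -1 | tr -d '">= ')
praefect_name=$(grep -o "'[a-z-]*' =>" praefect.rb | head -1 | tr -d "'>= ")
if [ "$rails_name" = "$praefect_name" ]; then
  echo "storage names match: $rails_name"
else
  echo "MISMATCH: GitLab uses '$rails_name', Praefect serves '$praefect_name'"
fi
```

If the two names differ, the praefect side, the git_data_dirs side, or both must be edited until they agree.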

Gitaly Cluster performance issues on cloud platforms

Praefect does not require a lot of CPU or memory, and can run on small virtual machines.
Cloud services may place other limits on the resources that small VMs can use, such as
disk IO and network traffic.

Praefect nodes generate a lot of network traffic. The following symptoms can be observed if their network bandwidth has
been throttled by the cloud service:

  • Poor performance of Git operations.
  • High network latency.
  • High memory use by Praefect.

Possible solutions:

  • Provision larger VMs to gain access to larger network traffic allowances.
  • Use your cloud service’s monitoring and logging to check that the Praefect nodes are not exhausting their traffic allowances.

Profiling Gitaly

Gitaly exposes several of Go’s built-in performance profiling tools on the Prometheus listen port. For example, if Prometheus is listening
on port 9236 of the GitLab server:

  • Get a list of running goroutines and their backtraces:

    curl --output goroutines.txt "http://<gitaly_server>:9236/debug/pprof/goroutine?debug=2"
  • Run a CPU profile for 30 seconds:

    curl --output cpu.bin "http://<gitaly_server>:9236/debug/pprof/profile"
  • Profile heap memory usage:

    curl --output heap.bin "http://<gitaly_server>:9236/debug/pprof/heap"
  • Record a 5-second execution trace. Tracing impacts Gitaly’s performance while it runs:

    curl --output trace.bin "http://<gitaly_server>:9236/debug/pprof/trace?seconds=5"

On a host with Go installed, the CPU profile and heap profile can be viewed in a browser:

go tool pprof -http=:8001 cpu.bin
go tool pprof -http=:8001 heap.bin

Execution traces can be viewed by running:

go tool trace trace.bin