Error: rpc error: code = PermissionDenied desc = You are not authorized for this operation


Contents

  1. Troubleshooting Gitaly and Gitaly Cluster (FREE SELF)
  2. Troubleshoot Gitaly
  3. Check versions when using standalone Gitaly servers
  4. Find storage resource details
  5. Use gitaly-debug
  6. Commits, pushes, and clones return a 401
  7. 500 and fetching folder content errors on repository pages
  8. Client side gRPC logs
  9. Server side gRPC logs
  10. Correlating Git processes with RPCs
  11. Observing gitaly-ruby traffic
  12. Repository changes fail with a 401 Unauthorized error
  13. Repository pushes fail with a deny updating a hidden ref error
  14. Command line tools cannot connect to Gitaly
  15. Permission denied errors appearing in Gitaly or Praefect logs when accessing repositories
  16. Gitaly not listening on new address after reconfiguring
  17. Permission denied errors appearing in Gitaly logs when accessing repositories from a standalone Gitaly node
  18. Health check warnings
  19. File not found errors
  20. Git pushes are slow when Dynatrace is enabled
  21. Troubleshoot Praefect (Gitaly Cluster)
  22. Check cluster health
  23. Praefect migrations
  24. Node connectivity and disk access
  25. Database read and write access
  26. Inaccessible repositories
  27. Check clock synchronization
  28. Praefect errors in logs
  29. Praefect database experiencing high CPU load
  30. Determine primary Gitaly node
  31. View repository metadata
  32. Examples
  33. Available metadata
  34. Command fails with ‘repository not found’
  35. Check that repositories are in sync
  36. Relation does not exist errors
  37. Requests fail with ‘repository scoped: invalid Repository’ errors
  38. Gitaly Cluster performance issues on cloud platforms
  39. Profiling Gitaly

Troubleshooting Gitaly and Gitaly Cluster (FREE SELF)

Refer to the information below when troubleshooting Gitaly and Gitaly Cluster.

Troubleshoot Gitaly

The following sections provide possible solutions to Gitaly errors.

Check versions when using standalone Gitaly servers

When using standalone Gitaly servers, you must make sure they are the same version as GitLab to ensure full compatibility:

  1. On the top bar, select Main menu > Admin on your GitLab instance.
  2. On the left sidebar, select Overview > Gitaly Servers.
  3. Confirm all Gitaly servers indicate that they are up to date.

Find storage resource details

You can run the following commands in a Rails console to determine the available and used space on a Gitaly storage:
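The command block was lost in extraction. As a sketch, in a Rails console (sudo gitlab-rails console), assuming the storage is named default and that this internal API matches your GitLab version:

```ruby
# Disk statistics for the Gitaly storage named "default"
# (internal GitLab API; the exact method may differ between versions):
Gitlab::GitalyClient::ServerService.new("default").storage_disk_statistics
```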

Use gitaly-debug

The gitaly-debug command provides "production debugging" tools for Gitaly and Git performance. It is intended to help production engineers and support engineers investigate Gitaly performance problems.

To see the help page of gitaly-debug for a list of supported sub-commands, run:
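For example, on an Omnibus GitLab install (the path below is the usual Omnibus location and is an assumption):

```shell
# Print the gitaly-debug help page, listing supported sub-commands:
/opt/gitlab/embedded/bin/gitaly-debug -h
```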

Commits, pushes, and clones return a 401

You need to sync your gitlab-secrets.json file with your GitLab application nodes.

500 and fetching folder content errors on repository pages

Fetching folder content errors, and in some cases 500 errors, indicate connectivity problems between GitLab and Gitaly. Consult the client-side gRPC logs for details.

Client side gRPC logs

Gitaly uses the gRPC RPC framework. The Ruby gRPC client has its own log file, which may contain useful information when you are seeing Gitaly errors. You can control the log level of the gRPC client with the GRPC_LOG_LEVEL environment variable. The default level is WARN.

You can run a gRPC trace with:
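The command itself was lost in extraction. GRPC_TRACE and GRPC_VERBOSITY are standard gRPC debugging variables; a sketch for an Omnibus install:

```shell
# Run the Gitaly connectivity check with full gRPC tracing enabled:
sudo GRPC_TRACE=all GRPC_VERBOSITY=DEBUG gitlab-rake gitlab:gitaly:check
```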

If this command fails with a failed to connect to all addresses error, check for an SSL or TLS problem:
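For example, using openssl (the hostname and port below are placeholders for your Gitaly server's TLS address):

```shell
# Inspect the TLS handshake with the Gitaly server and show the verification result:
echo | openssl s_client -connect gitaly.example.com:9999 2>/dev/null | grep "Verify return code"
```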

Check whether Verify return code field indicates a known Omnibus GitLab configuration problem.

If openssl succeeds but gitlab-rake gitlab:gitaly:check fails, check certificate requirements for Gitaly.

Server side gRPC logs

gRPC tracing can also be enabled in Gitaly itself with the GODEBUG=http2debug environment variable. To set this in an Omnibus GitLab install:

Add the following to your gitlab.rb file:
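The configuration snippet was lost in extraction. A sketch, assuming the environment variable is passed through gitaly['env'] in /etc/gitlab/gitlab.rb:

```ruby
# Enable HTTP/2 debug tracing in Gitaly (assumed setting; reconfigure afterwards):
gitaly['env'] = {
  "GODEBUG=http2debug" => "2"
}
```

Then run sudo gitlab-ctl reconfigure for the change to take effect.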

Correlating Git processes with RPCs

Sometimes you need to find out which Gitaly RPC created a particular Git process.

One method for doing this is by using DEBUG logging. However, this needs to be enabled ahead of time and the logs produced are quite verbose.

A lightweight method for doing this correlation is by inspecting the environment of the Git process (using its PID ) and looking at the CORRELATION_ID variable:
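For example, reading the process environment from /proc:

```shell
# Replace <pid> with the Git process ID; prints the correlation ID, if any:
tr '\0' '\n' < /proc/<pid>/environ | grep ^CORRELATION_ID
```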

This method isn’t reliable for git cat-file processes, because Gitaly internally pools and re-uses those across RPCs.

Observing gitaly-ruby traffic

gitaly-ruby is an internal implementation detail of Gitaly, so there's not much visibility into what goes on inside gitaly-ruby processes.

If you have Prometheus set up to scrape your Gitaly process, you can see request rates and error codes for individual RPCs in gitaly-ruby by querying grpc_client_handled_total .

All gRPC calls made by gitaly-ruby itself are internal calls from the main Gitaly process to one of its gitaly-ruby sidecars.

Assuming your grpc_client_handled_total counter only observes Gitaly, the following query shows you which RPCs are (most likely) internally implemented as calls to gitaly-ruby :
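The query itself was lost in extraction; a sketch of such a PromQL query (the grpc_method label is the conventional one for gRPC Prometheus metrics):

```promql
sum(rate(grpc_client_handled_total[5m])) by (grpc_method) > 0
```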

Repository changes fail with a 401 Unauthorized error

If you run Gitaly on its own server and notice these conditions:

  • Users can successfully clone and fetch repositories by using both SSH and HTTPS.
  • Users can’t push to repositories, or receive a 401 Unauthorized message when attempting to make changes to them in the web UI.

Gitaly may be failing to authenticate with the Gitaly client because it has the wrong secrets file.

Confirm the following are all true:

When any user performs a git push to any repository on this Gitaly server, it fails with a 401 Unauthorized error:

When any user adds or modifies a file from the repository using the GitLab UI, it immediately fails with a red 401 Unauthorized banner.

Creating a new project and initializing it with a README successfully creates the project but doesn’t create the README.

When tailing the logs on a Gitaly client and reproducing the error, you get 401 errors when reaching the /api/v4/internal/allowed endpoint:

To fix this problem, confirm that your gitlab-secrets.json file on the Gitaly server matches the one on the Gitaly client. If it doesn't match, update the secrets file on the Gitaly server to match the Gitaly client, then reconfigure.

If you’ve confirmed that your gitlab-secrets.json file is the same on all Gitaly servers and clients, the application might be fetching this secret from a different file. Your Gitaly server’s config.toml file indicates the secrets file in use. If that setting is missing, GitLab defaults to using .gitlab_shell_secret under /opt/gitlab/embedded/service/gitlab-rails/.gitlab_shell_secret .

Repository pushes fail

When attempting git push, you can see:

401 Unauthorized errors.

The following in server logs:

This error occurs when the GitLab server has been upgraded to GitLab 15.5 or later but Gitaly has not yet been upgraded.

Repository pushes fail with a deny updating a hidden ref error

Due to a change introduced in GitLab 13.12, Gitaly has read-only, internal GitLab references that users are not permitted to update. If you attempt to update internal references with git push --mirror , Git returns the rejection error, deny updating a hidden ref .

The following references are read-only:

  • refs/environments/
  • refs/keep-around/
  • refs/merge-requests/
  • refs/pipelines/

To mirror-push branches and tags only, and avoid attempting to mirror-push protected refs, run:
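The command was lost in extraction. A sketch using an explicit refspec for branches and tags only (the remote name origin is an assumption):

```shell
# Force-push only branches and tags, skipping internal/hidden refs:
git push origin "+refs/heads/*:refs/heads/*" "+refs/tags/*:refs/tags/*"
```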

Administrators can include any other namespaces they want to push by adding patterns to this command.

Command line tools cannot connect to Gitaly

gRPC cannot reach your Gitaly server if:

  • You can’t connect to a Gitaly server with command-line tools.
  • Certain actions result in a 14: Connect Failed error message.

Verify you can reach Gitaly by using TCP:
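For example, using the GitLab TCP check Rake task (replace the placeholders with your Gitaly server's address and port):

```shell
# Verify TCP connectivity from the client node to the Gitaly server:
sudo gitlab-rake gitlab:tcp_check[GITALY_SERVER_IP,GITALY_LISTEN_PORT]
```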

If the TCP connection:

  • Fails, check your network settings and your firewall rules.
  • Succeeds, your networking and firewall rules are correct.

If you use proxy servers in your command line environment such as Bash, these can interfere with your gRPC traffic.

If you use Bash or a compatible command line environment, run the following commands to determine whether you have proxy servers configured:

If either of these variables has a value, your Gitaly CLI connections may be getting routed through a proxy which cannot connect to Gitaly.

To remove the proxy setting, run the following commands (depending on which variables had values):
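The check and removal can be sketched in Bash (http_proxy and https_proxy are the conventional proxy variables):

```shell
# Check whether proxy variables are set (non-empty output means a proxy is configured):
echo "http_proxy:  ${http_proxy}"
echo "https_proxy: ${https_proxy}"

# Remove the proxy settings for the current shell session:
unset http_proxy
unset https_proxy
```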

Permission denied errors appearing in Gitaly or Praefect logs when accessing repositories

You might see the following in Gitaly and Praefect logs:

If this error occurs, even though the Gitaly auth tokens are set up correctly, it’s likely that the Gitaly servers are experiencing clock drift.

Ensure the Gitaly clients and servers are synchronized, and use an NTP time server to keep them synchronized.

Gitaly not listening on new address after reconfiguring

When updating the gitaly['listen_addr'] or gitaly['prometheus_listen_addr'] values, Gitaly may continue to listen on the old address after a sudo gitlab-ctl reconfigure .

When this occurs, run sudo gitlab-ctl restart to resolve the issue. This issue is resolved in recent GitLab versions, so the restart should no longer be necessary.

Permission denied errors appearing in Gitaly logs when accessing repositories from a standalone Gitaly node

If this error occurs even though file permissions are correct, it’s likely that the Gitaly node is experiencing clock drift.

Ensure that the GitLab and Gitaly nodes are synchronized and use an NTP time server to keep them synchronized if possible.

Health check warnings

The following warning in /var/log/gitlab/praefect/current can be ignored.

File not found errors

The following errors in /var/log/gitlab/gitaly/current can be ignored. They are caused by the GitLab Rails application checking for specific files that do not exist in a repository.

Git pushes are slow when Dynatrace is enabled

Dynatrace can cause the /opt/gitlab/embedded/bin/gitaly-hooks reference transaction hook to take several seconds to start up and shut down. gitaly-hooks is executed twice when users push, which causes a significant delay.

If Git pushes are too slow when Dynatrace is enabled, disable Dynatrace.

Troubleshoot Praefect (Gitaly Cluster)

The following sections provide possible solutions to Gitaly Cluster errors.

Check cluster health

The check Praefect sub-command runs a series of checks to determine the health of the Gitaly Cluster.

The following sections describe the checks that are run.

Praefect migrations

Database migrations must be up to date for Praefect to work correctly, so this checks whether Praefect migrations are up to date.

If this check fails:

  1. See the schema_migrations table in the database to see which migrations have run.
  2. Run praefect sql-migrate to bring the migrations up to date.

Node connectivity and disk access

Checks if Praefect can reach all of its Gitaly nodes, and if each Gitaly node has read and write access to all of its storages.

If this check fails:

  1. Confirm the network addresses and tokens are set up correctly:
    • In the Praefect configuration.
    • In each Gitaly node’s configuration.
  2. On the Gitaly nodes, check that the gitaly process is being run as git . There might be a permissions issue that is preventing Gitaly from accessing its storage directories.
  3. Confirm that there are no issues with the network that connects Praefect to Gitaly nodes.

Database read and write access

Checks if Praefect can read from and write to the database.

If this check fails:

See if the Praefect database is in recovery mode. In recovery mode, tables may be read only. To check, run:
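The query was lost in extraction. A minimal check, run against the Praefect database (for example via psql):

```sql
-- Returns 't' (true) if the database is in recovery mode:
SELECT pg_is_in_recovery();
```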

Confirm that the user that Praefect uses to connect to PostgreSQL has read and write access to the database.

See if the database has been placed into read-only mode. To check, run:
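The query was lost in extraction. A minimal check, run against the Praefect database:

```sql
-- Returns 'on' if the database only accepts read-only transactions:
SHOW default_transaction_read_only;
```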

Inaccessible repositories

Checks how many repositories are inaccessible because they are missing a primary assignment, or their primary is unavailable.

If this check fails:

  1. See if any Gitaly nodes are down. Run praefect ping-nodes to check.
  2. Check if there is a high load on the Praefect database. If the Praefect database is slow to respond, health checks can fail to persist to the database, leading Praefect to consider nodes unhealthy.

Check clock synchronization

Authentication between Praefect and the Gitaly servers requires the server times to be in sync so the token check succeeds.

This check helps identify the root cause of permission denied errors being logged by Praefect.

Praefect errors in logs

If you receive an error, check /var/log/gitlab/gitlab-rails/production.log .

Here are common errors and potential causes:

  • 500 response code
    • ActionView::Template::Error (7:permission denied)
      • praefect['auth_token'] and gitlab_rails['gitaly_token'] do not match on the GitLab server.
    • Unable to save project. Error: 7:permission denied
      • The secret token in praefect['storage_nodes'] on the GitLab server does not match the value in gitaly['auth_token'] on one or more Gitaly servers.
  • 503 response code
    • GRPC::Unavailable (14:failed to connect to all addresses)
      • GitLab was unable to reach Praefect.
    • GRPC::Unavailable (14:all SubCons are in TransientFailure.)
      • Praefect cannot reach one or more of its child Gitaly nodes. Try running the Praefect connection checker to diagnose.

Praefect database experiencing high CPU load

Some common reasons for the Praefect database to experience elevated CPU usage include:

  • Prometheus metrics scrapes running an expensive query. If you have GitLab 14.2 or above, set praefect['separate_database_metrics'] = true in gitlab.rb.
  • Read distribution caching is disabled, increasing the number of queries made to the database when user traffic is high. Ensure read distribution caching is enabled.

Determine primary Gitaly node

To determine the primary node of a repository:

In GitLab 14.6 and later, use the praefect metadata subcommand.

With legacy election strategies in GitLab 13.12 and earlier, the primary was the same for all repositories in a virtual storage. To determine the current primary Gitaly node for a specific virtual storage:

Use the Shard Primary Election Grafana chart on the GitLab Omnibus - Praefect dashboard. This is the recommended method.

If you do not have Grafana set up, use the following command on each host of each Praefect node:
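The command was lost in extraction. One approach is to query Praefect's own Prometheus metrics; the port assumes Praefect's default Prometheus listener (localhost:9652), and the metric name is an assumption:

```shell
# Look for the primary-election metric in Praefect's Prometheus output:
curl -s localhost:9652/metrics | grep gitaly_praefect_primaries
```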

View repository metadata

Gitaly Cluster maintains a metadata database about the repositories stored on the cluster. Use the praefect metadata subcommand to inspect the metadata for troubleshooting.

You can retrieve a repository’s metadata by its Praefect-assigned repository ID:

You can also retrieve a repository’s metadata by its virtual storage and relative path:

Examples

To retrieve the metadata for a repository with a Praefect-assigned repository ID of 1:

To retrieve the metadata for a repository with virtual storage default and relative path @hashed/b1/7e/b17ef6d19c7a5b1ee83b907c595526dcb1eb06db8227d650d5dda0a9f4ce8cd9.git :
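The example commands were lost in extraction. A sketch of both invocations, assuming the standard Omnibus paths for the praefect binary and its configuration:

```shell
# Look up by Praefect-assigned repository ID:
sudo /opt/gitlab/embedded/bin/praefect -config /var/opt/gitlab/praefect/config.toml \
  metadata -repository-id 1

# Look up by virtual storage and relative path:
sudo /opt/gitlab/embedded/bin/praefect -config /var/opt/gitlab/praefect/config.toml \
  metadata -virtual-storage default \
  -relative-path @hashed/b1/7e/b17ef6d19c7a5b1ee83b907c595526dcb1eb06db8227d650d5dda0a9f4ce8cd9.git
```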

Either of these examples retrieves the following metadata for an example repository:

Available metadata

The metadata retrieved by praefect metadata includes the fields in the following tables.

Field Description
Repository ID Permanent unique ID assigned to the repository by Praefect. Different from the ID GitLab uses for repositories.
Virtual Storage Name of the virtual storage the repository is stored in.
Relative Path Repository’s path in the virtual storage.
Replica Path Where on the Gitaly node’s disk the repository’s replicas are stored.
Primary Current primary of the repository.
Generation Used by Praefect to track repository changes. Each write in the repository increments the repository’s generation.
Replicas A list of replicas that exist or are expected to exist.

For each replica, the following metadata is available:

Replicas Field Description
Storage Name of the Gitaly storage that contains the replica.
Assigned Indicates whether the replica is expected to exist in the storage. Can be false if a Gitaly node is removed from the cluster or if the storage contains an extra copy after the repository’s replication factor was decreased.
Generation Latest confirmed generation of the replica. It indicates:

— The replica is fully up to date if the generation matches the repository’s generation.
— The replica is outdated if the replica’s generation is less than the repository’s generation.
— replica not yet created if the replica does not yet exist at all on the storage.

Healthy Indicates whether the Gitaly node that is hosting this replica is considered healthy by the consensus of Praefect nodes.
Valid Primary Indicates whether the replica is fit to serve as the primary node. If the repository’s primary is not a valid primary, a failover occurs on the next write to the repository if there is another replica that is a valid primary. A replica is a valid primary if:

— It is stored on a healthy Gitaly node.
— It is fully up to date.
— It is not targeted by a pending deletion job from decreasing replication factor.
— It is assigned.

Verified At Indicates last successful verification of the replica by the verification worker. If the replica has not yet been verified, unverified is displayed in place of the last successful verification time. Introduced in GitLab 15.0.

Command fails with ‘repository not found’

If the supplied value for -virtual-storage is incorrect, the command returns the following error:

The documented examples specify -virtual-storage default. Check the Praefect server setting praefect['virtual_storages'] in /etc/gitlab/gitlab.rb.

Check that repositories are in sync

In some cases, the Praefect database can get out of sync with the underlying Gitaly nodes. To check that a given repository is fully synced on all nodes, run the gitlab:praefect:replicas Rake task that checksums the repository on all Gitaly nodes.

The Praefect dataloss command only checks the state of the repository in the Praefect database, and cannot be relied upon to detect sync problems in this scenario.

Relation does not exist errors

By default, Praefect database tables are created automatically by the gitlab-ctl reconfigure task.

However, the Praefect database tables are not created on initial reconfigure, and Praefect can throw errors that relations do not exist, if either:

  • The gitlab-ctl reconfigure command isn't executed.
  • There are errors during the execution.

ERROR: relation "node_status" does not exist at character 13

ERROR: relation "replication_queue_lock" does not exist at character 40

To solve this, run the database schema migration using the sql-migrate sub-command of the praefect command:
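The command was lost in extraction; a sketch, assuming the standard Omnibus paths:

```shell
# Apply outstanding Praefect database schema migrations:
sudo /opt/gitlab/embedded/bin/praefect -config /var/opt/gitlab/praefect/config.toml sql-migrate
```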

Requests fail with ‘repository scoped: invalid Repository’ errors

This indicates that the virtual storage name used in the Praefect configuration does not match the storage name used in the git_data_dirs setting for GitLab.

Resolve this by matching the virtual storage names used in Praefect and GitLab configuration.

Gitaly Cluster performance issues on cloud platforms

Praefect does not require a lot of CPU or memory, and can run on small virtual machines. Cloud services may place other limits on the resources that small VMs can use, such as disk IO and network traffic.

Praefect nodes generate a lot of network traffic. The following symptoms can be observed if their network bandwidth has been throttled by the cloud service:

  • Poor performance of Git operations.
  • High network latency.
  • High memory use by Praefect.

To remedy this:

  • Provision larger VMs to gain access to larger network traffic allowances.
  • Use your cloud service's monitoring and logging to check that the Praefect nodes are not exhausting their traffic allowances.

Profiling Gitaly

Gitaly exposes several of Golang’s built-in performance profiling tools on the Prometheus listen port. For example, if Prometheus is listening on port 9236 of the GitLab server:

Get a list of running goroutines and their backtraces:

Run a CPU profile for 30 seconds:

Profile heap memory usage:

Record a 5 second execution trace. This will impact Gitaly’s performance while running:

On a host with go installed, the CPU profile and heap profile can be viewed in a browser:
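The commands above were lost in extraction. These are the standard Go net/http/pprof endpoints, served here on the Prometheus port (9236 per the text); a sketch:

```shell
# List running goroutines and their backtraces:
curl "http://localhost:9236/debug/pprof/goroutine?debug=2"

# 30-second CPU profile:
curl -o cpu.bin "http://localhost:9236/debug/pprof/profile?seconds=30"

# Heap memory profile:
curl -o heap.bin "http://localhost:9236/debug/pprof/heap"

# 5-second execution trace (impacts Gitaly's performance while running):
curl -o trace.bin "http://localhost:9236/debug/pprof/trace?seconds=5"

# On a host with go installed, view a profile in a browser:
go tool pprof -http=:8001 cpu.bin
```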


I realize that this is closed but leaving this here anyway for anyone else who finds it. Sorry the test code is in Go but it should be easy enough to translate to Node or just test using the CLI.

Bug Report

NOTE: Not really sure if this is a "bug" as it seems to be working the way it was designed, but it's very unclear how to use the standard roles provided to subscribe and consume messages from a topic.

NOTE: All roles were granted to the service account from the subscription not from the project

When creating service accounts to use our Pub/Sub topics, I gave the accounts that only need to read from topics the Pub/Sub Subscriber role. The name of the role implies that any account with that role should be able to read from a topic. This, however, wasn't the case: I encountered PermissionDenied errors. So I began experimenting with different combinations of standard roles, trying to find the least permissive set that would allow this.

The least permissive standard role that works is Pub/Sub Editor. Just looking at the name, this role would not be my first choice for a read-only action. This role also contains pubsub.subscriptions.delete, which is a dangerous permission for a service account meant to be "read" only (the account does need update permission to ack messages, but that's expected for a consumer in this context).

The least permissive set of permissions to achieve the goal of reading from a topic, processing, and acking can be accomplished with the following custom role:

$ gcloud iam roles describe --project my-project my_custom_role
description: 'Created on: 2018-08-09'
etag: ...
includedPermissions:
- pubsub.subscriptions.consume
- pubsub.subscriptions.get
- pubsub.subscriptions.update
name: projects/my-project/roles/my_custom_role
stage: ALPHA
title: my_custom_role
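A role like the one described above can be created with gcloud (the role and project names are the examples from this issue):

```shell
# Create a custom role with only the permissions needed to consume a subscription:
gcloud iam roles create my_custom_role --project=my-project \
  --title="my_custom_role" \
  --permissions=pubsub.subscriptions.consume,pubsub.subscriptions.get,pubsub.subscriptions.update
```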

Repro Steps

  • Create a topic, my-topic
  • Create a subscription for my-topic named my-topic-subscription
  • Create a service account my-svc-acct
  • Grant the Pub/Sub Editor role to my-svc-acct via the subscription, not via the project.
  • Attempt to do a receive
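The repro steps above can be sketched with gcloud (names as in the steps; the service account's full email is an assumption based on the project name):

```shell
gcloud pubsub topics create my-topic
gcloud pubsub subscriptions create my-topic-subscription --topic=my-topic
gcloud iam service-accounts create my-svc-acct

# Grant the role on the subscription, not on the project:
gcloud pubsub subscriptions add-iam-policy-binding my-topic-subscription \
  --member="serviceAccount:my-svc-acct@my-project.iam.gserviceaccount.com" \
  --role="roles/pubsub.editor"
```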

Expected

A message is received

Actual

An error is received:

rpc error: code = PermissionDenied desc = User not authorized to perform this action.

Minimum Perms Needed to Subscribe

After some testing, I determined this is the minimum set of permissions needed to subscribe and process messages. Code used for testing provided below.

$ gcloud iam roles describe --project my-project my-custom-role
description: 'Created on: 2018-08-09'
etag: ...
includedPermissions:
- pubsub.subscriptions.consume
- pubsub.subscriptions.get
- pubsub.subscriptions.update
name: projects/my-project/roles/my-custom-role
stage: ALPHA
title: SRE Service Subscriber

$ GOOGLE_APPLICATION_CREDENTIALS=./creds.json go run tester.go my-topic-subscription my-topic

Testing topic perms projects/my-project/topics/my-topic
	NO PERMS!
================================================================================

Testing subscription perms projects/my-project/subscriptions/my-topic-subscription
	Allowed: pubsub.subscriptions.consume
	Allowed: pubsub.subscriptions.get
	Allowed: pubsub.subscriptions.update
================================================================================

Starting Receive...
hello

Test Code

Pushing data via CLI:

gcloud pubsub topics publish my-topic --message "hello"

package main

import (
	"context"
	"fmt"
	"os"
	"errors"
	"strings"
	"cloud.google.com/go/pubsub"
)

const (
	project = "my-project"
)

var (
	topicPerms = []string{
		"pubsub.topics.publish",
		"pubsub.topics.delete",
		"pubsub.topics.update",
		"pubsub.topics.getIamPolicy",
		"pubsub.topics.setIamPolicy",
	}

	subPerms = []string{
		"pubsub.subscriptions.consume",
		"pubsub.subscriptions.get",
		"pubsub.subscriptions.delete",
		"pubsub.subscriptions.update",
		"pubsub.subscriptions.getIamPolicy",
		"pubsub.subscriptions.setIamPolicy",
	}
)

func hr() {
	fmt.Println("")
	fmt.Println(strings.Repeat("=", 80))
	fmt.Println("")
}

func printPerms(perms []string) {
	if len(perms) > 0 {
		for _, perm := range perms {
			fmt.Printf("\tAllowed: %v\n", perm)
		}
	} else {
		fmt.Println("\tNO PERMS!")
	}
}

func main() {
	subName := os.Args[1]
	topicName := os.Args[2]

	client, err := pubsub.NewClient(context.Background(), project)
	if err != nil {
		fmt.Println(err.Error())
		return
	}

	topic := client.TopicInProject(topicName, project)
	sub := client.SubscriptionInProject(subName, project)

	hr()
	fmt.Println("Testing topic perms " + topic.String())
	tperms, err := topic.IAM().TestPermissions(context.Background(), topicPerms)
	if err != nil {
		fmt.Println(err.Error())
	}
	printPerms(tperms)

	hr()
	fmt.Println("Testing subscription perms " + sub.String())
	perms, _ := sub.IAM().TestPermissions(context.Background(), subPerms)
	printPerms(perms)

	hr()
	fmt.Println("Starting Receive...")
	err = sub.Receive(context.Background(), func(ctx context.Context, msg *pubsub.Message) {
		fmt.Println(string(msg.Data))
		msg.Ack()
		return
	})
	if err != nil {
		err = errors.New(fmt.Sprintf("error receiving from %s: %s", subName, err.Error()))
		fmt.Println(err)
		return
	}
}

Name already in use

gitlabhq / doc / administration / gitaly / troubleshooting.md

  • Go to file T
  • Go to line L
  • Copy path
  • Copy permalink

Copy raw contents

Copy raw contents

Troubleshooting Gitaly and Gitaly Cluster (FREE SELF)

Refer to the information below when troubleshooting Gitaly and Gitaly Cluster.

The following sections provide possible solutions to Gitaly errors.

Check versions when using standalone Gitaly servers

When using standalone Gitaly servers, you must make sure they are the same version as GitLab to ensure full compatibility:

  1. On the top bar, select Main menu > Admin on your GitLab instance.
  2. On the left sidebar, select Overview > Gitaly Servers.
  3. Confirm all Gitaly servers indicate that they are up to date.

Find storage resource details

You can run the following commands in a Rails console to determine the available and used space on a Gitaly storage:

The gitaly-debug command provides «production debugging» tools for Gitaly and Git performance. It is intended to help production engineers and support engineers investigate Gitaly performance problems.

To see the help page of gitaly-debug for a list of supported sub-commands, run:

Commits, pushes, and clones return a 401

You need to sync your gitlab-secrets.json file with your GitLab application nodes.

500 and fetching folder content errors on repository pages

Fetching folder content , and in some cases 500 , errors indicate connectivity problems between GitLab and Gitaly. Consult the client-side gRPC logs for details.

Client side gRPC logs

Gitaly uses the gRPC RPC framework. The Ruby gRPC client has its own log file which may contain useful information when you are seeing Gitaly errors. You can control the log level of the gRPC client with the GRPC_LOG_LEVEL environment variable. The default level is WARN .

You can run a gRPC trace with:

If this command fails with a failed to connect to all addresses error, check for an SSL or TLS problem:

Check whether Verify return code field indicates a known Omnibus GitLab configuration problem.

If openssl succeeds but gitlab-rake gitlab:gitaly:check fails, check certificate requirements for Gitaly.

Server side gRPC logs

gRPC tracing can also be enabled in Gitaly itself with the GODEBUG=http2debug environment variable. To set this in an Omnibus GitLab install:

Add the following to your gitlab.rb file:

Correlating Git processes with RPCs

Sometimes you need to find out which Gitaly RPC created a particular Git process.

One method for doing this is by using DEBUG logging. However, this needs to be enabled ahead of time and the logs produced are quite verbose.

A lightweight method for doing this correlation is by inspecting the environment of the Git process (using its PID ) and looking at the CORRELATION_ID variable:

This method isn’t reliable for git cat-file processes, because Gitaly internally pools and re-uses those across RPCs.

Observing gitaly-ruby traffic

gitaly-ruby is an internal implementation detail of Gitaly, so, there’s not that much visibility into what goes on inside gitaly-ruby processes.

If you have Prometheus set up to scrape your Gitaly process, you can see request rates and error codes for individual RPCs in gitaly-ruby by querying grpc_client_handled_total .

All gRPC calls made by gitaly-ruby itself are internal calls from the main Gitaly process to one of its gitaly-ruby sidecars.

Assuming your grpc_client_handled_total counter only observes Gitaly, the following query shows you RPCs are (most likely) internally implemented as calls to gitaly-ruby :

Repository changes fail with a 401 Unauthorized error

If you run Gitaly on its own server and notice these conditions:

  • Users can successfully clone and fetch repositories by using both SSH and HTTPS.
  • Users can’t push to repositories, or receive a 401 Unauthorized message when attempting to make changes to them in the web UI.

Gitaly may be failing to authenticate with the Gitaly client because it has the wrong secrets file.

Confirm the following are all true:

When any user performs a git push to any repository on this Gitaly server, it fails with a 401 Unauthorized error:

When any user adds or modifies a file from the repository using the GitLab UI, it immediately fails with a red 401 Unauthorized banner.

Creating a new project and initializing it with a README successfully creates the project but doesn’t create the README.

When tailing the logs on a Gitaly client and reproducing the error, you get 401 errors when reaching the /api/v4/internal/allowed endpoint:

To fix this problem, confirm that your gitlab-secrets.json file on the Gitaly server matches the one on Gitaly client. If it doesn’t match, update the secrets file on the Gitaly server to match the Gitaly client, then reconfigure.

If you’ve confirmed that your gitlab-secrets.json file is the same on all Gitaly servers and clients, the application might be fetching this secret from a different file. Your Gitaly server’s config.toml file indicates the secrets file in use. If that setting is missing, GitLab defaults to using .gitlab_shell_secret under /opt/gitlab/embedded/service/gitlab-rails/.gitlab_shell_secret .

Repository pushes fail

When attempting git push , you can see:

401 Unauthorized errors.

The following in server logs:

This error occurs when the GitLab server has been upgraded to GitLab 15.5 or later but Gitaly has not yet been upgraded.

Repository pushes fail with a deny updating a hidden ref error

Due to a change introduced in GitLab 13.12, Gitaly has read-only, internal GitLab references that users are not permitted to update. If you attempt to update internal references with git push —mirror , Git returns the rejection error, deny updating a hidden ref .

The following references are read-only:

  • refs/environments/
  • refs/keep-around/
  • refs/merge-requests/
  • refs/pipelines/

To mirror-push branches and tags only, and avoid attempting to mirror-push protected refs, run:

Any other namespaces that the administrator wants to push can be included there as well via additional patterns.

Command line tools cannot connect to Gitaly

gRPC cannot reach your Gitaly server if:

  • You can’t connect to a Gitaly server with command-line tools.
  • Certain actions result in a 14: Connect Failed error message.

Verify you can reach Gitaly by using TCP:
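On Omnibus installations, one way is the bundled Rake task (substitute your Gitaly server's address and port for the placeholders):

```shell
sudo gitlab-rake gitlab:tcp_check[GITALY_SERVER_IP,GITALY_LISTEN_PORT]
```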

If the TCP connection:

  • Fails, check your network settings and your firewall rules.
  • Succeeds, your networking and firewall rules are correct.

If you use proxy servers in your command-line environment, such as Bash, they can interfere with your gRPC traffic.

If you use Bash or a compatible command line environment, run the following commands to determine whether you have proxy servers configured:
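A minimal sketch for Bash:

```shell
# Print any proxy variables currently set; empty values mean no proxy is configured.
echo "http_proxy: ${http_proxy:-}"
echo "https_proxy: ${https_proxy:-}"
```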

If either of these variables has a value, your Gitaly CLI connections may be getting routed through a proxy that cannot connect to Gitaly.

To remove the proxy setting, run the following commands (depending on which variables had values):
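For example, in Bash:

```shell
# Clear proxy variables for the current shell session only.
unset http_proxy
unset https_proxy
```

To make the change permanent, remove the corresponding export lines from your shell profile.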

Permission denied errors appearing in Gitaly or Praefect logs when accessing repositories

You might see the following in Gitaly and Praefect logs:

If this error occurs, even though the Gitaly auth tokens are set up correctly, it’s likely that the Gitaly servers are experiencing clock drift.

Ensure the Gitaly clients and servers are synchronized, and use an NTP time server to keep them synchronized.

Gitaly not listening on new address after reconfiguring

When updating the gitaly['listen_addr'] or gitaly['prometheus_listen_addr'] values, Gitaly may continue to listen on the old address after a sudo gitlab-ctl reconfigure.

When this occurs, run sudo gitlab-ctl restart to resolve the issue. This should no longer be necessary in current versions because the underlying issue is resolved.

Permission denied errors appearing in Gitaly logs when accessing repositories from a standalone Gitaly node

If this error occurs even though file permissions are correct, it’s likely that the Gitaly node is experiencing clock drift.

Ensure that the GitLab and Gitaly nodes are synchronized and use an NTP time server to keep them synchronized if possible.

Health check warnings

The following warning in /var/log/gitlab/praefect/current can be ignored.

File not found errors

The following errors in /var/log/gitlab/gitaly/current can be ignored. They are caused by the GitLab Rails application checking for specific files that do not exist in a repository.

Git pushes are slow when Dynatrace is enabled

Dynatrace can cause the /opt/gitlab/embedded/bin/gitaly-hooks reference transaction hook to take several seconds to start up and shut down. gitaly-hooks is executed twice when users push, which causes a significant delay.

If Git pushes are too slow when Dynatrace is enabled, disable Dynatrace.

Troubleshoot Praefect (Gitaly Cluster)

The following sections provide possible solutions to Gitaly Cluster errors.

Check cluster health

The Praefect check sub-command runs a series of checks to determine the health of the Gitaly Cluster.

The following sections describe the checks that are run.

Praefect migrations

Because database migrations must be up to date for Praefect to work correctly, this checks whether Praefect migrations are up to date.

If this check fails:

  1. Check the schema_migrations table in the database to see which migrations have run.
  2. Run praefect sql-migrate to bring the migrations up to date.

Node connectivity and disk access

Checks if Praefect can reach all of its Gitaly nodes, and if each Gitaly node has read and write access to all of its storages.

If this check fails:

  1. Confirm the network addresses and tokens are set up correctly:
    • In the Praefect configuration.
    • In each Gitaly node’s configuration.
  2. On the Gitaly nodes, check that the gitaly process is being run as the git user. There might be a permissions issue that is preventing Gitaly from accessing its storage directories.
  3. Confirm that there are no issues with the network that connects Praefect to Gitaly nodes.

Database read and write access

Checks if Praefect can read from and write to the database.

If this check fails:

See if the Praefect database is in recovery mode. In recovery mode, tables may be read only. To check, run:
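For example, from a psql session connected to the Praefect database (connection details vary by deployment):

```sql
SELECT pg_is_in_recovery();
-- 'f' means the database is not in recovery mode
```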

Confirm that the user that Praefect uses to connect to PostgreSQL has read and write access to the database.

See if the database has been placed into read-only mode. To check, run:
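Again from a psql session on the Praefect database:

```sql
SHOW default_transaction_read_only;
-- 'off' means the database accepts writes
```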

Inaccessible repositories

Checks how many repositories are inaccessible because they are missing a primary assignment, or their primary is unavailable.

If this check fails:

  1. See if any Gitaly nodes are down. Run praefect ping-nodes to check.
  2. Check if there is a high load on the Praefect database. If the Praefect database is slow to respond, health checks can fail to persist to the database, leading Praefect to consider nodes unhealthy.

Check clock synchronization

Authentication between Praefect and the Gitaly servers requires the server times to be in sync so the token check succeeds.

This check helps identify the root cause of permission denied errors being logged by Praefect.

For offline environments where access to public pool.ntp.org servers is not possible, the Praefect check sub-command fails this check with an error message similar to:

To resolve this issue, set an environment variable on all Praefect servers to point to an accessible internal NTP server. For example:
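A sketch, assuming a reachable internal NTP server at ntp.internal.example.com (a hypothetical hostname; adjust for your environment):

```shell
export NTP_HOST=ntp.internal.example.com
/opt/gitlab/embedded/bin/praefect -config /var/opt/gitlab/praefect/config.toml check
```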

Praefect errors in logs

If you receive an error, check /var/log/gitlab/gitlab-rails/production.log .

Here are common errors and potential causes:

  • 500 response code
    • ActionView::Template::Error (7:permission denied)
      • praefect['auth_token'] and gitlab_rails['gitaly_token'] do not match on the GitLab server.
    • Unable to save project. Error: 7:permission denied
      • Secret token in praefect['storage_nodes'] on the GitLab server does not match the value in gitaly['auth_token'] on one or more Gitaly servers.
  • 503 response code
    • GRPC::Unavailable (14:failed to connect to all addresses)
      • GitLab was unable to reach Praefect.
    • GRPC::Unavailable (14:all SubCons are in TransientFailure.)
      • Praefect cannot reach one or more of its child Gitaly nodes. Try running the Praefect connection checker to diagnose.

Praefect database experiencing high CPU load

Some common reasons for the Praefect database to experience elevated CPU usage include:

  • Prometheus metrics scrapes running an expensive query. If you have GitLab 14.2 or above, set praefect['separate_database_metrics'] = true in gitlab.rb.
  • Read distribution caching is disabled, increasing the number of queries made to the database when user traffic is high. Ensure read distribution caching is enabled.

Determine primary Gitaly node

To determine the primary node of a repository:

In GitLab 14.6 and later, use the praefect metadata subcommand.

With legacy election strategies in GitLab 13.12 and earlier, the primary was the same for all repositories in a virtual storage. To determine the current primary Gitaly node for a specific virtual storage:

Use the Shard Primary Election Grafana chart on the GitLab Omnibus - Praefect dashboard. This is recommended.

If you do not have Grafana set up, use the following command on each Praefect node:
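A sketch, assuming Praefect exposes Prometheus metrics on its default port 9652:

```shell
curl -s localhost:9652/metrics | grep gitaly_praefect_primaries
```

The storage whose gauge is reported as 1 is the current primary.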

View repository metadata

Gitaly Cluster maintains a metadata database about the repositories stored on the cluster. Use the praefect metadata subcommand to inspect the metadata for troubleshooting.

You can retrieve a repository’s metadata by its Praefect-assigned repository ID:

You can also retrieve a repository’s metadata by its virtual storage and relative path:

To retrieve the metadata for a repository with a Praefect-assigned repository ID of 1:
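On an Omnibus installation:

```shell
sudo /opt/gitlab/embedded/bin/praefect -config /var/opt/gitlab/praefect/config.toml metadata -repository-id 1
```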

To retrieve the metadata for a repository with virtual storage default and relative path @hashed/b1/7e/b17ef6d19c7a5b1ee83b907c595526dcb1eb06db8227d650d5dda0a9f4ce8cd9.git :
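On an Omnibus installation:

```shell
sudo /opt/gitlab/embedded/bin/praefect -config /var/opt/gitlab/praefect/config.toml metadata \
  -virtual-storage default \
  -relative-path @hashed/b1/7e/b17ef6d19c7a5b1ee83b907c595526dcb1eb06db8227d650d5dda0a9f4ce8cd9.git
```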

Either of these examples retrieves the following metadata for an example repository:

The metadata retrieved by praefect metadata includes the fields described below.

  • Repository ID: Permanent unique ID assigned to the repository by Praefect. Different to the ID GitLab uses for repositories.
  • Virtual Storage: Name of the virtual storage the repository is stored in.
  • Relative Path: Repository’s path in the virtual storage.
  • Replica Path: Where on the Gitaly node’s disk the repository’s replicas are stored.
  • Primary: Current primary of the repository.
  • Generation: Used by Praefect to track repository changes. Each write in the repository increments the repository’s generation.
  • Replicas: A list of replicas that exist or are expected to exist.

For each replica, the following metadata is available:

  • Storage: Name of the Gitaly storage that contains the replica.
  • Assigned: Indicates whether the replica is expected to exist in the storage. Can be false if a Gitaly node is removed from the cluster or if the storage contains an extra copy after the repository’s replication factor was decreased.
  • Generation: Latest confirmed generation of the replica. It indicates:
    - The replica is fully up to date if the generation matches the repository’s generation.
    - The replica is outdated if the replica’s generation is less than the repository’s generation.
    - replica not yet created if the replica does not yet exist at all on the storage.
  • Healthy: Indicates whether the Gitaly node that is hosting this replica is considered healthy by the consensus of Praefect nodes.
  • Valid Primary: Indicates whether the replica is fit to serve as the primary node. If the repository’s primary is not a valid primary, a failover occurs on the next write to the repository if there is another replica that is a valid primary. A replica is a valid primary if:
    - It is stored on a healthy Gitaly node.
    - It is fully up to date.
    - It is not targeted by a pending deletion job from decreasing replication factor.
    - It is assigned.
  • Verified At: Indicates the last successful verification of the replica by the verification worker. If the replica has not yet been verified, unverified is displayed in place of the last successful verification time. Introduced in GitLab 15.0.

Command fails with ‘repository not found’

If the supplied value for -virtual-storage is incorrect, the command returns the following error:

The documented examples specify -virtual-storage default. Check the Praefect server setting praefect['virtual_storages'] in /etc/gitlab/gitlab.rb.

Check that repositories are in sync

In some cases the Praefect database can get out of sync with the underlying Gitaly nodes. To check that a given repository is fully synced on all nodes, run the gitlab:praefect:replicas Rake task that checksums the repository on all Gitaly nodes.
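For example, for the project with ID 1 (substitute your project's ID):

```shell
sudo gitlab-rake "gitlab:praefect:replicas[1]"
```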

The Praefect dataloss command only checks the state of the repository in the Praefect database and cannot be relied on to detect sync problems in this scenario.

Relation does not exist errors

By default, Praefect database tables are created automatically by the gitlab-ctl reconfigure task.

However, the Praefect database tables are not created on the initial reconfigure, and Praefect throws relation does not exist errors, if either:

  • The gitlab-ctl reconfigure command isn’t executed.
  • There are errors during the execution.

ERROR: relation «node_status» does not exist at character 13

ERROR: relation «replication_queue_lock» does not exist at character 40

To solve this, run the database schema migration using the sql-migrate sub-command of the praefect command:
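On an Omnibus installation, this is typically:

```shell
sudo /opt/gitlab/embedded/bin/praefect -config /var/opt/gitlab/praefect/config.toml sql-migrate
```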

Requests fail with ‘repository scoped: invalid Repository’ errors

This indicates that the virtual storage name used in the Praefect configuration does not match the storage name used in the git_data_dirs setting for GitLab.

Resolve this by matching the virtual storage names used in Praefect and GitLab configuration.

Gitaly Cluster performance issues on cloud platforms

Praefect does not require a lot of CPU or memory, and can run on small virtual machines. Cloud services may place other limits on the resources that small VMs can use, such as disk IO and network traffic.

Praefect nodes generate a lot of network traffic. The following symptoms can be observed if their network bandwidth has been throttled by the cloud service:

  • Poor performance of Git operations.
  • High network latency.
  • High memory use by Praefect.

To resolve this:

  • Provision larger VMs to gain access to larger network traffic allowances.
  • Use your cloud service’s monitoring and logging to check that the Praefect nodes are not exhausting their traffic allowances.

Profiling Gitaly

Gitaly exposes several of Golang’s built-in performance profiling tools on the Prometheus listen port. For example, if Prometheus is listening on port 9236 of the GitLab server:

Get a list of running goroutines and their backtraces:

Run a CPU profile for 30 seconds:

Profile heap memory usage:

Record a 5 second execution trace. This will impact Gitaly’s performance while running:

On a host with go installed, the CPU profile and heap profile can be viewed in a browser:
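The individual steps above can be sketched as follows, using the example port 9236 (adjust host and port for your setup):

```shell
# Get a list of running goroutines and their backtraces
curl --output goroutines.txt "http://localhost:9236/debug/pprof/goroutine?debug=2"

# Run a CPU profile for 30 seconds
curl --output cpu.bin "http://localhost:9236/debug/pprof/profile"

# Profile heap memory usage
curl --output heap.bin "http://localhost:9236/debug/pprof/heap"

# Record a 5 second execution trace (impacts Gitaly's performance while running)
curl --output trace.bin "http://localhost:9236/debug/pprof/trace?seconds=5"

# On a host with go installed, view the CPU and heap profiles in a browser
go tool pprof -http :8001 cpu.bin
go tool pprof -http :8002 heap.bin
```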
