Kadmin database error required kadm5 principal missing while initializing kadmin interface


Question

Hi,
I am the sysadmin at a lab at the University of California, Berkeley, and I have an issue that I am unable to resolve. We set up a server running Windows Server 2008 R2, and it is running Active Directory. We are transitioning from an LDAP/Heimdal Kerberos setup to Active Directory, so I think it would be convenient if the common *nix Kerberos client utilities (like kinit, kpasswd, and ktutil) worked. On a Debian test client, I was able to join the realm and use the client to kinit and change passwords. It would be helpful if I could use utilities like "kadmin -p username", since it is a nuisance to have to set up keytabs on the Windows server and transfer them over. When I use kadmin I get this result:

Authenticating as principal sanjayk with password.
Password for sanjayk@WINSERV.OCF.BERKELEY.EDU :
Password for sanjayk@WINSERV.OCF.BERKELEY.EDU :
kadmin: Database error! Required KADM5 principal missing while
initializing kadmin interface

I assume that since kinit and kpasswd work, my configuration is set up correctly, and I think the problem may be with the configuration on the Windows server. If I am completely misinterpreting things, please enlighten me.

Thanks in advance,
Sanjay Krishnan


I am trying to enable Kerberos on HDP 3.1 (Ambari 2.7.1) using the "Enable Kerberos" wizard; however, I am getting an error at step 3. I have everything set up the same way as on HDP 2.6, where I am able to enable Kerberos.

The error message is

500 status code received on POST method for API: /api/v1/clusters/Horton44/requests

Error message: An internal system exception occurred: Unexpected error condition executing the kadmin command. STDERR: kadmin: Matching credential not found (filename: /tmp/ambari_krb_6976217133962876412cc) while initializing kadmin interface

Here are the steps I am following to enable Kerberos:

  1. What type of KDC do you plan on using? Existing MIT KDC.
    1. Ambari Server and cluster hosts have network access to both the KDC and KDC admin hosts.
    2. KDC administrative credentials are on hand.
    3. The Java Cryptography Extensions (JCE) have been set up on the Ambari Server host and all hosts in the cluster.
  2. Unchecked "Manage Kerberos client krb5.conf" (it did not work with it checked either).

At this point, the "Test Kerberos Client" step is failing with this message:

  • 500 status code received on POST method for API: /api/v1/clusters/Horton44/requests
  • Error message: An internal system exception occurred: Unexpected error condition executing the kadmin command. STDERR: kadmin: Matching credential not found (filename: /tmp/ambari_krb_6976217133962876412cc) while initializing kadmin interface

Does anyone know what the problem is?

Here is the log file:

2019-01-09 10:46:24,744 INFO [ambari-client-thread-43] AgentHostDataHolder:108 — Configs update with hash 25d6257c91443c9db6c5a47138a423b1a4f8edfa7ad4f15d2b04ef6eaf81977b369328bb73609b75345c1316d9caf6d15fe63ee0eb55b11d7dc43de8f44ce35c will be sent to host 1

2019-01-09 10:46:25,121 INFO [ambari-client-thread-124] MetricsCollectorHAManager:59 — Adding collector host : horton44.test.domain.com to cluster : Horton44

2019-01-09 10:46:25,123 INFO [ambari-client-thread-124] MetricsCollectorHAClusterState:84 — Refreshing collector host, current collector host : horton44.test.domain.com

2019-01-09 10:46:25,124 INFO [ambari-client-thread-124] MetricsCollectorHAClusterState:105 — After refresh, new collector host : horton44.test.domain.com

2019-01-09 10:46:25,138 INFO [ambari-client-thread-37] ServiceResourceProvider:634 — Received a updateService request, clusterName=Horton44, serviceName=KERBEROS, request=clusterName=Horton44, serviceName=KERBEROS, desiredState=INSTALLED, credentialStoreEnabled=null, credentialStoreSupported=null

2019-01-09 10:46:25,160 INFO [ambari-client-thread-37] RoleGraph:175 — Detecting cycle graphs

2019-01-09 10:46:25,160 INFO [ambari-client-thread-37] RoleGraph:176 — Graph: (KERBEROS_CLIENT, INSTALL, 0)

2019-01-09 10:46:25,311 INFO [ambari-action-scheduler] ServiceComponentHostImpl:1062 — Host role transitioned to a new state, serviceComponentName=KERBEROS_CLIENT, hostName=horton44.test.domain.com, oldState=INIT, currentState=INSTALLING

2019-01-09 10:46:25,320 INFO [ambari-action-scheduler] AgentCommandsPublisher:124 — AgentCommandsPublisher.sendCommands: sending ExecutionCommand for host horton44.test.domain.com, role KERBEROS_CLIENT, roleCommand INSTALL, and command ID 15-0, task ID 152

2019-01-09 10:46:25,515 INFO [agent-message-monitor-0] MessageEmitter:218 — Schedule execution command emitting, retry: 0, messageId: 0

2019-01-09 10:46:25,528 WARN [agent-message-retry-0] MessageEmitter:255 — Reschedule execution command emitting, retry: 1, messageId: 0

2019-01-09 10:46:27,448 INFO [agent-report-processor-0] ServiceComponentHostImpl:1062 — Host role transitioned to a new state, serviceComponentName=KERBEROS_CLIENT, hostName=horton44.test.domain.com, oldState=INSTALLING, currentState=INSTALLED

2019-01-09 10:46:29,470 INFO [ambari-client-thread-43] AmbariManagementControllerImpl:4060 — Received action execution request, clusterName=Horton44, request=isCommand :true, action :null, command :KERBEROS_SERVICE_CHECK, inputs :{HAS_RESOURCE_FILTERS=true}, resourceFilters: [RequestResourceFilter{serviceName=’KERBEROS’, componentName=’null’, hostNames=[]}], exclusive: false, clusterName :Horton44

2019-01-09 10:46:39,667 WARN [ambari-client-thread-43] MITKerberosOperationHandler:291 — Retrying to execute kadmin after a wait of 10 seconds : Command: [/usr/bin/kadmin, -c, /tmp/ambari_krb_5117636388301835326cc, -s, nc-mit-kdc.sso2.raldev.com, -r, MIT.SSO2.RALDEV.COM, -q, get_principal admin/admin@MIT.TESTDOMAIN.COM]

2019-01-09 10:46:49,687 WARN [ambari-client-thread-43] MITKerberosOperationHandler:291 — Retrying to execute kadmin after a wait of 10 seconds : Command: [/usr/bin/kadmin, -c, /tmp/ambari_krb_5117636388301835326cc, -s, nc-mit-kdc.sso2.raldev.com, -r, MIT.SSO2.RALDEV.COM, -q, get_principal admin/admin@MIT.TESTDOMAIN.COM]

2019-01-09 10:46:59,698 WARN [ambari-client-thread-43] MITKerberosOperationHandler:291 — Retrying to execute kadmin after a wait of 10 seconds : Command: [/usr/bin/kadmin, -c, /tmp/ambari_krb_5117636388301835326cc, -s, nc-mit-kdc.sso2.raldev.com, -r, MIT.SSO2.RALDEV.COM, -q, get_principal admin/admin@MIT.TESTDOMAIN.COM]

2019-01-09 10:47:09,709 WARN [ambari-client-thread-43] MITKerberosOperationHandler:291 — Retrying to execute kadmin after a wait of 10 seconds : Command: [/usr/bin/kadmin, -c, /tmp/ambari_krb_5117636388301835326cc, -s, nc-mit-kdc.sso2.raldev.com, -r, MIT.SSO2.RALDEV.COM, -q, get_principal admin/admin@MIT.TESTDOMAIN.COM]

2019-01-09 10:47:09,710 WARN [ambari-client-thread-43] MITKerberosOperationHandler:302 — Failed to execute kadmin:

Command: [/usr/bin/kadmin, -c, /tmp/ambari_krb_5117636388301835326cc, -s, nc-mit-kdc.sso2.raldev.com, -r, MIT.SSO2.RALDEV.COM, -q, get_principal admin/admin@MIT.TESTDOMAIN.COM]

ExitCode: 1

STDOUT: Authenticating as principal admin/admin@MIT.TESTDOMAIN.COM with existing credentials.

STDERR: kadmin: Matching credential not found (filename: /tmp/ambari_krb_5117636388301835326cc) while initializing kadmin interface

2019-01-09 10:47:09,710 ERROR [ambari-client-thread-43] KerberosHelperImpl:2429 — Cannot validate credentials: org.apache.ambari.server.AmbariException: Unexpected error condition executing the kadmin command. STDERR: kadmin: Matching credential not found (filename: /tmp/ambari_krb_5117636388301835326cc) while initializing kadmin interface

2019-01-09 10:47:09,712 ERROR [ambari-client-thread-43] AbstractResourceProvider:295 — Caught AmbariException when creating a resource

org.apache.ambari.server.AmbariException: Unexpected error condition executing the kadmin command. STDERR: kadmin: Matching credential not found (filename: /tmp/ambari_krb_5117636388301835326cc) while initializing kadmin interface


Last night I was finally able to get NFS working with Kerberos authentication. Actually, I was successful with security set to any of krb5, krb5i, and krb5p which is ideal. There were many obstacles along the way.

I had to blow away my krb5.conf and start over before kadmin would work on my existing desktop for some as-yet unknown reason. Using kadmin was required in order to generate a keytab with my machine’s host keys in it. That is a requirement for NFS+Kerberos to work.

Additionally, I kept running into an access denied message when attempting to mount my NFS shares once I had my keytab sorted out. I discovered that certain Kerberos principals are required to exist before a session can be established. It's not just the nfs/<NFS server FQDN>@REALM principal that most guides seem to indicate; the client's own nfs/<client FQDN>@REALM principal must exist as well. It was further complicated by my network having two different realms. My DHCP server would hand out a default domain that was different from the one maintained by my experimental KDC. As such, I ended up with both nfs/<hostname>.domain1@REALM and nfs/<hostname>.domain2@REALM in domain2's KDC before it would work.

Anyway, I’ll attempt to summarize how to set up a working NFS server using Kerberos authentication. The KDC and NFS server can be the same machine if desired, but I want to break every role apart for the sake of the demonstration so that the pieces required for each are more visible.

The following section is currently incomplete. I’m still working on it.

Scenario Description
This example uses a domain of «sample.local» with three machines named «kdc», «nfs», and «client». The first serves as the Kerberos server. The second serves as the NFS file server. The third serves as a client wanting to connect and mount an NFS share. The assumption is that we’re wanting to use Kerberos to manage identities and provide security with NFS.

This guide was written and tested with Debian 9.3 in mind. It is assumed that your user is in the sudoers group; if it is not, you'll have to run the sudo-prefixed commands as root (via su) instead.

Install the Necessary Software
Okay. First things first. We’ll need to install some software before the config files we need to edit are available.

Machine 1: Kerberos KDC (FQDN: kdc.sample.local)
sudo apt install -y krb5-admin-server krb5-kdc krb5-user krb5-config libpam-krb5 krb5-locales libgssapi-krb5-2 libkrb5-3 libkrb5support0

Machine 2: NFS Server (FQDN: nfs.sample.local)
sudo apt install -y krb5-user krb5-config nfs-kernel-server nfs-common libpam-krb5 krb5-locales libgssapi-krb5-2 libkrb5-3 libkrb5support0 libnfsidmap2 libnfs8

Machine 3: NFS Client (FQDN: client.sample.local)
sudo apt install -y krb5-user krb5-config nfs-common libpam-krb5 krb5-locales libgssapi-krb5-2 libkrb5-3 libkrb5support0 libnfsidmap2 libnfs8

You Need Working Hostname Resolution (And DHCP if Used)
It is also assumed that you have a working DNS server. If DHCP is used in your environment, you will also need to ensure that it either doesn’t hand out a default domain that overrides how your system is configured or that it hands out a default domain that matches the SAMPLE.LOCAL domain you’re planning to use for NFS.

If your environment does not have a central DNS server, you can define each machine in your /etc/hosts file instead.

Each host needs to know that it is a member of the SAMPLE.LOCAL domain. This is controlled by the "search" line in /etc/resolv.conf. If you have no search line, or it is set to something other than SAMPLE.LOCAL, you'll need to fix that. If your DHCP server hands out a default domain with its leases, chances are that on a modern Debian system this will have been set by network-manager when dhclient obtained its lease. You'll need to either disable network-manager and update "search" manually, or ensure that your DHCP server is handing out the "sample.local" domain with its leases. If DNS isn't happy, nothing else in this guide will be either.

Each host needs to be registered in your DNS server. Part of NFS/Kerberos authentication performs a reverse lookup of the IP address from which an authentication request is made. If the returned hostname doesn’t match or can’t be found at all, authentication will fail. The hostname might not match if a stale DNS entry is found (ie an entry for a different machine) or if you’re in a multi-domain environment.

Many routers can be configured to automatically register clients in DNS when DHCP requests are granted. You will either need to ensure that is enabled in your environment or you will need to manually define where each hostname resolves to in your /etc/hosts file. Since using hosts would be especially messy in a DHCP environment, I would strongly suggest you go the first route.

Environments Without Central DNS Servers
You can operate in an environment without central DNS. This can be accomplished by defining the other hosts in your /etc/hosts file. Doing so requires entries that specify the FQDN first. If you go this route, you’ll want to have your machines use static IP addresses instead of relying on a DHCP server. If a host ever changes its IP address, you’ll need to update it everywhere on your network.

The below example is from the perspective of the «client» machine. It also demonstrates a multi-domain environment.

Code:

# /etc/hosts

	# If this were a single domain environment, you would have something like:
	127.0.0.1		client.sample.local client localhost
	192.168.0.10	kdc.sample.local kdc
	192.168.0.20	nfs.sample.local nfs
	
	# If this were a multi-domain environment, you would have something like:
	#127.0.0.1		client.sample.local client.sample2.local client localhost
	#192.168.0.10	kdc.sample.local kdc.sample2.local kdc
	#192.168.0.20	nfs.sample.local nfs.sample2.local nfs

Each line can specify multiple hostnames. The first match from top to bottom and left to right is returned for a reverse lookup.

DNS, Kerberos, and Multi-Domain Environments
You can still use Kerberos with NFS in a multi-domain environment, but the nfs/<FQDN>@REALM Kerberos host entries discussed later on will need an FQDN that matches the domain returned by a reverse DNS lookup of the IP rather than the domain implied by the Kerberos realm.

Example: You follow this guide and set up a Kerberos realm "SAMPLE.LOCAL", but all of your machines are on a network that hands out DHCP leases under "OTHER.LOCAL". The DHCP server auto-registers those leases in the DNS server used by all clients on the network. As such, your three machines end up as "kdc.other.local", "nfs.other.local", and "client.other.local" even though you follow this guide and configure all three machines to use "SAMPLE.LOCAL". That is okay, but when you create the Kerberos host records you'll need to register entries like "nfs/nfs.other.local@SAMPLE.LOCAL" instead of "nfs/nfs.sample.local@SAMPLE.LOCAL" as you would in a single-domain environment. The final REALM part of these entries remains whatever the Kerberos realm is named; in our case that is "SAMPLE.LOCAL". The same is true of the principal used when creating local keytabs on each machine. All other steps stay the same.

Configure /etc/resolv.conf for Your Domain
If you are using DHCP in your environment, make sure that your DHCP server is handing out a default domain of SAMPLE.LOCAL. You can confirm that your DHCP client has correctly received and configured your machine’s default search domain by checking /etc/resolv.conf. It should contain a line named «search» that has «SAMPLE.LOCAL». It might also have a «domain» line set to the same thing.

If you are not using DHCP in your environment, you will need to ensure that a nameserver and search parameter are both set. It is also advisable to define a domain parameter.

Code:

# /etc/resolv.conf

nameserver xxx.xxx.xxx.xxx #this should be your network's DNS server via IP (this is probably provided by your router)
domain sample.local
search sample.local

The «domain» line specifies which domain your machine belongs to by default. The «search» line defines the list of domains, in order, that should be searched when a non-fully qualified hostname lookup is performed.

If your machine does a DNS lookup for «kdc» for instance and your «search» line is set to «search domain1.local domain2.local» then your machine will query DNS for «kdc» first, «kdc.domain1.local» second if there was no match, and finally «kdc.domain2.local» third if there was still no match. You may have up to six domains defined with «search».

Configure /etc/nsswitch.conf to Disable mDNS
All three machines need to have their /etc/nsswitch.conf updated to disable the disaster known as mDNS. If the "hosts:" line contains mdns, get rid of it. If left enabled, it will prevent DNS-based lookups of non-fully-qualified hostnames for domain machines.

If you want to keep mDNS, you can leave it after dns; lookup priority goes from left to right. mDNS is an ad hoc multicast name resolution service in which a multicast packet is sent out to your subnet whenever your machine needs to know "Who goes by <hostname>?" It's used in small environments that lack central DNS.

Code:

# /etc/nsswitch.conf

passwd:         compat
group:          compat
shadow:         compat
gshadow:        files

hosts:          files myhostname dns
networks:       files

protocols:      db files
services:       db files
ethers:         db files
rpc:            db files

netgroup:       nis

Test Hostname Resolution Before Continuing
If DNS isn’t working, nothing else is going to work either. Test DNS lookup of both the short and fully qualified names of the other two hosts on each of your three machines.

For example, from «client»:
ping kdc
ping kdc.sample.local
ping nfs
ping nfs.sample.local

Testing with ping is preferable over testing with nslookup. nslookup is strictly a test of your DNS server and doesn’t test the full resolution path as defined in /etc/nsswitch.conf. As such, everything may appear to resolve correctly via DNS even if your system is encountering an issue that prevents it from getting as far as querying DNS.

If any of these tests result in a failure to resolve, you need to stop before you go any further and fix DNS. If you haven’t disabled mDNS (described earlier), the FQDN lookup will probably work but the hostname only lookup probably will not. You’ll have to fix that by disabling mDNS if so.

IF HOSTNAME RESOLUTION DOES NOT WORK, KERBEROS WILL NOT WORK.

Configure /etc/krb5.conf for Your Domain
All three machines also need to have their /etc/krb5.conf updated to let them know about our new domain and where to find the KDC. You can also put this in DNS like Microsoft does for AD domains if you want to omit manually defining the realm.

Code:

# /etc/krb5.conf

[libdefaults]
	default_realm = SAMPLE.LOCAL

	# Require strong encryption (optional)
	default_tgs_enctypes = aes256-cts-hmac-sha1-96
	default_tkt_enctypes = aes256-cts-hmac-sha1-96
	permitted_enctypes = aes256-cts-hmac-sha1-96

	# Standard params
	kdc_timesync = 1
	ccache_type = 4
	forwardable = true
	proxiable = true
	fcc-mit-ticketflags = true

[realms]
	SAMPLE.LOCAL = {
		kdc = kdc
		admin_server = kdc
		default_domain = sample.local
	}

[domain_realm]
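	# Optional: map DNS names onto the realm explicitly, for example:
	# .sample.local = SAMPLE.LOCAL
	# sample.local = SAMPLE.LOCAL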

Registering Kerberos Servers in DNS (Alternative to Defining [realms] Section Manually)
If you want to register your KDC in DNS instead of manually defining the realm in /etc/krb5.conf, you can do so by creating SRV records for the SAMPLE.LOCAL domain. Note that you will still have to set your default_realm to «SAMPLE.LOCAL» in the «[libdefaults]» section of /etc/krb5.conf.

Word of Warning: Note that this isn’t good enough for kadmin and possibly some other Kerberos related services. They require realms with KDC(s) and admin server(s) as defined in your /etc/krb5.conf file instead. I would instead recommend not bothering with this since it’s just one more invisible thing to maintain if anything ever changes, and those are the least likely to be remembered.

To register your KDC in DNS, you would need to define SRV records with names «_kerberos._tcp.SAMPLE.LOCAL.» and «_kpasswd._tcp.SAMPLE.LOCAL.» both of which should point to your KDC’s DNS FQDN. This should work equally well for single domain and multi-domain environments.

If your KDC is registered in DNS as «kdc.sample.local.» then both records would point there. If instead the KDC is registered in DNS as «kdc.other.local.» then both SRV records would point there even though the Kerberos realm we’re creating is SAMPLE.LOCAL. Basically, we’re making sure the KDC can be found via DNS lookup. The domain of hosts in DNS doesn’t have to be the same as the logical realm.
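
For reference, a zone-file sketch of those two records might look like the following (this assumes BIND-style syntax and the standard ports, 88 for the KDC and 464 for kpasswd; point the targets at whatever FQDN your KDC actually has in DNS):

Code:

; Hypothetical SRV records pointing Kerberos clients at the KDC
_kerberos._tcp.sample.local.    IN SRV    0 0 88     kdc.sample.local.
_kpasswd._tcp.sample.local.     IN SRV    0 0 464    kdc.sample.local.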

Configure /etc/default/nfs-common on the NFS Client
You'll need to instruct the NFS client machine to start both the GSSD and IDMAPD daemons at each startup. These are required for Kerberos-authenticated NFS mounts. This is accomplished by setting both NEED_IDMAPD and NEED_GSSD to "yes".

GSSD is what performs Kerberos authentication and IDMAPD is what performs UID/GID mapping. ID mapping is what makes a particular file yours regardless of whether other systems have your user account defined or not. In a non-Kerberos NFS world, UID 1000 on System1 and UID 1000 on System2 may be entirely different users, but each system will treat files owned by UID 1000 as if they belonged to its own user 1000. Non-Kerberos NFS is gross.

Code:

# /etc/default/nfs-common

# If you do not set values for the NEED_ options, they will be attempted
# autodetected; this should be sufficient for most people. Valid alternatives
# for the NEED_ options are "yes" and "no".

# Do you want to start the statd daemon? It is not needed for NFSv4.
NEED_STATD=

# Options for rpc.statd.
#   Should rpc.statd listen on a specific port? This is especially useful
#   when you have a port-based firewall. To use a fixed port, set this
#   this variable to a statd argument like: "--port 4000 --outgoing-port 4001".
#   For more information, see rpc.statd(8) or http://wiki.debian.org/SecuringNFS
STATDOPTS=

# Do you want to start the idmapd daemon? It is only needed for NFSv4.
NEED_IDMAPD=yes

# Do you want to start the gssd daemon? It is required for Kerberos mounts.
NEED_GSSD=yes
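
One related knob worth knowing about: idmapd maps user and group names rather than raw numeric IDs, so the client and the server should agree on the NFS domain they belong to. A minimal sketch of /etc/idmapd.conf for this scenario (the file ships with sensible defaults on Debian; the Domain line is the interesting part, and matching it to sample.local on both machines is an assumption of this sketch):

Code:

# /etc/idmapd.conf

[General]
Verbosity = 0
Pipefs-Directory = /run/rpc_pipefs
# Assumption: use the same NFS domain on both client and server
Domain = sample.local

[Mapping]
Nobody-User = nobody
Nobody-Group = nogroup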

Configure /etc/default/nfs-kernel-server on the NFS Server
You will need to ensure that the SVCGSSD service is running on the NFS server. This is what GSSD on the client machine will connect to. This is accomplished by setting NEED_SVCGSSD to «yes».

Code:

# /etc/default/nfs-kernel-server

# Number of servers to start up
RPCNFSDCOUNT=8

# Runtime priority of server (see nice(1))
RPCNFSDPRIORITY=0

# Options for rpc.mountd.
# If you have a port-based firewall, you might want to set up
# a fixed port here using the --port option. For more information,
# see rpc.mountd(8) or http://wiki.debian.org/SecuringNFS
# To disable NFSv4 on the server, specify '--no-nfs-version 4' here
RPCMOUNTDOPTS="--manage-gids"

# Do you want to start the svcgssd daemon? It is only required for Kerberos
# exports. Valid alternatives are "yes" and "no"; the default is "no".
NEED_SVCGSSD="yes"

# Options for rpc.svcgssd.
RPCSVCGSSDOPTS=""

Restart All Three Machines
Restart kdc, nfs, and client in order to ensure each machine has started the new services used for Kerberos and NFS.

Create Required Principals in Kerberos Database
On kdc, we’ll need to initialize the database since no administrative users have been defined yet. This is done using the «kadmin.local» program as root. The Kerberos database will be modified directly on disk, so that is why we don’t need a working Kerberos account yet. That is also why this step must be done on the KDC itself. But, while we’re there we will go ahead and make some more principals which will be required later.

To that end, the first thing we need to do is configure a Kerberos administrator account. The administrator will be allowed to remotely log in to and modify the Kerberos database.

After that, we'll need to configure your regular user account. When you log in to your computer, PAM will attempt to fetch a Kerberos ticket automatically using the same username (at the default_realm set in /etc/krb5.conf) as long as libpam-krb5 is installed; modern Debian systems are pre-configured (via /etc/pam.d/) to authenticate against Kerberos simultaneously, when available, during the login process. As such, we need to create that user account in the Kerberos database. This identity uniquely identifies your account and is used to determine who owns "your" files on a Kerberos-enabled NFS share.

On kdc, open the Kerberos database using kadmin.local:

Code:

$ sudo kadmin.local
[sudo] password for <your username>: <enter your local account password>
Authenticating as principal <your username>/admin@SAMPLE.LOCAL with password.
kadmin.local:

This opens the Kerberos database without using Kerberos for authentication. This only works on the KDC since it stores the database. The database file is accessible only to root, so you have to execute this command as root. Once our administrative user is established, you’ll be able to modify the Kerberos database using Kerberos authentication instead. This is done through the «kadmin» command instead of «kadmin.local».

Let’s create the Kerberos admin account:
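
A sketch of what that might look like inside the kadmin.local session (the principal name is up to you; this example simply mirrors your local username with /admin appended):

Code:

kadmin.local:  addprinc <your username>/admin
WARNING: no policy specified for <your username>/admin@SAMPLE.LOCAL; defaulting to no policy
Enter password for principal "<your username>/admin@SAMPLE.LOCAL": <choose an admin password>
Re-enter password for principal "<your username>/admin@SAMPLE.LOCAL": <re-enter that password>
Principal "<your username>/admin@SAMPLE.LOCAL" created.
kadmin.local: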

Any Kerberos account with "/admin" appended is treated as being part of the Kerberos administrators group. These accounts are allowed to log in and modify the Kerberos database. It doesn't have anything to do with gaining extra permissions on the computer itself; creating an admin principal under the same name you log in to your computer with does not grant that account administrative access on the machine.

Go ahead and close the Kerberos database. We’ll then log in again through Kerberos this time to complete the rest of what we need to do. This serves as a test to see if Kerberos is working on the KDC itself.
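
Assuming the admin principal sketched above, closing the database and reopening it over Kerberos might look like this:

Code:

kadmin.local:  quit
$ kadmin -p <your username>/admin
Authenticating as principal <your username>/admin@SAMPLE.LOCAL with password.
Password for <your username>/admin@SAMPLE.LOCAL: <the admin password you just chose>
kadmin:

If later commands fail with an "Operation requires ... privilege" error, check that the KDC's ACL file (/etc/krb5kdc/kadm5.acl on Debian) grants rights to */admin principals (for example a line reading "*/admin *") and restart krb5-admin-server.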

If the login worked, Kerberos is working and our Kerberos administrator account is working too. We'll proceed with creating our regular user account in the system. I suggest you make an account named the same as your existing local computer username, since your local username is assumed to match an entry in the Kerberos database when doing various Kerberos-related tasks.

Code:

kadmin:  addprinc <your account name>
WARNING: no policy specified for <your account name>@SAMPLE.LOCAL; defaulting to no policy
Enter password for principal "<your account name>@SAMPLE.LOCAL": <the same password your local account has>
Re-enter password for principal "<your account name>@SAMPLE.LOCAL": <re-enter that password>
Principal "<your account name>@SAMPLE.LOCAL" created.
kadmin:

On a side note, when you change your password in the future using passwd, it will automatically change the password in the Kerberos database. You shouldn’t have to use kadmin in the future to maintain your user account.

With that in the database, now we’ll make the NFS hostname entries we need. A hostname entry is needed for each machine that will act as an NFS client.

In our case, we’ll go ahead and add records for all three machines. This is done with the same «addprinc» command we used to create a regular user account.

The format for an NFS hostname entry is "nfs/<FQDN of host>@REALM", so an example in our scenario would be "nfs/client.sample.local@SAMPLE.LOCAL". We will also tell kadmin to generate a random key for the account instead of prompting us for a password, since we don't need to know what the key is; this is done with the "-randkey" argument.
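
In the single-domain scenario, a sketch of those three entries (random keys, so no password prompts):

Code:

kadmin:  addprinc -randkey nfs/kdc.sample.local
kadmin:  addprinc -randkey nfs/nfs.sample.local
kadmin:  addprinc -randkey nfs/client.sample.local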

Once done, go ahead and exit the program.

Multi-domain reminder: If you're in a multi-domain environment where the DNS records for your machines have a different domain than the Kerberos realm you're setting up, you'll need to use the FQDN from DNS for the hostname instead of the sample.local names shown in the sketch above. If the DNS domain is "OTHER.LOCAL", then "nfs/client.sample.local@SAMPLE.LOCAL" becomes "nfs/client.other.local@SAMPLE.LOCAL", for instance. The REALM remains whatever the Kerberos realm is; in our case, that is "SAMPLE.LOCAL".

Now that the hostname entries are present, we’ll have to create host keytabs on each potential NFS client machine. As before, we’re going to go ahead and do this for all three machines in the scenario so that any member may participate as an NFS client.

On each machine, you will need to run kadmin as root. The keytab is written to a file that is only accessible to root, so this elevated privilege is required. Your keytab should never be readable or writable by a regular user! The machine’s private key is stored in the keytab!

This process is the same for each machine, but the hostname should be changed to match the hostname of the machine you’re on of course. Each machine will contain its own keytab with its own private key.

Multi-domain reminder: If you're in a multi-domain environment where the DNS records for your machines have a different domain than the Kerberos realm you're setting up, you'll need to use the FQDN from DNS for the hostname instead of what is shown below. If the DNS domain is "OTHER.LOCAL", then "nfs/client.sample.local@SAMPLE.LOCAL" becomes "nfs/client.other.local@SAMPLE.LOCAL", for instance. The REALM remains whatever the Kerberos realm is; in our case, that is "SAMPLE.LOCAL".

On the «kdc» machine:
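
A sketch of the keytab step, assuming kadmin writes to the default /etc/krb5.keytab (add -k <path> to ktadd if you want a different file):

Code:

$ sudo kadmin -p <your username>/admin
Password for <your username>/admin@SAMPLE.LOCAL: <your admin password>
kadmin:  ktadd nfs/kdc.sample.local
kadmin:  quit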

On the «nfs» machine:
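
The same idea with this machine's principal:

Code:

$ sudo kadmin -p <your username>/admin
Password for <your username>/admin@SAMPLE.LOCAL: <your admin password>
kadmin:  ktadd nfs/nfs.sample.local
kadmin:  quit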

On the «client» machine:
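
And again for the client:

Code:

$ sudo kadmin -p <your username>/admin
Password for <your username>/admin@SAMPLE.LOCAL: <your admin password>
kadmin:  ktadd nfs/client.sample.local
kadmin:  quit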

Test Kerberos
Now is a good time to ensure that Kerberos is working. To do this, try to obtain a Kerberos ticket on kdc, nfs, and client. This can be done with kinit. Then ensure that the ticket was fetched correctly using klist.

If kinit fails or klist doesn’t show your ticket, you will need to stop and troubleshoot Kerberos. Chances are good that something is wrong in /etc/krb5.conf or DNS is not working.

Code:

$ kinit -V
Using principal: <your username>@SAMPLE.LOCAL
Password for <your username>@SAMPLE.LOCAL: <enter your password>
Authenticated to Kerberos v5

$ klist
Ticket cache: FILE:/tmp/krb5cc_1000
Default principal: <your username>@SAMPLE.LOCAL

Valid starting       Expires              Service principal
02/15/2018 14:23:07  02/16/2018 00:23:07  krbtgt/SAMPLE.LOCAL@SAMPLE.LOCAL
        renew until 02/16/2018 14:23:02

Export a Directory on the NFS Server
To export a share on the NFS server, you will need to edit the /etc/exports config file. One export should be defined per line along with options. This is functionally pretty similar to an fstab.

The format of the config lines goes:
<path to share> <allowed hosts>(<options>)

For example, if we want to share «/exports/music» to anyone that can talk to our NFS server and use Kerberos for authentication, integrity, and encryption, that line would look like:

Code:

# /etc/exports

/exports/music    *(rw,sync,no_subtree_check,sec=krb5p)

Allowed hosts can be a wildcard as shown. On a private network this should be fine.

Four security modes can be given for the "sec=xxx" argument; the last three use Kerberos:
sys : Use UID/GID mapping. No security at all. (Default for NFS)
krb5 : Use Kerberos for authentication and identity management
krb5i : And also ensure integrity of the data by signing each block with an HMAC
krb5p : And also ensure privacy of the data by encrypting it

There is a performance penalty to encryption of about 25% in my experience. I get about 1.2 Gbps with krb5, 1.1 Gbps with krb5i, and 0.9 Gbps with krb5p in my environment on a 10 Gbps network.

Other export options in that line:
rw : Allow read-write access to this share.
sync : Complete write requests before returning success.
async : Cache write requests and immediately return success. Better performance is possible, but you risk corrupted data if the server goes down.
no_subtree_check : Do not verify on each request that the requested file lies within the exported subtree. This is done for performance reasons.
sec : Specify which security method to use for the share. "sys" is used by default (ie no security).

The following are client-side mount options rather than export options; they belong in the client's mount command or fstab line rather than in /etc/exports:
hard : Hard mount. When an IO request fails, the process hangs until the server responds.
intr : Allow a hung process to be interrupted. (Typically hard and intr are used together.)
soft : Soft mount. When an IO request fails, an error is returned instead. Programs often cannot be trusted to handle this condition correctly. Hard + intr is preferred.
proto : You can specify "tcp" or "udp". In my experience, UDP is bugged and causes awful performance. When not specified, TCP is used.

Once you have saved your changes to /etc/exports, you'll need to tell the NFS server to re-read them so the shares are updated.
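
The usual way to do that is with exportfs (a sketch: -r re-exports everything defined in /etc/exports and -a applies it to all export entries):

Code:

$ sudo exportfs -ra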

Your client(s) can now see the new or changed shares.

Discovering Exported Shares on a Server
You can use the «showmount» command to discover the NFS shares available on a remote server (or locally).

Code:

$ sudo showmount -e nfs.sample.local

Mount an NFS Share on the NFS Client
Mounting an NFS share is pretty easy at this point. You can do it from a terminal using the «mount» command or do it via your «/etc/fstab» config file. I suggest doing the latter since it allows you to make your mounts automatic or allow them to be mounted by regular users (ie not needing sudo).

These examples assume that a folder exists in «/media/<username>» called «nfs-music» and that we want to mount «/exports/music» as shared by our NFS server at «nfs.sample.local».

Mounting via command line:

Code:

$ sudo mount nfs.sample.local:/exports/music /media/<username>/nfs-music

Mounting via fstab:

Code:

# /etc/fstab

# There will be at least one line already defined for your system partition.  Do not eliminate it!
# ...
# ...
# ... more lines that may already exist and should not be removed ...
# ...
# ...

# path to mount to             hostname        :share path        type    options                                    Always zeroes
/media/<username>/nfs-music    nfs.sample.local:/exports/music    nfs4    rw,sync,hard,intr,sec=krb5p,user,noauto    0 0

Options not defined earlier:
user : Allow non-root users to trigger mounting or unmounting of this share.
noauto: Do not auto-mount this share on startup or with «mount -a» (ie mount all).

If the mount is defined in your fstab, you may simply mount it by specifying the local path it should mount to. Your system figures out which fstab entry you’re going for and uses all the options and hostname defined there.

Code:

$ mount /media/<username>/nfs-music
