Error: KeeperErrorCode = ConnectionLoss for HBase master


This thread is related to my post about the Metrics Collector not starting (https://community.hortonworks.com/questions/23512/metrics-collector-is-not-starting-showing-error-re.html). I created a new thread to highlight the main issue that I see in the log files. In short, the Metrics Collector is not starting. The logs contain a lot of information related to the error; the error that is mainly shown is:

exception=org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/master

In the same log file, I also see the following line:

zookeeper.ClientCnxn: Opening socket connection to server localhost/0:0:0:0:0:0:0:1:61181.

If 'localhost' is the issue here, I am not sure why localhost is used instead of the FQDN, and I do not know which configuration file would let me use the FQDN instead of localhost.

I have read many threads related to this error, but none of them helped me fix this issue. Can anyone please help me understand this issue?

I am pasting the values of some of the properties:

hbase.rootdir = hdfs://item-70288:8020/apps/hbase/data

hbase.cluster.distributed = true

Metrics service operation mode = embedded

hbase.zookeeper.property.clientPort = 2181

hbase.zookeeper.quorum = item-70288

Update #1

=========

The command netstat -anp |grep 61181 is not returning anything. It seems nothing is listening on 61181.
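
A quick way to double-check which of the two ports is actually being served (the host item-70288 and the ports come from the properties above; nc must be installed):

    netstat -anp | grep -E ':(2181|61181)'   # is anything listening on either port?
    echo ruok | nc item-70288 2181           # a healthy ZooKeeper answers "imok"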

I am attaching the full error log with this post.



Created 03-22-2016 08:15 AM


I have fixed the issue with the Metrics Collector not starting. Even though I have no idea why the above error appeared, I fixed it by changing/setting the following properties in the Ambari Metrics configuration through Ambari.

  1. Set hbase.zookeeper.property.clientPort to 2181. I had observed that this was set to 61181, and I am not sure how that happened. This is a very important property: the ZooKeeper server also listens on that port, so hbase.zookeeper.property.clientPort should be set to the same port. If these ports are different, a number of errors are thrown when the Ambari Metrics Collector service is started. Therefore, make sure this property is set to the same port as the ZooKeeper server.
  2. Change the "Metrics Service operation mode" to distributed.
  3. Set hbase.cluster.distributed to true.
  4. Set hbase.rootdir to an HDFS folder. I created a new folder in the HDFS root and made the hdfs user the owner of the folder.

After I did this, I restarted only the Ambari Metrics Collector, and this time it was happy to start with a green flag 😉
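
A minimal sanity check after the restart, assuming the quorum host item-70288 and client port 2181 from the properties above:

    echo ruok | nc item-70288 2181   # ZooKeeper should answer "imok"
    echo stat | nc item-70288 2181   # shows the mode, performance stats, and connected clients

Once the collector is up, its host should appear among the client connections listed in the stat output.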

Source


Zookeeper Connection Loss Errors


Symptom

ZooKeeper connectivity issues can manifest as different symptoms, such as:

  1. API proxy deployment errors
  2. Management API calls fail with 5XX errors
  3. Routers or Message Processors fail to start
  4. Analytics components report ZooKeeper connection loss in system.logs

Error messages

The following provides examples of error messages that may be observed when there is connection loss to ZooKeeper node(s).

  1. The following error is returned in Management Server logs when an API Proxy deployment fails due to ZooKeeper Connection loss:
  2. During startup, the Routers and Message Processors connect to ZooKeeper. If there are connectivity issues with ZooKeeper, then these components will fail to start with the following error:
  3. The Edge UI may display the following error indicating it was unable to check the deployment status of the API Proxies:

Possible causes

The following table lists possible causes of this issue:

Cause                                                      For
Network connectivity issue across different data centers  Edge Private Cloud users
ZooKeeper node not serving requests                        Edge Private Cloud users

See the corresponding sections below for possible resolutions to each cause.

Network connectivity issue across different data centers

Diagnosis

A ZooKeeper cluster may have nodes that span across multiple regions/data centers, such as DC-1 and DC-2. The typical Apigee Edge 2 DC topology will have:

  • ZooKeeper servers 1, 2, and 3 as voters in DC-1
  • ZooKeeper 4 and 5 as voters and ZooKeeper 6 as an observer in DC-2.

If the DC-1 region goes down or network connectivity between DC-1 and DC-2 is broken, the ZooKeeper nodes in DC-2 cannot reach the leader and cannot elect a new one: observers cannot vote, and the two remaining voters in DC-2 do not have a quorum of at least 3 voter nodes to elect a new leader. As a result, the ZooKeeper nodes in DC-2 cannot process any requests and will keep retrying to reconnect to the remaining ZooKeeper voters to find the leader.

Resolution

Apply the following solutions to address this issue in the specified order.

If you are unable to resolve the problem after trying these solutions, please contact Apigee Support.

Solution #1

  1. Work with your network administrators to repair the network connectivity issue between the data centers.
  2. When the ZooKeeper ensemble is able to communicate across the data centers and elect a ZooKeeper leader, the nodes should become healthy and be able to process requests.

Solution #2

  1. If the network connectivity will take time to repair, a workaround is to reconfigure the ZooKeeper nodes in the region where they are down. For example, reconfigure the ZooKeeper cluster in DC-2 so that the 3 ZooKeeper nodes in that region are all voters, and remove the server.# entries for the DC-1 ZooKeepers from zoo.cfg.
    1. In the following example, zoo.cfg configures nodes for 2 regions, where DC-1 uses us-ea hostnames (US-East region) and DC-2 uses us-wo hostnames (US-West region); only the relevant configs are considered (a hypothetical sketch of such entries appears after this list).

    In that example, reconfigure the zoo.cfg so that only the DC-2 nodes remain, all as voters.

  2. Using Apigee's code-with-config mechanism, create a file /opt/apigee/customer/application/zookeeper.properties with the corresponding overrides.

In the above, the nodes from US-East are removed, and the US-West nodes get promoted to voters when the :observer annotation is removed.
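
A hypothetical sketch of the relevant zoo.cfg server entries for this topology (hostnames and ports are assumptions, and the Apigee-specific zookeeper.properties override syntax is not shown):

    # Before: 2-DC ensemble; DC-1 = us-ea (US-East), DC-2 = us-wo (US-West), node 6 is an observer
    server.1=zk1-us-ea.example.com:2888:3888
    server.2=zk2-us-ea.example.com:2888:3888
    server.3=zk3-us-ea.example.com:2888:3888
    server.4=zk4-us-wo.example.com:2888:3888
    server.5=zk5-us-wo.example.com:2888:3888
    server.6=zk6-us-wo.example.com:2888:3888:observer

    # After (workaround while DC-1 is unreachable): only the DC-2 nodes remain,
    # renumbered 1-3, with the :observer annotation removed so all three can vote
    server.1=zk4-us-wo.example.com:2888:3888
    server.2=zk5-us-wo.example.com:2888:3888
    server.3=zk6-us-wo.example.com:2888:3888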

Back up /opt/apigee/apigee-zookeeper/conf/zoo.cfg and the old /opt/apigee/customer/application/zookeeper.properties.

These files will be used to restore the defaults when the network connectivity is back up between data centers.

Disable the observer notation for the observer node. To do this, add the following configuration to the top of /opt/apigee/customer/application/zookeeper.properties:

Edit the /opt/apigee/data/apigee-zookeeper/data/myid file as follows:

  • For server.1, change the entry inside myid from 4 to 1.
  • For server.2, change the myid from 5 to 2.
  • For server.3, change the myid from 6 to 3.
  • Restart the ZooKeeper nodes in the region where you reconfigured the ZooKeeper cluster.
  • Repeat the above configuration from step #1b through step #5 on all ZooKeeper nodes in DC-2.
  • Validate the nodes are up with a leader:

    The output of this command (an illustrative sketch is shown below) will contain a line that says "mode" followed by "leader" if it is the leader, or "follower" if it is a follower.
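
    An illustrative check, assuming the default ZooKeeper client port 2181 on the local node:

        echo srvr | nc localhost 2181 | grep -i mode   # expect "Mode: leader" on exactly one node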

    When the network between data centers is reestablished, the ZooKeeper configuration changes can be reverted on the ZooKeeper nodes in DC-2.

    Solution #3

    1. If any ZooKeeper node in the cluster is not started, restart it.
    2. Check ZooKeeper logs to determine why the ZooKeeper node went down.

    ZooKeeper logs are available in the following directory:

  • Contact Apigee Support and provide the ZooKeeper logs to troubleshoot the cause of any ZooKeeper node that may have been stopped.
    ZooKeeper node not serving requests

    A ZooKeeper node in the ensemble may become unhealthy and be unable to respond to client requests. This could be because:

    1. The node was stopped without being restarted.
    2. The node was rebooted without auto-start enabled.
    3. System load on the node caused it to go down or become unhealthy.

    Diagnosis

    1. Execute the following ZooKeeper health check commands on each of the ZooKeeper nodes and check the output (an illustrative sketch of these commands is shown after this list):

       NOTE: The response "imok" is a successful response.

       Check the mode to determine if the ZooKeeper node is a leader or follower.

       Example output for an all-in-one, single ZooKeeper node:

       This command lists the ZooKeeper variables, which can be used to check the health of the ZooKeeper cluster.

       This command lists statistics about performance and connected clients.

       This command gives extended details on ZooKeeper connections.
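
      An illustrative sketch of such health checks, using ZooKeeper's standard four-letter-word commands over nc (the hostname and client port 2181 are assumptions; adjust to your environment):

          echo ruok | nc localhost 2181   # liveness check; a healthy node answers "imok"
          echo srvr | nc localhost 2181   # shows the node's "Mode:" (leader or follower)
          echo mntr | nc localhost 2181   # lists ZooKeeper monitoring variables
          echo stat | nc localhost 2181   # performance statistics and connected clients
          echo cons | nc localhost 2181   # extended details on each client connection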

      If any of the last 3 health check commands reports that the ZooKeeper instance is not currently serving requests, then that specific ZooKeeper node is not serving requests.

  • Check the ZooKeeper logs on the specific node and try to locate any errors causing the ZooKeeper to be down. ZooKeeper logs are available in the following directory:
    Resolution

    1. Restart all other ZooKeeper nodes in the cluster one by one.
    2. Re-run the ZooKeeper health check commands on each node and see if you get the expected output.

    Contact Apigee Support to troubleshoot the cause of the system load if it persists or if restarting does not resolve the problem.


    Source

    HBase error "ERROR: KeeperErrorCode = NoNode for /hbase/master"

    When running any command in the hbase shell, I get the following error in the hbase shell: "ERROR: KeeperErrorCode = NoNode for /hbase/master".

    HBase has been started:

    When checking the status in the HBase shell:

    hbase-site.xml

    Please let me know why this error occurs when running hbase commands?

    First, make sure the IP-address-to-hostname mapping is configured in the hosts file.

    Second, change the location of the HBase temporary directory: data in the temporary directory is cleaned up regularly. By default the temporary directory is /tmp; change it in hbase-site.xml.

    If that does not work, clean the HBase data directory, also clean the metadata in ZooKeeper, and restart HBase again.

    Moreover, check your NTP and firewall.
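
    An illustrative sketch of that clean-and-restart advice (paths and commands are assumptions for a locally installed HBase; wiping these locations loses HBase metadata, so only do it on a disposable setup):

        ./bin/stop-hbase.sh                  # stop HBase first
        rm -rf /hbase/tmp/*                  # or whatever hbase.tmp.dir now points to
        # if ZooKeeper runs as a separate service, also remove HBase's znodes:
        hbase zkcli deleteall /hbase         # older ZooKeeper CLIs use: rmr /hbase
        ./bin/start-hbase.sh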

    Thanks for the reply. I created the /hbase/tmp directory and configured this property in hbase-site.xml, and cleaned both the data directory and the ZooKeeper directory. But I am still getting the same exception.

    @user7413163, if you need more help, show your logs, not only the errors.

    Replace or add this configuration in the hbase-site.xml file in the conf folder of the HBase directory, then re-run the "hbase shell" command and then the "list" command to view the existing tables.

    I know this is not Spark-related, but I was getting the following errors:

    Caused by: java.io.IOException: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /hbase/meta-region-server

    Caused by: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /hbase/meta-region-server

    Caused by: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /hbase/hbasei

    Setting the hbase.rootdir configuration solved my problems when creating an HBaseContext.

    So you can try adding this configuration to hbase-site.xml.
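
    An illustrative hbase-site.xml entry for that (the NameNode host, port, and path are assumptions; use your own hbase.rootdir):

        <property>
          <name>hbase.rootdir</name>
          <value>hdfs://namenode-host:8020/hbase</value>
        </property>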

    In my case, I was getting this "ERROR: KeeperErrorCode = NoNode for /hbase/master" because the HMaster process was not running.

    Check with the jps command.

    If you do not see the HMaster process, as in the list above, then that is the cause of the "ERROR: KeeperErrorCode = NoNode" in the hbase shell.

    Source

    IMPORTANT! If you are trying to install Apache Atlas and receiving this error, there is a separate article: https://mchesnavsky.tech/apache-atlas-building-installing/

    Suppose that we are faced with these exceptions. The first:

    [ReadOnlyZKClient-localhost:2181@<id>] [WARN] ReadOnlyZKClient$ZKTask$1:183 - <id> to localhost:2181 failed for get of /hbase/hbaseid, code = CONNECTIONLOSS, retries = 1
    [ReadOnlyZKClient-localhost:2181@<id>] [WARN] ReadOnlyZKClient$ZKTask$1:183 - <id> to localhost:2181 failed for get of /hbase/meta-region-server, code = CONNECTIONLOSS, retries = 1

    The second:

    Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/hbaseid
            at org.apache.zookeeper.KeeperException.create(KeeperException.java:102)
            at org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
            at org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient$ZKTask$1.exec(ReadOnlyZKClient.java:189)
            at org.apache.hadoop.hbase.zookeeper.ReadOnlyZKClient.run(ReadOnlyZKClient.java:323)

    The third:

    [WARN] ConnectionImplementation:529 - Retrieve cluster id failed
    java.util.concurrent.ExecutionException: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/hbaseid
            at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
            at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
            at org.apache.hadoop.hbase.client.ConnectionImplementation.retrieveClusterId(ConnectionImplementation.java:527)
            at org.apache.hadoop.hbase.client.ConnectionImplementation.<init>(ConnectionImplementation.java:287)
            at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
            at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
            at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
            at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
            at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:219)
            at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:114)

    The hbase-client cannot connect to ZooKeeper. You need to pay attention to the address:

    ... to localhost:2181 failed for get of ...

    If there is a real ZooKeeper instance at this address, then you need to check its state. But if there is no ZooKeeper instance at the given address, then the problem is an under-configured hbase-site.xml. If you have a Cloudera distribution, then it is located here:

    /etc/hbase/conf.cloudera.hbase/hbase-site.xml

    Check the following properties:

    • hbase.zookeeper.quorum
    • hbase.cluster.distributed
    • hbase.rootdir

    In my case, the hbase.zookeeper.quorum field was not populated. Therefore, the hbase-client tried to connect to ZooKeeper using the standard host (localhost) and the standard port (2181).
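
    A quick way to check this on a client node (the config path is the Cloudera location mentioned above; the host in the second command is whatever the quorum property resolves to):

        grep -A1 'hbase.zookeeper.quorum' /etc/hbase/conf.cloudera.hbase/hbase-site.xml
        echo ruok | nc <your-zookeeper-host> 2181   # a healthy ZooKeeper answers "imok"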

    If the file is empty or does not exist at all, refer to the official documentation: https://hbase.apache.org/book.html.

    If you still have any questions, feel free to ask me in the comments under this article, or write to me at promark33@gmail.com.

    On top of the Hadoop cluster we installed the HBase service (a NoSQL database within Hadoop) for real-time random reads/writes, as opposed to the sequential file access of the Hadoop Distributed File System (HDFS).

    HBase is used for storage, but we can't use HBase alone to process data with business logic; that is what services like Hive, MapReduce, Pig, Sqoop, etc. are for.

    After installing the Spark server, we get the error below when working with an HBase snapshot from the Hadoop cluster CLI.

    Below is the error in the HBase node:

    at org.jruby.Ruby.runScript(Ruby.java:697)
    at org.jruby.Ruby.runNormally(Ruby.java:597)
    at org.jruby.Ruby.runFromMain(Ruby.java:446)
    at org.jruby.Ruby.internalRun(Main.Ruby.java:258)
    ERROR [ main] client.ConnectionManager$HConnectionImplementation: Can't get connection to ZooKeeper: KeeperErrorCode = ConnectionLoss for /hbase
    Error: KeeperErrorCode = ConnectionLoss for /hbase
    Here is some help for this command:
    List all tables in hbase. Optional regular expression parameter could be used to filter the output. Examples:

    How do we resolve this error on the HBase Master node?

    Resolutions for KeeperErrorCode = ConnectionLoss for /hbase in the cluster:

    The above error code means the HBase Master is not running on the Hadoop cluster:

    Resolution 1:

    Step 1: First, check whether the HBase Master is running by using the "jps" command.
    Step 2: Use the "stop-all.sh" command to stop all running services on the Hadoop cluster.
    Step 3: Use the "start-all.sh" command to start all services.
    Step 4: Use the "jps" command to check the services; if it shows the HBase Master working, fine; otherwise do the steps below:
    Step 5: Switch to the root user using "sudo su".
    Step 6: Run the HBase start script, e.g. "/usr/lib/hbase-1.2.6-hadoop/bin/start-hbase.sh".
    Step 7: Open the HBase shell using the "hbase shell" command.
    Step 8: Use the "list" command.

    Resolution 2:

    It may be a ZooKeeper issue: when the HBase Master node tries to get the list from ZooKeeper, it fails.

    Step 1: First, check whether the ZooKeeper service is running using "ps -ef | grep zookeeper".
    Step 2: Use the "sudo service zookeeper stop" command to stop the ZooKeeper service on the Hadoop cluster, and stop the HBase service as well.
    Step 3: Then edit the HBase XML file to increase the number of connections to the ZooKeeper service using "hbase.zookeeper.property.maxClientCnxns".
    Step 4: Start the ZooKeeper service first, then start the HBase service.

    Hi. I have created some tables in HBase shell. Now, I want to see the table names in HBase shell. But when I try to do that, I am getting the following error:

    How to solve this?








    Most likely, HMaster is not running. Enter the sudo jps command in your terminal and check whether HMaster is running or not.

    If HMaster is not running, run the following command to start it:

    sudo su
    cd /usr/lib/hbase-0.96.2-hadoop2
    bin/start-hbase.sh
    

    Next, to list the tables, first open hbase shell and then list the tables. Use the following commands:

    hbase shell 
    list






    answered Dec 28, 2018 by Omkar




    Question

    I'm learning about HDInsight. I've provisioned a Hadoop cluster.

      In the command window I start HBase. At the HBase command prompt I run a very basic command below to create a table.

          create 'Stocks','Price','Trade'

      However, it gives me this error:

         ERROR client.ConnectionManager$HConnectionImplementation: Can’t get connection to ZooKeeper: KeeperErrorCode = ConnectionLoss for /hbase

      I already searched MSDN and couldn’t find an answer.

      On StackOverflow I found a post but it mentioned messing with config files. A basic command should
      just work on a brand new HDInsight cluster.

      I can successfully run Hive queries on the same cluster. 

      Thanks in advance. 

    Answers

    • I figured it out.

      The solution, from within Azure, is to set up and use Data Services > HDInsight > HBase.

      There seems to be some extra configuration needed for the HBase that is installed from Data Services > HDInsight > Hadoop.


    I’m going completely crazy:

    Installed Hadoop/Hbase, all is running;

    /opt/jdk1.6.0_24/bin/jps
    23261 ThriftServer
    22582 QuorumPeerMain
    21969 NameNode
    23500 Jps
    23021 HRegionServer
    22211 TaskTracker
    22891 HMaster
    22117 SecondaryNameNode
    21779 DataNode
    22370 Main
    22704 JobTracker
    
    

    Pseudo distributed environment.

    hbase shell

    is working and coming up with correct results when running 'list', and:

    hbase shell
    HBase Shell; enter 'help<RETURN>' for list of supported commands.
    Type "exit<RETURN>" to leave the HBase Shell
    Version 0.90.1-cdh3u0, r, Fri Mar 25 16:10:51 PDT 2011
    
    hbase(main):001:0> status
    1 servers, 0 dead, 8.0000 average load
    
    

    When connecting via ruby & thrift, everything is working fine; we are adding data, it’s getting in the system, we can query/scan it. Everything seems fine.

    However, when connecting with Java:

    groovy> import org.apache.hadoop.hbase.HBaseConfiguration
    groovy> import org.apache.hadoop.hbase.client.HBaseAdmin
    groovy> conf = HBaseConfiguration.create()
    groovy> conf.set("hbase.master","127.0.0.1:60000");
    groovy> hbase = new HBaseAdmin(conf); 
    
    Exception thrown
    
    org.apache.hadoop.hbase.ZooKeeperConnectionException: org.apache.hadoop.hbase.ZooKeeperConnectionException: org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getZooKeeperWatcher(HConnectionManager.java:1000)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.setupZookeeperTrackers(HConnectionManager.java:303)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.<init>(HConnectionManager.java:294)
        at org.apache.hadoop.hbase.client.HConnectionManager.getConnection(HConnectionManager.java:156)
        at org.apache.hadoop.hbase.client.HBaseAdmin.<init>(HBaseAdmin.java:84)
    
    

    I’ve been trying to find the cause, but I really have no clue at all. Everything seems to be correctly installed.

    netstat -lnp|grep 60000
    tcp6       0      0 :::60000                :::*                    LISTEN      22891/java
    
    

    Looks fine as well.

    # telnet localhost 60000
    Trying 127.0.0.1...
    Connected to localhost.
    Escape character is '^]'.
    
    

    Connects and dies if you type anything + enter (not sure if that’s the idea, thrift on 9090 does the same).

    Can anyone help me?


    Charles,

    This is a ZooKeeper (ZK) error. The HBase client tries to get the /hbase node from ZooKeeper and fails.

    You can get a ZK dump from the HBase master web interface. You should see all the connections to ZK and figure out if something is exhausting them.

    Before diving into anything else you could try restarting your ZK cluster and see if it fixes your problem. (It’s strange that you see that with a single client).

    HBase has a setting to increase the number of connections to ZK. It’s

    hbase.zookeeper.property.maxClientCnxns
    
    

    There were a few updates lately (see below) related to the default number of connections (there's a hbase-default.xml file that has all the default configurations). You can override this in your hbase-site.xml file (under the HBase conf dir) and raise it to 100 or more. But make sure you're not masking the real problem this way; you shouldn't see this problem with a single client.
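
    An illustrative sketch of that hbase-site.xml override (300 is just an example value; the property name is the one quoted above):

        <property>
          <name>hbase.zookeeper.property.maxClientCnxns</name>
          <value>300</value>
        </property>

    Restart HBase after changing it so the new limit takes effect.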

    We’ve had a similar situation, but it was happening during heavy operations from map-reduce jobs, after upgrading to HBase-0.90.

    Here are a couple of issues related to your problem:

    • https://issues.apache.org/jira/browse/HBASE-3773
    • https://issues.apache.org/jira/browse/HBASE-3777

    If you still can’t figure it out send an email to the hbase-users list or join the #hbase channel on freenode and ask live questions.
