Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask


Hi,

Some of our Hive queries started to fail with this error message, so I checked the web and the community, and the usual response is that setting hive.auto.convert.join to false can help.

The error appears very quickly, almost immediately after the query is submitted. I know that Hive is trying to create a map-side join, but I don't understand why it fails now, when it had been running OK for X months. The actual tables are very small (<1000 rows), so I don't think the hash table can cause the out-of-memory.
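The setting mentioned above, together with the related thresholds that decide when Hive auto-converts a common join into a map join, can be applied per session. A minimal sketch (the numeric values are illustrative defaults, not values taken from this cluster):

```sql
-- Disable automatic conversion of common joins to map-side joins
SET hive.auto.convert.join=false;

-- Alternatively, keep auto-conversion but tune when it kicks in
-- (values shown are illustrative; check your cluster's actual defaults)
SET hive.mapjoin.smalltable.filesize=25000000;      -- max size of the "small" table in bytes
SET hive.auto.convert.join.noconditionaltask=true;  -- merge map joins without a conditional task
SET hive.auto.convert.join.noconditionaltask.size=10000000;
```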

2018-08-29 04:23:46,468 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: [HiveServer2-Background-Pool: Thread-138349]: <PERFLOG method=Driver.run from=org.apache.hadoop.hive.ql.Driver>
2018-08-29 04:23:46,468 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: [HiveServer2-Background-Pool: Thread-138349]: <PERFLOG method=TimeToSubmit from=org.apache.hadoop.hive.ql.Driver>
2018-08-29 04:23:46,468 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: [HiveServer2-Background-Pool: Thread-138349]: <PERFLOG method=acquireReadWriteLocks from=org.apache.hadoop.hive.ql.Driver>
2018-08-29 04:23:46,480 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: [HiveServer2-Background-Pool: Thread-138349]: </PERFLOG method=acquireReadWriteLocks start=1535509426468 end=1535509426480 duration=12 from=org.apache.hadoop.hive.ql.Driver>
2018-08-29 04:23:46,480 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: [HiveServer2-Background-Pool: Thread-138349]: <PERFLOG method=Driver.execute from=org.apache.hadoop.hive.ql.Driver>
2018-08-29 04:23:46,480 INFO  org.apache.hadoop.hive.ql.Driver: [HiveServer2-Background-Pool: Thread-138349]: Executing command(queryId=hive_20180829042323_49ab08f3-b5e3-45db-9ca5-94ce91d63b11): INSERT OVERWRITE TABLE `prod_work`.`table`  SELECT
    `esub`.`yesterday` AS `yesterday`,
    `esub`.`product_group` AS `product_group`,
    `esub`.`priceplan` AS `priceplan`,
    `esub`.`brand` AS `brand`,
    `esub`.`brand_desc` AS `brand_desc`,
    `esub`.`segment` AS `segment`,
    `esub`.`product_payment_type` AS `product_payment_type`,
    `esub`.`esub` AS `esub`,
    `rnch`.`rnch` AS `rnch`,
    `gradd`.`gradd` AS `gradd`
  FROM `prod_work`.`tablea` `esub`
  LEFT JOIN `prod_work`.`tableb` `rnch`
    ON `esub`.`priceplan` = `rnch`.`priceplan`
  LEFT JOIN `prod_work`.`tablec` `gradd`
    ON `esub`.`priceplan` = `gradd`.`priceplan`
2018-08-29 04:23:46,480 INFO  org.apache.hadoop.hive.ql.Driver: [HiveServer2-Background-Pool: Thread-138349]: Query ID = hive_20180829042323_49ab08f3-b5e3-45db-9ca5-94ce91d63b11
2018-08-29 04:23:46,480 INFO  org.apache.hadoop.hive.ql.Driver: [HiveServer2-Background-Pool: Thread-138349]: Total jobs = 1
2018-08-29 04:23:46,480 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: [HiveServer2-Background-Pool: Thread-138349]: </PERFLOG method=TimeToSubmit start=1535509426468 end=1535509426480 duration=12 from=org.apache.hadoop.hive.ql.Driver>
2018-08-29 04:23:46,480 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: [HiveServer2-Background-Pool: Thread-138349]: <PERFLOG method=runTasks from=org.apache.hadoop.hive.ql.Driver>
2018-08-29 04:23:46,481 INFO  org.apache.hadoop.hive.ql.Driver: [HiveServer2-Background-Pool: Thread-138349]: Starting task [Stage-6:MAPREDLOCAL] in serial mode
2018-08-29 04:23:46,481 INFO  org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask: [HiveServer2-Background-Pool: Thread-138349]: Generating plan file file:/tmp/hive/dbb7f6a5-a988-466b-8170-58bbcc924fb9/hive_2018-08-29_04-23-46_256_3553242332203703514-2517/-local-10004/plan.xml
2018-08-29 04:23:46,481 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: [HiveServer2-Background-Pool: Thread-138349]: <PERFLOG method=serializePlan from=org.apache.hadoop.hive.ql.exec.Utilities>
2018-08-29 04:23:46,481 INFO  org.apache.hadoop.hive.ql.exec.Utilities: [HiveServer2-Background-Pool: Thread-138349]: Serializing MapredLocalWork via kryo
2018-08-29 04:23:46,483 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: [HiveServer2-Background-Pool: Thread-138349]: </PERFLOG method=serializePlan start=1535509426481 end=1535509426483 duration=2 from=org.apache.hadoop.hive.ql.exec.Utilities>
2018-08-29 04:23:46,532 WARN  org.apache.hadoop.hive.conf.HiveConf: [HiveServer2-Background-Pool: Thread-138349]: HiveConf of name hive.server2.idle.session.timeout_check_operation does not exist
2018-08-29 04:23:46,532 WARN  org.apache.hadoop.hive.conf.HiveConf: [HiveServer2-Background-Pool: Thread-138349]: HiveConf of name hive.sentry.conf.url does not exist
2018-08-29 04:23:46,532 WARN  org.apache.hadoop.hive.conf.HiveConf: [HiveServer2-Background-Pool: Thread-138349]: HiveConf of name hive.entity.capture.input.URI does not exist
2018-08-29 04:23:46,543 INFO  org.apache.hadoop.hdfs.DFSClient: [HiveServer2-Background-Pool: Thread-138349]: Created token for hive: HDFS_DELEGATION_TOKEN owner=hive/ip-10-197-23-43.eu-west-1.compute.internal@DOMAIN.LOCAL, renewer=hive, realUser=, issueDate=1535509426541, maxDate=1536114226541, sequenceNumber=1381844, masterKeyId=457 on ha-hdfs:hanameservice
2018-08-29 04:23:46,544 INFO  org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask: [HiveServer2-Background-Pool: Thread-138349]: Executing: /opt/cloudera/parcels/CDH-5.13.3-1.cdh5.13.3.p0.2/lib/hadoop/bin/hadoop jar /opt/cloudera/parcels/CDH-5.13.3-1.cdh5.13.3.p0.2/jars/hive-common-1.1.0-cdh5.13.3.jar org.apache.hadoop.hive.ql.exec.mr.ExecDriver -libjars file:///opt/cloudera/parcels/CDH-5.13.3-1.cdh5.13.3.p0.2/lib/hive/auxlib/hive-exec-1.1.0-cdh5.13.3-core.jar,file:///opt/cloudera/parcels/CDH-5.13.3-1.cdh5.13.3.p0.2/lib/hive/auxlib/hive-exec-core.jar  -localtask -plan file:/tmp/hive/dbb7f6a5-a988-466b-8170-58bbcc924fb9/hive_2018-08-29_04-23-46_256_3553242332203703514-2517/-local-10004/plan.xml   -jobconffile file:/tmp/hive/dbb7f6a5-a988-466b-8170-58bbcc924fb9/hive_2018-08-29_04-23-46_256_3553242332203703514-2517/-local-10005/jobconf.xml
2018-08-29 04:23:46,583 ERROR org.apache.hadoop.hive.ql.exec.Task: [HiveServer2-Background-Pool: Thread-138349]: Execution failed with exit status: 1
2018-08-29 04:23:46,583 ERROR org.apache.hadoop.hive.ql.exec.Task: [HiveServer2-Background-Pool: Thread-138349]: Obtaining error information
2018-08-29 04:23:46,583 ERROR org.apache.hadoop.hive.ql.exec.Task: [HiveServer2-Background-Pool: Thread-138349]:
Task failed!
Task ID:
  Stage-6

Logs:

2018-08-29 04:23:46,583 ERROR org.apache.hadoop.hive.ql.exec.Task: [HiveServer2-Background-Pool: Thread-138349]: /var/log/hive/hadoop-cmf-CD-HIVE-nRaFPvFN-HIVESERVER2-ip-10-197-23-43.eu-west-1.compute.internal.log.out
2018-08-29 04:23:46,583 ERROR org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask: [HiveServer2-Background-Pool: Thread-138349]: Execution failed with exit status: 1
2018-08-29 04:23:46,584 ERROR org.apache.hadoop.hive.ql.Driver: [HiveServer2-Background-Pool: Thread-138349]: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask
2018-08-29 04:23:46,584 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: [HiveServer2-Background-Pool: Thread-138349]: </PERFLOG method=Driver.execute start=1535509426480 end=1535509426584 duration=104 from=org.apache.hadoop.hive.ql.Driver>
2018-08-29 04:23:46,584 INFO  org.apache.hadoop.hive.ql.Driver: [HiveServer2-Background-Pool: Thread-138349]: Completed executing command(queryId=hive_20180829042323_49ab08f3-b5e3-45db-9ca5-94ce91d63b11); Time taken: 0.104 seconds
2018-08-29 04:23:46,584 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: [HiveServer2-Background-Pool: Thread-138349]: <PERFLOG method=releaseLocks from=org.apache.hadoop.hive.ql.Driver>
2018-08-29 04:23:46,594 INFO  org.apache.hadoop.hive.ql.log.PerfLogger: [HiveServer2-Background-Pool: Thread-138349]: </PERFLOG method=releaseLocks start=1535509426584 end=1535509426594 duration=10 from=org.apache.hadoop.hive.ql.Driver>
2018-08-29 04:23:46,595 ERROR org.apache.hive.service.cli.operation.Operation: [HiveServer2-Background-Pool: Thread-138349]: Error running hive query:
org.apache.hive.service.cli.HiveSQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask
        at org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:400)
        at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:238)
        at org.apache.hive.service.cli.operation.SQLOperation.access$300(SQLOperation.java:89)
        at org.apache.hive.service.cli.operation.SQLOperation$3$1.run(SQLOperation.java:301)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
        at org.apache.hive.service.cli.operation.SQLOperation$3.run(SQLOperation.java:314)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

Is it perhaps related to the HiveServer2 memory configuration? Do I understand correctly that this failure happens inside the HiveServer2 JVM, when it tries to build a hash table for a map-side join?

Thanks
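As the `Executing: ... ExecDriver ... -localtask` line in the log suggests, the hash table is actually built in a separate child JVM that HiveServer2 spawns via `hadoop jar`, so its heap is governed by the local-task settings rather than by the HiveServer2 heap itself. A hedged sketch of the knobs that usually apply (values are illustrative):

```sql
-- Heap for the spawned local task, in MB (0 means inherit Hadoop's setting)
SET hive.mapred.local.mem=1024;

-- Fraction of the local task's heap the in-memory hash table may consume
-- before the local task aborts
SET hive.mapjoin.localtask.max.memory.usage=0.9;
```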

Published: 03 Oct 2013
Last Modified Date: 24 Aug 2022

Issue

After connecting to Hadoop Hive data from Tableau Desktop, when you try to drag a field into the view or filter data, the following error might occur:
 

Error from Hive: error code: ‘1’ error message: Error while processing statement: Failed: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MapRedTask

Environment

  • Tableau Desktop
  • Hortonworks Hadoop Hive
  • MapR Hadoop
  • Cloudera Hadoop Hive
  • Cloudera Impala

Resolution

Work with your database administrator to troubleshoot the following database configurations:

  1. Verify that the Hadoop Hive configuration does not contain added properties:
    • Verify that properties allow for SELECT statements with defined fields from tables, for example, SELECT <field_name> FROM <table_name>.
  2. Restart the Hadoop Hive cluster to ensure that the database is not referring to cached configurations or JAR files that no longer exist.
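One quick way to verify step 1 outside of Tableau is to run the same minimal projection through the HiveServer2 endpoint, for example with Beeline (the host, field, and table names below are placeholders, not values from any specific environment):

```sql
-- Run via: beeline -u "jdbc:hive2://<hiveserver2-host>:10000/default"
SELECT <field_name> FROM <table_name> LIMIT 10;
```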

Cause

This error is passed to Tableau Desktop from the database. It indicates that there is a problem with the Hadoop Hive database configuration.

Additional Information

Cloudera: Hive ODBC query fails, works in Hive shell though — hive.aux.jars.path?




Having hive.auto.convert.join set to true works in the CLI with no issue, but it fails in HiveServer2 when JMX options are passed to the service on startup. This (in hive-env.sh) is enough to make it fail:

As soon as I remove the line, it works properly. I have *no* idea why.
Here's the log from the service:

2015-07-24 17:19:27,457 INFO  [HiveServer2-Handler-Pool: Thread-22]: ql.Driver (SessionState.java:printInfo(912)) - Query ID = hive_20150724171919_aaa88a89-dc6d-490b-821c-4eec6d4c0421
2015-07-24 17:19:27,457 INFO  [HiveServer2-Handler-Pool: Thread-22]: ql.Driver (SessionState.java:printInfo(912)) - Total jobs = 1
2015-07-24 17:19:27,465 INFO  [HiveServer2-Handler-Pool: Thread-22]: ql.Driver (Driver.java:launchTask(1638)) - Starting task [Stage-4:MAPREDLOCAL] in serial mode
2015-07-24 17:19:27,467 INFO  [HiveServer2-Handler-Pool: Thread-22]: mr.MapredLocalTask (MapredLocalTask.java:executeInChildVM(159)) - Generating plan file file:/tmp/hive/8932c206-5420-4b6f-9f1f-5f1706f30df8/hive_2015-07-24_17-19-26_552_5082133674120283907-1/-local-10005/plan.xml
2015-07-24 17:19:27,625 WARN  [HiveServer2-Handler-Pool: Thread-22]: conf.HiveConf (HiveConf.java:initialize(2620)) - HiveConf of name hive.files.umask.value does not exist
2015-07-24 17:19:27,708 INFO  [HiveServer2-Handler-Pool: Thread-22]: mr.MapredLocalTask (MapredLocalTask.java:executeInChildVM(288)) - Executing: /usr/lib/hadoop/bin/hadoop jar /usr/lib/hive/lib/hive-common-1.1.0-cdh5.4.3.jar org.apache.hadoop.hive.ql.exec.mr.ExecDriver -localtask -plan file:/tmp/hive/8932c206-5420-4b6f-9f1f-5f1706f30df8/hive_2015-07-24_17-19-26_552_5082133674120283907-1/-local-10005/plan.xml   -jobconffile file:/tmp/hive/8932c206-5420-4b6f-9f1f-5f1706f30df8/hive_2015-07-24_17-19-26_552_5082133674120283907-1/-local-10006/jobconf.xml
2015-07-24 17:19:28,499 ERROR [HiveServer2-Handler-Pool: Thread-22]: exec.Task (SessionState.java:printError(921)) - Execution failed with exit status: 1
2015-07-24 17:19:28,500 ERROR [HiveServer2-Handler-Pool: Thread-22]: exec.Task (SessionState.java:printError(921)) - Obtaining error information
2015-07-24 17:19:28,500 ERROR [HiveServer2-Handler-Pool: Thread-22]: exec.Task (SessionState.java:printError(921)) -
Task failed!
Task ID:
  Stage-4

Logs:

2015-07-24 17:19:28,501 ERROR [HiveServer2-Handler-Pool: Thread-22]: exec.Task (SessionState.java:printError(921)) - /tmp/hiveserver2_manual/hive-server2.log
2015-07-24 17:19:28,501 ERROR [HiveServer2-Handler-Pool: Thread-22]: mr.MapredLocalTask (MapredLocalTask.java:executeInChildVM(308)) - Execution failed with exit status: 1
2015-07-24 17:19:28,518 ERROR [HiveServer2-Handler-Pool: Thread-22]: ql.Driver (SessionState.java:printError(921)) - FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask
2015-07-24 17:19:28,599 WARN  [HiveServer2-Handler-Pool: Thread-22]: security.UserGroupInformation (UserGroupInformation.java:doAs(1674)) - PriviledgedActionException as:hive (auth:SIMPLE) cause:org.apache.hive.service.cli.HiveSQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask
2015-07-24 17:19:28,600 WARN  [HiveServer2-Handler-Pool: Thread-22]: thrift.ThriftCLIService (ThriftCLIService.java:ExecuteStatement(496)) - Error executing statement:
org.apache.hive.service.cli.HiveSQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask
	at org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:315)
	at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:146)
	at org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:173)
	at org.apache.hive.service.cli.operation.Operation.run(Operation.java:257)
	at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:398)
	at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatement(HiveSessionImpl.java:379)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:78)
	at org.apache.hive.service.cli.session.HiveSessionProxy.access$000(HiveSessionProxy.java:36)
	at org.apache.hive.service.cli.session.HiveSessionProxy$1.run(HiveSessionProxy.java:63)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671)
	at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:59)
	at com.sun.proxy.$Proxy23.executeStatement(Unknown Source)
	at org.apache.hive.service.cli.CLIService.executeStatement(CLIService.java:258)
	at org.apache.hive.service.cli.thrift.ThriftCLIService.ExecuteStatement(ThriftCLIService.java:490)
	at org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1313)
	at org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1298)
	at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
	at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
	at org.apache.hive.service.auth.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:56)
	at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:285)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:745)
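One plausible explanation (an assumption here, since the exact hive-env.sh line isn't shown above) is that the local task is launched as a child `hadoop jar` process that inherits HADOOP_OPTS, so a fixed JMX port set there fails to bind a second time and the child exits with status 1. A hypothetical way to scope the JMX options to the HiveServer2 process only, so the child local task does not inherit them:

```shell
# hive-env.sh (sketch; the JMX settings themselves are hypothetical examples)
# bin/hive sets $SERVICE to the subcommand being run, so this guard keeps
# the options out of child processes such as the map-join local task.
if [ "$SERVICE" = "hiveserver2" ]; then
  export HADOOP_OPTS="$HADOOP_OPTS -Dcom.sun.management.jmxremote \
    -Dcom.sun.management.jmxremote.port=8008 \
    -Dcom.sun.management.jmxremote.authenticate=false \
    -Dcom.sun.management.jmxremote.ssl=false"
fi
```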


I am new to Hadoop and am trying to run a few join queries in Hive. I created two tables (table1 and table2). I ran a join query but got the following error message:

FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask

However, when I run this query in the Hive user interface, it executes and I get the correct results. Can anyone help explain what might be wrong?

  • 1 Hive doesn't have a definitive "user interface". Where are you running it from?
  • I run it through the Hive Editor at quickstart.cloudera:8888
  • 1 That's called Hue… So where are you running the query when you get the error? The hive command is deprecated
  • Yes, it's Hue. I run the query in the Terminal. Regular SQL commands work fine, except for the join query, after which I get this error: 'hive> select t1.Id, t1.Name, t2.Id, t2.Name from table1 t1 join table2 t2 on t1.id = t2.id; Query ID = root_20170926212222_d79b2469-efc1-49db-a2d5-e68a5e1dca87 Total jobs = 1 FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask' However, in the Hue editor the query works fine.
  • Hue runs queries through HiveServer2. You are bypassing it with the Hive CLI. blog.cloudera.com/blog/2014/02/…

I just added the following before running my query, and it worked.

SET hive.auto.convert.join=false;
  • That was great. Everything works for me now :) Thanks.

Just put this command before your query:

SET hive.auto.convert.join=false;

It definitely works!

  • +2 An explanation, please… I'm not going to rework my config based on three lines of "explanation".
  • Some background on hive.auto.convert.join is available at docs.qubole.com/en/latest/user-guide/engines/hive/…, cwiki.apache.org/confluence/display/Hive/…, etc.
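As a middle ground between the one-liner above and a permanent config change, the conversion can also be steered per query. The table aliases below come from the earlier example; the hint syntax is standard HiveQL:

```sql
-- Disable auto-conversion for this session only; cluster config is untouched
SET hive.auto.convert.join=false;

-- Or force a map join explicitly for a specific small table.
-- Note: since Hive 0.11 the hint is ignored unless
-- hive.ignore.mapjoin.hint is set to false.
SELECT /*+ MAPJOIN(t2) */ t1.Id, t1.Name, t2.Id, t2.Name
FROM table1 t1
JOIN table2 t2 ON t1.id = t2.id;
```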

I also ran into this issue on the Cloudera Quick Start VM 5.12; it was resolved by running the following statement at the hive prompt:

SET hive.auto.convert.join=false;

I hope the information below is even more helpful:

Step 1: Import all tables from the retail_db MySQL database

sqoop import-all-tables \
  --connect jdbc:mysql://quickstart.cloudera:3306/retail_db \
  --username retail_dba \
  --password cloudera \
  --num-mappers 1 \
  --warehouse-dir /user/cloudera/sqoop/import-all-tables-text \
  --as-textfile

Step 2: Create a database named retail_db and the required tables in Hive

create database retail_db;
use retail_db;

create external table categories(
    category_id int,
    category_department_id int,
    category_name string)
row format delimited fields terminated by ','
stored as textfile
location '/user/cloudera/sqoop/import-all-tables-text/categories';

create external table customers(
    customer_id int,
    customer_fname string,
    customer_lname string,
    customer_email string,
    customer_password string,
    customer_street string,
    customer_city string,
    customer_state string,
    customer_zipcode string)
row format delimited fields terminated by ','
stored as textfile
location '/user/cloudera/sqoop/import-all-tables-text/customers';

create external table departments(
    department_id int,
    department_name string)
row format delimited fields terminated by ','
stored as textfile
location '/user/cloudera/sqoop/import-all-tables-text/departments';

create external table order_items(
    order_item_id int,
    order_item_order_id int,
    order_item_product_id int,
    order_item_quantity int,
    order_item_subtotal float,
    order_item_product_price float)
row format delimited fields terminated by ','
stored as textfile
location '/user/cloudera/sqoop/import-all-tables-text/order_items';

create external table orders(
    order_id int,
    order_date string,
    order_customer_id int,
    order_status string)
row format delimited fields terminated by ','
stored as textfile
location '/user/cloudera/sqoop/import-all-tables-text/orders';

create external table products(
    product_id int,
    product_category_id int,
    product_name string,
    product_description string,
    product_price float,
    product_image string)
row format delimited fields terminated by ','
stored as textfile
location '/user/cloudera/sqoop/import-all-tables-text/products';

Step 3: Run a JOIN query

SET hive.cli.print.current.db=true;
select o.order_date, sum(oi.order_item_subtotal)
from orders o
join order_items oi on (o.order_id = oi.order_item_order_id)
group by o.order_date
limit 10;

The query above produced the following problem:

Query ID = cloudera_20171029182323_6eedd682-256b-466c-b2e5-58ea100715fb
Total jobs = 1
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask

Step 4: The problem above was resolved by running the statement below at the hive prompt:

SET hive.auto.convert.join=false;

Step 5: Query result

select o.order_date, sum(oi.order_item_subtotal) from orders o join order_items oi on (o.order_id = oi.order_item_order_id) group by o.order_date limit 10;
Query ID = cloudera_20171029182525_cfc70553-89d2-4c61-8a14-4bbeecadb3cf
Total jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=
In order to set a constant number of reducers:
  set mapreduce.job.reduces=
Starting Job = job_1509278183296_0005, Tracking URL = http://quickstart.cloudera:8088/proxy/application_1509278183296_0005/
Kill Command = /usr/lib/hadoop/bin/hadoop job -kill job_1509278183296_0005
Hadoop job information for Stage-1: number of mappers: 2; number of reducers: 1
2017-10-29 18:25:19,861 Stage-1 map = 0%, reduce = 0%
2017-10-29 18:25:26,181 Stage-1 map = 50%, reduce = 0%, Cumulative CPU 2.72 sec
2017-10-29 18:25:27,240 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 5.42 sec
2017-10-29 18:25:32,479 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 8.01 sec
MapReduce Total cumulative CPU time: 8 seconds 10 msec
Ended Job = job_1509278183296_0005
Launching Job 2 out of 2
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=
In order to set a constant number of reducers:
  set mapreduce.job.reduces=
Starting Job = job_1509278183296_0006, Tracking URL = http://quickstart.cloudera:8088/proxy/application_1509278183296_0006/
Kill Command = /usr/lib/hadoop/bin/hadoop job -kill job_1509278183296_0006
Hadoop job information for Stage-2: number of mappers: 1; number of reducers: 1
2017-10-29 18:25:38,676 Stage-2 map = 0%, reduce = 0%
2017-10-29 18:25:43,925 Stage-2 map = 100%, reduce = 0%, Cumulative CPU 0.85 sec
2017-10-29 18:25:49,142 Stage-2 map = 100%, reduce = 100%, Cumulative CPU 2.13 sec
MapReduce Total cumulative CPU time: 2 seconds 130 msec
Ended Job = job_1509278183296_0006
MapReduce Jobs Launched:
Stage-Stage-1: Map: 2  Reduce: 1  Cumulative CPU: 8.01 sec  HDFS Read: 8422614  HDFS Write: 17364  SUCCESS
Stage-Stage-2: Map: 1  Reduce: 1  Cumulative CPU: 2.13 sec  HDFS Read: 22571  HDFS Write: 407  SUCCESS
Total MapReduce CPU Time Spent: 10 seconds 140 msec
OK
2013-07-25 00:00:00.0   68153.83132743835
2013-07-26 00:00:00.0   136520.17266082764
2013-07-27 00:00:00.0   101074.34193611145
2013-07-28 00:00:00.0   87123.08192253113
2013-07-29 00:00:00.0   137287.09244918823
2013-07-30 00:00:00.0   102745.62186431885
2013-07-31 00:00:00.0   131878.06256484985
2013-08-01 00:00:00.0   129001.62241744995
2013-08-02 00:00:00.0   109347.00200462341
2013-08-03 00:00:00.0   95266.89186286926
Time taken: 35.721 seconds, Fetched: 10 row(s)

Try setting the AuthMech parameter when connecting.

I set it to 2 and specified a username.

That solved my problem with a CTAS.

Regards, Okan

In my case, adding the configuration parameter before executing solved this problem. The problem is caused by a write-access conflict; you should use the configuration to make sure you have write access.
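The write-access angle mentioned above can be checked against the scratch directories Hive uses for plan files (the paths below are the common defaults, an assumption rather than values confirmed for any particular cluster):

```shell
# Check the HDFS scratch dir and the local one used for -local-NNNN plan files
hdfs dfs -ls -d /tmp/hive    # default hive.exec.scratchdir
ls -ld /tmp/hive             # default hive.exec.local.scratchdir on the HS2 host
```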


Hi,

Currently I am working with Hive, and while using one command to check all the records inside a table, it's showing me an error:

Command:

Error:

I am not getting why the error is happening and what can be the solution to this?







May 15, 2019 in Big Data Hadoop by diana • 3,927 views

1 answer to this question.

Hey,

You are getting the error because the query you used is supposed to show the number of rows in your table, but there is no data inside your table.

You can use a query to list down the elements, just use:

select * from table_name;

I hope it works.
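For context, a small sketch of the difference (the table and its schema here are hypothetical): a count query summarizes the rows, while a plain projection lists them, and on an empty table the latter simply returns no rows.

```sql
-- Hypothetical empty table
CREATE TABLE IF NOT EXISTS table_name (id INT, name STRING);

SELECT COUNT(*) FROM table_name;  -- summarizes: 0 rows on an empty table
SELECT * FROM table_name;         -- lists rows: returns nothing if empty
```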






answered May 15, 2019 • 65,910 points



