SQL error 40000/42000: Error while compiling statement: FAILED: NullPointerException null


Contents

  1. Support Questions
  2. Hive sync error on non-partitioned Hive table: NullPointerException #545
  3. Understanding NullPointerException

Support Questions

Created on 01-20-2020 03:36 AM (last edited on 01-20-2020 08:53 AM by VidyaSargur)

I am unable to access (SELECT from) tables in Hive, both external and internal; permissions are managed by Ranger.

Please find the error below:

0: jdbc:hive2://w0lxqhdp03:2181,w0lxq> select * from pcr_project;
Error: Error while compiling statement: FAILED: NullPointerException null (state=42000,code=40000)
0: jdbc:hive2://w0lxqhdp03:2181,w0lxq> explain select * from pcr_project;
Error: Error while compiling statement: FAILED: NullPointerException null (state=42000,code=40000)
0: jdbc:hive2://w0lxqhdp03:2181,w0lxq>

Created 01-20-2020 04:28 AM

Please try the full path to the table, i.e.:

select * from dbname.tablename;

or switch to the database first:

use dbname;               -- switch to said db
select * from tablename;  -- select from the table in the db selected above

Created 01-20-2020 04:33 AM

@lyubomirangelo Still getting the same error:

select * from asop.test;
Error: Error while compiling statement: FAILED: NullPointerException null (state=42000,code=40000)

Full trace from the Hive server (truncated as posted):

2020-01-20T07:27:50,724 INFO [fb06c5bc-0ca8-4f8f-93d8-76bd188d1e4c HiveServer2-Handler-Pool: Thread-115]: session.SessionState (:()) - Resetting thread name to HiveServer2-Handler-Pool: Thread-115
2020-01-20T07:27:50,724 WARN [HiveServer2-Handler-Pool: Thread-115]: thrift.ThriftCLIService (:()) - Error executing statement:
org.apache.hive.service.cli.HiveSQLException: Error while compiling statement: FAILED: NullPointerException null
at org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:335)

[?:1.8.0_112]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_112]
Caused by: java.lang.NullPointerException
at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.checkResultsCache(SemanticAnalyzer.java:15019)

Created 01-20-2020 04:54 AM

It looks like the query is not retrieving any results from that specific table. Could you please attach the output of "show create table pcr_project"?

Meanwhile, verify the data and its ownership in the table's HDFS path (for example with hdfs dfs -ls on the table location).

Created 01-20-2020 06:21 PM

That is the expected result once you enable the Ranger plugin for Hive since, as you said, permissions are managed in Ranger.

Guessing from your scrambled URL jdbc:hive2://w0lxqhdp03:2181/w0lxq: check the Ranger policies and ensure the user executing the SQL has SELECT on the underlying database, and have a look at the Hive/Ranger security guidance.

Created 01-21-2020 12:04 AM

@Shelton I have created a sample DB and table and granted access in Ranger, but I am still not able to access the table.

Attaching the Ranger policy screenshot; the user has all the permissions.

Note: we have integrated Hive with LDAP.

select * from test;
Error: Error while compiling statement: FAILED: NullPointerException null (state=42000,code=40000)

Created 01-21-2020 12:13 AM

@Prakashcit Please find the show create table output:

CREATE EXTERNAL TABLE `asop.pcr_project`(
  `project_id` int,
  `project_name` string,
  `imp_start_date` string,
  `imp_end_date` string,
  `project_type` string,
  `region` string,
  `all_countries` string,
  `dept_div` string,
  `project_status` string)
ROW FORMAT SERDE
  'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
WITH SERDEPROPERTIES (
  'field.delim'='|',
  'line.delim'='\n',
  'serialization.format'='|')
STORED AS INPUTFORMAT
  'org.apache.hadoop.mapred.TextInputFormat'
OUTPUTFORMAT
  'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION
  'hdfs://datalakeqa/data/operations/asop/pcr_project'
TBLPROPERTIES (
  'bucketing_version'='2',
  'transient_lastDdlTime'='1579513951')

As checked, the HDFS data's user and group are hive:hadoop, but I have granted the permissions in Ranger.

Source

Hive sync error on non-partitioned Hive table: NullPointerException #545

Below is the error:
com.uber.hoodie.hive.HoodieHiveSyncException: Failed in executing SQL ALTER TABLE databus_realtime.databus_realtime_databus_sub_hd_t_hudi_sub_hd: REPLACE COLUMNS(_hoodie_commit_time string, _hoodie_commit_seqno string, _hoodie_record_key string, _hoodie_partition_path string, _hoodie_file_name string, id bigint, dbname string, ctime string, mtime string, test1 int, test2_prod string, test3 string, _commit_ts bigint ) cascade
        at com.uber.hoodie.hive.HoodieHiveClient.updateHiveSQL(HoodieHiveClient.java:459)
        at com.uber.hoodie.hive.HoodieHiveClient.updateTableDefinition(HoodieHiveClient.java:249)
        at com.uber.hoodie.hive.HiveSyncTool.syncSchema(HiveSyncTool.java:145)
        at com.uber.hoodie.hive.HiveSyncTool.syncHoodieTable(HiveSyncTool.java:96)
        at com.uber.hoodie.hive.HiveSyncTool.syncHoodieTable(HiveSyncTool.java:68)
        at com.lianjia.dtarch.databus.streaming.hudi.service.HudiService.syncToHive(HudiService.java:92)
        at com.lianjia.dtarch.databus.streaming.hudi.service.HudiService.writeWithCompactAndSync(HudiService.java:69)
        at com.lianjia.dtarch.databus.streaming.hudi.KfkHudiConsumer.lambda$saveToHudi$c06d719c$1(KfkHudiConsumer.java:165)
        at org.apache.spark.streaming.api.java.JavaDStreamLike$$anonfun$foreachRDD$1.apply(JavaDStreamLike.scala:272)
        at org.apache.spark.streaming.api.java.JavaDStreamLike$$anonfun$foreachRDD$1.apply(JavaDStreamLike.scala:272)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$1$$anonfun$apply$mcV$sp$3.apply(DStream.scala:628)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$1$$anonfun$apply$mcV$sp$3.apply(DStream.scala:628)
        at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ForEachDStream.scala:51)
        at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:51)
        at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:51)
        at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:416)
        at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:50)
        at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:50)
        at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:50)
        at scala.util.Try$.apply(Try.scala:192)
        at org.apache.spark.streaming.scheduler.Job.run(Job.scala:39)
        at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply$mcV$sp(JobScheduler.scala:257)
        at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:257)
        at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:257)
        at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
        at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:256)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hive.service.cli.HiveSQLException: Error while compiling statement: FAILED: NullPointerException null
        at org.apache.hive.jdbc.Utils.verifySuccess(Utils.java:256)
        at org.apache.hive.jdbc.Utils.verifySuccessWithInfo(Utils.java:242)
        at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:254)
        at org.apache.commons.dbcp.DelegatingStatement.execute(DelegatingStatement.java:264)
        at org.apache.commons.dbcp.DelegatingStatement.execute(DelegatingStatement.java:264)
        at com.uber.hoodie.hive.HoodieHiveClient.updateHiveSQL(HoodieHiveClient.java:457)
        ... 28 more
Caused by: org.apache.hive.service.cli.HiveSQLException: Error while compiling statement: FAILED: NullPointerException null
        at org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:315)
        at org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:112)
        at org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:181)
        at org.apache.hive.service.cli.operation.Operation.run(Operation.java:257)
        at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:388)
        at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(HiveSessionImpl.java:375)
        at sun.reflect.GeneratedMethodAccessor53.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:78)
        at org.apache.hive.service.cli.session.HiveSessionProxy.access$000(HiveSessionProxy.java:36)
        at org.apache.hive.service.cli.session.HiveSessionProxy$1.run(HiveSessionProxy.java:63)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
        at org.apache.hive.service.cli.session.HiveSessionProxy.invoke(HiveSessionProxy.java:59)
        at com.sun.proxy.$Proxy20.executeStatementAsync(Unknown Source)
        at org.apache.hive.service.cli.CLIService.executeStatementAsync(CLIService.java:274)
        at org.apache.hive.service.cli.thrift.ThriftCLIService.ExecuteStatement(ThriftCLIService.java:486)
        at org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1313)
        at org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1298)
        at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
        at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
        at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:692)
        at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:285)
        ... 3 more
Caused by: java.lang.NullPointerException: null
        at org.apache.hadoop.hive.metastore.Warehouse.makePartName(Warehouse.java:541)
        at org.apache.hadoop.hive.metastore.Warehouse.makePartName(Warehouse.java:483)
        at org.apache.hadoop.hive.ql.metadata.Partition.getName(Partition.java:224)
        at org.apache.hadoop.hive.ql.hooks.Entity.computeName(Entity.java:339)
        at org.apache.hadoop.hive.ql.hooks.Entity.<init>(Entity.java:208)
        at org.apache.hadoop.hive.ql.hooks.WriteEntity.<init>(WriteEntity.java:104)
        at org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.addInputsOutputsAlterTable(DDLSemanticAnalyzer.java:1417)
        at org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.addInputsOutputsAlterTable(DDLSemanticAnalyzer.java:1394)
        at org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.analyzeAlterTableModifyCols(DDLSemanticAnalyzer.java:2642)
        at org.apache.hadoop.hive.ql.parse.DDLSemanticAnalyzer.analyzeInternal(DDLSemanticAnalyzer.java:272)
        at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:227)
        at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:424)
        at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:308)
        at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1122)
        at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1116)
        at org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:110)
        ... 26 more


This was my fault: I had specified a partition field in the HiveSyncConfig even though the table is not partitioned.
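For reference, here is a sketch of what the corrected sync setup might look like. It assumes the old com.uber.hoodie HiveSyncConfig with public fields and the NonPartitionedExtractor that ships with Hudi; field names vary across Hudi versions, so treat it as illustrative only:

import java.util.Collections;
import com.uber.hoodie.hive.HiveSyncConfig;

class SyncConfigFix {
    static HiveSyncConfig nonPartitionedConfig() {
        HiveSyncConfig syncConfig = new HiveSyncConfig();
        syncConfig.databaseName = "databus_realtime";
        syncConfig.tableName = "databus_realtime_databus_sub_hd_t_hudi_sub_hd";
        // For a non-partitioned table, declare no partition fields and use the
        // extractor intended for non-partitioned tables:
        syncConfig.partitionFields = Collections.emptyList();
        syncConfig.partitionValueExtractorClass = "com.uber.hoodie.hive.NonPartitionedExtractor";
        return syncConfig;
    }
}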


Source

Understanding NullPointerException

This short article is aimed mainly at beginning Java developers, although I regularly see experienced colleagues staring helplessly at a stack trace reporting a NullPointerException (NPE for short), unable to draw any conclusions without a debugger. Of course, it is better not to let your application get to an NPE at all: null-annotations, input-parameter validation, and other techniques will help. But when the patient is already sick, you should treat him rather than nag that he went out in winter without a hat.

So, you have learned that your application crashed with an NPE, and all you have is the stack trace. Perhaps a client sent it to you, or you spotted it in the logs yourself. Let's see what conclusions can be drawn from it.

An NPE can occur in three cases:

  1. It was thrown explicitly with throw
  2. Somebody threw null itself with throw
  3. Somebody dereferenced a null reference

In the second and third cases, the message in the exception object is always null; in the first it can be anything. For example, java.lang.System.setProperty throws an NPE with the message "key can't be null" if you pass null as the key. If you check every input parameter of your methods in the same way and throw exceptions with clear messages, you will not need the rest of this article.

Dereferencing a null reference can happen in the following cases:

  1. Calling a non-static method of a class
  2. Reading or writing a non-static field
  3. Reading or writing an array element
  4. Reading an array's length
  5. An implicit call to intValue(), longValue(), etc. during unboxing

It is important to understand that one of these must happen exactly on the line where the stack trace ends, not anywhere else.

Consider the following code:
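(The article's code listings were images in the source and were lost; the sketch below is a reconstruction consistent with the analysis that follows. All class and member names are assumptions; comments mark the lines the text refers to.)

public class TestNPE {
    static class Formatter {
        // A static method: calling it through an instance never reads the instance.
        static String format(String s) {
            return "[" + s.trim() + "]";            // "line 9": dereferences s
        }
    }

    static class Data {
        String val;
        String getValue() { return val; }           // returns val without dereferencing it
    }

    static void handle(Formatter f, Data d) {
        System.out.println(f.format(d.getValue())); // "line 15": the stack trace points here
    }
}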

The handle method was called from somewhere with some arguments, and you got:
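(A hypothetical trace matching the description; the original was also an image:)

Exception in thread "main" java.lang.NullPointerException
        at TestNPE.handle(TestNPE.java:15)
        at TestNPE.main(TestNPE.java:20)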

What caused the exception: f, d, or d.val? It is easy to see that f is not even read on this line, since the format method is static. Calling a static method through a class instance is bad style, of course, but such code does occur (it may, for example, appear after refactoring). One way or another, the value of f cannot be the cause of the exception. If d were non-null and d.val were null, the exception would have occurred inside the format method (on line 9). Likewise, the problem could not be inside the getValue method, even if it were more complex. Since the exception is on line 15, only one possible cause remains: null in the parameter d.

Here is another example:
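(Again a reconstruction under the same assumptions; format is now an instance method, and line 12 touches both f and s:)

public class TestNPE2 {
    static class Formatter {
        String format(String s) {                 // non-static: f is read at the call site
            return "[" + s + "]";                 // concatenation is null-safe
        }
    }

    static void handle(Formatter f, String s) {
        if (s.isEmpty()) {                        // line 9: s is dereferenced here first
            return;
        }
        System.out.println(f.format(s.trim()));   // line 12: reads f and dereferences s
    }
}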

Again we call handle and get:
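(Hypothetical trace:)

Exception in thread "main" java.lang.NullPointerException
        at TestNPE2.handle(TestNPE2.java:12)
        at TestNPE2.main(TestNPE2.java:18)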

Now the format method is non-static, and f may well be the source of the error. But s cannot possibly be: s was already dereferenced on line 9. If s were null, the exception would have occurred on line 9. Looking through the code logic preceding the exception quite often helps rule out some of the candidates.

You do have to be careful with that logic, though. Suppose the condition on line 9 had been written like this:
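(A reconstruction of the null-safe rewrite:)

        if ("".equals(s)) {                       // equals(null) simply returns false; s is no longer dereferenced
            return;
        }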

Now the failing line itself contains no access to the fields or methods of s, and the equals method handles null correctly by returning false, so in this case either f or s could have caused the error on line 12. When analyzing the code above the failing line, check the documentation or the sources for how the methods and constructs involved react to null. The string concatenation operator +, for example, never throws an NPE.

Here is some more code (the Java version may matter here; I am using Oracle JDK 1.7.0_45):
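(Reconstruction; the dump signature and MyObject are assumptions consistent with the analysis below:)

import java.io.PrintWriter;

public class TestNPE3 {
    static class MyObject {
        @Override
        public String toString() {
            return null;                          // the hidden culprit that the analysis below uncovers
        }
    }

    static void dump(PrintWriter pw, MyObject obj) {
        pw.print(obj);                            // the NPE surfaces inside PrintWriter, below this call
    }
}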

We call the dump method and get this exception:
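(Hypothetical trace; of these line numbers, only PrintWriter.java:473 is vouched for by the article:)

Exception in thread "main" java.lang.NullPointerException
        at java.io.PrintWriter.write(PrintWriter.java:473)
        at java.io.PrintWriter.print(PrintWriter.java:617)
        at TestNPE3.dump(TestNPE3.java:14)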

The pw parameter cannot be null, otherwise we would not have made it into the print method. Perhaps obj is null? It is easy to check that pw.print(null) prints the string "null" without any exception. Let's go from the end. The exception happened here:
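(PrintWriter.write(String) as it appears in the JDK sources:)

    public void write(String s) {
        write(s, 0, s.length());                  // line 473: the only possible NPE is s.length() on a null s
    }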

On line 473 there is only one possible cause for an NPE: the call to length on the string s. So s contains null. How could that happen? Let's go up the stack:
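(One frame up, PrintWriter.print(Object) in the JDK:)

    public void print(Object obj) {
        write(String.valueOf(obj));
    }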

The write method receives the result of calling String.valueOf. In what case can it return null?
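(String.valueOf(Object) in the JDK:)

    public static String valueOf(Object obj) {
        return (obj == null) ? "null" : obj.toString();
    }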

The only possible option: obj is not null, but obj.toString() returned null. So the error must be sought in the overridden toString() method of our MyObject. Note that MyObject did not appear in the stack trace at all, yet that is exactly where the problem is. Such simple analysis can save a lot of time otherwise spent trying to reproduce a tricky situation in a debugger.

Don't forget about treacherous autoboxing either. Suppose we have this code:
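(Reconstruction; only the fact that the unboxing happens on line 3 of TestNPE.java is taken from the article:)

public class TestNPE {
    static int process(MyContainer obj) {
        int count = obj.getCount();               // line 3: the returned Integer is unboxed to int here
        return count * 2;
    }
}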

And this exception:
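(Hypothetical trace:)

Exception in thread "main" java.lang.NullPointerException
        at TestNPE.process(TestNPE.java:3)
        at TestNPE.main(TestNPE.java:8)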

At first glance, the only option is null in the obj parameter. But take a look at the MyContainer class:
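(Reconstruction of the container class:)

public class MyContainer {
    private Integer count;                        // may never be initialized and stay null

    public Integer getCount() {
        return count;                             // returning null here blows up at the unboxing call site
    }
}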

We can see that getCount() returns an Integer, which is automatically converted to int exactly on line 3 of TestNPE.java; so if getCount() returned null, we get exactly the exception we are seeing. When you discover a class like MyContainer, check the version-control history for its author and sprinkle crumbs under his blanket.

Remember that if a method takes an int parameter and you pass it an Integer that is null, the unboxing happens before the call, so the NPE will point at the line with the call.

In closing, let me wish that you fire up the debugger less often: with a bit of practice, analyzing the code in your head is frequently faster than reproducing a hard-to-catch situation.

Source

Checking for null values in a map column in Hive (1.2.1, Hortonworks) interestingly returns a null pointer exception:

create table npe (m map<bigint,bigint>);
select count(*) from npe where m is null;
Error: Error while compiling statement: FAILED: NullPointerException null (state=42000,code=40000)

The error happens at parsing time when Hive tries to estimate data size. From hiveserver2.log:

Caused by: java.lang.NullPointerException
at org.apache.hadoop.hive.ql.stats.StatsUtils.getSizeOfMap(StatsUtils.java:1096)

Interestingly, getting not null is fine:

select count(*) from npe where m is not null; -- returns 0

If you think like me, you will think ‘haha! not not null should work!’

select count(*) from npe where not m is not null; -- does not work

If you are smarter than me, you will have guessed before trying that Hive optimises the double negation away, and gives another NPE.

But going in this direction, we can still trick Hive by casting the boolean to int:

select count(*) from npe where int(m is not null)=0; -- works

This happens both when there is no data at all and when there are real NULLs in the table. By a real NULL I mean that a SELECT would show NULL, which happens only when you add a column to an existing table. Indeed, you cannot yourself insert NULL into a complex column:

with a as (select null) insert into npe select * from a;
Error: Error while compiling statement: FAILED: SemanticException [Error 10044]: Line 1:36 Cannot insert into target table because column number/types are different 'npe': Cannot convert column 0 from void to map<bigint,bigint>. (state=42000,code=10044)

You have to create an empty map object:

with a as (select map(cast(null as bigint), cast(null as bigint))) insert into npe select * from a;

Then, of course, the empty map object is not (a real) NULL and if you want to look for null you have to fudge a bit, looking at the size of the map for instance:

select m, size(m), isnull(m) from npe;
+-----+------+--------+
| m   | _c1  | _c2    |
+-----+------+--------+
| {}  | 0    | false  |
+-----+------+--------+

Ran into this error message, "Error while compiling statement: FAILED: NullPointerException null", when I specified an incorrect table name in the merge statement.

> create table src (col1 int,col2 int);
No rows affected (0.231 seconds)
> create table trgt (tcol1 int,tcol2 int);
No rows affected (0.182 seconds)
> insert into src values (1,232);
> merge into trgt using (select * from src) sub on sub.col1 = invalidtablename.tcol1 when not matched then insert values (sub.col1,sub.col2);
Error: Error while compiling statement: FAILED: NullPointerException null (state=42000,code=40000)

> merge into trgt using (select * from src) sub on sub.col1 = trgt.tcol1 when not matched then insert values (sub.col1,sub.col2);

INFO  : Session is already open
INFO  : Dag name: merge into trgt using ...(sub.col1,sub.col2)(Stage-1)
INFO  : Setting tez.task.scale.memory.reserve-fraction to 0.30000001192092896
INFO  : 

INFO  : Status: Running (Executing on YARN cluster with App id application_1485398058799_0129)

INFO  : Map 1: 0/1	Map 2: -/-	
INFO  : Map 1: 0(+1)/1	Map 2: -/-	
INFO  : Map 1: 0(+1)/1	Map 2: -/-	
INFO  : Map 1: 1/1	Map 2: -/-	
INFO  : Loading data to table tpch.trgt from hdfs:
INFO  : Table tpch.trgt stats: [numFiles=1, numRows=1, totalSize=4, rawDataSize=3]
No rows affected (7.709 seconds)

Hiveserver2 logs:

2017-01-30 19:34:09,972 INFO  [HiveServer2-Handler-Pool: Thread-70]: parse.ParseDriver (ParseDriver.java:parse(185)) - Parsing command: merge into trgt using (select * from src) sub on sub.col1 = target.tcol1 when not matched then insert values (sub.col1,sub.col2)
2017-01-30 19:34:09,975 INFO  [HiveServer2-Handler-Pool: Thread-70]: parse.ParseDriver (ParseDriver.java:parse(209)) - Parse Completed
2017-01-30 19:34:09,976 INFO  [HiveServer2-Handler-Pool: Thread-70]: log.PerfLogger (PerfLogger.java:PerfLogEnd(177)) - </PERFLOG method=parse start=1485804849971 end=1485804849976 duration=5 from=org.apache.hadoop.hive.ql.Driver>
2017-01-30 19:34:09,976 INFO  [HiveServer2-Handler-Pool: Thread-70]: log.PerfLogger (PerfLogger.java:PerfLogBegin(149)) - <PERFLOG method=semanticAnalyze from=org.apache.hadoop.hive.ql.Driver>
2017-01-30 19:34:09,977 INFO  [HiveServer2-Handler-Pool: Thread-70]: metastore.HiveMetaStore (HiveMetaStore.java:logInfo(824)) - 13: get_table : db=tpch tbl=trgt
2017-01-30 19:34:09,977 INFO  [HiveServer2-Handler-Pool: Thread-70]: HiveMetaStore.audit (HiveMetaStore.java:logAuditEvent(393)) - ugi=hive     ip=unknown-ip-addr      cmd=get_table : db=tpch tbl=trgt
2017-01-30 19:34:10,031 ERROR [HiveServer2-Handler-Pool: Thread-70]: ql.Driver (SessionState.java:printError(980)) - FAILED: NullPointerException null
java.lang.NullPointerException
        at org.apache.hadoop.hive.ql.parse.UpdateDeleteSemanticAnalyzer$OnClauseAnalyzer.getPredicate(UpdateDeleteSemanticAnalyzer.java:1143)
        at org.apache.hadoop.hive.ql.parse.UpdateDeleteSemanticAnalyzer$OnClauseAnalyzer.access$400(UpdateDeleteSemanticAnalyzer.java:1049)
        at org.apache.hadoop.hive.ql.parse.UpdateDeleteSemanticAnalyzer.handleInsert(UpdateDeleteSemanticAnalyzer.java:1025)
        at org.apache.hadoop.hive.ql.parse.UpdateDeleteSemanticAnalyzer.analyzeMerge(UpdateDeleteSemanticAnalyzer.java:660)
        at org.apache.hadoop.hive.ql.parse.UpdateDeleteSemanticAnalyzer.analyzeInternal(UpdateDeleteSemanticAnalyzer.java:80)
        at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:230)
        at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:465)
        at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:321)
        at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1221)
        at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1215)
        at org.apache.hive.service.cli.operation.SQLOperation.prepare(SQLOperation.java:146)
        at org.apache.hive.service.cli.operation.SQLOperation.runInternal(SQLOperation.java:226)
        at org.apache.hive.service.cli.operation.Operation.run(Operation.java:276)
        at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementInternal(HiveSessionImpl.java:468)
        at org.apache.hive.service.cli.session.HiveSessionImpl.executeStatementAsync(HiveSessionImpl.java:456)
        at org.apache.hive.service.cli.CLIService.executeStatementAsync(CLIService.java:298)
        at org.apache.hive.service.cli.thrift.ThriftCLIService.ExecuteStatement(ThriftCLIService.java:506)
        at org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1317)
        at org.apache.hive.service.cli.thrift.TCLIService$Processor$ExecuteStatement.getResult(TCLIService.java:1302)
        at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
        at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
        at org.apache.hive.service.auth.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:56)
        at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)

The following query fails with the null pointer exception:

with `__all_dim__` as (
  select
    *
  from (
      select
        from_unixtime(unix_timestamp(`__bts__`) -1,'yyyy-MM-dd HH:mm:ss') as `__bts__`
      from (
          select
            concat_ws(' ', `d`.`date`, `t`.`time_of_day`) as `__bts__`
          from `ecmp`.`dim_date` as `d`
          left join `ecmp`.`dim_time_of_day` as `t` on 1 = 1
          where
            `d`.`date` >= '2020-01-12'
            and `d`.`date` <= '2020-01-13'
        ) as `__bts___tp1`
      where
        `__bts__` > '2020-01-12 00:00:00'
        and `__bts__` <= '2020-01-13 00:00:00'
        and second(`__bts__`) = 0
        and minute(`__bts__`) = 0
        and hour(`__bts__`) = 0
        and pmod(day(`__bts__`), 1) = 0
    ) as `__time_model__`
  cross join (
      select
        `dd_59282`.`tenant_pk` as `tenant_pk`,
        `dd_59282`.`tenant_id` as `tenant_id`,
        `dd_59282`.`tenant_name` as `tenant_name`
      from `ecmp`.`dim_tenant` as `dd_59282`
    ) as `tenant_pk`
  cross join (
      select
        'Fatal' as incident_level from system.dual
      union all
      select
        'Error' as incident_level from system.dual
      union all
      select
        'Warning' as incident_level from system.dual
      union all
      select
        'Info' as incident_level from system.dual
    ) as `incident_level`
)
,`t` as (
  select
    `tenant_pk`,
    `incident_accept_violation_count`,
    `incident_level`,
    rank() over( partition by `incident_level` order by `incident_accept_violation_count` DESC) as `incident_accept_violation_count_rank`,
    rank() over( partition by `incident_level` order by `incident_accept_violation_count` ASC) as `__inverse_rank__`
  from (
      select
        `__all_dim__`.*, -- after investigation: the backquoted table name followed by .* is what triggers the null pointer; removing the backquotes from the table name avoids it
        CAST(round(nvl(`incident_accept_violation_count`, 0), 0) as INT) as `incident_accept_violation_count`
      from `__all_dim__`
      left join (
          select
            `incident_level`,
            `tenant_pk`,
            count(*) as `incident_accept_violation_count`
          from `ecmp`.dwd_incident_accept
          where
            incident_accept_violation_flag = 'Violation'
            and `incident_accept_time` >= '2020-01-12 00:00:00'
            AND `incident_accept_time` <= '2020-01-12 23:59:59'
          group by
            `incident_level`,
            `tenant_pk`
        ) as `t1` on 1 = 1
        and `__all_dim__`.`tenant_pk` = `t1`.`tenant_pk`
        and `__all_dim__`.`incident_level` = `t1`.`incident_level`
    ) as `t0`
)

select
  `__all_dim__`.`__bts__` as `__bts__`,
  CAST(SYSDATE as STRING) as `__cts__`,
  CAST(dround(nvl(`incident_accept_violation_count`, 0), 0) as INT) as `incident_accept_violation_count`,
  CAST(dround(`incident_accept_violation_count_rank`, 0) as INT) as `incident_accept_violation_count_rank`,
  CAST(dround(`incident_accept_violation_count_win_rate`, 1) as DOUBLE) as `incident_accept_violation_count_win_rate`,
  `__all_dim__`.`incident_level` as `incident_level`,
  `__all_dim__`.`tenant_id` as `tenant_id`,
  `__all_dim__`.`tenant_name` as `tenant_name`,
  `__all_dim__`.`tenant_pk` as `tenant_pk`
from `__all_dim__`
left join (
    select
      '2020-01-12 23:59:59' as `__bts__`,
      `incident_accept_violation_count`,
      `incident_accept_violation_count_rank`,
      `incident_accept_violation_count_win_rate`,
      CAST(coalesce(`tp1`.`incident_level`) as STRING) as `incident_level`,
      CAST(coalesce(`tp1`.`tenant_pk`) as STRING) as `tenant_pk`
    from (
        select
          `t`.`tenant_pk`,
          `t`.`incident_level`,
          `t`.`incident_accept_violation_count`,
          `t`.`incident_accept_violation_count_rank`,
          if(`c`.ct = 1,null,(`__inverse_rank__` -1) /(`c`.ct -1) * 100) as `incident_accept_violation_count_win_rate`
        from `t`
        left join (
            select
              `incident_level`,
              count(*) as `ct`
            from `t`
            group by
              `incident_level`
          ) as `c` on 1 = 1
          and `t`.`incident_level` = `c`.`incident_level`
      ) as `tp1`
  ) as `__dws__` on `__all_dim__`.`__bts__` = `__dws__`.`__bts__`
  and `__all_dim__`.`tenant_pk` = `__dws__`.`tenant_pk`
  and `__all_dim__`.`incident_level` = `__dws__`.`incident_level`;

Modifying it to select * from t directly works with no problem, but the full first SELECT above errors out. An attempt without the backquotes around the alias fails differently:

[Code: 10009, SQL State: 42000] COMPILE FAILED: Semantic error: [Error 10009] Line 54:8 Invalid table alias. Error encountered near token '__all_dim__'
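For what it's worth, the trigger appears to be the combination of a backquoted CTE name and .* expansion. A minimal sketch of the apparent repro, under that assumption (table and column names made up):

with `w1` as (select 1 as c1)
select `w1`.* from `w1`;   -- reportedly fails with the NullPointerException on affected versions
-- select w1.* without backquotes, or listing the columns explicitly, reportedly works

The rewrite below renames the CTE to alldim without backquotes and selects the needed columns instead of using .* :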

with alldim as (
select
    *
  from (
      select
        from_unixtime(unix_timestamp(`__bts__`) -1,'yyyy-MM-dd HH:mm:ss') as `__bts__`
      from (
          select
            concat_ws(' ', `d`.`date`, t.`time_of_day`) as `__bts__`
          from `ecmp`.`dim_date` as `d`
          left join `ecmp`.`dim_time_of_day` as t on 1 = 1
          where
            `d`.`date` >= '2020-01-12'
            and `d`.`date` <= '2020-01-13'
        ) as `__bts___tp1`
      where
        `__bts__` > '2020-01-12 00:00:00'
        and `__bts__` <= '2020-01-13 00:00:00'
        and second(`__bts__`) = 0
        and minute(`__bts__`) = 0
        and hour(`__bts__`) = 0
        and pmod(day(`__bts__`), 1) = 0
    ) as `__time_model__`
  cross join (
      select
        `dd_59282`.`tenant_pk` as `tenant_pk`,
        `dd_59282`.`tenant_id` as `tenant_id`,
        `dd_59282`.`tenant_name` as `tenant_name`
      from `ecmp`.`dim_tenant` as `dd_59282`
    ) as `tenant_pk`
  cross join (
      select
        'Fatal' as incident_level from system.dual
      union all
      select
        'Error' as incident_level from system.dual
      union all
      select
        'Warning' as incident_level from system.dual
      union all
      select
        'Info' as incident_level from system.dual
    ) as `incident_level`
)
, t as (
select
    `tenant_pk`,
    `incident_accept_violation_count`,
    `incident_level`,
    rank() over( partition by `incident_level` order by `incident_accept_violation_count` DESC) as `incident_accept_violation_count_rank`,
    rank() over( partition by `incident_level` order by `incident_accept_violation_count` ASC) as `__inverse_rank__`
  from (
      select
        --alldim.*,
        alldim.`tenant_pk`, -- instead of alldim.*, select the few needed columns explicitly
        alldim.`incident_level`,
        CAST(round(nvl(`incident_accept_violation_count`, 0), 0) as INT) as `incident_accept_violation_count`
      from alldim
      left join (
          select
            `incident_level`,
            `tenant_pk`,
            count(*) as `incident_accept_violation_count`
          from `ecmp`.dwd_incident_accept
          where
            incident_accept_violation_flag = 'Violation'
            and `incident_accept_time` >= '2020-01-12 00:00:00'
            AND `incident_accept_time` <= '2020-01-12 23:59:59'
          group by
            `incident_level`,
            `tenant_pk`
        ) as `t1` on 1 = 1
        and alldim.`tenant_pk` = `t1`.`tenant_pk`
        and alldim.`incident_level` = `t1`.`incident_level`
    ) as `t0`)
 

 
select
  alldim.`__bts__` as `__bts__`,
  CAST(SYSDATE as STRING) as `__cts__`,
  CAST(round(nvl(`incident_accept_violation_count`, 0), 0) as INT) as `incident_accept_violation_count`, -- dround changed to round (handwriting error)
  CAST(round(`incident_accept_violation_count_rank`, 0) as INT) as `incident_accept_violation_count_rank`,
  CAST(round(`incident_accept_violation_count_win_rate`, 1) as DOUBLE) as `incident_accept_violation_count_win_rate`,
  alldim.`incident_level` as `incident_level`,
  alldim.`tenant_id` as `tenant_id`,
  alldim.`tenant_name` as `tenant_name`,
  alldim.`tenant_pk` as `tenant_pk`
from alldim
left join (
    select
      '2020-01-12 23:59:59' as `__bts__`,
      `incident_accept_violation_count`,
      `incident_accept_violation_count_rank`,
      `incident_accept_violation_count_win_rate`,
     CAST(coalesce(`tp1`.`incident_level`) as STRING) as `incident_level`,
     CAST(coalesce(`tp1`.`tenant_pk`) as STRING) as `tenant_pk`
   from (
       select
         t.`tenant_pk`,
         t.`incident_level`,
         t.`incident_accept_violation_count`,
         t.`incident_accept_violation_count_rank`,
         if(`c`.ct = 1,null,(`__inverse_rank__` -1) /(`c`.ct -1) * 100) as `incident_accept_violation_count_win_rate`
       from t
       left join (
           select
             `incident_level`,
             count(*) as `ct`
           from t
           group by
             `incident_level`
         ) as `c` on 1 = 1
         and t.`incident_level` = `c`.`incident_level`
     ) as `tp1`
  ) 
  as `__dws__` on alldim.`__bts__` = `__dws__`.`__bts__`
  and alldim.`tenant_pk` = `__dws__`.`tenant_pk`
  and alldim.`incident_level` = `__dws__`.`incident_level`;
