Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask

Thanks for sharing the finding and solution. Yes, the error message means that the required driver JAR files were missing from the task classpath, so adding them resolves the issue. Cheers, Eric
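For reference, the missing class in the log below (`com.mongodb.MongoClientURI`) lives in the MongoDB Java driver, so the map-task JVMs need that JAR, plus the mongo-hadoop connector JARs, on their classpath. A minimal sketch of the session-scoped fix from the Hive CLI or Beeline follows; the paths and version numbers are assumptions, so substitute the JAR files actually installed on your cluster:

```sql
-- Session-scoped fix: ADD JAR ships these files to the MapReduce tasks
-- via the distributed cache, so the task JVMs can resolve the classes.
-- Paths and versions below are illustrative assumptions.
ADD JAR /usr/lib/hive/lib/mongo-java-driver-3.2.2.jar;
ADD JAR /usr/lib/hive/lib/mongo-hadoop-core-2.0.2.jar;
ADD JAR /usr/lib/hive/lib/mongo-hadoop-hive-2.0.2.jar;
```

For a permanent fix, the same JARs can be listed in `hive.aux.jars.path` (or placed in Hive's auxlib directory) so that every session picks them up automatically.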

@EricL

Here is the full log for this error.

Note: I am not sure whether this is the exact log.

2019-12-18 13:34:19,469 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for application appattempt_1576701227741_0008_000001
2019-12-18 13:34:20,151 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
2019-12-18 13:34:20,151 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: YARN_AM_RM_TOKEN, Service: , Ident: (org.apache.hadoop.yarn.security.AMRMTokenIdentifier@5d69b59e)
2019-12-18 13:34:20,530 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter set in config org.apache.hadoop.hive.ql.io.HiveFileFormatUtils$NullOutputCommitter
2019-12-18 13:34:20,532 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter is org.apache.hadoop.hive.ql.io.HiveFileFormatUtils$NullOutputCommitter
2019-12-18 13:34:21,308 WARN [main] org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2019-12-18 13:34:21,482 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.jobhistory.EventType for class org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
2019-12-18 13:34:21,483 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.JobEventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher
2019-12-18 13:34:21,484 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.TaskEventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskEventDispatcher
2019-12-18 13:34:21,486 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher
2019-12-18 13:34:21,486 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventType for class org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler
2019-12-18 13:34:21,492 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.speculate.Speculator$EventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$SpeculatorEventDispatcher
2019-12-18 13:34:21,493 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.rm.ContainerAllocator$EventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter
2019-12-18 13:34:21,494 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncher$EventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerLauncherRouter
2019-12-18 13:34:21,540 INFO [main] org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils: Default file system [hdfs://quickstart.cloudera:8020]
2019-12-18 13:34:21,561 INFO [main] org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils: Default file system [hdfs://quickstart.cloudera:8020]
2019-12-18 13:34:21,589 INFO [main] org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils: Default file system [hdfs://quickstart.cloudera:8020]
2019-12-18 13:34:21,603 INFO [main] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Emitting job history data to the timeline server is not enabled
2019-12-18 13:34:21,659 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.JobFinishEvent$Type for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobFinishEventHandler
2019-12-18 13:34:21,866 INFO [main] org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2019-12-18 13:34:21,916 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2019-12-18 13:34:21,916 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MRAppMaster metrics system started
2019-12-18 13:34:21,928 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Adding job token for job_1576701227741_0008 to jobTokenSecretManager
2019-12-18 13:34:22,061 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Not uberizing job_1576701227741_0008 because: not enabled;
2019-12-18 13:34:22,106 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Input size for job job_1576701227741_0008 = 1. Number of splits = 1
2019-12-18 13:34:22,106 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Number of reduces for job job_1576701227741_0008 = 0
2019-12-18 13:34:22,106 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1576701227741_0008Job Transitioned from NEW to INITED
2019-12-18 13:34:22,107 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: MRAppMaster launching normal, non-uberized, multi-container job job_1576701227741_0008.
2019-12-18 13:34:22,143 INFO [main] org.apache.hadoop.ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 100
2019-12-18 13:34:22,156 INFO [Socket Reader #1 for port 43753] org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 43753
2019-12-18 13:34:22,199 INFO [main] org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB to the server
2019-12-18 13:34:22,200 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2019-12-18 13:34:22,200 INFO [IPC Server listener on 43753] org.apache.hadoop.ipc.Server: IPC Server listener on 43753: starting
2019-12-18 13:34:22,203 INFO [main] org.apache.hadoop.mapreduce.v2.app.client.MRClientService: Instantiated MRClientService at quickstart.cloudera/10.0.2.15:43753
2019-12-18 13:34:22,282 INFO [main] org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2019-12-18 13:34:22,292 INFO [main] org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2019-12-18 13:34:22,299 INFO [main] org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.mapreduce is not defined
2019-12-18 13:34:22,338 INFO [main] org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2019-12-18 13:34:22,345 INFO [main] org.apache.hadoop.http.HttpServer2: Added filter AM_PROXY_FILTER (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to context mapreduce
2019-12-18 13:34:22,345 INFO [main] org.apache.hadoop.http.HttpServer2: Added filter AM_PROXY_FILTER (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to context static
2019-12-18 13:34:22,348 INFO [main] org.apache.hadoop.http.HttpServer2: adding path spec: /mapreduce/*
2019-12-18 13:34:22,348 INFO [main] org.apache.hadoop.http.HttpServer2: adding path spec: /ws/*
2019-12-18 13:34:22,358 INFO [main] org.apache.hadoop.http.HttpServer2: Jetty bound to port 56166
2019-12-18 13:34:22,358 INFO [main] org.mortbay.log: jetty-6.1.26.cloudera.4
2019-12-18 13:34:22,388 INFO [main] org.mortbay.log: Extract jar:file:/usr/lib/hadoop-yarn/hadoop-yarn-common-2.6.0-cdh5.13.0.jar!/webapps/mapreduce to /tmp/Jetty_0_0_0_0_56166_mapreduce____.rvyfuf/webapp
2019-12-18 13:34:22,766 INFO [main] org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:56166
2019-12-18 13:34:22,768 INFO [main] org.apache.hadoop.yarn.webapp.WebApps: Web app /mapreduce started at 56166
2019-12-18 13:34:23,062 INFO [main] org.apache.hadoop.yarn.webapp.WebApps: Registered webapp guice modules
2019-12-18 13:34:23,066 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.speculate.DefaultSpeculator: JOB_CREATE job_1576701227741_0008
2019-12-18 13:34:23,067 INFO [main] org.apache.hadoop.ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 3000
2019-12-18 13:34:23,068 INFO [Socket Reader #1 for port 57335] org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 57335
2019-12-18 13:34:23,073 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2019-12-18 13:34:23,075 INFO [IPC Server listener on 57335] org.apache.hadoop.ipc.Server: IPC Server listener on 57335: starting
2019-12-18 13:34:23,108 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: nodeBlacklistingEnabled:true
2019-12-18 13:34:23,108 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: maxTaskFailuresPerNode is 3
2019-12-18 13:34:23,108 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: blacklistDisablePercent is 33
2019-12-18 13:34:23,171 INFO [main] org.apache.hadoop.yarn.client.RMProxy: Connecting to ResourceManager at quickstart.cloudera/10.0.2.15:8030
2019-12-18 13:34:23,257 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: maxContainerCapability: <memory:2816, vCores:2>
2019-12-18 13:34:23,257 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: queue: root.users.cloudera
2019-12-18 13:34:23,264 INFO [main] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Upper limit on the thread pool size is 500
2019-12-18 13:34:23,264 INFO [main] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: The thread pool initial size is 10
2019-12-18 13:34:23,272 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1576701227741_0008Job Transitioned from INITED to SETUP
2019-12-18 13:34:23,274 INFO [CommitterEvent Processor #0] org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing the event EventType: JOB_SETUP
2019-12-18 13:34:23,282 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1576701227741_0008Job Transitioned from SETUP to RUNNING
2019-12-18 13:34:23,313 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1576701227741_0008_m_000000 Task Transitioned from NEW to SCHEDULED
2019-12-18 13:34:23,315 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1576701227741_0008_m_000000_0 TaskAttempt Transitioned from NEW to UNASSIGNED
2019-12-18 13:34:23,316 INFO [Thread-52] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: mapResourceRequest:<memory:128, vCores:1>
2019-12-18 13:34:23,337 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Event Writer setup for JobId: job_1576701227741_0008, File: hdfs://quickstart.cloudera:8020/user/cloudera/.staging/job_1576701227741_0008/job_1576701227741_0008_1.jhist
2019-12-18 13:34:23,772 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils: Default file system [hdfs://quickstart.cloudera:8020]
2019-12-18 13:34:23,884 INFO [IPC Server handler 0 on 43753] org.apache.hadoop.mapreduce.v2.app.client.MRClientService: Getting task report for MAP   job_1576701227741_0008. Report-size will be 1
2019-12-18 13:34:23,941 INFO [IPC Server handler 0 on 43753] org.apache.hadoop.mapreduce.v2.app.client.MRClientService: Getting task report for REDUCE   job_1576701227741_0008. Report-size will be 0
2019-12-18 13:34:24,264 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before Scheduling: PendingReds:0 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0 AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:0 ContRel:0 HostLocal:0 RackLocal:0
2019-12-18 13:34:24,294 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() for application_1576701227741_0008: ask=1 release= 0 newContainers=0 finishedContainers=0 resourcelimit=<memory:2560, vCores:2> knownNMs=1
2019-12-18 13:34:26,313 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated containers 1
2019-12-18 13:34:26,339 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned container container_1576701227741_0008_01_000002 to attempt_1576701227741_0008_m_000000_0
2019-12-18 13:34:26,342 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1 AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0 HostLocal:0 RackLocal:0
2019-12-18 13:34:26,404 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: The job-jar file on the remote FS is hdfs://quickstart.cloudera:8020/user/cloudera/.staging/job_1576701227741_0008/job.jar
2019-12-18 13:34:26,407 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: The job-conf file on the remote FS is /user/cloudera/.staging/job_1576701227741_0008/job.xml
2019-12-18 13:34:26,419 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Adding #0 tokens and #1 secret keys for NM use for launching container
2019-12-18 13:34:26,419 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Size of containertokens_dob is 1
2019-12-18 13:34:26,419 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Putting shuffle token in serviceData
2019-12-18 13:34:26,457 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1576701227741_0008_m_000000_0 TaskAttempt Transitioned from UNASSIGNED to ASSIGNED
2019-12-18 13:34:26,462 INFO [ContainerLauncher #0] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container container_1576701227741_0008_01_000002 taskAttempt attempt_1576701227741_0008_m_000000_0
2019-12-18 13:34:26,464 INFO [ContainerLauncher #0] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Launching attempt_1576701227741_0008_m_000000_0
2019-12-18 13:34:26,517 INFO [ContainerLauncher #0] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle port returned by ContainerManager for attempt_1576701227741_0008_m_000000_0 : 13562
2019-12-18 13:34:26,518 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt: [attempt_1576701227741_0008_m_000000_0] using containerId: [container_1576701227741_0008_01_000002 on NM: [quickstart.cloudera:8041]
2019-12-18 13:34:26,521 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1576701227741_0008_m_000000_0 TaskAttempt Transitioned from ASSIGNED to RUNNING
2019-12-18 13:34:26,522 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1576701227741_0008_m_000000 Task Transitioned from SCHEDULED to RUNNING
2019-12-18 13:34:27,346 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() for application_1576701227741_0008: ask=1 release= 0 newContainers=0 finishedContainers=0 resourcelimit=<memory:2048, vCores:1> knownNMs=1
2019-12-18 13:34:28,123 INFO [Socket Reader #1 for port 57335] SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for job_1576701227741_0008 (auth:SIMPLE)
2019-12-18 13:34:28,141 INFO [IPC Server handler 0 on 57335] org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID : jvm_1576701227741_0008_m_000002 asked for a task
2019-12-18 13:34:28,141 INFO [IPC Server handler 0 on 57335] org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID: jvm_1576701227741_0008_m_000002 given task: attempt_1576701227741_0008_m_000000_0
2019-12-18 13:34:29,250 FATAL [IPC Server handler 1 on 57335] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: attempt_1576701227741_0008_m_000000_0 - exited : java.lang.ClassNotFoundException: com.mongodb.MongoClientURI
	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
	at com.mongodb.hadoop.input.MongoInputSplit.readFields(MongoInputSplit.java:241)
	at com.mongodb.hadoop.hive.input.HiveMongoInputFormat$MongoHiveInputSplit.readFields(HiveMongoInputFormat.java:311)
	at org.apache.hadoop.hive.ql.io.HiveInputFormat$HiveInputSplit.readFields(HiveInputFormat.java:172)
	at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:71)
	at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:42)
	at org.apache.hadoop.mapred.MapTask.getSplitDetails(MapTask.java:372)
	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:432)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)

2019-12-18 13:34:29,251 INFO [IPC Server handler 1 on 57335] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Diagnostics report from attempt_1576701227741_0008_m_000000_0: Error: java.lang.ClassNotFoundException: com.mongodb.MongoClientURI
	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
	at com.mongodb.hadoop.input.MongoInputSplit.readFields(MongoInputSplit.java:241)
	at com.mongodb.hadoop.hive.input.HiveMongoInputFormat$MongoHiveInputSplit.readFields(HiveMongoInputFormat.java:311)
	at org.apache.hadoop.hive.ql.io.HiveInputFormat$HiveInputSplit.readFields(HiveInputFormat.java:172)
	at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:71)
	at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:42)
	at org.apache.hadoop.mapred.MapTask.getSplitDetails(MapTask.java:372)
	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:432)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)

2019-12-18 13:34:29,252 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics report from attempt_1576701227741_0008_m_000000_0: Error: java.lang.ClassNotFoundException: com.mongodb.MongoClientURI
	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
	at com.mongodb.hadoop.input.MongoInputSplit.readFields(MongoInputSplit.java:241)
	at com.mongodb.hadoop.hive.input.HiveMongoInputFormat$MongoHiveInputSplit.readFields(HiveMongoInputFormat.java:311)
	at org.apache.hadoop.hive.ql.io.HiveInputFormat$HiveInputSplit.readFields(HiveInputFormat.java:172)
	at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:71)
	at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:42)
	at org.apache.hadoop.mapred.MapTask.getSplitDetails(MapTask.java:372)
	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:432)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)

2019-12-18 13:34:29,258 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1576701227741_0008_m_000000_0 TaskAttempt Transitioned from RUNNING to FAIL_FINISHING_CONTAINER
2019-12-18 13:34:29,267 INFO [Thread-52] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 1 failures on node quickstart.cloudera
2019-12-18 13:34:29,269 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1576701227741_0008_m_000000_1 TaskAttempt Transitioned from NEW to UNASSIGNED
2019-12-18 13:34:29,270 INFO [Thread-52] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added attempt_1576701227741_0008_m_000000_1 to list of failed maps
2019-12-18 13:34:29,349 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before Scheduling: PendingReds:0 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:1 AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0 HostLocal:0 RackLocal:0
2019-12-18 13:34:29,352 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() for application_1576701227741_0008: ask=1 release= 0 newContainers=0 finishedContainers=0 resourcelimit=<memory:2048, vCores:1> knownNMs=1
2019-12-18 13:34:30,359 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received completed container container_1576701227741_0008_01_000002
2019-12-18 13:34:30,360 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated containers 1
2019-12-18 13:34:30,360 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics report from attempt_1576701227741_0008_m_000000_0: 
2019-12-18 13:34:30,360 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1576701227741_0008_m_000000_0 TaskAttempt Transitioned from FAIL_FINISHING_CONTAINER to FAILED
2019-12-18 13:34:30,360 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning container Container: [ContainerId: container_1576701227741_0008_01_000003, NodeId: quickstart.cloudera:8041, NodeHttpAddress: quickstart.cloudera:8042, Resource: <memory:512, vCores:1>, Priority: 5, Token: Token { kind: ContainerToken, service: 10.0.2.15:8041 }, ] to fast fail map
2019-12-18 13:34:30,360 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned from earlierFailedMaps
2019-12-18 13:34:30,360 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned container container_1576701227741_0008_01_000003 to attempt_1576701227741_0008_m_000000_1
2019-12-18 13:34:30,361 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1 AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0 HostLocal:0 RackLocal:0
2019-12-18 13:34:30,361 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1576701227741_0008_m_000000_1 TaskAttempt Transitioned from UNASSIGNED to ASSIGNED
2019-12-18 13:34:30,362 INFO [ContainerLauncher #1] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Processing the event EventType: CONTAINER_COMPLETED for container container_1576701227741_0008_01_000002 taskAttempt attempt_1576701227741_0008_m_000000_0
2019-12-18 13:34:30,365 INFO [ContainerLauncher #2] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container container_1576701227741_0008_01_000003 taskAttempt attempt_1576701227741_0008_m_000000_1
2019-12-18 13:34:30,365 INFO [ContainerLauncher #2] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Launching attempt_1576701227741_0008_m_000000_1
2019-12-18 13:34:30,378 INFO [ContainerLauncher #2] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle port returned by ContainerManager for attempt_1576701227741_0008_m_000000_1 : 13562
2019-12-18 13:34:30,378 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt: [attempt_1576701227741_0008_m_000000_1] using containerId: [container_1576701227741_0008_01_000003 on NM: [quickstart.cloudera:8041]
2019-12-18 13:34:30,378 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1576701227741_0008_m_000000_1 TaskAttempt Transitioned from ASSIGNED to RUNNING
2019-12-18 13:34:31,366 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() for application_1576701227741_0008: ask=1 release= 0 newContainers=0 finishedContainers=0 resourcelimit=<memory:2048, vCores:1> knownNMs=1
2019-12-18 13:34:32,025 INFO [Socket Reader #1 for port 57335] SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for job_1576701227741_0008 (auth:SIMPLE)
2019-12-18 13:34:32,037 INFO [IPC Server handler 4 on 57335] org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID : jvm_1576701227741_0008_m_000003 asked for a task
2019-12-18 13:34:32,037 INFO [IPC Server handler 4 on 57335] org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID: jvm_1576701227741_0008_m_000003 given task: attempt_1576701227741_0008_m_000000_1
2019-12-18 13:34:33,199 FATAL [IPC Server handler 1 on 57335] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: attempt_1576701227741_0008_m_000000_1 - exited : java.lang.ClassNotFoundException: com.mongodb.MongoClientURI
	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
	at com.mongodb.hadoop.input.MongoInputSplit.readFields(MongoInputSplit.java:241)
	at com.mongodb.hadoop.hive.input.HiveMongoInputFormat$MongoHiveInputSplit.readFields(HiveMongoInputFormat.java:311)
	at org.apache.hadoop.hive.ql.io.HiveInputFormat$HiveInputSplit.readFields(HiveInputFormat.java:172)
	at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:71)
	at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:42)
	at org.apache.hadoop.mapred.MapTask.getSplitDetails(MapTask.java:372)
	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:432)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)

2019-12-18 13:34:33,200 INFO [IPC Server handler 1 on 57335] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Diagnostics report from attempt_1576701227741_0008_m_000000_1: Error: java.lang.ClassNotFoundException: com.mongodb.MongoClientURI
	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
	at com.mongodb.hadoop.input.MongoInputSplit.readFields(MongoInputSplit.java:241)
	at com.mongodb.hadoop.hive.input.HiveMongoInputFormat$MongoHiveInputSplit.readFields(HiveMongoInputFormat.java:311)
	at org.apache.hadoop.hive.ql.io.HiveInputFormat$HiveInputSplit.readFields(HiveInputFormat.java:172)
	at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:71)
	at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:42)
	at org.apache.hadoop.mapred.MapTask.getSplitDetails(MapTask.java:372)
	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:432)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)

2019-12-18 13:34:33,202 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics report from attempt_1576701227741_0008_m_000000_1: Error: java.lang.ClassNotFoundException: com.mongodb.MongoClientURI
	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
	at com.mongodb.hadoop.input.MongoInputSplit.readFields(MongoInputSplit.java:241)
	at com.mongodb.hadoop.hive.input.HiveMongoInputFormat$MongoHiveInputSplit.readFields(HiveMongoInputFormat.java:311)
	at org.apache.hadoop.hive.ql.io.HiveInputFormat$HiveInputSplit.readFields(HiveInputFormat.java:172)
	at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:71)
	at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:42)
	at org.apache.hadoop.mapred.MapTask.getSplitDetails(MapTask.java:372)
	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:432)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)

2019-12-18 13:34:33,203 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1576701227741_0008_m_000000_1 TaskAttempt Transitioned from RUNNING to FAIL_FINISHING_CONTAINER
2019-12-18 13:34:33,204 INFO [Thread-52] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 2 failures on node quickstart.cloudera
2019-12-18 13:34:33,204 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1576701227741_0008_m_000000_2 TaskAttempt Transitioned from NEW to UNASSIGNED
2019-12-18 13:34:33,206 INFO [Thread-52] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added attempt_1576701227741_0008_m_000000_2 to list of failed maps
2019-12-18 13:34:33,371 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before Scheduling: PendingReds:0 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:1 AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:2 ContRel:0 HostLocal:0 RackLocal:0
2019-12-18 13:34:33,374 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() for application_1576701227741_0008: ask=1 release= 0 newContainers=0 finishedContainers=0 resourcelimit=<memory:2048, vCores:1> knownNMs=1
2019-12-18 13:34:34,378 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received completed container container_1576701227741_0008_01_000003
2019-12-18 13:34:34,378 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated containers 1
2019-12-18 13:34:34,378 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning container Container: [ContainerId: container_1576701227741_0008_01_000004, NodeId: quickstart.cloudera:8041, NodeHttpAddress: quickstart.cloudera:8042, Resource: <memory:512, vCores:1>, Priority: 5, Token: Token { kind: ContainerToken, service: 10.0.2.15:8041 }, ] to fast fail map
2019-12-18 13:34:34,378 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned from earlierFailedMaps
2019-12-18 13:34:34,378 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics report from attempt_1576701227741_0008_m_000000_1: 
2019-12-18 13:34:34,378 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned container container_1576701227741_0008_01_000004 to attempt_1576701227741_0008_m_000000_2
2019-12-18 13:34:34,378 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1576701227741_0008_m_000000_1 TaskAttempt Transitioned from FAIL_FINISHING_CONTAINER to FAILED
2019-12-18 13:34:34,378 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1 AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0 HostLocal:0 RackLocal:0
2019-12-18 13:34:34,379 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1576701227741_0008_m_000000_2 TaskAttempt Transitioned from UNASSIGNED to ASSIGNED
2019-12-18 13:34:34,380 INFO [ContainerLauncher #3] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Processing the event EventType: CONTAINER_COMPLETED for container container_1576701227741_0008_01_000003 taskAttempt attempt_1576701227741_0008_m_000000_1
2019-12-18 13:34:34,384 INFO [ContainerLauncher #4] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container container_1576701227741_0008_01_000004 taskAttempt attempt_1576701227741_0008_m_000000_2
2019-12-18 13:34:34,384 INFO [ContainerLauncher #4] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Launching attempt_1576701227741_0008_m_000000_2
2019-12-18 13:34:34,397 INFO [ContainerLauncher #4] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle port returned by ContainerManager for attempt_1576701227741_0008_m_000000_2 : 13562
2019-12-18 13:34:34,398 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt: [attempt_1576701227741_0008_m_000000_2] using containerId: [container_1576701227741_0008_01_000004 on NM: [quickstart.cloudera:8041]
2019-12-18 13:34:34,398 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1576701227741_0008_m_000000_2 TaskAttempt Transitioned from ASSIGNED to RUNNING
2019-12-18 13:34:35,381 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() for application_1576701227741_0008: ask=1 release= 0 newContainers=0 finishedContainers=0 resourcelimit=<memory:2048, vCores:1> knownNMs=1
2019-12-18 13:34:36,009 INFO [Socket Reader #1 for port 57335] SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for job_1576701227741_0008 (auth:SIMPLE)
2019-12-18 13:34:36,019 INFO [IPC Server handler 4 on 57335] org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID : jvm_1576701227741_0008_m_000004 asked for a task
2019-12-18 13:34:36,020 INFO [IPC Server handler 4 on 57335] org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID: jvm_1576701227741_0008_m_000004 given task: attempt_1576701227741_0008_m_000000_2
2019-12-18 13:34:37,087 FATAL [IPC Server handler 3 on 57335] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: attempt_1576701227741_0008_m_000000_2 - exited : java.lang.ClassNotFoundException: com.mongodb.MongoClientURI
	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
	at com.mongodb.hadoop.input.MongoInputSplit.readFields(MongoInputSplit.java:241)
	at com.mongodb.hadoop.hive.input.HiveMongoInputFormat$MongoHiveInputSplit.readFields(HiveMongoInputFormat.java:311)
	at org.apache.hadoop.hive.ql.io.HiveInputFormat$HiveInputSplit.readFields(HiveInputFormat.java:172)
	at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:71)
	at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:42)
	at org.apache.hadoop.mapred.MapTask.getSplitDetails(MapTask.java:372)
	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:432)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)

2019-12-18 13:34:37,087 INFO [IPC Server handler 3 on 57335] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Diagnostics report from attempt_1576701227741_0008_m_000000_2: Error: java.lang.ClassNotFoundException: com.mongodb.MongoClientURI
	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
	at com.mongodb.hadoop.input.MongoInputSplit.readFields(MongoInputSplit.java:241)
	at com.mongodb.hadoop.hive.input.HiveMongoInputFormat$MongoHiveInputSplit.readFields(HiveMongoInputFormat.java:311)
	at org.apache.hadoop.hive.ql.io.HiveInputFormat$HiveInputSplit.readFields(HiveInputFormat.java:172)
	at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:71)
	at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:42)
	at org.apache.hadoop.mapred.MapTask.getSplitDetails(MapTask.java:372)
	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:432)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)

2019-12-18 13:34:37,089 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics report from attempt_1576701227741_0008_m_000000_2: Error: java.lang.ClassNotFoundException: com.mongodb.MongoClientURI
	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
	at com.mongodb.hadoop.input.MongoInputSplit.readFields(MongoInputSplit.java:241)
	at com.mongodb.hadoop.hive.input.HiveMongoInputFormat$MongoHiveInputSplit.readFields(HiveMongoInputFormat.java:311)
	at org.apache.hadoop.hive.ql.io.HiveInputFormat$HiveInputSplit.readFields(HiveInputFormat.java:172)
	at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:71)
	at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:42)
	at org.apache.hadoop.mapred.MapTask.getSplitDetails(MapTask.java:372)
	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:432)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)

2019-12-18 13:34:37,090 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1576701227741_0008_m_000000_2 TaskAttempt Transitioned from RUNNING to FAIL_FINISHING_CONTAINER
2019-12-18 13:34:37,091 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1576701227741_0008_m_000000_3 TaskAttempt Transitioned from NEW to UNASSIGNED
2019-12-18 13:34:37,091 INFO [Thread-52] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: 3 failures on node quickstart.cloudera
2019-12-18 13:34:37,091 INFO [Thread-52] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Blacklisted host quickstart.cloudera
2019-12-18 13:34:37,092 INFO [Thread-52] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Added attempt_1576701227741_0008_m_000000_3 to list of failed maps
2019-12-18 13:34:37,384 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before Scheduling: PendingReds:0 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:1 AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0 HostLocal:0 RackLocal:0
2019-12-18 13:34:37,387 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() for application_1576701227741_0008: ask=1 release= 0 newContainers=0 finishedContainers=0 resourcelimit=<memory:0, vCores:0> knownNMs=1
2019-12-18 13:34:37,387 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Update the blacklist for application_1576701227741_0008: blacklistAdditions=1 blacklistRemovals=0
2019-12-18 13:34:37,387 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Ignore blacklisting set to true. Known: 1, Blacklisted: 1, 100%
2019-12-18 13:34:38,390 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: Update the blacklist for application_1576701227741_0008: blacklistAdditions=0 blacklistRemovals=1
2019-12-18 13:34:38,390 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Received completed container container_1576701227741_0008_01_000004
2019-12-18 13:34:38,390 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After Scheduling: PendingReds:0 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0 AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:3 ContRel:0 HostLocal:0 RackLocal:0
2019-12-18 13:34:38,390 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics report from attempt_1576701227741_0008_m_000000_2: 
2019-12-18 13:34:38,390 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1576701227741_0008_m_000000_2 TaskAttempt Transitioned from FAIL_FINISHING_CONTAINER to FAILED
2019-12-18 13:34:38,391 INFO [ContainerLauncher #5] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Processing the event EventType: CONTAINER_COMPLETED for container container_1576701227741_0008_01_000004 taskAttempt attempt_1576701227741_0008_m_000000_2
2019-12-18 13:34:39,396 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated containers 1
2019-12-18 13:34:39,396 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigning container Container: [ContainerId: container_1576701227741_0008_01_000005, NodeId: quickstart.cloudera:8041, NodeHttpAddress: quickstart.cloudera:8042, Resource: <memory:512, vCores:1>, Priority: 5, Token: Token { kind: ContainerToken, service: 10.0.2.15:8041 }, ] to fast fail map
2019-12-18 13:34:39,396 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned from earlierFailedMaps
2019-12-18 13:34:39,396 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned container container_1576701227741_0008_01_000005 to attempt_1576701227741_0008_m_000000_3
2019-12-18 13:34:39,396 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1 AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:4 ContRel:0 HostLocal:0 RackLocal:0
2019-12-18 13:34:39,397 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1576701227741_0008_m_000000_3 TaskAttempt Transitioned from UNASSIGNED to ASSIGNED
2019-12-18 13:34:39,398 INFO [ContainerLauncher #6] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container container_1576701227741_0008_01_000005 taskAttempt attempt_1576701227741_0008_m_000000_3
2019-12-18 13:34:39,398 INFO [ContainerLauncher #6] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Launching attempt_1576701227741_0008_m_000000_3
2019-12-18 13:34:39,409 INFO [ContainerLauncher #6] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle port returned by ContainerManager for attempt_1576701227741_0008_m_000000_3 : 13562
2019-12-18 13:34:39,410 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt: [attempt_1576701227741_0008_m_000000_3] using containerId: [container_1576701227741_0008_01_000005 on NM: [quickstart.cloudera:8041]
2019-12-18 13:34:39,410 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1576701227741_0008_m_000000_3 TaskAttempt Transitioned from ASSIGNED to RUNNING
2019-12-18 13:34:40,399 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() for application_1576701227741_0008: ask=1 release= 0 newContainers=0 finishedContainers=0 resourcelimit=<memory:2048, vCores:1> knownNMs=1
2019-12-18 13:34:40,954 INFO [Socket Reader #1 for port 57335] SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for job_1576701227741_0008 (auth:SIMPLE)
2019-12-18 13:34:40,966 INFO [IPC Server handler 4 on 57335] org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID : jvm_1576701227741_0008_m_000005 asked for a task
2019-12-18 13:34:40,966 INFO [IPC Server handler 4 on 57335] org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID: jvm_1576701227741_0008_m_000005 given task: attempt_1576701227741_0008_m_000000_3
2019-12-18 13:34:42,223 FATAL [IPC Server handler 4 on 57335] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: attempt_1576701227741_0008_m_000000_3 - exited : java.lang.ClassNotFoundException: com.mongodb.MongoClientURI
	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
	at com.mongodb.hadoop.input.MongoInputSplit.readFields(MongoInputSplit.java:241)
	at com.mongodb.hadoop.hive.input.HiveMongoInputFormat$MongoHiveInputSplit.readFields(HiveMongoInputFormat.java:311)
	at org.apache.hadoop.hive.ql.io.HiveInputFormat$HiveInputSplit.readFields(HiveInputFormat.java:172)
	at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:71)
	at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:42)
	at org.apache.hadoop.mapred.MapTask.getSplitDetails(MapTask.java:372)
	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:432)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)

2019-12-18 13:34:42,223 INFO [IPC Server handler 4 on 57335] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Diagnostics report from attempt_1576701227741_0008_m_000000_3: Error: java.lang.ClassNotFoundException: com.mongodb.MongoClientURI
	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
	at com.mongodb.hadoop.input.MongoInputSplit.readFields(MongoInputSplit.java:241)
	at com.mongodb.hadoop.hive.input.HiveMongoInputFormat$MongoHiveInputSplit.readFields(HiveMongoInputFormat.java:311)
	at org.apache.hadoop.hive.ql.io.HiveInputFormat$HiveInputSplit.readFields(HiveInputFormat.java:172)
	at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:71)
	at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:42)
	at org.apache.hadoop.mapred.MapTask.getSplitDetails(MapTask.java:372)
	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:432)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)

2019-12-18 13:34:42,225 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Diagnostics report from attempt_1576701227741_0008_m_000000_3: Error: java.lang.ClassNotFoundException: com.mongodb.MongoClientURI
	at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
	at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
	at com.mongodb.hadoop.input.MongoInputSplit.readFields(MongoInputSplit.java:241)
	at com.mongodb.hadoop.hive.input.HiveMongoInputFormat$MongoHiveInputSplit.readFields(HiveMongoInputFormat.java:311)
	at org.apache.hadoop.hive.ql.io.HiveInputFormat$HiveInputSplit.readFields(HiveInputFormat.java:172)
	at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:71)
	at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:42)
	at org.apache.hadoop.mapred.MapTask.getSplitDetails(MapTask.java:372)
	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:432)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917)
	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)

2019-12-18 13:34:42,226 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1576701227741_0008_m_000000_3 TaskAttempt Transitioned from RUNNING to FAIL_FINISHING_CONTAINER
2019-12-18 13:34:42,229 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1576701227741_0008_m_000000 Task Transitioned from RUNNING to FAILED
2019-12-18 13:34:42,229 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks: 1
2019-12-18 13:34:42,229 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Job failed as tasks failed. failedMaps:1 failedReduces:0
2019-12-18 13:34:42,230 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1576701227741_0008Job Transitioned from RUNNING to FAIL_ABORT
2019-12-18 13:34:42,234 INFO [CommitterEvent Processor #1] org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing the event EventType: JOB_ABORT
2019-12-18 13:34:42,240 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1576701227741_0008Job Transitioned from FAIL_ABORT to FAILED
2019-12-18 13:34:42,242 INFO [Thread-69] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: We are finishing cleanly so this is the last retry
2019-12-18 13:34:42,242 INFO [Thread-69] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify RMCommunicator isAMLastRetry: true
2019-12-18 13:34:42,242 INFO [Thread-69] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: RMCommunicator notified that shouldUnregistered is: true
2019-12-18 13:34:42,242 INFO [Thread-69] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify JHEH isAMLastRetry: true
2019-12-18 13:34:42,242 INFO [Thread-69] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: JobHistoryEventHandler notified that forceJobCompletion is true
2019-12-18 13:34:42,242 INFO [Thread-69] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Calling stop for all the services
2019-12-18 13:34:42,243 INFO [Thread-69] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopping JobHistoryEventHandler. Size of the outstanding queue size is 0
2019-12-18 13:34:42,277 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying hdfs://quickstart.cloudera:8020/user/cloudera/.staging/job_1576701227741_0008/job_1576701227741_0008_1.jhist to hdfs://quickstart.cloudera:8020/user/history/done_intermediate/cloudera/job_1576701227741_0008-1576704856456-cloudera-select+loc.address+FROM+theater%28Stage%2D1%29-1576704882229-0-0-FAILED-root.users.cloudera-1576704863268.jhist_tmp
2019-12-18 13:34:42,303 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to done location: hdfs://quickstart.cloudera:8020/user/history/done_intermediate/cloudera/job_1576701227741_0008-1576704856456-cloudera-select+loc.address+FROM+theater%28Stage%2D1%29-1576704882229-0-0-FAILED-root.users.cloudera-1576704863268.jhist_tmp
2019-12-18 13:34:42,307 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying hdfs://quickstart.cloudera:8020/user/cloudera/.staging/job_1576701227741_0008/job_1576701227741_0008_1_conf.xml to hdfs://quickstart.cloudera:8020/user/history/done_intermediate/cloudera/job_1576701227741_0008_conf.xml_tmp
2019-12-18 13:34:42,340 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to done location: hdfs://quickstart.cloudera:8020/user/history/done_intermediate/cloudera/job_1576701227741_0008_conf.xml_tmp
2019-12-18 13:34:42,350 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to done: hdfs://quickstart.cloudera:8020/user/history/done_intermediate/cloudera/job_1576701227741_0008.summary_tmp to hdfs://quickstart.cloudera:8020/user/history/done_intermediate/cloudera/job_1576701227741_0008.summary
2019-12-18 13:34:42,354 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to done: hdfs://quickstart.cloudera:8020/user/history/done_intermediate/cloudera/job_1576701227741_0008_conf.xml_tmp to hdfs://quickstart.cloudera:8020/user/history/done_intermediate/cloudera/job_1576701227741_0008_conf.xml
2019-12-18 13:34:42,356 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to done: hdfs://quickstart.cloudera:8020/user/history/done_intermediate/cloudera/job_1576701227741_0008-1576704856456-cloudera-select+loc.address+FROM+theater%28Stage%2D1%29-1576704882229-0-0-FAILED-root.users.cloudera-1576704863268.jhist_tmp to hdfs://quickstart.cloudera:8020/user/history/done_intermediate/cloudera/job_1576701227741_0008-1576704856456-cloudera-select+loc.address+FROM+theater%28Stage%2D1%29-1576704882229-0-0-FAILED-root.users.cloudera-1576704863268.jhist
2019-12-18 13:34:42,358 INFO [Thread-69] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopped JobHistoryEventHandler. super.stop()
2019-12-18 13:34:42,359 INFO [Thread-69] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING attempt_1576701227741_0008_m_000000_3
2019-12-18 13:34:42,373 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1576701227741_0008_m_000000_3 TaskAttempt Transitioned from FAIL_FINISHING_CONTAINER to FAILED
2019-12-18 13:34:42,379 INFO [Thread-69] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Setting job diagnostics to Task failed task_1576701227741_0008_m_000000
Job failed as tasks failed. failedMaps:1 failedReduces:0

2019-12-18 13:34:42,380 INFO [Thread-69] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: History url is http://quickstart.cloudera:19888/jobhistory/job/job_1576701227741_0008
2019-12-18 13:34:42,386 INFO [Thread-69] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Waiting for application to be successfully unregistered.
2019-12-18 13:34:43,392 INFO [Thread-69] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Final Stats: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1 AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:4 ContRel:0 HostLocal:0 RackLocal:0
2019-12-18 13:34:43,393 INFO [Thread-69] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Deleting staging directory hdfs://quickstart.cloudera:8020 /user/cloudera/.staging/job_1576701227741_0008
2019-12-18 13:34:43,401 INFO [Thread-69] org.apache.hadoop.ipc.Server: Stopping server on 57335
2019-12-18 13:34:43,408 INFO [IPC Server listener on 57335] org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 57335
2019-12-18 13:34:43,412 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
2019-12-18 13:34:43,412 INFO [TaskHeartbeatHandler PingChecker] org.apache.hadoop.mapreduce.v2.app.TaskHeartbeatHandler: TaskHeartbeatHandler thread interrupted
2019-12-18 13:34:43,413 INFO [Ping Checker] org.apache.hadoop.yarn.util.AbstractLivelinessMonitor: TaskAttemptFinishingMonitor thread interrupted
2019-12-18 13:34:48,414 INFO [Thread-69] org.apache.hadoop.ipc.Server: Stopping server on 43753
2019-12-18 13:34:48,415 INFO [IPC Server listener on 43753] org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 43753
2019-12-18 13:34:48,415 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
2019-12-18 13:34:48,424 INFO [Thread-69] org.mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:0
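The root cause in the log above is the repeated `java.lang.ClassNotFoundException: com.mongodb.MongoClientURI`: the MongoDB Java driver (and the mongo-hadoop connector) is not on the map task classpath when the split is deserialized. A sketch of the usual fix, with example JAR names and paths that you must adjust to the versions actually installed, is to register the JARs in the Hive session before running the query:

```sql
-- Run in the Hive CLI / Beeline session before the failing query.
-- Paths and version numbers below are examples, not the exact files
-- on your cluster -- point them at your local copies.
ADD JAR /usr/lib/hive/lib/mongo-java-driver-3.2.1.jar;
ADD JAR /usr/lib/hive/lib/mongo-hadoop-core-2.0.2.jar;
ADD JAR /usr/lib/hive/lib/mongo-hadoop-hive-2.0.2.jar;
```

Alternatively, listing the JARs in `hive.aux.jars.path` (or copying them into Hive's lib directory on all nodes) makes them available to every job without per-session ADD JAR statements.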

Contents

  1. hive failed execution error return code 2 from org.apache.hadoop.hive.ql.exec.mapredtask
  2. 3 Answers
  3. What is Hive: Return Code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
  4. 11 Answers
  5. Support Questions
hive failed execution error return code 2 from org.apache.hadoop.hive.ql.exec.mapredtask

I have a query that executes fine on the Hive CLI and returns the result. But when I execute it through Hive JDBC, I get the error below:

What is the problem? I am also starting the Hive Thrift Server through a shell script (I have written a shell script that contains the command to start the Hive Thrift Server). Later I decided to start the Hive Thrift Server manually by typing the command:

Please help me out with this. Thanks

3 Answers

For this error:

java.sql.SQLException: Query returned non-zero code: 9, cause: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask at org.apache.hadoop.hive.jdbc.HiveStatement.executeQuer

Go to this link:

Add the JARs from the lib directories of Hadoop and Hive to your project's classpath, and try the code again. Also add the lib folder paths of Hadoop, Hive, and HBase (if you are using it) to the project classpath, just as you added the JARs.
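As a sketch, assuming a typical tarball layout (the install locations and the client class name `HiveJdbcClient` below are hypothetical -- substitute your own), the classpath additions might look like:

```shell
# Hypothetical install locations -- adjust to your environment
export HADOOP_HOME=/usr/lib/hadoop
export HIVE_HOME=/usr/lib/hive

# Put every JAR from the Hadoop and Hive lib directories on the classpath
# used to compile and run the JDBC client code
export CLASSPATH="$CLASSPATH:$HADOOP_HOME/lib/*:$HIVE_HOME/lib/*"

javac -cp "$CLASSPATH" HiveJdbcClient.java
java  -cp "$CLASSPATH:." HiveJdbcClient
```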

As for the second error you got:

If the command shows output, a Hive server is already running. The second error appears only when the port you are specifying is already taken by another process; the default server port is 10000, so verify it with the netstat command mentioned above.
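Since netstat flags differ by platform, here is one portable sketch of that port check using bash's /dev/tcp (localhost and the default port 10000 are assumptions; substitute your server's host and port):

```shell
# Probe HiveServer's default port to see whether something already listens
# on it. bash's /dev/tcp pseudo-device opens a TCP connection on success.
HOST=localhost
PORT=10000
if (exec 3<>"/dev/tcp/$HOST/$PORT") 2>/dev/null; then
  status="port $PORT is in use: a Hive server is probably already running"
else
  status="port $PORT is free"
fi
echo "$status"
```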

Note: if you are connected through bin/hive, exit from it before running your code; I think (though I am not sure) that only one client can connect to the Hive server at a time.

Follow the steps above and hopefully they will solve your problem.

NOTE: exit from the CLI when you are going to execute the code, and do not start the CLI while the code is executing.

Source

What is Hive: Return Code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask

While trying to make a copy of a partitioned table using these commands in the hive console:

CREATE TABLE copy_table_name LIKE table_name;
INSERT OVERWRITE TABLE copy_table_name PARTITION(day) SELECT * FROM table_name;

I initially got some semantic analysis errors and had to set:

set hive.exec.dynamic.partition=true
set hive.exec.dynamic.partition.mode=nonstrict

Although I'm not sure what the above properties do.

Full output from the hive console:

11 Answers

That's not the real error; here's how to find it:

Go to the Hadoop jobtracker web dashboard, find the hive mapreduce jobs that failed, and look at the logs of the failed tasks. That will show you the real error.

The console output errors are useless, largely because the console doesn't have a view of the individual jobs/tasks to pull the real errors from (there could be errors in multiple tasks).
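On a YARN cluster the same task logs can also be pulled from the command line; a sketch (the application ID is an example derived from the console output later in this page, where job_1412549128740_0004 corresponds to application_1412549128740_0004):

```shell
# Fetch the aggregated logs of the failed application and show the first
# real exception. Requires log aggregation to be enabled on the cluster.
APP_ID="application_1412549128740_0004"   # derive from "Starting Job = job_..."
yarn logs -applicationId "$APP_ID" 2>/dev/null | grep -m1 -A5 'Caused by:' \
  || echo "no aggregated logs found for $APP_ID"
```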

I know I am 3 years late to this thread, but I'm still adding my 2 cents for similar cases in the future.

I recently faced the same issue/error in my cluster. The job would always get to some 80%+ of the reduce phase and then fail with the same error, with nothing to go on in the execution logs either. After multiple iterations and some research I found that, among the plethora of files being loaded, some were non-compliant with the structure expected by the base table (the table being used to insert data into the partitioned table).

The point to note here is that whenever I executed a select query for a particular value in the partitioning column, or created a static partition, it worked fine, because in that case the error records were being skipped.

TL;DR: Check the incoming data/files for inconsistency in their structure, as Hive follows a Schema-On-Read philosophy.
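A cheap pre-load sanity check along those lines is to count fields per row before inserting (a sketch: the tab delimiter, the 5-column schema, and the sample file are all assumptions to adjust to your table):

```shell
# Flag rows whose field count differs from the table schema.
# Assumes tab-delimited data with 5 columns; adjust -F and EXPECTED.
EXPECTED=5
printf 'a\tb\tc\td\te\nbad\trow\n' > /tmp/sample.tsv   # tiny illustrative file
awk -F'\t' -v n="$EXPECTED" \
    'NF != n { printf "line %d has %d fields (expected %d)\n", NR, NF, n }' \
    /tmp/sample.tsv
# -> line 2 has 2 fields (expected 5)
```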

Source

Support Questions


Created on 10-05-2014 02:48 PM — edited 09-16-2022 02:09 AM


I have set up Kerberos + the Sentry service (not policy files). Currently everything works fine except Hive:

"select * from table" is OK; that is, a statement without any condition finishes fine. But "select count(*) from table" or "select * from table where xxx=xxx" fails with the error in the title. That is strange. Has anybody had this experience? Thanks in advance.

More details below:

Created 10-05-2014 04:11 PM


Everyone, below are some tests I ran; I am going to set HADOOP_YARN_HOME manually.

Test one: if the home is hadoop-0.20-mapreduce, it works.

]$ hive
14/10/06 06:59:04 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.

Logging initialized using configuration in jar:file:/opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hive/lib/hive-common-0.12.0-cdh5.1.0.jar!/hive-log4j.properties
hive> select count(*) from test;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=
In order to set a constant number of reducers:
set mapred.reduce.tasks=
Starting Job = job_local1939864979_0001, Tracking URL = http://localhost:8080/
Kill Command = /opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hadoop/bin/hadoop job -kill job_local1939864979_0001
Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 0
2014-10-06 06:59:14,364 Stage-1 map = 0%, reduce = 100%
Ended Job = job_local1939864979_0001
MapReduce Jobs Launched:
Job 0: HDFS Read: 0 HDFS Write: 12904 SUCCESS
Total MapReduce CPU Time Spent: 0 msec
OK

Time taken: 4.095 seconds, Fetched: 1 row(s)
hive> exit;

Test two: this is the default, meaning I didn't change anything; I just tested while logged into the OS as hdfs. It failed.

]$ hive
14/10/06 07:03:27 INFO Configuration.deprecation: mapred.input.dir.recursive is deprecated. Instead, use mapreduce.input.fileinputformat.input.dir.recursive
14/10/06 07:03:27 INFO Configuration.deprecation: mapred.max.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
14/10/06 07:03:27 INFO Configuration.deprecation: mapred.min.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
14/10/06 07:03:27 INFO Configuration.deprecation: mapred.min.split.size.per.rack is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.rack
14/10/06 07:03:27 INFO Configuration.deprecation: mapred.min.split.size.per.node is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.node
14/10/06 07:03:27 INFO Configuration.deprecation: mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
14/10/06 07:03:27 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative
14/10/06 07:03:27 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.

Logging initialized using configuration in jar:file:/opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hive/lib/hive-common-0.12.0-cdh5.1.0.jar!/hive-log4j.properties
hive> select count(*) from test
> ;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=
In order to set a constant number of reducers:
set mapred.reduce.tasks=
Starting Job = job_1412549128740_0004, Tracking URL = http://namenode01.hadoop:8088/proxy/application_1412549128740_0004/
Kill Command = /opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hadoop/bin/hadoop job -kill job_1412549128740_0004
Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 0
2014-10-06 07:03:53,523 Stage-1 map = 0%, reduce = 0%
Ended Job = job_1412549128740_0004 with errors
Error during job, obtaining debugging information.
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Job 0: HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec

Test three: it failed.

[hdfs@namenode02 hadoop-yarn]$ export HADOOP_YARN_HOME=/opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hadoop-yarn/

[hdfs@namenode02 hadoop-yarn]$ hive
14/10/06 06:44:38 WARN conf.HiveConf: DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.

Logging initialized using configuration in jar:file:/opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hive/lib/hive-common-0.12.0-cdh5.1.0.jar!/hive-log4j.properties
hive> show tables;
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
hive> show tables;
OK
database_params
**bleep**you
sequence_table
tbls
test
test1
Time taken: 0.338 seconds, Fetched: 6 row(s)
hive> select count(*) from test;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=
In order to set a constant number of reducers:
set mapred.reduce.tasks=
Starting Job = job_1412549128740_0003, Tracking URL = http://namenode01.hadoop:8088/proxy/application_1412549128740_0003/
Kill Command = /opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hadoop/bin/hadoop job -kill job_1412549128740_0003
Hadoop job information for Stage-1: number of mappers: 0; number of reducers: 0
2014-10-06 06:54:19,156 Stage-1 map = 0%, reduce = 0%
Ended Job = job_1412549128740_0003 with errors
Error during job, obtaining debugging information.
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched:
Job 0: HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec

The conclusion: it only works when I set HADOOP_YARN_HOME to the *0.20-* path. So what can I do now?
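In other words, the observed workaround is to point HADOOP_YARN_HOME back at the MR1 directory; a sketch (the exact parcel sub-directory name is an assumption based on the paths in the logs above):

```shell
# Workaround observed above: use the hadoop-0.20-mapreduce (MR1) home.
# The parcel path comes from this cluster's logs; the sub-directory name
# is an assumption - verify it exists on your node first.
MR1_HOME=/opt/cloudera/parcels/CDH-5.1.0-1.cdh5.1.0.p0.53/lib/hadoop-0.20-mapreduce
export HADOOP_YARN_HOME="$MR1_HOME"
echo "HADOOP_YARN_HOME=$HADOOP_YARN_HOME"
```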

Source


I am getting:

FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask

While trying to make a copy of a partitioned table using these commands in the hive console:

CREATE TABLE copy_table_name LIKE table_name;
INSERT OVERWRITE TABLE copy_table_name PARTITION(day) SELECT * FROM table_name;

I initially got some semantic analysis errors and had to set:

set hive.exec.dynamic.partition=true
set hive.exec.dynamic.partition.mode=nonstrict

Although I'm not sure what the above properties do.

Full output from the hive console:

Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Starting Job = job_201206191101_4557, Tracking URL = http://jobtracker:50030/jobdetails.jsp?jobid=job_201206191101_4557
Kill Command = /usr/lib/hadoop/bin/hadoop job  -Dmapred.job.tracker=master:8021 -kill job_201206191101_4557
2012-06-25 09:53:05,826 Stage-1 map = 0%,  reduce = 0%
2012-06-25 09:53:53,044 Stage-1 map = 100%,  reduce = 100%
Ended Job = job_201206191101_4557 with errors
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask

10 Answers

That's not the real error; here's how to find it:

Go to the Hadoop jobtracker web dashboard, find the hive mapreduce jobs that failed, and look at the logs of the failed tasks. That will show you the real error.

The console output errors are useless, largely because the console doesn't have a view of the individual jobs/tasks to pull the real errors from (there could be errors in multiple tasks).

Hope this helps.

Created 28 Jun.

I know I am 3 years late to this thread, but I'm still adding my 2 cents for similar cases in the future.

I recently faced the same issue/error in my cluster. The job would always get to some 80%+ of the reduce phase and then fail with the same error, with nothing to go on in the execution logs either. After multiple iterations and some research I found that, among the plethora of files being loaded, some were non-compliant with the structure expected by the base table (the table being used to insert data into the partitioned table).

The point to note here is that whenever I executed a select query for a particular value in the partitioning column, or created a static partition, it worked fine, because in that case the error records were being skipped.

TL;DR: Check the incoming data/files for inconsistency in their structure, as Hive follows a Schema-On-Read philosophy.

Answered 07 Apr.

I'll add a bit of information here, since it took me a while to find the Hadoop jobtracker web dashboard in HDInsight (Azure's Hadoop), and a colleague finally showed me where it is. On the head node there is a "Hadoop Yarn Status" shortcut, which is just a link to a local HTTP page (http://headnodehost:9014/cluster in my case). When opened, the dashboard looked like this:

(screenshot of the YARN cluster dashboard omitted)

In this dashboard you can find your failed application, and by clicking through to it you can look at the logs of the individual map and reduce tasks.

In my case it still looked like the reducers were running out of memory, even though I had already cranked up the memory in the configuration. For some reason it didn't surface the "java outofmemory" errors I had gotten earlier.

Created 09 Sep.

I deleted the _SUCCESS file from the EMR output path in S3, and it worked fine.

Answered 10 Apr.

The top answer is right that the error code doesn't give you much information. One common cause our team has seen for this error code was a poorly optimized query. A known case was doing an inner join where the left-hand table was larger than the right-hand table. Swapping those tables usually helped in such cases.

Created 14 Sep.

I also ran into the same error when inserting data into a Hive external table that pointed to an Elasticsearch cluster.

I replaced the old JAR elasticsearch-hadoop-2.0.0.RC1.jar with elasticsearch-hadoop-5.6.0.jar, and everything worked fine.

My suggestion: use the JAR that matches your Elasticsearch version. Don't use old JARs if you are running a newer version of Elasticsearch.

Thanks to this post: Hive-Elasticsearch write operation #409

Created 15 Sep.

I faced the same issue too. When I checked the dashboard I found the following error. The data was coming through Flume and was interrupted in between, which may have caused inconsistency in a few files.

Caused by: org.apache.hadoop.hive.serde2.SerDeException: org.codehaus.jackson.JsonParseException: Unexpected end-of-input within/between OBJECT entries

Running on fewer files, it worked. In my case, format consistency was the reason.

Created 15 Sep.

I faced the same issue because I didn't have permission to query the database I was trying to.

If you don't have permission to query the table/database, then besides the Return Code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask error, you will see that Cloudera Manager doesn't even register your query.

Answered May 7 '20 at 21:05

I got this error when joining two tables, one large and one small enough to fit in memory. In such a case, use

set hive.auto.convert.join = false

This may help get rid of the above error. For more details on this, please refer to the threads below:

  1. Hive Map-Join configuration mystery
  2. Hive.auto.convert.join = true, what does it mean?

Created 15 Jun.

I got the same error when creating a Hive table in beeline; trying to create it again through spark-shell then surfaced the actual error. In my case the error was due to the disk space quota of the hdfs directory:

org.apache.hadoop.ipc.RemoteException: The DiskSpace quota of /user/hive/warehouse/XXX_XX.db is exceeded: quota = 6597069766656 B = 6 TB but diskspace consumed = 6597493381629 B = 6.00 TB

Created 05 Jan.

