java - Hive count(*) out of memory
hive> select count(*) from ipaddress where country='china';
WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
Query ID = pruthviraj_20160922163728_79a0f8d6-5ea6-4cb5-8dd2-d3bb63f8baaf
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1474512819880_0032, Tracking URL = http://pruthvis-macbook-pro.local:8088/proxy/application_1474512819880_0032/
Kill Command = /users/pruthviraj/lab/software/hadoop-2.7.0/bin/hadoop job -kill job_1474512819880_0032
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2016-09-22 16:37:45,094 Stage-1 map = 0%, reduce = 0%
2016-09-22 16:37:52,532 Stage-1 map = 100%, reduce = 0%
2016-09-22 16:37:59,901 Stage-1 map = 100%, reduce = 100%
Ended Job = job_1474512819880_0032
MapReduce Jobs Launched:
Stage-Stage-1: Map: 1  Reduce: 1  HDFS Read: 10393  HDFS Write: 102  SUCCESS
Total MapReduce CPU Time Spent: 0 msec
OK
Exception in thread "main"
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "main"
pruthvis-macbook-pro:apache-hive-2.1.0-bin pruthviraj$
I am running this on Mac OS 10 and have already tried increasing the perm max size (-XX:MaxPermSize), but it is still not working. Any help is appreciated.
Go to the env file (typically conf/hive-env.sh) and increase the heap from -Xmx2048m to -Xmx4096m:

-Xmx4096m -XX:PermSize=128m -XX:MaxPermSize=128m
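For example, on a standalone Hive install the client heap is usually raised by editing conf/hive-env.sh. This is only a minimal sketch: HADOOP_HEAPSIZE and HADOOP_CLIENT_OPTS are the standard Hadoop/Hive variables for this, but the 4096 MB figure is simply the value suggested above, so adjust it to the memory available on your machine.

# conf/hive-env.sh -- raise the heap of the JVM that runs the Hive CLI
export HADOOP_HEAPSIZE=4096
export HADOOP_CLIENT_OPTS="-Xmx4096m -XX:PermSize=128m -XX:MaxPermSize=128m $HADOOP_CLIENT_OPTS"

Note that the PermGen flags (-XX:PermSize / -XX:MaxPermSize) only have an effect on Java 7 and earlier; on Java 8 the JVM ignores them, so for this OutOfMemoryError the -Xmx heap setting is the part that matters.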