Creating, dropping, and using Hive indexes

Admin  2014.04.25 16:41  Views: 1702

1. Create the index

hive> create index h_price_info_index on table h_price_info (key_id) as 'COMPACT' WITH DEFERRED REBUILD;
OK
Time taken: 6.898 seconds
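
WITH DEFERRED REBUILD means the index table is created empty here and is only populated by the ALTER INDEX ... REBUILD in step 3 below. Since the title also covers dropping an index, here is a minimal sketch for reference, using the same index and table names as above:

hive> drop index if exists h_price_info_index on h_price_info;   -- removes the index and its index table; the base table is untouched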

 

2. Check the index information
hive> show formatted index on h_price_info;
OK
idx_name             tab_name       col_names   idx_tab_name                                  idx_type   comment
h_price_info_index   h_price_info   key_id      default__h_price_info_h_price_info_index__   compact
Time taken: 0.402 seconds, Fetched: 4 row(s)
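
The idx_tab_name column shows that the index data lives in a separate Hive table of its own. That table can be inspected like any other if you want to see what is stored there (a sketch; for a compact index it typically holds the indexed key plus bucket/offset columns):

hive> describe default__h_price_info_h_price_info_index__;
hive> select * from default__h_price_info_h_price_info_index__ limit 10;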


3. Physically build the index
hive> alter index h_price_info_index on h_price_info rebuild;

--> The error shown below may occur at this step. If it does, start Hive with the auxiliary library path specified, as in the following command:

(hive --auxpath /home/hadoop/hive/lib/hbase-0.94.6.1.jar,/home/hadoop/hive/lib/zookeeper-3.4.3.jar,/home/hadoop/hive/lib/hive-hbase-handler-0.11.0.jar,/home/hadoop/hive/lib/guava-11.0.2.jar,/home/hadoop/hive/lib/hive-contrib-0.11.0.jar -hiveconf hbase.master=localhost:60000 )
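
Passing --auxpath on every invocation is tedious; as an alternative, the same jars can usually be registered once via HIVE_AUX_JARS_PATH (a sketch, assuming the default conf location and the jar paths used above; verify against your installation):

# in $HIVE_HOME/conf/hive-env.sh
export HIVE_AUX_JARS_PATH=/home/hadoop/hive/lib/hbase-0.94.6.1.jar,/home/hadoop/hive/lib/zookeeper-3.4.3.jar,/home/hadoop/hive/lib/hive-hbase-handler-0.11.0.jar,/home/hadoop/hive/lib/guava-11.0.2.jar,/home/hadoop/hive/lib/hive-contrib-0.11.0.jar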

 

---------------------- Error output ----------------------
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapred.reduce.tasks=<number>
Starting Job = job_201404241444_0032, Tracking URL = http://localhost:50030/jobdetails.jsp?jobid=job_201404241444_0032
Kill Command = /home/hadoop/hadoop/libexec/../bin/hadoop job  -kill job_201404241444_0032
Hadoop job information for Stage-1: number of mappers: 2; number of reducers: 1
2014-04-25 16:39:27,621 Stage-1 map = 0%,  reduce = 0%
2014-04-25 16:40:22,624 Stage-1 map = 100%,  reduce = 100%
Ended Job = job_201404241444_0032 with errors
Error during job, obtaining debugging information...
Job Tracking URL: http://localhost:50030/jobdetails.jsp?jobid=job_201404241444_0032
Examining task ID: task_201404241444_0032_m_000003 (and more) from job job_201404241444_0032

Task with the most failures(4):
-----
Task ID:
  task_201404241444_0032_m_000000

URL:
  http://localhost:50030/taskdetails.jsp?jobid=job_201404241444_0032&tipid=task_201404241444_0032_m_000000
-----
Diagnostic Messages for this Task:
java.io.IOException: Cannot create an instance of InputSplit class = org.apache.hadoop.hive.hbase.HBaseSplit:org.apache.hadoop.hive.hbase.HBaseSplit
 at org.apache.hadoop.hive.ql.io.HiveInputFormat$HiveInputSplit.readFields(HiveInputFormat.java:146)
 at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:67)
 at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:40)
 at org.apache.hadoop.mapred.MapTask.getSplitDetails(MapTask.java:390)
 at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:406)
 at org.apache.hadoop.mapred.MapTask.run(MapTask.java:366)
 at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.Subject.doAs(Subject.java:415)
 at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
 at org.apache.hadoop.mapred.Child.main(Child.java:249)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.hbase.HBaseSplit
 at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
 at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
 at java.security.AccessController.doPrivileged(Native Method)
 at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
 at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:308)
 at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
 at java.lang.Class.forName0(Native Method)
 at java.lang.Class.forName(Class.java:270)
 at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:810)
 at org.apache.hadoop.hive.ql.io.HiveInputFormat$HiveInputSplit.readFields(HiveInputFormat.java:143)
 ... 10 more


FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
MapReduce Jobs Launched:
Job 0: Map: 2  Reduce: 1   HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec
--------------------------------------------------------------------------------------------

 

4. Enable use of the index

set hive.optimize.autoindex=true;
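
Note: hive.optimize.autoindex may not be recognized by every Hive version. The settings more commonly cited for making compact indexes kick in automatically are the ones below (an assumption; check the documentation for your Hive version):

set hive.optimize.index.filter=true;
set hive.optimize.index.filter.compact.minsize=0;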

 

5. Run a query

select * from h_price_info where key_id like '%고추%';
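
To check whether the index is actually being used, one option is to look at the query plan (a sketch):

hive> explain select * from h_price_info where key_id like '%고추%';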

 

That said... I can't really tell whether this actually made things faster. (One possible reason: a leading-wildcard predicate such as LIKE '%고추%' may not be something a compact index can optimize, so the index might not be used for this query at all.)

