
Bigdata, Semantic IoT, Hadoop, NoSQL

This is where I organize what I have learned while working on Bigdata, Hadoop ecosystem, and Semantic IoT projects.
It is made public for anyone who needs it. For inquiries, please email gooper@gooper.com.


When running ./sbin/hadoop-daemon.sh start namenode, the NameNode may fail to start with the error shown below: some JournalNodes report "Journal Storage Directory ... not formatted", so the required quorum (2/3) cannot be reached. In that case, run "hdfs namenode -initializeSharedEdits" on the master server. It asks whether to re-format the filesystem in QJM; answer Y, which re-initializes the shared edits directory on the JournalNodes from the NameNode's local metadata, and then start the NameNode again.
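A minimal sketch of the recovery sequence, assuming the same layout as the logs below (master NameNode on sda1, JournalNodes on XXX.XXX.XXX.44/31/32, journal directory /data/hadoop/journal/data/mycluster); adjust hosts and paths to your cluster:

# 1. Make sure every JournalNode is running (run on each JournalNode host).
./sbin/hadoop-daemon.sh start journalnode

# 2. On the master NameNode, re-initialize the shared edits directory.
#    Answer Y at the "Re-format filesystem in QJM ... ? (Y or N)" prompt.
hdfs namenode -initializeSharedEdits

# 3. Start the NameNode again on the master.
./sbin/hadoop-daemon.sh start namenode

# 4. If the standby NameNode also fails to start, resync its metadata from
#    the active NameNode first, then start it.
hdfs namenode -bootstrapStandby
./sbin/hadoop-daemon.sh start namenode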

------ Log output from running hdfs namenode -initializeSharedEdits ----------------
16/07/29 15:16:01 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
16/07/29 15:16:01 INFO namenode.NameNode: createNameNode [-initializeSharedEdits]
16/07/29 15:16:01 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/07/29 15:16:01 WARN namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
16/07/29 15:16:01 WARN namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of redundant storage directories!
16/07/29 15:16:01 INFO namenode.FSNamesystem: No KeyProvider found.
16/07/29 15:16:01 INFO namenode.FSNamesystem: fsLock is fair:true
16/07/29 15:16:01 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
16/07/29 15:16:01 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
16/07/29 15:16:01 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
16/07/29 15:16:01 INFO blockmanagement.BlockManager: The block deletion will start around 2016 7월 29 15:16:01
16/07/29 15:16:01 INFO util.GSet: Computing capacity for map BlocksMap
16/07/29 15:16:01 INFO util.GSet: VM type       = 64-bit
16/07/29 15:16:01 INFO util.GSet: 2.0% max memory 958.5 MB = 19.2 MB
16/07/29 15:16:01 INFO util.GSet: capacity      = 2^21 = 2097152 entries
16/07/29 15:16:01 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
16/07/29 15:16:01 INFO blockmanagement.BlockManager: defaultReplication         = 3
16/07/29 15:16:01 INFO blockmanagement.BlockManager: maxReplication             = 512
16/07/29 15:16:01 INFO blockmanagement.BlockManager: minReplication             = 1
16/07/29 15:16:01 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
16/07/29 15:16:01 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
16/07/29 15:16:01 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
16/07/29 15:16:01 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
16/07/29 15:16:01 INFO namenode.FSNamesystem: fsOwner             = root (auth:SIMPLE)
16/07/29 15:16:01 INFO namenode.FSNamesystem: supergroup          = supergroup
16/07/29 15:16:01 INFO namenode.FSNamesystem: isPermissionEnabled = false
16/07/29 15:16:01 INFO namenode.FSNamesystem: Determined nameservice ID: mycluster
16/07/29 15:16:01 INFO namenode.FSNamesystem: HA Enabled: true
16/07/29 15:16:01 INFO namenode.FSNamesystem: Append Enabled: true
16/07/29 15:16:01 INFO util.GSet: Computing capacity for map INodeMap
16/07/29 15:16:01 INFO util.GSet: VM type       = 64-bit
16/07/29 15:16:01 INFO util.GSet: 1.0% max memory 958.5 MB = 9.6 MB
16/07/29 15:16:01 INFO util.GSet: capacity      = 2^20 = 1048576 entries
16/07/29 15:16:01 INFO namenode.FSDirectory: ACLs enabled? false
16/07/29 15:16:01 INFO namenode.FSDirectory: XAttrs enabled? true
16/07/29 15:16:01 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
16/07/29 15:16:01 INFO namenode.NameNode: Caching file names occuring more than 10 times
16/07/29 15:16:01 INFO util.GSet: Computing capacity for map cachedBlocks
16/07/29 15:16:01 INFO util.GSet: VM type       = 64-bit
16/07/29 15:16:01 INFO util.GSet: 0.25% max memory 958.5 MB = 2.4 MB
16/07/29 15:16:01 INFO util.GSet: capacity      = 2^18 = 262144 entries
16/07/29 15:16:01 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
16/07/29 15:16:01 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
16/07/29 15:16:01 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
16/07/29 15:16:01 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
16/07/29 15:16:01 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
16/07/29 15:16:01 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
16/07/29 15:16:01 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
16/07/29 15:16:01 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
16/07/29 15:16:01 INFO util.GSet: Computing capacity for map NameNodeRetryCache
16/07/29 15:16:01 INFO util.GSet: VM type       = 64-bit
16/07/29 15:16:01 INFO util.GSet: 0.029999999329447746% max memory 958.5 MB = 294.5 KB
16/07/29 15:16:01 INFO util.GSet: capacity      = 2^15 = 32768 entries
16/07/29 15:16:01 INFO common.Storage: Lock on /data/hadoop/dfs/namenode/in_use.lock acquired by nodename 58145@sda1
16/07/29 15:16:01 INFO namenode.FSImage: No edit log streams selected.
16/07/29 15:16:01 INFO namenode.FSImageFormatPBINode: Loading 34148 INodes.
16/07/29 15:16:02 INFO namenode.FSImageFormatProtobuf: Loaded FSImage in 0 seconds.
16/07/29 15:16:02 INFO namenode.FSImage: Loaded image for txid 831713 from /data/hadoop/dfs/namenode/current/fsimage_0000000000000831713
16/07/29 15:16:02 INFO namenode.FSNamesystem: Need to save fs image? false (staleImage=true, haEnabled=true, isRollingUpgrade=false)
16/07/29 15:16:02 INFO namenode.NameCache: initialized with 222 entries 30740 lookups
16/07/29 15:16:02 INFO namenode.FSNamesystem: Finished loading FSImage in 272 msecs
Re-format filesystem in QJM to [XXX.XXX.XXX.44:8485, XXX.XXX.XXX.31:8485, XXX.XXX.XXX.32:8485] ? (Y or N) Y
16/07/29 15:16:33 INFO namenode.FileJournalManager: Recovering unfinalized segments in /data/hadoop/dfs/namenode/current
16/07/29 15:16:33 INFO client.QuorumJournalManager: Starting recovery process for unclosed journal segments...
16/07/29 15:16:33 INFO client.QuorumJournalManager: Successfully started new epoch 1
16/07/29 15:16:34 INFO util.ExitUtil: Exiting with status 0
16/07/29 15:16:34 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at sda1/XXX.XXX.XXX.43
************************************************************/
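After the command exits with status 0 (as above), the JournalNodes that reported "not formatted" should now have a formatted journal directory. A quick check on each JournalNode host, assuming the path shown in the error log below (a VERSION file is created once the journal is formatted):

ls /data/hadoop/journal/data/mycluster/current/VERSION

The NameNode can then be started again with ./sbin/hadoop-daemon.sh start namenode.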

------------------------------ Error log ------------------------------
2016-06-12 15:02:34,760 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Start loading edits file http://so2:8480/getJournal?jid=mycluster&segmentTxId=185317&storageInfo=-63%3A1278801372%3A0%3ACID-c643ab86-79a1-481f-bd2e-f638d722ff4e, http://sda2:8480/getJournal?jid=mycluster&segmentTxId=185317&storageInfo=-63%3A1278801372%3A0%3ACID-c643ab86-79a1-481f-bd2e-f638d722ff4e
"hadoop-root-namenode-sda1.log" 587921L, 201223721C
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)

        at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
        at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:223)
        at org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.waitForWriteQuorum(AsyncLoggerSet.java:142)
        at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectInputStreams(QuorumJournalManager.java:471)
        at org.apache.hadoop.hdfs.server.namenode.JournalSet.selectInputStreams(JournalSet.java:278)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1508)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1532)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:214)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:331)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$200(EditLogTailer.java:284)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:301)
        at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:415)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:297)
2016-07-29 14:11:14,799 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for standby state
2016-07-29 14:11:14,799 WARN org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Edit log tailer interrupted
java.lang.InterruptedException: sleep interrupted
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:347)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$200(EditLogTailer.java:284)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:301)
        at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:415)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:297)
2016-07-29 14:11:14,800 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Starting services required for active state
2016-07-29 14:11:14,807 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Starting recovery process for unclosed journal segments...
2016-07-29 14:11:14,831 FATAL org.apache.hadoop.hdfs.server.namenode.FSEditLog: Error: recoverUnfinalizedSegments failed for required journal (JournalAndStream(mgr=QJM to [XXX.XXX.XXX.44:8485, XXX.XXX.XXX.31:8485, XXX.XXX.XXX.32:8485], stream=null))
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Got too many exceptions to achieve quorum size 2/3. 1 successful responses:
XXX.XXX.XXX.44:8485: lastPromisedEpoch: 22
httpPort: 8480
fromURL: "http://0.0.0.0:8480"

2 exceptions thrown:
XXX.XXX.XXX.32:8485: Journal Storage Directory /data/hadoop/journal/data/mycluster not formatted
        at org.apache.hadoop.hdfs.qjournal.server.Journal.checkFormatted(Journal.java:461)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.getLastPromisedEpoch(Journal.java:244)
        at org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.getJournalState(JournalNodeRpcServer.java:123)
        at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.getJournalState(QJournalProtocolServerSideTranslatorPB.java:118)
        at org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25415)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)

XXX.XXX.XXX.31:8485: Journal Storage Directory /data/hadoop/journal/data/mycluster not formatted
        at org.apache.hadoop.hdfs.qjournal.server.Journal.checkFormatted(Journal.java:461)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.getLastPromisedEpoch(Journal.java:244)
        at org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.getJournalState(JournalNodeRpcServer.java:123)
        at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.getJournalState(QJournalProtocolServerSideTranslatorPB.java:118)
        at org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25415)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)

        at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
        at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:223)