
Bigdata, Semantic IoT, Hadoop, NoSQL

A place to organize what I have learned while working on Bigdata, Hadoop ecosystem, and Semantic IoT projects, shared publicly for anyone who needs it. For inquiries, please send an email to gooper@gooper.com.


When running ./sbin/hadoop-daemon.sh start namenode, the NameNode may fail to start with the error shown further below ("Journal Storage Directory /data/hadoop/journal/data/mycluster not formatted"). In that case, run "hdfs namenode -initializeSharedEdits" on the master server; it will ask whether to re-format the filesystem in QJM, and answering Y resolves the issue. A minimal sketch of the commands follows.
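
A minimal sketch of the recovery steps, assuming an HDFS HA setup with QJM and that the commands are run on the master NameNode host ($HADOOP_HOME and the daemon script location are assumptions based on a typical installation):

$HADOOP_HOME/sbin/hadoop-daemon.sh stop namenode     # stop the NameNode if it is still running
hdfs namenode -initializeSharedEdits                 # re-initialize the shared edits directories on the JournalNodes
# answer Y at the prompt: Re-format filesystem in QJM to [...] ? (Y or N)
$HADOOP_HOME/sbin/hadoop-daemon.sh start namenode    # start the NameNode again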

------log output when running the "hdfs namenode -initializeSharedEdits" command----------------
16/07/29 15:16:01 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
16/07/29 15:16:01 INFO namenode.NameNode: createNameNode [-initializeSharedEdits]
16/07/29 15:16:01 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/07/29 15:16:01 WARN namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
16/07/29 15:16:01 WARN namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of redundant storage directories!
16/07/29 15:16:01 INFO namenode.FSNamesystem: No KeyProvider found.
16/07/29 15:16:01 INFO namenode.FSNamesystem: fsLock is fair:true
16/07/29 15:16:01 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
16/07/29 15:16:01 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
16/07/29 15:16:01 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
16/07/29 15:16:01 INFO blockmanagement.BlockManager: The block deletion will start around 2016 7월 29 15:16:01
16/07/29 15:16:01 INFO util.GSet: Computing capacity for map BlocksMap
16/07/29 15:16:01 INFO util.GSet: VM type       = 64-bit
16/07/29 15:16:01 INFO util.GSet: 2.0% max memory 958.5 MB = 19.2 MB
16/07/29 15:16:01 INFO util.GSet: capacity      = 2^21 = 2097152 entries
16/07/29 15:16:01 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
16/07/29 15:16:01 INFO blockmanagement.BlockManager: defaultReplication         = 3
16/07/29 15:16:01 INFO blockmanagement.BlockManager: maxReplication             = 512
16/07/29 15:16:01 INFO blockmanagement.BlockManager: minReplication             = 1
16/07/29 15:16:01 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
16/07/29 15:16:01 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
16/07/29 15:16:01 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
16/07/29 15:16:01 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
16/07/29 15:16:01 INFO namenode.FSNamesystem: fsOwner             = root (auth:SIMPLE)
16/07/29 15:16:01 INFO namenode.FSNamesystem: supergroup          = supergroup
16/07/29 15:16:01 INFO namenode.FSNamesystem: isPermissionEnabled = false
16/07/29 15:16:01 INFO namenode.FSNamesystem: Determined nameservice ID: mycluster
16/07/29 15:16:01 INFO namenode.FSNamesystem: HA Enabled: true
16/07/29 15:16:01 INFO namenode.FSNamesystem: Append Enabled: true
16/07/29 15:16:01 INFO util.GSet: Computing capacity for map INodeMap
16/07/29 15:16:01 INFO util.GSet: VM type       = 64-bit
16/07/29 15:16:01 INFO util.GSet: 1.0% max memory 958.5 MB = 9.6 MB
16/07/29 15:16:01 INFO util.GSet: capacity      = 2^20 = 1048576 entries
16/07/29 15:16:01 INFO namenode.FSDirectory: ACLs enabled? false
16/07/29 15:16:01 INFO namenode.FSDirectory: XAttrs enabled? true
16/07/29 15:16:01 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
16/07/29 15:16:01 INFO namenode.NameNode: Caching file names occuring more than 10 times
16/07/29 15:16:01 INFO util.GSet: Computing capacity for map cachedBlocks
16/07/29 15:16:01 INFO util.GSet: VM type       = 64-bit
16/07/29 15:16:01 INFO util.GSet: 0.25% max memory 958.5 MB = 2.4 MB
16/07/29 15:16:01 INFO util.GSet: capacity      = 2^18 = 262144 entries
16/07/29 15:16:01 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
16/07/29 15:16:01 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
16/07/29 15:16:01 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
16/07/29 15:16:01 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
16/07/29 15:16:01 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
16/07/29 15:16:01 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
16/07/29 15:16:01 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
16/07/29 15:16:01 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
16/07/29 15:16:01 INFO util.GSet: Computing capacity for map NameNodeRetryCache
16/07/29 15:16:01 INFO util.GSet: VM type       = 64-bit
16/07/29 15:16:01 INFO util.GSet: 0.029999999329447746% max memory 958.5 MB = 294.5 KB
16/07/29 15:16:01 INFO util.GSet: capacity      = 2^15 = 32768 entries
16/07/29 15:16:01 INFO common.Storage: Lock on /data/hadoop/dfs/namenode/in_use.lock acquired by nodename 58145@sda1
16/07/29 15:16:01 INFO namenode.FSImage: No edit log streams selected.
16/07/29 15:16:01 INFO namenode.FSImageFormatPBINode: Loading 34148 INodes.
16/07/29 15:16:02 INFO namenode.FSImageFormatProtobuf: Loaded FSImage in 0 seconds.
16/07/29 15:16:02 INFO namenode.FSImage: Loaded image for txid 831713 from /data/hadoop/dfs/namenode/current/fsimage_0000000000000831713
16/07/29 15:16:02 INFO namenode.FSNamesystem: Need to save fs image? false (staleImage=true, haEnabled=true, isRollingUpgrade=false)
16/07/29 15:16:02 INFO namenode.NameCache: initialized with 222 entries 30740 lookups
16/07/29 15:16:02 INFO namenode.FSNamesystem: Finished loading FSImage in 272 msecs
Re-format filesystem in QJM to [XXX.XXX.XXX.44:8485, XXX.XXX.XXX.31:8485, XXX.XXX.XXX.32:8485] ? (Y or N) Y
16/07/29 15:16:33 INFO namenode.FileJournalManager: Recovering unfinalized segments in /data/hadoop/dfs/namenode/current
16/07/29 15:16:33 INFO client.QuorumJournalManager: Starting recovery process for unclosed journal segments...
16/07/29 15:16:33 INFO client.QuorumJournalManager: Successfully started new epoch 1
16/07/29 15:16:34 INFO util.ExitUtil: Exiting with status 0
16/07/29 15:16:34 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at sda1/XXX.XXX.XXX.43
************************************************************/
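
Once the command exits with status 0 as in the log above, the shared edits storage on the JournalNodes has been re-formatted and the NameNode can be started again. A hedged verification sketch (the NameNode ID nn1 and the journal directory path are assumptions; check dfs.ha.namenodes.mycluster and dfs.journalnode.edits.dir in your hdfs-site.xml):

$HADOOP_HOME/sbin/hadoop-daemon.sh start namenode   # on the master NameNode host
hdfs haadmin -getServiceState nn1                   # confirm the NameNode's HA state (active/standby)
ls /data/hadoop/journal/data/mycluster/current      # on each JournalNode: should now contain VERSION and edit log segments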

------------------------------error details-------------------------------
2016-06-12 15:02:34,760 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Start loading edits file http://so2:8480/getJournal?jid=mycluster&segmentTxId=185317&storageInfo=-63%3A1278801372%3A0%3ACID-c643ab86-79a1-481f-bd2e-f638d722ff4e, http://sda2:8480/getJournal?jid=mycluster&segmentTxId=185317&storageInfo=-63%3A1278801372%3A0%3ACID-c643ab86-79a1-481f-bd2e-f638d722ff4e
"hadoop-root-namenode-sda1.log" 587921L, 201223721C
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)

        at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
        at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:223)
        at org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.waitForWriteQuorum(AsyncLoggerSet.java:142)
        at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectInputStreams(QuorumJournalManager.java:471)
        at org.apache.hadoop.hdfs.server.namenode.JournalSet.selectInputStreams(JournalSet.java:278)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1508)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1532)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:214)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:331)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$200(EditLogTailer.java:284)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:301)
        at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:415)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:297)
2016-07-29 14:11:14,799 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for standby state
2016-07-29 14:11:14,799 WARN org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Edit log tailer interrupted
java.lang.InterruptedException: sleep interrupted
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:347)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$200(EditLogTailer.java:284)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:301)
        at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:415)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:297)
2016-07-29 14:11:14,800 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Starting services required for active state
2016-07-29 14:11:14,807 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Starting recovery process for unclosed journal segments...
2016-07-29 14:11:14,831 FATAL org.apache.hadoop.hdfs.server.namenode.FSEditLog: Error: recoverUnfinalizedSegments failed for required journal (JournalAndStream(mgr=QJM to [XXX.XXX.XXX.44:8485, XXX.XXX.XXX.31:8485, XXX.XXX.XXX.32:8485], stream=null))
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Got too many exceptions to achieve quorum size 2/3. 1 successful responses:
XXX.XXX.XXX.44:8485: lastPromisedEpoch: 22
httpPort: 8480
fromURL: "http://0.0.0.0:8480"

2 exceptions thrown:
XXX.XXX.XXX.32:8485: Journal Storage Directory /data/hadoop/journal/data/mycluster not formatted
        at org.apache.hadoop.hdfs.qjournal.server.Journal.checkFormatted(Journal.java:461)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.getLastPromisedEpoch(Journal.java:244)
        at org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.getJournalState(JournalNodeRpcServer.java:123)
        at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.getJournalState(QJournalProtocolServerSideTranslatorPB.java:118)
        at org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25415)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)

XXX.XXX.XXX.31:8485: Journal Storage Directory /data/hadoop/journal/data/mycluster not formatted
        at org.apache.hadoop.hdfs.qjournal.server.Journal.checkFormatted(Journal.java:461)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.getLastPromisedEpoch(Journal.java:244)
        at org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.getJournalState(JournalNodeRpcServer.java:123)
        at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.getJournalState(QJournalProtocolServerSideTranslatorPB.java:118)
        at org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25415)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)

        at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
        at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:223)
