When starting the NameNode with ./sbin/hadoop-daemon.sh start namenode, it can fail to come up with the error shown below ("Journal Storage Directory /data/hadoop/journal/data/mycluster not formatted"). In that case, run "hdfs namenode -initializeSharedEdits" on the master server; it asks whether to re-format the filesystem in QJM, and answering Y resolves the problem.
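
A minimal recovery sketch, assuming an HDFS HA setup with QJM in which the JournalNode directories can safely be re-initialized from the current NameNode metadata (host layout, paths and the sbin location are examples, so adjust them to your installation):

# Run on the master (active) NameNode host.
# 1) Make sure the JournalNode daemon is running on every JournalNode host,
#    otherwise the quorum calls will keep failing:
./sbin/hadoop-daemon.sh start journalnode   # repeat on each JournalNode host

# 2) Re-initialize the shared edits (JournalNode) directories from the local
#    NameNode metadata; answer Y to "Re-format filesystem in QJM ... ?":
hdfs namenode -initializeSharedEdits

# 3) Start the NameNode again:
./sbin/hadoop-daemon.sh start namenode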

------log output when running the hdfs namenode -initializeSharedEdits command----------------
16/07/29 15:16:01 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
16/07/29 15:16:01 INFO namenode.NameNode: createNameNode [-initializeSharedEdits]
16/07/29 15:16:01 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/07/29 15:16:01 WARN namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
16/07/29 15:16:01 WARN namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of redundant storage directories!
16/07/29 15:16:01 INFO namenode.FSNamesystem: No KeyProvider found.
16/07/29 15:16:01 INFO namenode.FSNamesystem: fsLock is fair:true
16/07/29 15:16:01 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
16/07/29 15:16:01 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
16/07/29 15:16:01 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
16/07/29 15:16:01 INFO blockmanagement.BlockManager: The block deletion will start around 2016 7월 29 15:16:01
16/07/29 15:16:01 INFO util.GSet: Computing capacity for map BlocksMap
16/07/29 15:16:01 INFO util.GSet: VM type       = 64-bit
16/07/29 15:16:01 INFO util.GSet: 2.0% max memory 958.5 MB = 19.2 MB
16/07/29 15:16:01 INFO util.GSet: capacity      = 2^21 = 2097152 entries
16/07/29 15:16:01 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
16/07/29 15:16:01 INFO blockmanagement.BlockManager: defaultReplication         = 3
16/07/29 15:16:01 INFO blockmanagement.BlockManager: maxReplication             = 512
16/07/29 15:16:01 INFO blockmanagement.BlockManager: minReplication             = 1
16/07/29 15:16:01 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
16/07/29 15:16:01 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
16/07/29 15:16:01 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
16/07/29 15:16:01 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
16/07/29 15:16:01 INFO namenode.FSNamesystem: fsOwner             = root (auth:SIMPLE)
16/07/29 15:16:01 INFO namenode.FSNamesystem: supergroup          = supergroup
16/07/29 15:16:01 INFO namenode.FSNamesystem: isPermissionEnabled = false
16/07/29 15:16:01 INFO namenode.FSNamesystem: Determined nameservice ID: mycluster
16/07/29 15:16:01 INFO namenode.FSNamesystem: HA Enabled: true
16/07/29 15:16:01 INFO namenode.FSNamesystem: Append Enabled: true
16/07/29 15:16:01 INFO util.GSet: Computing capacity for map INodeMap
16/07/29 15:16:01 INFO util.GSet: VM type       = 64-bit
16/07/29 15:16:01 INFO util.GSet: 1.0% max memory 958.5 MB = 9.6 MB
16/07/29 15:16:01 INFO util.GSet: capacity      = 2^20 = 1048576 entries
16/07/29 15:16:01 INFO namenode.FSDirectory: ACLs enabled? false
16/07/29 15:16:01 INFO namenode.FSDirectory: XAttrs enabled? true
16/07/29 15:16:01 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
16/07/29 15:16:01 INFO namenode.NameNode: Caching file names occuring more than 10 times
16/07/29 15:16:01 INFO util.GSet: Computing capacity for map cachedBlocks
16/07/29 15:16:01 INFO util.GSet: VM type       = 64-bit
16/07/29 15:16:01 INFO util.GSet: 0.25% max memory 958.5 MB = 2.4 MB
16/07/29 15:16:01 INFO util.GSet: capacity      = 2^18 = 262144 entries
16/07/29 15:16:01 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
16/07/29 15:16:01 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
16/07/29 15:16:01 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
16/07/29 15:16:01 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
16/07/29 15:16:01 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
16/07/29 15:16:01 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
16/07/29 15:16:01 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
16/07/29 15:16:01 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
16/07/29 15:16:01 INFO util.GSet: Computing capacity for map NameNodeRetryCache
16/07/29 15:16:01 INFO util.GSet: VM type       = 64-bit
16/07/29 15:16:01 INFO util.GSet: 0.029999999329447746% max memory 958.5 MB = 294.5 KB
16/07/29 15:16:01 INFO util.GSet: capacity      = 2^15 = 32768 entries
16/07/29 15:16:01 INFO common.Storage: Lock on /data/hadoop/dfs/namenode/in_use.lock acquired by nodename 58145@sda1
16/07/29 15:16:01 INFO namenode.FSImage: No edit log streams selected.
16/07/29 15:16:01 INFO namenode.FSImageFormatPBINode: Loading 34148 INodes.
16/07/29 15:16:02 INFO namenode.FSImageFormatProtobuf: Loaded FSImage in 0 seconds.
16/07/29 15:16:02 INFO namenode.FSImage: Loaded image for txid 831713 from /data/hadoop/dfs/namenode/current/fsimage_0000000000000831713
16/07/29 15:16:02 INFO namenode.FSNamesystem: Need to save fs image? false (staleImage=true, haEnabled=true, isRollingUpgrade=false)
16/07/29 15:16:02 INFO namenode.NameCache: initialized with 222 entries 30740 lookups
16/07/29 15:16:02 INFO namenode.FSNamesystem: Finished loading FSImage in 272 msecs
Re-format filesystem in QJM to [XXX.XXX.XXX.44:8485, XXX.XXX.XXX.31:8485, XXX.XXX.XXX.32:8485] ? (Y or N) Y
16/07/29 15:16:33 INFO namenode.FileJournalManager: Recovering unfinalized segments in /data/hadoop/dfs/namenode/current
16/07/29 15:16:33 INFO client.QuorumJournalManager: Starting recovery process for unclosed journal segments...
16/07/29 15:16:33 INFO client.QuorumJournalManager: Successfully started new epoch 1
16/07/29 15:16:34 INFO util.ExitUtil: Exiting with status 0
16/07/29 15:16:34 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at sda1/XXX.XXX.XXX.43
************************************************************/
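
Note: the SHUTDOWN_MSG and exit status 0 at the end of this log are expected. -initializeSharedEdits is a one-shot command that formats the JournalNode directories and then exits; once it finishes, the NameNode can be started again with ./sbin/hadoop-daemon.sh start namenode.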

------------------------------error output (from hadoop-root-namenode-sda1.log)-------------------------------
2016-06-12 15:02:34,760 INFO org.apache.hadoop.hdfs.server.namenode.FSImage: Start loading edits file http://so2:8480/getJournal?jid=mycluster&segmentTxId=185317&storageInfo=-63%3A1278801372%3A0%3ACID-c643ab86-79a1-481f-bd2e-f638d722ff4e, http://sda2:8480/getJournal?jid=mycluster&segmentTxId=185317&storageInfo=-63%3A1278801372%3A0%3ACID-c643ab86-79a1-481f-bd2e-f638d722ff4e
"hadoop-root-namenode-sda1.log" 587921L, 201223721C
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)

        at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
        at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:223)
        at org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.waitForWriteQuorum(AsyncLoggerSet.java:142)
        at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.selectInputStreams(QuorumJournalManager.java:471)
        at org.apache.hadoop.hdfs.server.namenode.JournalSet.selectInputStreams(JournalSet.java:278)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1508)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1532)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.doTailEdits(EditLogTailer.java:214)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:331)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$200(EditLogTailer.java:284)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:301)
        at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:415)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:297)
2016-07-29 14:11:14,799 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Stopping services started for standby state
2016-07-29 14:11:14,799 WARN org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer: Edit log tailer interrupted
java.lang.InterruptedException: sleep interrupted
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:347)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$200(EditLogTailer.java:284)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:301)
        at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:415)
        at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:297)
2016-07-29 14:11:14,800 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Starting services required for active state
2016-07-29 14:11:14,807 INFO org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager: Starting recovery process for unclosed journal segments...
2016-07-29 14:11:14,831 FATAL org.apache.hadoop.hdfs.server.namenode.FSEditLog: Error: recoverUnfinalizedSegments failed for required journal (JournalAndStream(mgr=QJM to [XXX.XXX.XXX.44:8485, XXX.XXX.XXX.31:8485, XXX.XXX.XXX.32:8485], stream=null))
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Got too many exceptions to achieve quorum size 2/3. 1 successful responses:
XXX.XXX.XXX.44:8485: lastPromisedEpoch: 22
httpPort: 8480
fromURL: "http://0.0.0.0:8480"

2 exceptions thrown:
XXX.XXX.XXX.32:8485: Journal Storage Directory /data/hadoop/journal/data/mycluster not formatted
        at org.apache.hadoop.hdfs.qjournal.server.Journal.checkFormatted(Journal.java:461)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.getLastPromisedEpoch(Journal.java:244)
        at org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.getJournalState(JournalNodeRpcServer.java:123)
        at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.getJournalState(QJournalProtocolServerSideTranslatorPB.java:118)
        at org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25415)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)

XXX.XXX.XXX.31:8485: Journal Storage Directory /data/hadoop/journal/data/mycluster not formatted
        at org.apache.hadoop.hdfs.qjournal.server.Journal.checkFormatted(Journal.java:461)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.getLastPromisedEpoch(Journal.java:244)
        at org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.getJournalState(JournalNodeRpcServer.java:123)
        at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.getJournalState(QJournalProtocolServerSideTranslatorPB.java:118)
        at org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25415)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:969)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2049)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2045)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2043)

        at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
        at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:223)
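
Here two of the three JournalNodes report "Journal Storage Directory /data/hadoop/journal/data/mycluster not formatted", so only one successful response is available and the required write quorum of 2/3 cannot be reached. hdfs namenode -initializeSharedEdits re-formats those journal directories from the NameNode's local edit logs, which is why answering Y to the re-format prompt resolves the startup failure.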