
Bigdata, Semantic IoT, Hadoop, NoSQL

This is a place to organize what has been learned while working on Bigdata, Hadoop ecosystem, and Semantic IoT projects.
It is made public for anyone who needs it. For inquiries, please email gooper@gooper.com.


When the hostname (or similar identifying information) of a server handling HA in the Hadoop cluster has been changed, running "hadoop-daemon.sh start zkfc" can fail with the error shown below. The failure occurs because the data stored under the /hadoop-ha/mycluster znode in ZooKeeper still holds the old information and does not reflect the change.

In this case, run "hdfs zkfc -formatZK" to recreate the ZKFC information in ZooKeeper, as sketched below.


----------------- hdfs zkfc -formatZK execution log ----------------

16/07/29 19:30:43 INFO zookeeper.ZooKeeper: Client environment:java.library.path=:/svc/apps/sda/bin/hadoop/hadoop/lib/native

16/07/29 19:30:43 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp

16/07/29 19:30:43 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>

16/07/29 19:30:43 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux

16/07/29 19:30:43 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64

16/07/29 19:30:43 INFO zookeeper.ZooKeeper: Client environment:os.version=2.6.32-573.22.1.el6.x86_64

16/07/29 19:30:43 INFO zookeeper.ZooKeeper: Client environment:user.name=root

16/07/29 19:30:43 INFO zookeeper.ZooKeeper: Client environment:user.home=/root

16/07/29 19:30:43 INFO zookeeper.ZooKeeper: Client environment:user.dir=/home/gooper/svc/apps/sda/bin/hadoop/hadoop-2.7.2/bin

16/07/29 19:30:43 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=gsda1:2181,gsda2:2181,gsda3:2181 sessionTimeout=5000 watcher=org.apache.hadoop.ha.ActiveStandbyElector$WatcherWithClientRef@1e178745

16/07/29 19:30:43 INFO zookeeper.ClientCnxn: Opening socket connection to server sda2/XXX.XXX.XXX.44:2181. Will not attempt to authenticate using SASL (unknown error)

16/07/29 19:30:43 INFO zookeeper.ClientCnxn: Socket connection established to sda2/XXX.XXX.XXX.44:2181, initiating session

16/07/29 19:30:43 INFO zookeeper.ClientCnxn: Session establishment complete on server sda2/XXX.XXX.XXX.44:2181, sessionid = 0x25634f0fabb0300, negotiated timeout = 5000

16/07/29 19:30:43 INFO ha.ActiveStandbyElector: Session connected.

===============================================

The configured parent znode /hadoop-ha/mycluster already exists.

Are you sure you want to clear all failover information from

ZooKeeper?

WARNING: Before proceeding, ensure that all HDFS services and

failover controllers are stopped!

===============================================

Proceed formatting /hadoop-ha/mycluster? (Y or N) Y

16/07/29 19:36:25 INFO ha.ActiveStandbyElector: Recursively deleting /hadoop-ha/mycluster from ZK...

16/07/29 19:36:25 INFO ha.ActiveStandbyElector: Successfully deleted /hadoop-ha/mycluster from ZK.

16/07/29 19:36:26 INFO ha.ActiveStandbyElector: Successfully created /hadoop-ha/mycluster in ZK.

16/07/29 19:36:26 INFO zookeeper.ZooKeeper: Session: 0x25634f0fabb0300 closed

16/07/29 19:36:26 INFO zookeeper.ClientCnxn: EventThread shut down
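
After the format completes, the recreated znode can be checked with the ZooKeeper CLI. A short sketch, assuming zkCli.sh is on the PATH and using one of the quorum hosts from the connectString above:

# Connect to any ZooKeeper server listed in ha.zookeeper.quorum
zkCli.sh -server gsda1:2181

# Inside the CLI, list the HA parent znode and the cluster entry recreated by formatZK
ls /hadoop-ha
ls /hadoop-ha/mycluster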



------------------ Error log -------------

2016-07-29 18:33:32,857 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.library.path=/home/gooper/svc/apps/sda/bin/hadoop/hadoop-2.7.2/lib

2016-07-29 18:33:32,857 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp

2016-07-29 18:33:32,857 INFO org.apache.zookeeper.ZooKeeper: Client environment:java.compiler=<NA>

2016-07-29 18:33:32,857 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.name=Linux

2016-07-29 18:33:32,857 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.arch=amd64

2016-07-29 18:33:32,857 INFO org.apache.zookeeper.ZooKeeper: Client environment:os.version=2.6.32-573.12.1.el6.x86_64

2016-07-29 18:33:32,857 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.name=root

2016-07-29 18:33:32,857 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.home=/root

2016-07-29 18:33:32,857 INFO org.apache.zookeeper.ZooKeeper: Client environment:user.dir=/home/gooper/svc/apps/sda/bin/hadoop/hadoop-2.7.2

2016-07-29 18:33:32,858 INFO org.apache.zookeeper.ZooKeeper: Initiating client connection, connectString=gsda1:2181,gsda2:2181,gsda3:2181 sessionTimeout=5000 watcher=org.apache.hadoop.ha.ActiveStandbyElector$WatcherWithClientRef@52bf72b5

2016-07-29 18:33:32,936 FATAL org.apache.hadoop.hdfs.tools.DFSZKFailoverController: Got a fatal error, exiting now

java.net.UnknownHostException: gsda3: unknown error

        at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)

        at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)

        at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)

        at java.net.InetAddress.getAllByName0(InetAddress.java:1276)

        at java.net.InetAddress.getAllByName(InetAddress.java:1192)

        at java.net.InetAddress.getAllByName(InetAddress.java:1126)

        at org.apache.zookeeper.client.StaticHostProvider.<init>(StaticHostProvider.java:61)

        at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:445)

        at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:380)

        at org.apache.hadoop.ha.ActiveStandbyElector.getNewZooKeeper(ActiveStandbyElector.java:631)

        at org.apache.hadoop.ha.ActiveStandbyElector.createConnection(ActiveStandbyElector.java:775)

        at org.apache.hadoop.ha.ActiveStandbyElector.<init>(ActiveStandbyElector.java:229)

        at org.apache.hadoop.ha.ZKFailoverController.initZK(ZKFailoverController.java:350)

        at org.apache.hadoop.ha.ZKFailoverController.doRun(ZKFailoverController.java:191)

        at org.apache.hadoop.ha.ZKFailoverController.access$000(ZKFailoverController.java:61)

        at org.apache.hadoop.ha.ZKFailoverController$1.run(ZKFailoverController.java:172)

        at org.apache.hadoop.ha.ZKFailoverController$1.run(ZKFailoverController.java:168)

        at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:415)

        at org.apache.hadoop.ha.ZKFailoverController.run(ZKFailoverController.java:168)

        at org.apache.hadoop.hdfs.tools.DFSZKFailoverController.main(DFSZKFailoverController.java:181)
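
The fatal error here is java.net.UnknownHostException for gsda3, i.e. ZKFC could not resolve one of the ZooKeeper hosts in its connectString. Before (or in addition to) reformatting, it is worth confirming that every host in ha.zookeeper.quorum resolves on the node where ZKFC is started. A sketch, using the host names from the log above; the /etc/hosts entry is a placeholder:

# Check that each ZooKeeper quorum host resolves via /etc/hosts or DNS
getent hosts gsda1 gsda2 gsda3

# If a host does not resolve, add or fix its entry in /etc/hosts on every node, e.g.:
#   <new-ip-address>   gsda3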
