

If creating a collection fails with "Core with name 'xx_shard4_replica1' already exists" as shown below, check bin/solr.in.sh and make sure solr.data.dir is not set explicitly, and do not pass any solr.data.dir option when starting the Solr instance with bin/solr, so that each core's data directory is created at its default location. When solr.data.dir is pinned to a single fixed path, every core of the collection tries to use the same index directory, so a "/user/root/solr/data/index/write.lock for client XXX.XXX.XXX.XXX already exists" exception is thrown as shown below and the collection is not created.
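
A rough sketch of the fix is shown below. The ZooKeeper hosts are taken from the log further down; the exact way solr.data.dir gets set in bin/solr.in.sh (e.g. a -Dsolr.data.dir entry in SOLR_OPTS) is only an assumption and may differ by Solr version, so adjust it to your installation.

------------------------------------Fix (example)---------------------------

# In bin/solr.in.sh, comment out any line that pins solr.data.dir to a fixed path, e.g.:
#   SOLR_OPTS="$SOLR_OPTS -Dsolr.data.dir=/user/root/solr/data"    (illustrative)

# Restart the Solr instance in SolrCloud mode without passing any solr.data.dir option:
./solr stop -all
./solr start -c -z gsda1:2181,gsda2:2181,gsda3:2181

# Re-create the collection; each core now gets its own default data directory:
./solr create -c gc -shards 4 -replicationFactor 2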



------------------------------------Error details---------------------------

root@gsda1:~/solr/bin# ./solr create -c gc -shards 4 -replicationFactor 2

Connecting to ZooKeeper at gsda1:2181,gsda2:2181,gsda3:2181 ...
Re-using existing configuration directory gc

Creating new collection 'gc' using command:
http://localhost:8983/solr/admin/collections?action=CREATE&name=gc&numShards=4&replicationFactor=2&maxShardsPerNode=2&collection.configName=gc


ERROR: Failed to create collection 'gc' due to: {XXX.XXX.XXX.XXX:8983_solr=org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error from server at http://XXX.XXX.XXX.XXX:8983/solr: Error CREATEing SolrCore 'gc_shard4_replica1': Unable to create core [gc_shard4_replica1] Caused by: /user/root/solr/data/index/write.lock for client XXX.XXX.XXX.XXX already exists
        at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.startFile(FSDirWriteFileOp.java:359)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2234)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2154)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:727)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:409)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:845)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:788)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2455)
, XXX.XXX.XXX.XXX:8983_solr=org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error from server at http://XXX.XXX.XXX.XXX:8983/solr: Error CREATEing SolrCore 'gc_shard3_replica1': Unable to create core [gc_shard3_replica1] Caused by: /user/root/solr/data/index/write.lock for client XXX.XXX.XXX.XXX already exists
        at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.startFile(FSDirWriteFileOp.java:359)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2234)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2154)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:727)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:409)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:845)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:788)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2455)
  XXX.XXX.XXX.XXX:8983_solr=org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error from server at http://XXX.XXX.XXX.XXX:8983/solr: Error CREATEing SolrCore 'gc_shard3_replica2': Unable to create core [gc_shard3_replica2] Caused by: /user/root/solr/data/index/write.lock for client XXX.XXX.XXX.XXX already exists
        at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.startFile(FSDirWriteFileOp.java:359)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2234)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2154)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:727)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:409)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:845)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:788)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2455)
, XXX.XXX.XXX.XXX:8983_solr=org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:Error from server at http://XXX.XXX.XXX.XXX:8983/solr: Error CREATEing SolrCore 'gc_shard4_replica2': Unable to create core [gc_shard4_replica2] Caused by: /user/root/solr/data/index/write.lock for client XXX.XXX.XXX.XXX already exists
        at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.startFile(FSDirWriteFileOp.java:359)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInt(FSNamesystem.java:2234)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:2154)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:727)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:409)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:845)
        at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:788)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2455)
}
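
If a failed attempt has already left a partial collection or a stale write.lock behind on HDFS, cleaning those up before retrying can also help. A minimal sketch follows; the lock path is the one reported in the error above, so verify it before deleting anything.

# Remove the partially created collection, if any:
./solr delete -c gc

# Remove the stale lock file reported in the error (verify the path first):
hdfs dfs -rm /user/root/solr/data/index/write.lock

# Retry the collection creation:
./solr create -c gc -shards 4 -replicationFactor 2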
