When spark-sql starts, Spark's bundled Hive 0.13.1 fails to parse time values that carry a unit suffix and aborts with java.lang.NumberFormatException: For input string: "2000ms" (full log below). To fix this, remove the "ms" suffix from the following three <value> entries in hive-site.xml.

  <property>
    <name>hive.hmshandler.retry.interval</name>
    <value>2000ms</value>
    <description>
      Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified.
      The time between HMSHandler retry attempts on failure.
    </description>
  </property>

  <property>
    <name>hive.llap.am.liveness.connection.sleep.between.retries.ms</name>
    <value>2000ms</value>
    <description>
      Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified.
      Sleep duration while waiting to retry connection failures to the AM from the daemon for
      the general keep-alive thread (milliseconds).
    </description>
  </property>

  <property>
    <name>hive.llap.task.communicator.connection.sleep.between.retries.ms</name>
    <value>2000ms</value>
    <description>
      Expects a time value with unit (d/day, h/hour, m/min, s/sec, ms/msec, us/usec, ns/nsec), which is msec if not specified.
      Sleep duration (in milliseconds) to wait before retrying on error when obtaining a
      connection to LLAP daemon from Tez AM.
    </description>
  </property>
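
For reference, the corrected entries would read as follows once the suffix is removed (descriptions omitted; only the <value> text changes):

  <property>
    <name>hive.hmshandler.retry.interval</name>
    <value>2000</value>
  </property>

  <property>
    <name>hive.llap.am.liveness.connection.sleep.between.retries.ms</name>
    <value>2000</value>
  </property>

  <property>
    <name>hive.llap.task.communicator.connection.sleep.between.retries.ms</name>
    <value>2000</value>
  </property>

Note that the last two property names happen to end in ".ms"; only the value strings need editing, not the names.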




--------------------------------- Error details -----------------------------------------

16/06/09 13:57:38 INFO SparkDeploySchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
SET spark.sql.hive.version=0.13.1
16/06/09 13:57:39 ERROR log: Got exception: java.lang.NumberFormatException For input string: "2000ms"
java.lang.NumberFormatException: For input string: "2000ms"
        at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
        at java.lang.Integer.parseInt(Integer.java:580)
        at java.lang.Integer.parseInt(Integer.java:615)
        at org.apache.hadoop.conf.Configuration.getInt(Configuration.java:1134)
        at org.apache.hadoop.hive.conf.HiveConf.getIntVar(HiveConf.java:1211)
        at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:87)
        at com.sun.proxy.$Proxy6.get_all_databases(Unknown Source)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getAllDatabases(HiveMetaStoreClient.java:837)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:497)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:89)
        at com.sun.proxy.$Proxy7.getAllDatabases(Unknown Source)
        at org.apache.hadoop.hive.ql.metadata.Hive.getAllDatabases(Hive.java:1098)
        at org.apache.hadoop.hive.ql.exec.FunctionRegistry.getFunctionNames(FunctionRegistry.java:671)
        at org.apache.hadoop.hive.ql.exec.FunctionRegistry.getFunctionNames(FunctionRegistry.java:662)
        at org.apache.hadoop.hive.cli.CliDriver.getCommandCompletor(CliDriver.java:540)
        at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:168)
        at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:497)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:569)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:166)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
16/06/09 13:57:39 ERROR log: Converting exception to MetaException
16/06/09 13:57:39 ERROR FunctionRegistry: org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:Got exception: java.lang.NumberFormatException For input string: "2000ms")
spark-sql> 16/06/09 13:57:39 INFO SparkDeploySchedulerBackend: Registered executor: Actor[akka.tcp://sparkExecutor@sda2:47136/user/Executor#-853352827] with ID 2
16/06/09 13:57:39 INFO SparkDeploySchedulerBackend: Registered executor: Actor[akka.tcp://sparkExecutor@so2:35213/user/Executor#-894168911] with ID 0
16/06/09 13:57:39 INFO BlockManagerMasterActor: Registering block manager sda2:58779 with 265.1 MB RAM, BlockManagerId(2, sda2, 58779)
16/06/09 13:57:39 INFO BlockManagerMasterActor: Registering block manager so2:33469 with 265.1 MB RAM, BlockManagerId(0, so2, 33469)
16/06/09 13:57:41 INFO SparkDeploySchedulerBackend: Registered executor: Actor[akka.tcp://sparkExecutor@so-db2:35159/user/Executor#-473750792] with ID 1
16/06/09 13:57:41 INFO BlockManagerMasterActor: Registering block manager so-db2:47990 with 265.1 MB RAM, BlockManagerId(1, so-db2, 47990)
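
The stack trace pinpoints the cause: Spark's bundled Hive 0.13.1 reads these settings through org.apache.hadoop.conf.Configuration.getInt(), which hands the raw property string to Integer.parseInt(), and Integer.parseInt() accepts digits only. Later Hive versions understand the time-unit suffixes listed in the descriptions above, which is presumably why the default ships as "2000ms". A minimal, self-contained Java sketch of the failure mode (the class name is illustrative, not part of Hive or Spark):

// Reproduces the parse failure seen in the log: Configuration.getInt()
// ultimately calls Integer.parseInt() on the raw property string, so a
// unit suffix such as "ms" is rejected.
public class UnitSuffixRepro {
    public static void main(String[] args) {
        String configured = "2000ms";  // value as written in hive-site.xml
        try {
            System.out.println(Integer.parseInt(configured));
        } catch (NumberFormatException e) {
            // Prints: For input string: "2000ms" -- the same message as the log above
            System.out.println("NumberFormatException: " + e.getMessage());
        }
        // With the suffix removed, as described above, parsing succeeds:
        System.out.println(Integer.parseInt("2000"));  // 2000
    }
}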
