Bigdata, Semantic IoT, Hadoop, NoSQL

This is a place to organize what has been learned while working on projects involving Bigdata, the Hadoop ecosystem, and Semantic IoT. The notes are made public for anyone who needs them. For inquiries, please email gooper@gooper.com.


When consuming Kafka data in Spark Streaming with KafkaUtils.createStream, setting the storage level to StorageLevel.MEMORY_ONLY() can produce errors of the form "Could not compute split, block input-0-1517397051800 not found". With MEMORY_ONLY, the blocks received from Kafka live only in executor memory, so when Spark comes under memory pressure it evicts (discards) them; any task that later needs a discarded block to compute its split fails with this error.

The fix is to change StorageLevel.MEMORY_ONLY() to StorageLevel.MEMORY_AND_DISK_SER(), so that blocks evicted from memory are spilled to disk in serialized form instead of being dropped.



------------- Source code excerpt -----

JavaPairReceiverInputDStream<byte[], byte[]> kafkaStream = KafkaUtils.createStream(
        jssc, byte[].class, byte[].class,
        kafka.serializer.DefaultDecoder.class, kafka.serializer.DefaultDecoder.class,
        conf, topic, StorageLevel.MEMORY_AND_DISK_SER());
JavaDStream<byte[]> lines = kafkaStream.map(tuple2 -> tuple2._2());
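
The excerpt above relies on jssc, conf, and topic being defined elsewhere in the job. For reference, here is a minimal self-contained sketch of the same setup; the ZooKeeper address, group id, and topic name below are placeholders, not values from the original source.

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

import org.apache.spark.SparkConf;
import org.apache.spark.storage.StorageLevel;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;

public class KafkaStorageLevelSketch {
    public static void main(String[] args) throws InterruptedException {
        // local[2]: a receiver occupies one core, so local mode needs at least two
        SparkConf sparkConf = new SparkConf().setMaster("local[2]").setAppName("KafkaStorageLevelSketch");
        JavaStreamingContext jssc = new JavaStreamingContext(sparkConf, Durations.seconds(10));

        // Receiver-based (Kafka 0.8) consumer configuration; all values are placeholders
        Map<String, String> kafkaParams = new HashMap<>();
        kafkaParams.put("zookeeper.connect", "localhost:2181");
        kafkaParams.put("group.id", "example-group");
        Map<String, Integer> topic = Collections.singletonMap("example-topic", 1);

        // MEMORY_AND_DISK_SER spills evicted blocks to disk instead of dropping them,
        // avoiding "Could not compute split, block ... not found" under memory pressure
        JavaPairReceiverInputDStream<byte[], byte[]> kafkaStream = KafkaUtils.createStream(
                jssc, byte[].class, byte[].class,
                kafka.serializer.DefaultDecoder.class, kafka.serializer.DefaultDecoder.class,
                kafkaParams, topic, StorageLevel.MEMORY_AND_DISK_SER());

        // Keep only the message value (the second element of each key/value tuple)
        JavaDStream<byte[]> lines = kafkaStream.map(tuple2 -> tuple2._2());
        lines.count().print();

        jssc.start();
        jssc.awaitTermination();
    }
}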


----------------------------------- Error message ------------------

[2018-01-31 20:17:26,404] [internal.Logging$class] [logError(#70)] [ERROR] Task 0 in stage 1020.0 failed 1 times; aborting job
[2018-01-31 20:17:26,404] [internal.Logging$class] [logError(#91)] [ERROR] Error running job streaming job 1517397060000 ms.0
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1020.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1020.0 (TID 1020, localhost, executor driver): java.lang.Exception: Could not compute split, block input-0-1517397051800 not found
        at org.apache.spark.rdd.BlockRDD.compute(BlockRDD.scala:50)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:99)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)

Driver stacktrace:
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1435)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1423)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1422)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
        at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1422)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:802)
        at scala.Option.foreach(Option.scala:257)
        at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:802)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1650)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1605)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1594)
        at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
        at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:628)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1925)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1938)
        at org.apache.spark.SparkContext.runJob(SparkContext.scala:1951)
        at org.apache.spark.rdd.RDD$$anonfun$take$1.apply(RDD.scala:1354)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
        at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
        at org.apache.spark.rdd.RDD.take(RDD.scala:1327)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$print$2$$anonfun$foreachFunc$3$1.apply(DStream.scala:734)
        at org.apache.spark.streaming.dstream.DStream$$anonfun$print$2$$anonfun$foreachFunc$3$1.apply(DStream.scala:733)
        at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(ForEachDStream.scala:51)
        at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:51)
        at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1$$anonfun$apply$mcV$sp$1.apply(ForEachDStream.scala:51)
        at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:415)
        at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply$mcV$sp(ForEachDStream.scala:50)
        at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:50)
        at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$1.apply(ForEachDStream.scala:50)
        at scala.util.Try$.apply(Try.scala:192)
        at org.apache.spark.streaming.scheduler.Job.run(Job.scala:39)
        at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply$mcV$sp(JobScheduler.scala:256)
        at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:256)
        at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$1.apply(JobScheduler.scala:256)
        at scala.util.DynamicVariable.withValue(DynamicVariable.scala:58)
        at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:255)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.Exception: Could not compute split, block input-0-1517397051800 not found
        at org.apache.spark.rdd.BlockRDD.compute(BlockRDD.scala:50)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:99)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
        ... 3 more
[2018-01-31 20:17:26,415] [onem2m.AvroOneM2MDataSparkSubscribe$ConsumerT] [go(#142)] [DEBUG] count data from kafka broker stream in AvroOneM2MDataSparkSubscribe: 39981
[2018-01-31 20:17:29,039] [sf.QueryServiceFactory] [create(#28)] [DEBUG] query gubun : FUSEKISPARQL
[2018-01-31 20:17:29,040] [sf.QueryCommon] [makeFinal(#44)] [DEBUG] Count : 0 , Vals : [] 
[2018-01-31 20:17:29,040] [sf.SparqlFusekiQueryImpl] [runModifySparql(#162)] [DEBUG] runModifySparql() on DatWarehouse server start.................................. 
[2018-01-31 20:17:29,040] [sf.SparqlFusekiQueryImpl] [runModifySparql(#165)] [DEBUG] try (first).................................. 
[2018-01-31 20:17:29,042] [sf.SparqlFusekiQueryImpl] [runModifySparql(#207)] [DEBUG] runModifySparql() on DataWarehouse server end.................................. 
[2018-01-31 20:17:29,042] [sf.SparqlFusekiQueryImpl] [runModifySparql(#212)] [DEBUG] runModifySparql() on DataMart server start.................................. 
[2018-01-31 20:17:29,043] [sf.SparqlFusekiQueryImpl] [runModifySparql(#224)] [DEBUG] runModifySparql() on DataMart server end.................................. 
[2018-01-31 20:17:29,044] [sf.QueryCommon] [makeFinal(#44)] [DEBUG] Count : 0 , Vals : [] 
[2018-01-31 20:17:29,044] [sf.SparqlFusekiQueryImpl] [runModifySparql(#162)] [DEBUG] runModifySparql() on DatWarehouse server start.................................. 
[2018-01-31 20:17:29,044] [sf.SparqlFusekiQueryImpl] [runModifySparql(#165)] [DEBUG] try (first).................................. 
[2018-01-31 20:17:29,045] [sf.SparqlFusekiQueryImpl] [runModifySparql(#207)] [DEBUG] runModifySparql() on DataWarehouse server end.................................. 
[2018-01-31 20:17:29,045] [sf.SparqlFusekiQueryImpl] [runModifySparql(#212)] [DEBUG] runModifySparql() on DataMart server start.................................. 
[2018-01-31 20:17:29,046] [sf.SparqlFusekiQueryImpl] [runModifySparql(#224)] [DEBUG] runModifySparql() on DataMart server end.................................. 
[2018-01-31 20:17:29,047] [sf.QueryCommon] [makeFinal(#44)] [DEBUG] Count : 0 , Vals : [] 
[2018-01-31 20:17:29,047] [sf.SparqlFusekiQueryImpl] [runModifySparql(#212)] [DEBUG] runModifySparql() on DataMart server start.................................. 
[2018-01-31 20:17:29,049] [sf.SparqlFusekiQueryImpl] [runModifySparql(#224)] [DEBUG] runModifySparql() on DataMart server end.................................. 
[2018-01-31 20:17:29,049] [sf.TripleService] [makeTripleFile(#333)] [INFO] makeTripleFile start==========================>
[2018-01-31 20:17:29,049] [sf.TripleService] [makeTripleFile(#334)] [DEBUG] makeTripleFile ========triple_path_file=================>/svc/apps/sda/triples/20180131/AvroOneM2MDataSparkSubscribe_TT20180131T201100S0000000991_WRK20180131T201729.nt
[2018-01-31 20:17:29,170] [sf.TripleService] [makeTripleFile(#346)] [INFO] makeTripleFile end==========================>
[2018-01-31 20:17:29,170] [onem2m.AvroOneM2MDataSparkSubscribe] [sendTriples(#288)] [INFO] Sending triples in com.pineone.icbms.sda.kafka.onem2m.AvroOneM2MDataSparkSubscribe to DW start.......................
[2018-01-31 20:17:29,170] [sf.TripleService] [sendTripleFileToDW(#382)] [INFO] sendTripleFile to DW start==========================>
[2018-01-31 20:17:29,170] [sf.TripleService] [sendTripleFileToDW(#383)] [DEBUG] sendTripleFile ==============triple_path_file============>/svc/apps/sda/triples/20180131/AvroOneM2MDataSparkSubscribe_TT20180131T201100S0000000991_WRK20180131T201729.nt
[2018-01-31 20:17:29,171] [sf.TripleService] [sendTripleFileToDW(#396)] [DEBUG] sendTripleFile ==============args============>/svc/apps/sda/bin/fuseki/bin/s-post http://166.104.112.43:23030/icbms default /svc/apps/sda/triples/20180131/AvroOneM2MDataSparkSubscribe_TT20180131T201100S0000000991_WRK20180131T201729.nt 
[2018-01-31 20:17:29,171] [sf.TripleService] [sendTripleFileToDW(#399)] [DEBUG] try (first).......................
[2018-01-31 20:17:36,950] [util.Utils] [runShell(#737)] [DEBUG] Thread stdMsgT Status : TERMINATED
[2018-01-31 20:17:36,951] [util.Utils] [runShell(#738)] [DEBUG] Thread errMsgT Status : TERMINATED
[2018-01-31 20:17:36,951] [util.Utils] [runShell(#743)] [DEBUG] notTimeOver ==========================>true
[2018-01-31 20:17:36,951] [sf.TripleService] [sendTripleFileToDW(#402)] [DEBUG] resultStr in TripleService.sendTripleFileToDW() == > [, ]
[2018-01-31 20:17:36,951] [sf.TripleService] [sendTripleFileToDW(#433)] [INFO] sendTripleFile to DW  end==========================>
[2018-01-31 20:17:36,951] [onem2m.AvroOneM2MDataSparkSubscribe] [sendTriples(#290)] [INFO] Sending triples in com.pineone.icbms.sda.kafka.onem2m.AvroOneM2MDataSparkSubscribe to DW end.......................
[2018-01-31 20:17:36,951] [onem2m.AvroOneM2MDataSparkSubscribe] [sendTriples(#293)] [INFO] Sending triples in com.pineone.icbms.sda.kafka.onem2m.AvroOneM2MDataSparkSubscribe to Halyard start.......................
[2018-01-31 20:17:36,952] [sf.TripleService] [sendTripleFileToHalyard(#486)] [INFO] sendTripleFile to Halyard  start==========================>
[2018-01-31 20:17:37,294] [sf.QueryServiceFactory] [create(#31)] [DEBUG] query gubun : HALYARDSPARQL
[2018-01-31 20:17:37,317] [sf.SparqlHalyardQueryImpl] [insertByPost(#189)] [DEBUG] ------------------------insertByPost-----start-----------------------
[2018-01-31 20:17:37,317] [sf.SparqlHalyardQueryImpl] [insertByPost(#198)] [DEBUG] ------------------------insertByPost-----end-----------------------
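
Beyond the storage-level change, Spark Streaming offers two settings that harden receiver-based jobs against exactly this kind of block loss: backpressure and the receiver write-ahead log. The sketch below is not from the original post; it is a minimal illustration, and the checkpoint path is a placeholder.

import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

public class ReceiverHardeningSketch {
    public static void main(String[] args) {
        SparkConf sparkConf = new SparkConf()
                .setAppName("ReceiverHardeningSketch")
                // Throttle ingestion when processing falls behind, easing memory pressure
                .set("spark.streaming.backpressure.enabled", "true")
                // Persist received blocks to a write-ahead log so they can be re-read
                // even if evicted from the block manager
                .set("spark.streaming.receiver.writeAheadLog.enable", "true");
        JavaStreamingContext jssc = new JavaStreamingContext(sparkConf, Durations.seconds(10));
        // The write-ahead log requires a checkpoint directory on fault-tolerant storage
        jssc.checkpoint("hdfs:///tmp/spark-checkpoint"); // placeholder path
    }
}

Note that when the write-ahead log is enabled, the Spark Streaming documentation suggests a serialized storage level such as StorageLevel.MEMORY_AND_DISK_SER for the received stream, since the log itself already provides durability.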
