
Bigdata, Semantic IoT, Hadoop, NoSQL

A place to organize material learned while working on projects such as Bigdata, the Hadoop ecosystem, and Semantic IoT.
It is made public for anyone who needs it. Please send inquiries to gooper@gooper.com.


Run: python3 DataSetCreator.py -i s2rdf/data/sparql.in -s 0.25

=> See http://stackoverflow.com/questions/27792839/spark-fail-when-running-pi-py-example-with-yarn-client-mode for reference.
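The log below shows the YARN application exiting (state FINISHED) while the driver is still initializing, which then surfaces as the NullPointerException in the SparkContext constructor. A minimal sketch of a more defensive way to launch the job: submit with explicit memory flags (the deprecated SPARK_WORKER_CORES and spark.storage.memoryFraction settings warned about in the log are ignored by Spark 1.6) and check the spark-submit exit code. The helper names and memory values here are assumptions for illustration, not part of DataSetCreator:

```python
import subprocess

def build_submit_cmd(jar, master="yarn", deploy_mode="client",
                     driver_mem="2g", executor_mem="2g"):
    # Hypothetical helper: assemble a spark-submit invocation with explicit
    # memory flags instead of relying on deprecated env/conf settings.
    return ["spark-submit",
            "--master", master,
            "--deploy-mode", deploy_mode,
            "--driver-memory", driver_mem,
            "--executor-memory", executor_mem,
            jar]

def run_job(jar):
    # Fail fast on a non-zero exit code rather than sleeping and retrying
    # blindly while the YARN application keeps dying.
    rc = subprocess.call(build_submit_cmd(jar))
    if rc != 0:
        # The driver output prints the YARN application id; its AM log can
        # be fetched with: yarn logs -applicationId <application id>
        raise RuntimeError("spark-submit exited with code %d" % rc)
```

Fetching the ApplicationMaster log with `yarn logs -applicationId <id>` usually reveals why YARN reported the application FINISHED while the client was still connecting.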

-----------------------------Log output------------------------------
Input RDF file ->"
16/05/27 18:22:57 INFO SparkContext: Running Spark version 1.6.1
16/05/27 18:22:57 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/05/27 18:22:57 WARN SparkConf: Detected deprecated memory fraction settings: [spark.storage.memoryFraction]. As of Spark 1.6, execution and storage memory management are unified. All memory fractions used in the old model are now deprecated and no longer read. If you wish to use the old memory management, you may explicitly enable `spark.memory.useLegacyMode` (not recommended).
16/05/27 18:22:57 INFO SecurityManager: Changing view acls to: hadoop
16/05/27 18:22:57 INFO SecurityManager: Changing modify acls to: hadoop
16/05/27 18:22:57 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoop); users with modify permissions: Set(hadoop)
16/05/27 18:22:57 INFO Utils: Successfully started service 'sparkDriver' on port 56181.
16/05/27 18:22:58 INFO Slf4jLogger: Slf4jLogger started
16/05/27 18:22:58 INFO Remoting: Starting remoting
16/05/27 18:22:58 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@XXX.XXX.XXX.43:34384]
16/05/27 18:22:58 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 34384.
16/05/27 18:22:58 INFO SparkEnv: Registering MapOutputTracker
16/05/27 18:22:58 INFO SparkEnv: Registering BlockManagerMaster
16/05/27 18:22:58 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-cdc351b1-92b1-405c-9127-fca2f798daf3
16/05/27 18:22:58 INFO MemoryStore: MemoryStore started with capacity 1247.3 MB
16/05/27 18:22:58 INFO SparkEnv: Registering OutputCommitCoordinator
16/05/27 18:22:58 INFO Utils: Successfully started service 'SparkUI' on port 4040.
16/05/27 18:22:58 INFO SparkUI: Started SparkUI at http://XXX.XXX.XXX.43:4040
16/05/27 18:22:58 INFO HttpFileServer: HTTP File server directory is /tmp/spark-de18dde4-d74e-4197-beab-2bc3de517b74/httpd-8faa7605-d0e3-44b9-ba73-d18ce63fe8f1
16/05/27 18:22:58 INFO HttpServer: Starting HTTP Server
16/05/27 18:22:58 INFO Utils: Successfully started service 'HTTP file server' on port 49921.
16/05/27 18:22:58 INFO SparkContext: Added JAR file:/home/hadoop/DataSetCreator/./datasetcreator_2.10-1.1.jar at http://XXX.XXX.XXX.43:49921/jars/datasetcreator_2.10-1.1.jar with timestamp 1464340978585
16/05/27 18:22:58 WARN YarnClientSchedulerBackend: NOTE: SPARK_WORKER_CORES is deprecated. Use SPARK_EXECUTOR_CORES or --executor-cores through spark-submit instead.
16/05/27 18:22:58 INFO ConfiguredRMFailoverProxyProvider: Failing over to rm2
16/05/27 18:22:58 INFO Client: Requesting a new application from cluster with 4 NodeManagers
16/05/27 18:22:58 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (19288 MB per container)
16/05/27 18:22:58 INFO Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
16/05/27 18:22:58 INFO Client: Setting up container launch context for our AM
16/05/27 18:22:58 INFO Client: Setting up the launch environment for our AM container
16/05/27 18:22:58 INFO Client: Preparing resources for our AM container
16/05/27 18:22:59 INFO Client: Uploading resource file:/home/gooper/svc/apps/sda/bin/hadoop/spark-1.6.1-bin-hadoop2.6/lib/spark-assembly-1.6.1-hadoop2.6.0.jar -> hdfs://mycluster/user/hadoop/.sparkStaging/application_1464337540213_0018/spark-assembly-1.6.1-hadoop2.6.0.jar
16/05/27 18:23:01 INFO Client: Uploading resource file:/tmp/spark-de18dde4-d74e-4197-beab-2bc3de517b74/__spark_conf__2857474168024892319.zip -> hdfs://mycluster/user/hadoop/.sparkStaging/application_1464337540213_0018/__spark_conf__2857474168024892319.zip
16/05/27 18:23:01 INFO SecurityManager: Changing view acls to: hadoop
16/05/27 18:23:01 INFO SecurityManager: Changing modify acls to: hadoop
16/05/27 18:23:01 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoop); users with modify permissions: Set(hadoop)
16/05/27 18:23:01 INFO Client: Submitting application 18 to ResourceManager
16/05/27 18:23:01 INFO YarnClientImpl: Submitted application application_1464337540213_0018
16/05/27 18:23:02 INFO Client: Application report for application_1464337540213_0018 (state: ACCEPTED)
16/05/27 18:23:02 INFO Client: 
         client token: N/A
         diagnostics: N/A
         ApplicationMaster host: N/A
         ApplicationMaster RPC port: -1
         queue: root.hadoop
         start time: 1464340977670
         final status: UNDEFINED
         tracking URL: http://sda2:8088/proxy/application_1464337540213_0018/
         user: hadoop
16/05/27 18:23:03 INFO Client: Application report for application_1464337540213_0018 (state: ACCEPTED)
16/05/27 18:23:04 INFO Client: Application report for application_1464337540213_0018 (state: ACCEPTED)
16/05/27 18:23:04 INFO YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as NettyRpcEndpointRef(null)
16/05/27 18:23:04 INFO YarnClientSchedulerBackend: Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> sda1, PROXY_URI_BASES -> http://sda1:8088/proxy/application_1464337540213_0018), /proxy/application_1464337540213_0018
16/05/27 18:23:04 INFO JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
16/05/27 18:23:05 INFO Client: Application report for application_1464337540213_0018 (state: RUNNING)
16/05/27 18:23:05 INFO Client: 
         client token: N/A
         diagnostics: N/A
         ApplicationMaster host: XXX.XXX.XXX.44
         ApplicationMaster RPC port: 0
         queue: root.hadoop
         start time: 1464340977670
         final status: UNDEFINED
         tracking URL: http://sda2:8088/proxy/application_1464337540213_0018/
         user: hadoop
16/05/27 18:23:05 INFO YarnClientSchedulerBackend: Application application_1464337540213_0018 has started running.
16/05/27 18:23:05 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 44676.
16/05/27 18:23:05 INFO NettyBlockTransferService: Server created on 44676
16/05/27 18:23:05 INFO BlockManagerMaster: Trying to register BlockManager
16/05/27 18:23:05 INFO BlockManagerMasterEndpoint: Registering block manager XXX.XXX.XXX.43:44676 with 1247.3 MB RAM, BlockManagerId(driver, XXX.XXX.XXX.43, 44676)
16/05/27 18:23:05 INFO BlockManagerMaster: Registered BlockManager
16/05/27 18:23:05 INFO EventLoggingListener: Logging events to hdfs://mycluster/user/hadoop/spark/application_1464337540213_0018
16/05/27 18:23:08 INFO YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as NettyRpcEndpointRef(null)
16/05/27 18:23:08 INFO YarnClientSchedulerBackend: Add WebUI Filter. org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter, Map(PROXY_HOSTS -> sda1, PROXY_URI_BASES -> http://sda1:8088/proxy/application_1464337540213_0018), /proxy/application_1464337540213_0018
16/05/27 18:23:08 INFO JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
16/05/27 18:23:09 ERROR YarnClientSchedulerBackend: Yarn application has already exited with state FINISHED!
16/05/27 18:23:09 INFO SparkUI: Stopped Spark web UI at http://XXX.XXX.XXX.43:4040
16/05/27 18:23:09 INFO YarnClientSchedulerBackend: Shutting down all executors
16/05/27 18:23:09 INFO YarnClientSchedulerBackend: Asking each executor to shut down
16/05/27 18:23:09 INFO YarnClientSchedulerBackend: Stopped
16/05/27 18:23:09 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16/05/27 18:23:09 INFO MemoryStore: MemoryStore cleared
16/05/27 18:23:09 INFO BlockManager: BlockManager stopped
16/05/27 18:23:09 INFO BlockManagerMaster: BlockManagerMaster stopped
16/05/27 18:23:09 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
16/05/27 18:23:09 INFO SparkContext: Successfully stopped SparkContext
16/05/27 18:23:09 INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
16/05/27 18:23:09 INFO RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
16/05/27 18:23:09 INFO RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
16/05/27 18:23:28 INFO YarnClientSchedulerBackend: SchedulerBackend is ready for scheduling beginning after waiting maxRegisteredResourcesWaitingTime: 30000(ms)
16/05/27 18:23:28 ERROR SparkContext: Error initializing SparkContext.
java.lang.NullPointerException
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:584)
        at dataCreator.Settings$.loadSparkContext(Settings.scala:69)
        at dataCreator.Settings$.<init>(Settings.scala:17)
        at dataCreator.Settings$.<clinit>(Settings.scala)
        at runDriver$.main(runDriver.scala:12)
        at runDriver.main(runDriver.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:497)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
16/05/27 18:23:28 INFO SparkContext: SparkContext already stopped.
Exception in thread "main" java.lang.ExceptionInInitializerError
        at runDriver$.main(runDriver.scala:12)
        at runDriver.main(runDriver.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:497)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.NullPointerException
        at org.apache.spark.SparkContext.<init>(SparkContext.scala:584)
        at dataCreator.Settings$.loadSparkContext(Settings.scala:69)
        at dataCreator.Settings$.<init>(Settings.scala:17)
        at dataCreator.Settings$.<clinit>(Settings.scala)
        ... 11 more
16/05/27 18:23:28 INFO ShutdownHookManager: Shutdown hook called
16/05/27 18:23:28 INFO ShutdownHookManager: Deleting directory /tmp/spark-de18dde4-d74e-4197-beab-2bc3de517b74/httpd-8faa7605-d0e3-44b9-ba73-d18ce63fe8f1
16/05/27 18:23:28 INFO ShutdownHookManager: Deleting directory /tmp/spark-de18dde4-d74e-4197-beab-2bc3de517b74



^CTraceback (most recent call last):
  File "DataSetCreator.py", line 128, in <module>
    main(sys.argv[1:])
  File "DataSetCreator.py", line 125, in main
    generateDatsets()
  File "DataSetCreator.py", line 83, in generateDatsets
    delay()
  File "DataSetCreator.py", line 45, in delay
    time.sleep(delTime)
KeyboardInterrupt
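The KeyboardInterrupt above shows DataSetCreator.py was stuck in its delay() sleep loop and had to be interrupted by hand. A minimal sketch of a bounded retry loop that gives up after a fixed budget instead of sleeping indefinitely; the function and parameter names are assumptions for illustration, not DataSetCreator's actual code:

```python
import time

def retry_with_backoff(task, attempts=3, base_delay=5):
    # Run `task` (a callable returning True on success) up to `attempts`
    # times, doubling the sleep between tries; raise once the budget is
    # spent rather than blocking until a manual Ctrl-C.
    delay = base_delay
    for i in range(attempts):
        if task():
            return True
        if i < attempts - 1:
            time.sleep(delay)
            delay *= 2
    raise RuntimeError("task failed after %d attempts" % attempts)
```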
