
Bigdata, Semantic IoT, Hadoop, NoSQL

This is a place to organize what has been learned while working on Bigdata, Hadoop ecosystem, and Semantic IoT projects.
It is shared publicly for anyone who needs it. For inquiries, please email gooper@gooper.com.


* Source: https://community.hortonworks.com/questions/191898/hdp-261-virus-crytalminer-drwho.html


HDP 2.6.1 Virus CrytalMiner (dr.who)

Question by Huy Duong, May 16 at 01:00 PM (tags: hdp-2.6.0, hdp-2.6.1)

Hi!

I'm using HDP 2.6.1. Everything was OK, but recently I have had a problem with YARN applications. I have found a type of virus. Its workflow:
1. Some service submits a YARN application with the user name "dr.who".

2. When the application is submitted, a script runs in a container on a worker node. The script contains malware that downloads the CrytalMiner trojan.

3. The trojan runs via the command: /tmp/java -c /tmp/w.conf

I have killed the job, but it re-runs after about 15 minutes. I don't know where the YARN applications with user "dr.who" are submitted from. Has anybody had the same problem? Please check and show me how to remove this!
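A minimal shell sketch, assuming the yarn CLI is available on a cluster client node, of how one might confirm the rogue submissions described above (the grep pattern simply matches the reported user name):

    # List running YARN applications and look for ones submitted as "dr.who".
    yarn application -list -appStates RUNNING | grep "dr.who"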

Many thanks!

[Attachment: virus.png (70.3 kB)]
BEST ANSWER

Answer by Sandeep Nemuri  

@Huy Duong

We've recently sent out a security notification regarding the same.

1. Stop further attacks:

a. Use firewall / iptables settings to allow access to the Resource Manager port (default 8088) only from whitelisted IP addresses, on both Resource Managers in your HA setup (see the iptables sketch below). This only addresses the current attack; to permanently secure your clusters, all HDP end-points (e.g. WebHDFS) must be blocked from open access outside of firewalls.

b. Make your cluster secure (kerberized).
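As a rough illustration of 1a, a minimal iptables sketch; 10.0.0.0/24 is a placeholder for your own whitelisted range, and the rules would need to be applied (and persisted) on both Resource Manager hosts:

    # Accept connections to the Resource Manager port (8088) only from the trusted subnet,
    # then drop everything else. Replace 10.0.0.0/24 with your actual whitelist.
    iptables -A INPUT -p tcp --dport 8088 -s 10.0.0.0/24 -j ACCEPT
    iptables -A INPUT -p tcp --dport 8088 -j DROP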

2. Clean up existing attacks:

a. If you already see the above problem in your clusters, filter all applications named “MYYARN” and kill them after verifying that they were not legitimately submitted by your own users (see the cleanup sketch below).

b. You will also need to manually log in to the cluster machines, check for any processes referencing “z_2.sh”, “/tmp/java” or “/tmp/w.conf”, and kill them.
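A minimal cleanup sketch for 2a and 2b, assuming the yarn CLI is available and that pgrep/pkill (procps) are installed on the nodes; the commented kill line and the final removal of the dropped files are illustrative additions, not part of the original answer:

    # 2a. Find applications named "MYYARN", then kill each rogue application by its ID.
    yarn application -list -appStates ALL | grep "MYYARN"
    # yarn application -kill <application_id>    # repeat for each ID found above

    # 2b. On every cluster machine, look for the malicious processes and kill them.
    pgrep -af "z_2.sh|/tmp/java|/tmp/w.conf"
    pkill -9 -f "z_2.sh"
    pkill -9 -f "/tmp/java"
    rm -f /tmp/java /tmp/w.conf                   # remove the dropped files as well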

Hortonworks strongly recommends that affected customers involve their internal security team to determine the extent of the damage and of any lateral movement inside the network. Affected customers will need to do a clean, secure installation after a backup and ensure that their data has not been contaminated.


Answer by Huy Duong 

Thanks Sandeep!

I have used the firewall to block the YARN Resource Manager port (8088), and all YARN applications from user dr.who are gone!
