
Bigdata, Semantic IoT, Hadoop, NoSQL

This is a place to organize what I have learned while working on projects such as Bigdata, the Hadoop ecosystem, and Semantic IoT.
It is made public for anyone who needs it. For inquiries, please send an email to gooper@gooper.com.


*Source 1: https://www.cloudera.com/more/training/certification/cca-spark.html




*Source 2: http://www.hadoopexam.com/Cloudera_Certification/CCA175/CCA175_Cloudera_Hadoop_and_Spark_Developer_Tips_and_Tricks.pdf

1. Preparation: I went through all the CCA175 questions and practiced the code provided by
http://www.HadoopExam.com. Thanks for the questions and code content; the material was excellent
and helped me a lot. (I also went through the entire Spark Professional training module.)
2. Number of questions: You will generally get 10 questions in the real exam. The topics covered are
Sqoop, Hive, PySpark, Scala, and avro-tools for extracting schemas (all of these questions are
covered in the CCA175 Certification Simulator).
3. Code snippets: Snippets will be provided for PySpark and Scala. You have to edit the snippets
according to the problem statement.
4. Real exam environment: A gateway node will be accessible for working on the problems during the
exam. Keep in mind that there is no on-screen timer during the exam; you have to keep asking the
proctor for the time left. There are three sections for each problem, i.e.
· Instructions
· Data Set
· Output Requirements.
Please go through all three sections carefully before you start developing the code.
Note: If you start developing code right after reading only the Instructions part of the question,
you will realize later that exact details such as the table name and the HDFS directory are also
mentioned in the other sections. This can waste your time if you have to redo the code, and might
even cost you a question.
5. Editor: nano and gedit are not available, so if you have to edit any code snippets you have to use
vi alone. Please make yourself familiar with the vi editor if you are not already.
6. Fill in the blanks: You do not have to write entire Python or Scala programs for Apache Spark;
generally you are asked to fill in the blanks.
7. Flume: Very few questions on Flume.
8. Difficulty level: If you have enough knowledge, the exam feels quite easy. The questions are
logically easy and can be answered on the first attempt if you read each question carefully
(all three sections).
9. Common mistake in Sqoop: People use localhost in the connection string, which is wrong; you have
to use the full hostname given in the question instead of localhost, so you do not waste time.
A minimal example follows below.
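A minimal Sqoop import sketch using a full hostname; the hostname, database, table, credentials and
target directory below are placeholders for illustration, not values from the real exam:
    sqoop import \
      --connect jdbc:mysql://gateway-node.example.com:3306/retail_db \
      --username retail_user --password retail_pass \
      --table orders \
      --target-dir /user/cert/orders \
      --fields-terminated-by '\t'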
10. Hive: Have at least basic knowledge of Hive as well.
11. Spark: Use basic transformation functions to get the desired output, for instance filtering for a
particular scenario, sorting and ranking, etc.; a sketch follows below.
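A minimal Scala (spark-shell) sketch of such a filter-and-rank chain; the input path and the column
layout (order id, category, revenue) are assumed purely for illustration:
    val orders = sc.textFile("/user/cert/orders").map(_.split("\t"))   // read tab-delimited lines
    val filtered = orders.filter(f => f(1) == "ELECTRONICS")           // filter a particular scenario
    val ranked = filtered.map(f => (f(0), f(2).toDouble)).sortBy(_._2, ascending = false)  // rank by value
    ranked.take(10).foreach(println)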
12. avro-tools: Use avro-tools to get the schema of an Avro file (very nicely covered in the CCA175
HadoopExam.com Simulator); a typical invocation is shown below.
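A typical invocation, assuming the Avro file has been copied to the local file system (the file name
is a placeholder); if the avro-tools command is not on the PATH, the same getschema subcommand can be
run from the avro-tools jar with hadoop jar:
    avro-tools getschema part-m-00000.avro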
13. Big mistake: Avoid accidentally deleting your data; good practice is necessary to avoid such
mistakes. (Once you delete data or drop a Hive table, you have to create it entirely once again.)
The same is emphasized by www.HadoopExam.com in the video sessions provided at
http://cca175cloudera.training4exam.com/ (please go through the sample sessions).
14. Spark SQL: They will not ask questions based on Spark SQL; more importantly, learn how to
aggregate, reduce, and sort; see the sketch below.
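A minimal Scala sketch of aggregate-then-sort on pair data; the (category, revenue) values are made
up for illustration:
    val sales = sc.parallelize(Seq(("books", 12.0), ("toys", 3.5), ("books", 7.5)))
    val totals = sales.reduceByKey(_ + _).sortBy(_._2, ascending = false)   // total per key, largest first
    totals.collect().foreach(println)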
15. Time management: This is very important (that is the reason you need a lot of practice; use the
CCA175 simulator to practice all the questions at least a week or two before your real exam).
16. Data sets in the real exam are quite large, so execution can take 2 to 5 minutes.
17. Attempts: Try to attempt all the questions, or at least 9 out of 10, so that you are able to
score 70%.
18. File format: In most of the questions there was a tab-delimited file to process; see the sketch
below.
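A minimal Scala sketch for reading a tab-delimited file and writing tab-delimited output (the paths
and column positions are placeholders):
    val fields = sc.textFile("/user/cert/input").map(_.split("\t"))
    fields.map(f => f(0) + "\t" + f(2)).saveAsTextFile("/user/cert/output")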
19. Python or Scala: You will get a preloaded Python or Scala file to work with, so you do not have a
choice of whether to attempt a question in Scala or PySpark. (I went through all the video
sessions provided by www.HadoopExam.com for this.)
20. Connection issues: If you get disconnected during the exam, contact the proctor immediately. If
he/she is not available, log back into examslocal.com and use their online help.
21. Shell scripts: Have good experience using shell scripts.
22. Question types as mentioned in the syllabus: Questions came from Sqoop (import and export), Hive
(table creation and dynamic partitioning), PySpark and Scala (joining, sorting and filtering
data), and avro-tools. Snippets of code will be provided for PySpark and Scala; you have to edit
the snippets according to the problem statement and can run the script file (which is a separate
file from the snippet) to get the results. A dynamic-partitioning sketch follows below.
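A minimal HiveQL sketch of the dynamic-partitioning pattern; the table and column names are made up
for illustration:
    SET hive.exec.dynamic.partition=true;
    SET hive.exec.dynamic.partition.mode=nonstrict;
    CREATE TABLE orders_part (order_id INT, revenue DOUBLE)
      PARTITIONED BY (order_month STRING)
      ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';
    INSERT OVERWRITE TABLE orders_part PARTITION (order_month)
      SELECT order_id, revenue, order_month FROM orders_staging;   -- partition column goes last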
23. Overall the exam is easy, but it requires a lot of practice to finish on time and to produce
accurate solutions. So go through all of the material below for CCA175 (it will not take more
than a month if you are new; if you already know Spark and Hadoop, 2-3 weeks are good enough):
· CCA175 : Hadoop and Spark Developer Certification practice questions
· Hadoop professional training
· Spark professional training.

Wish you all the best
