

Source: http://letsexplorehadoop.blogspot.com/2016/05/upsert-in-hive-3-step-process.html



The Hive script below is what I actually ran in Hive, based on the walkthrough that follows:

--------------------------------------------------------------------------------------------------

-- create table if not exists site_view_hist(
--   brower_name string,
--   clicks_count int,
--   impressions_count int)
-- partitioned by (hit_date date)
-- row format delimited
-- fields terminated by ',';


-- gooper@gsda1:/var/log$ hdfs dfs -cat /user/hive/warehouse/site_view_hist/hit_date=2016-01-01/000000_0
-- iexplorer,123,456


-- Session settings for this run; nonstrict dynamic partition mode is required by the final INSERT OVERWRITE ... PARTITION(hit_date)
SET hive.support.concurrency = true;
SET hive.enforce.bucketing = true;
SET hive.exec.dynamic.partition.mode = nonstrict;
SET hive.txn.manager = org.apache.hadoop.hive.ql.lockmgr.DbTxnManager;
SET hive.compactor.initiator.on = true;
SET hive.compactor.worker.threads = 1;


truncate table site_view_hist;
truncate table site_view_raw;



-- Load sample history data
insert into table site_view_hist partition(hit_date='2016-01-01') values('iexplorer', 123, 456);
insert into table site_view_hist partition(hit_date='2016-01-01') values('firefox', 123, 456);
insert into table site_view_hist partition(hit_date='2016-01-01') values('chrome', 123, 456);
insert into table site_view_hist partition(hit_date='2016-01-02') values('firefox', 111, 431);
insert into table site_view_hist partition(hit_date='2016-01-03') values('chrome', 234, 567);
insert into table site_view_hist partition(hit_date='2016-01-03') values('iexplorer', 234, 567);
insert into table site_view_hist partition(hit_date='2016-01-03') values('firefox', 987, 654);
insert into table site_view_hist partition(hit_date='2016-01-04') values('chrome', 529, 912);
insert into table site_view_hist partition(hit_date='2016-01-05') values('firefox', 911, 888);
insert into table site_view_hist partition(hit_date='2016-01-06') values('iexplorer', 900, 833);



select * from site_view_hist;


-- create table if not exists site_view_raw(
--   brower_name string,
--   clicks_count int,
--   impressions_count int)
-- partitioned by (hit_date date)
-- row format delimited
-- fields terminated by ',';


-- Load the incoming (raw) data: updates for 2016-01-01 and a new record for 2016-01-31
insert into table site_view_raw partition(hit_date='2016-01-01') values('chrome', 246, 789);
insert into table site_view_raw partition(hit_date='2016-01-01') values('firefox', 999, 200);
insert into table site_view_raw partition(hit_date='2016-01-31') values('iexplorer', 144, 999);


select * from site_view_raw;



-- Preview: history rows in the partitions that also appear in the raw table
select h.* from site_view_hist h where h.hit_date in (select distinct hit_date from site_view_raw r);


drop table if exists site_view_temp1;
drop table if exists site_view_temp2;


-- Prep step: unlike the walkthrough below, brower_name is not null is added to the subquery so that partitions that exist but contain no data are excluded

create table site_view_temp1 as
select h.* from site_view_hist h
where h.hit_date in (select distinct hit_date from site_view_raw r where r.brower_name is not null);


select * from site_view_temp1;


-- Step 1: keep only the rows of the affected partitions that have no replacement in the raw table
create table site_view_temp2 as
select t1.* from site_view_temp1 t1
where not exists
  (select 1 from site_view_raw r
   where t1.brower_name = r.brower_name
     and t1.hit_date = r.hit_date);


select * from site_view_temp2;



-- Step 2: add all rows from the raw table (updated and new records)
insert into table site_view_temp2
select * from site_view_raw;


select * from site_view_temp2;


-- Step 3: overwrite only the affected partitions of the history table (dynamic partitioning on hit_date)
insert overwrite table site_view_hist
partition(hit_date)
select * from site_view_temp2;



select * from site_view_hist;

--------------------------------------------------------------------------------------------------

UPSERT in Hive (3-Step Process)

In this post I provide a 3-step process for performing an UPSERT in Hive on a large table containing an entire history.
For readers not familiar with UPSERT: it is a combination of UPDATE and INSERT. If, for a table containing history data, we receive new data that needs to be inserted as well as data that updates existing rows, we have to perform an UPSERT operation to apply both.

Prerequisite – The history table, being very large, should be partitioned; partitioning is in any case a best practice for efficient storage when working with large data in Hive.

Business scenario – Let's take the scenario of a website table containing metrics gathered from the different browsers of visitors who visited the site. The site_view_hist table contains the click and page-impression counts per browser, and the table is partitioned on hit_date (the date on which the visitor visited the website).
Clicks – the number of clicks (e.g., on ads displayed) made by a visitor on a website page.
Impressions – the number of times the website's pages or sections were viewed by a visitor.

Problem statement – If we receive corrections to the click and impression counts recorded per browser, we need to update them in the history table and also insert any new records we received.
Let's dive into it:
In the history table, browser_name (spelled brower_name in the script above) and hit_date form a composite key that remains constant, and we receive updates to the values of the clicks_count and impressions_count columns.
DDL of history table:
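(Reproduced from the commented-out DDL in the script above.)

create table if not exists site_view_hist(
  brower_name string,
  clicks_count int,
  impressions_count int)
partitioned by (hit_date date)
row format delimited
fields terminated by ',';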

Data:
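(Expected contents, derived from the INSERT statements in the script above.)

brower_name  clicks_count  impressions_count  hit_date
iexplorer    123           456                2016-01-01
firefox      123           456                2016-01-01
chrome       123           456                2016-01-01
firefox      111           431                2016-01-02
chrome       234           567                2016-01-03
iexplorer    234           567                2016-01-03
firefox      987           654                2016-01-03
chrome       529           912                2016-01-04
firefox      911           888                2016-01-05
iexplorer    900           833                2016-01-06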

Now suppose we receive records for date 2016-01-01 for the firefox and chrome browsers, with updated values of clicks and impressions, and we also receive a new record (iexplorer) for 2016-01-31. We store these new and updated records in the following raw table:
DDL of raw table:
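(Reproduced from the commented-out DDL in the script above.)

create table if not exists site_view_raw(
  brower_name string,
  clicks_count int,
  impressions_count int)
partitioned by (hit_date date)
row format delimited
fields terminated by ',';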
Data:
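(Expected contents, derived from the INSERT statements in the script above.)

brower_name  clicks_count  impressions_count  hit_date
chrome       246           789                2016-01-01
firefox      999           200                2016-01-01
iexplorer    144           999                2016-01-31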

Now we need an UPSERT solution, one that updates the records of the site_view_hist table for hit_date 2016-01-01 and inserts the new record for 2016-01-31.
                                               SOLUTION (3 STEPS):
To achieve this efficiently, we use the following 3-step process:
Prep step – First, get from the history table only those partitions that need to be updated. We create a temp table site_view_temp1 that contains the rows from history whose hit_date equals a hit_date of the raw table.
This brings us all the hit_date partitions of the history table for which at least one updated record exists in the raw table.
Note – Instead of a table we can also create a view, for more efficient processing and to save storage space. The query is shown below.
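As exercised in the script above (including the brower_name is not null refinement noted there):

create table site_view_temp1 as
select h.* from site_view_hist h
where h.hit_date in (select distinct hit_date from site_view_raw r where r.brower_name is not null);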


Data of site_view_temp1 table:
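(Expected contents: only the 2016-01-01 partition of the history table qualifies.)

brower_name  clicks_count  impressions_count  hit_date
iexplorer    123           456                2016-01-01
firefox      123           456                2016-01-01
chrome       123           456                2016-01-01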

Step 1 – From these fetched partitions we separate out the old, unchanged rows, i.e. the rows with no change in the clicks and impressions counts. For this we create a temp table site_view_temp2 as follows:
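As exercised in the script above:

create table site_view_temp2 as
select t1.* from site_view_temp1 t1
where not exists
  (select 1 from site_view_raw r
   where t1.brower_name = r.brower_name
     and t1.hit_date = r.hit_date);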
Data of site_view_temp2 table:
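(Expected contents: only the iexplorer row survives, since the firefox and chrome rows for 2016-01-01 have replacements in the raw table.)

brower_name  clicks_count  impressions_count  hit_date
iexplorer    123           456                2016-01-01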

Step 2 – Now we insert all the rows from the raw table into this new temp table. This step brings in the updated rows as well as any new rows. Since site_view_temp2 already contained the unchanged old rows, it now has all the rows: new, updated, and unchanged. The following query does this:
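As exercised in the script above:

insert into table site_view_temp2
select * from site_view_raw;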
New data of site_view_temp2 table:
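(Expected contents after the insert.)

brower_name  clicks_count  impressions_count  hit_date
iexplorer    123           456                2016-01-01
chrome       246           789                2016-01-01
firefox      999           200                2016-01-01
iexplorer    144           999                2016-01-31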

Step 3 – Now simply insert-overwriting the site_view_hist table with the site_view_temp2 table gives us the required output rows, including two updated rows for 2016-01-01 and one newly inserted row for 2016-01-31.
Catch – Since the history table is partitioned on hit_date, only the respective partitions are overwritten, as follows:
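As exercised in the script above (hit_date is the dynamic partition column, so only the 2016-01-01 and 2016-01-31 partitions are rewritten):

insert overwrite table site_view_hist
partition(hit_date)
select * from site_view_temp2;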
Final history table with updated and inserted rows:
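(Expected contents: the 2016-01-01 partition is replaced, 2016-01-31 is added, and all other partitions are untouched.)

brower_name  clicks_count  impressions_count  hit_date
iexplorer    123           456                2016-01-01
chrome       246           789                2016-01-01
firefox      999           200                2016-01-01
firefox      111           431                2016-01-02
chrome       234           567                2016-01-03
iexplorer    234           567                2016-01-03
firefox      987           654                2016-01-03
chrome       529           912                2016-01-04
firefox      911           888                2016-01-05
iexplorer    900           833                2016-01-06
iexplorer    144           999                2016-01-31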

Benefits of this approach:
  1. In the prep step itself we fetch just the partitions we have to update, so we never scan the whole history table. This makes the processing faster.
  2. In the final step, because we insert-overwrite the history table with the temp table, we touch only the partition we want to update, plus the new partition created for the new record. This gives a large performance gain: my production process ran against a 6.7 TB history table with over 5 billion records, yet since the 3-step process (contained in one Hive script) touched only a few partitions of a few thousand rows each, it completed in just minutes.
