Offline Data Warehouse (5): Building the Warehouse
Contents
- 1. Creating the Database
- 2. ODS Layer (Raw Data Layer)
- 3. DWD Layer (Detail Data Layer)
  - 3.1 Using the get_json_object Function
  - 3.2 Creating the DWD-Layer Start Log Table
- 4. DWS Layer (Service Data Layer)
- 5. DWT Layer (Data Theme Layer)
- 6. ADS Layer (Application Data Layer)
ODS layer design principles:
- Keep the data exactly as received, with no modification, so the layer doubles as a backup.
- Compress the data with LZO to save disk space; 100 GB of raw data can compress to under 10 GB.
- Create partitioned tables to avoid full-table scans later; partitioned tables are used heavily in enterprise development.
- Create external tables. In enterprise development, apart from temporary tables for one's own use (which may be internal), the vast majority of tables are external.
1. Creating the Database
[root@hadoop100 hive-3.1.2]# bin/hive
hive (default)> create database mall;
hive (default)> use mall;
2. ODS Layer (Raw Data Layer)
The general steps for creating an ODS-layer table are as follows:
① Create the event log table
hive (mall)> drop table if exists ods_event_log;
hive (mall)> create external table ods_event_log(`line` string)
partitioned by (`dt` string)
stored as
  inputformat 'com.hadoop.mapred.DeprecatedLzoTextInputFormat'
  outputformat 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
location '/warehouse/mall/ods/ods_event_log';
② Load the data
hive (mall)> load data inpath '/origin_data/mall/log/topic_event/2021-01-08'
into table mall.ods_event_log partition(dt='2021-01-08');
③ Build an index for the LZO-compressed files
[root@hadoop100 ~]# hadoop jar /opt/module/hadoop-3.1.3/share/hadoop/common/hadoop-lzo-0.4.20.jar \
com.hadoop.compression.lzo.DistributedLzoIndexer /warehouse/mall/ods/ods_event_log/dt=2021-01-08
④ Verify the load succeeded
hive (mall)> select * from ods_event_log limit 1;
{"action":"1","ar":"MX","ba":"Huawei","detail":"","en":"start","entry":"3","extend1":"",
"g":"8844J1F0@gmail.com","hw":"750*1134","l":"es","la":"-36.5","ln":"-43.3",
"loading_time":"15","md":"Huawei-2","mid":"2","nw":"WIFI","open_ad_type":"1","os":"8.2.8",
"sr":"L","sv":"V2.3.6","t":"1609368942552","uid":"2","vc":"19","vn":"1.0.1"} 2021-01-08
Time taken: 0.214 seconds, Fetched: 1 row(s)
⑤ Generic data-loading script
#!/bin/bash

db=mall
hive=/opt/module/hive-3.1.2/bin/hive

# Use the date passed in if given; otherwise default to yesterday
do_date=`date -d '-1 day' +%F`
if [[ -n "$1" ]]; then
    do_date=$1
fi

sql="
load data inpath '/origin_data/mall/log/topic_start/$do_date' into table ${db}.ods_start_log partition(dt='$do_date');
load data inpath '/origin_data/mall/log/topic_event/$do_date' into table ${db}.ods_event_log partition(dt='$do_date');
"
$hive -e "$sql"

hadoop jar /opt/module/hadoop-3.1.3/share/hadoop/common/hadoop-lzo-0.4.20.jar com.hadoop.compression.lzo.DistributedLzoIndexer /warehouse/mall/ods/ods_start_log/dt=$do_date
hadoop jar /opt/module/hadoop-3.1.3/share/hadoop/common/hadoop-lzo-0.4.20.jar com.hadoop.compression.lzo.DistributedLzoIndexer /warehouse/mall/ods/ods_event_log/dt=$do_date
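The date-defaulting logic the script relies on can be sketched in isolation (`pick_date` is a hypothetical helper name, not part of the original script):

```shell
#!/bin/bash
# Echo the argument if one was given; otherwise fall back to
# yesterday's date in YYYY-MM-DD form (GNU date).
pick_date() {
    if [ -n "$1" ]; then
        echo "$1"
    else
        date -d '-1 day' +%F
    fi
}

do_date=$(pick_date "2021-01-08")
echo "$do_date"    # 2021-01-08
```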
Differences between single and double quotes in shell:
- Single quotes do not expand variables
- Double quotes expand variables
- Backticks (`) execute the command they enclose
- Single quotes nested inside double quotes: the variable is expanded
- Double quotes nested inside single quotes: the variable is not expanded
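The rules above can be checked directly in a shell session; the variable names below are illustrative only:

```shell
#!/bin/bash
# Demonstrates each quoting rule listed above.
name=world
single='$name'            # single quotes: no expansion  -> $name
double="$name"            # double quotes: expansion     -> world
nested_d="'$name'"        # single inside double: expand -> 'world'
nested_s='"$name"'        # double inside single: keep   -> "$name"
backtick=`echo hi`        # backticks run the command    -> hi
echo "$single $double $nested_d $nested_s $backtick"
```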
3. DWD Layer (Detail Data Layer)
- Parse the user-behavior data
- Filter out records whose core fields are null
- Remodel the business data with a dimensional model, i.e. dimension degeneration
3.1 Using the get_json_object Function
Sample data xjson: [{"name":"大郎","sex":"男","age":"25"},{"name":"西门庆","sex":"男","age":"47"}]
Extract the first JSON object:
select get_json_object('[{"name":"大郎","sex":"男","age":"25"},{"name":"西门庆","sex":"男","age":"47"}]','$[0]');
Result: {"name":"大郎","sex":"男","age":"25"}
Extract the age field of the first JSON object:
select get_json_object('[{"name":"大郎","sex":"男","age":"25"},{"name":"西门庆","sex":"男","age":"47"}]','$[0].age');
Result: 25
3.2 Creating the DWD-Layer Start Log Table
① Create the start log table
hive (mall)> CREATE EXTERNAL TABLE dwd_start_log(
`mid_id` string,
`user_id` string,
`version_code` string,
`version_name` string,
`lang` string,
`source` string,
`os` string,
`area` string,
`model` string,
`brand` string,
`sdk_version` string,
`gmail` string,
`height_width` string,
`app_time` string,
`network` string,
`lng` string,
`lat` string,
`entry` string,
`open_ad_type` string,
`action` string,
`loading_time` string,
`detail` string,
`extend1` string
)
PARTITIONED BY (dt string)
stored as parquet
location '/warehouse/mall/dwd/dwd_start_log/'
TBLPROPERTIES('parquet.compression'='lzo');
② Import the data
hive (mall)> insert overwrite table dwd_start_log
PARTITION (dt='2021-01-11')
select
    get_json_object(line,'$.mid') mid_id,
    get_json_object(line,'$.uid') user_id,
    get_json_object(line,'$.vc') version_code,
    get_json_object(line,'$.vn') version_name,
    get_json_object(line,'$.l') lang,
    get_json_object(line,'$.sr') source,
    get_json_object(line,'$.os') os,
    get_json_object(line,'$.ar') area,
    get_json_object(line,'$.md') model,
    get_json_object(line,'$.ba') brand,
    get_json_object(line,'$.sv') sdk_version,
    get_json_object(line,'$.g') gmail,
    get_json_object(line,'$.hw') height_width,
    get_json_object(line,'$.t') app_time,
    get_json_object(line,'$.nw') network,
    get_json_object(line,'$.ln') lng,
    get_json_object(line,'$.la') lat,
    get_json_object(line,'$.entry') entry,
    get_json_object(line,'$.open_ad_type') open_ad_type,
    get_json_object(line,'$.action') action,
    get_json_object(line,'$.loading_time') loading_time,
    get_json_object(line,'$.detail') detail,
    get_json_object(line,'$.extend1') extend1
from ods_start_log
where dt='2021-01-11';
③ Generic data-loading script
#!/bin/bash

# Define variables here for easy modification
APP=mall
hive=/opt/module/hive-3.1.2/bin/hive

# Use the date passed in if given; otherwise default to yesterday
if [ -n "$1" ]; then
    do_date=$1
else
    do_date=`date -d "-1 day" +%F`
fi

sql="
insert overwrite table ${APP}.dwd_start_log
PARTITION (dt='$do_date')
select
    get_json_object(line,'$.mid') mid_id,
    get_json_object(line,'$.uid') user_id,
    get_json_object(line,'$.vc') version_code,
    get_json_object(line,'$.vn') version_name,
    get_json_object(line,'$.l') lang,
    get_json_object(line,'$.sr') source,
    get_json_object(line,'$.os') os,
    get_json_object(line,'$.ar') area,
    get_json_object(line,'$.md') model,
    get_json_object(line,'$.ba') brand,
    get_json_object(line,'$.sv') sdk_version,
    get_json_object(line,'$.g') gmail,
    get_json_object(line,'$.hw') height_width,
    get_json_object(line,'$.t') app_time,
    get_json_object(line,'$.nw') network,
    get_json_object(line,'$.ln') lng,
    get_json_object(line,'$.la') lat,
    get_json_object(line,'$.entry') entry,
    get_json_object(line,'$.open_ad_type') open_ad_type,
    get_json_object(line,'$.action') action,
    get_json_object(line,'$.loading_time') loading_time,
    get_json_object(line,'$.detail') detail,
    get_json_object(line,'$.extend1') extend1
from ${APP}.ods_start_log
where dt='$do_date';
"
$hive -e "$sql"