
Sqoop - Import MySQL Database to/from Hive

Import MySQL database to Hive
  • > hive
    To enter the Hive shell.
  • --hive-import
    To enable Hive import in the sqoop import command.
  • --hive-table
    To specify the Hive table into which the data will be imported.
  • --hive-database
    To specify the Hive database. You can also specify the database through --hive-table by passing <DBName>.<TableName>.
    The following command is common to every Hive import.
> sqoop-import --connect jdbc:mysql://<mysql-URL>:<port>/<DBName> \
--username <USERNAME> --password <PASSWORD> \
--table <table-name> \
--hive-import \
--hive-table <table-name> \
--hive-database <db-name>
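
For example, a hypothetical import of an employees table from a MySQL database named company into a Hive database named analytics could look like this (host, port, credentials, table and database names are placeholders, not from any real setup):

# all host, credential, table and database names below are placeholders
> sqoop-import --connect jdbc:mysql://localhost:3306/company \
--username dbuser --password dbpass \
--table employees \
--hive-import \
--hive-table employees \
--hive-database analytics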

  • Internally, --hive-import runs in the following steps:
1. It imports the table into a temporary directory in HDFS.
2. It copies the imported data from that directory into the Hive table.
3. It deletes the temporary directory from HDFS.
Note: If the operation fails partway, the temporary directory in HDFS will not be deleted.
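
If the import does fail partway, the leftover directory can be removed by hand before re-running the import. A minimal sketch, assuming Sqoop's default target directory of /user/<username>/<table-name> in HDFS:

# <username> and <table-name> are placeholders for your HDFS user and the imported table
> hdfs dfs -ls /user/<username>/<table-name>
> hdfs dfs -rm -r /user/<username>/<table-name>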


  • The default field delimiter for Hive is ^A (\001).
  • If you run the same --hive-import command twice, it adds new files to the Hive table with _copy appended to their names. To overwrite the existing data instead, use --hive-overwrite, as shown in the sketch below.
  • --map-column-hive: maps a SQL column to a specific Hive type, given as <column-name>=<hive-type>.
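
A minimal sketch combining --hive-overwrite and --map-column-hive (connection details, table, database and column names are placeholders): re-importing the same table while replacing the existing Hive data and forcing the salary column to the Hive STRING type.

# placeholder connection, table, database and column names
> sqoop-import --connect jdbc:mysql://localhost:3306/company \
--username dbuser --password dbpass \
--table employees \
--hive-import --hive-overwrite \
--hive-database analytics --hive-table employees \
--map-column-hive salary=STRING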

Sqoop Export to database from HDFS
  • Sqoop export is for exporting data stored in HDFS to a traditional RDBMS.

> sqoop-export --connect jdbc:mysql://<mysql-URL>:<port>/<DBName> \
--username <USERNAME> --password <PASSWORD> \
--export-dir <dir-of-HDFS> \
--table <target-table-name> \
--input-fields-terminated-by "<delimiter>" \
[--num-mappers <no-of-mappers>] \
[--columns <col1,col2,col3>] \
[--update-key <sql-column-name>] \
[--update-mode <mode>] \
[--staging-table <stage-table-name>] \
[--clear-staging-table]

Where
  • --export-dir: the HDFS directory from which the data is copied.
  • --table: the target table to which the data is exported.
  • --input-fields-terminated-by: the field delimiter of the input data in HDFS. For Hive data it is "\001".
  • --num-mappers: the number of mappers.
  • --columns: export data by naming the target columns explicitly. This helps when the field order in the source data differs from the table's column order, or when the source data has fewer fields than the target table. Fields are matched to the listed columns positionally, so --columns must list the table column names in the same order as the fields appear in the source data.
  • --update-key: only records in the target table whose key matches a record in the exported data are updated; records whose key does not already exist are not inserted (in the default update mode).
  • --update-mode: updateonly (default) or allowinsert. allowinsert updates existing records and inserts new ones, i.e. it merges the data.
  • --staging-table: prevents partially exported data from reaching the target table if an error occurs. Data is first exported from HDFS into the staging table, and only if the whole export succeeds is it moved to the target table.
  • --clear-staging-table: the export requires the staging table to be empty, so this option clears it before the export runs.
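
Putting several of these options together, a hypothetical export of a Hive table's warehouse directory into a MySQL table, merging on an id column, could look like this (connection details, paths, table and column names are placeholders):

# placeholder connection, path, table and column names; "\001" matches Hive's default delimiter
> sqoop-export --connect jdbc:mysql://localhost:3306/company \
--username dbuser --password dbpass \
--export-dir /user/hive/warehouse/analytics.db/employees \
--table employees \
--input-fields-terminated-by "\001" \
--num-mappers 4 \
--update-key id \
--update-mode allowinsert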

