Wednesday, 27 April 2022

Sqoop : import only selected columns of a table

Using the ‘--columns’ option, we can import specific columns of a table.

 

Example

sqoop import \
--connect "jdbc:mysql://quickstart.cloudera:3306/retail_db" \
--username "root" \
--password "cloudera" \
--table "customers" \
--target-dir /columns_demo_1 \
-m 1 \
--columns customer_id,customer_fname,customer_email \
--where "customer_id < 10"

The above snippet imports the columns customer_id, customer_fname, and customer_email from the customers table, restricted to rows where customer_id < 10.
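
The same columns can also be pulled with a free-form query instead of --table/--columns/--where. The sketch below is only an illustration of that alternative: the target directory /columns_demo_query is a made-up name, the query must contain the literal $CONDITIONS token, and it is wrapped in single quotes so the shell does not expand it. With a free-form query, --target-dir is mandatory.

sqoop import \
--connect "jdbc:mysql://quickstart.cloudera:3306/retail_db" \
--username "root" \
--password "cloudera" \
--query 'SELECT customer_id, customer_fname, customer_email FROM customers WHERE customer_id < 10 AND $CONDITIONS' \
--target-dir /columns_demo_query \
-m 1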

[cloudera@quickstart Desktop]$ sqoop import \
> --connect "jdbc:mysql://quickstart.cloudera:3306/retail_db" \
> --username "root" \
> --password "cloudera" \
> --table "customers" \
> --target-dir /columns_demo_1 \
> -m 1 \
> --columns customer_id,customer_fname,customer_email \
> --where "customer_id < 10"
Warning: /usr/lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
22/04/03 02:21:46 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-cdh5.13.0
22/04/03 02:21:46 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
22/04/03 02:21:46 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
22/04/03 02:21:46 INFO tool.CodeGenTool: Beginning code generation
22/04/03 02:21:46 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `customers` AS t LIMIT 1
22/04/03 02:21:46 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `customers` AS t LIMIT 1
22/04/03 02:21:46 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/lib/hadoop-mapreduce
Note: /tmp/sqoop-cloudera/compile/fdd9db917433f63ef68432fb1dfa601a/customers.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
22/04/03 02:21:48 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-cloudera/compile/fdd9db917433f63ef68432fb1dfa601a/customers.jar
22/04/03 02:21:48 WARN manager.MySQLManager: It looks like you are importing from mysql.
22/04/03 02:21:48 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
22/04/03 02:21:48 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
22/04/03 02:21:48 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
22/04/03 02:21:48 INFO mapreduce.ImportJobBase: Beginning import of customers
22/04/03 02:21:48 INFO Configuration.deprecation: mapred.job.tracker is deprecated. Instead, use mapreduce.jobtracker.address
22/04/03 02:21:49 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
22/04/03 02:21:49 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
22/04/03 02:21:49 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
22/04/03 02:21:50 WARN hdfs.DFSClient: Caught exception 
java.lang.InterruptedException
	at java.lang.Object.wait(Native Method)
	at java.lang.Thread.join(Thread.java:1281)
	at java.lang.Thread.join(Thread.java:1355)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeResponder(DFSOutputStream.java:967)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.endBlock(DFSOutputStream.java:705)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:894)
22/04/03 02:21:50 WARN hdfs.DFSClient: Caught exception 
java.lang.InterruptedException
	at java.lang.Object.wait(Native Method)
	at java.lang.Thread.join(Thread.java:1281)
	at java.lang.Thread.join(Thread.java:1355)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeResponder(DFSOutputStream.java:967)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.endBlock(DFSOutputStream.java:705)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:894)
22/04/03 02:21:50 WARN hdfs.DFSClient: Caught exception 
java.lang.InterruptedException
	at java.lang.Object.wait(Native Method)
	at java.lang.Thread.join(Thread.java:1281)
	at java.lang.Thread.join(Thread.java:1355)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeResponder(DFSOutputStream.java:967)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.endBlock(DFSOutputStream.java:705)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:894)
22/04/03 02:21:50 WARN hdfs.DFSClient: Caught exception 
java.lang.InterruptedException
	at java.lang.Object.wait(Native Method)
	at java.lang.Thread.join(Thread.java:1281)
	at java.lang.Thread.join(Thread.java:1355)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeResponder(DFSOutputStream.java:967)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.endBlock(DFSOutputStream.java:705)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:894)
22/04/03 02:21:51 WARN hdfs.DFSClient: Caught exception 
java.lang.InterruptedException
	at java.lang.Object.wait(Native Method)
	at java.lang.Thread.join(Thread.java:1281)
	at java.lang.Thread.join(Thread.java:1355)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.closeResponder(DFSOutputStream.java:967)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.endBlock(DFSOutputStream.java:705)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:894)
22/04/03 02:21:51 INFO db.DBInputFormat: Using read commited transaction isolation
22/04/03 02:21:51 INFO mapreduce.JobSubmitter: number of splits:1
22/04/03 02:21:51 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1647946797614_0016
22/04/03 02:21:51 INFO impl.YarnClientImpl: Submitted application application_1647946797614_0016
22/04/03 02:21:51 INFO mapreduce.Job: The url to track the job: http://quickstart.cloudera:8088/proxy/application_1647946797614_0016/
22/04/03 02:21:51 INFO mapreduce.Job: Running job: job_1647946797614_0016
22/04/03 02:21:58 INFO mapreduce.Job: Job job_1647946797614_0016 running in uber mode : false
22/04/03 02:21:58 INFO mapreduce.Job:  map 0% reduce 0%
22/04/03 02:22:06 INFO mapreduce.Job:  map 100% reduce 0%
22/04/03 02:22:06 INFO mapreduce.Job: Job job_1647946797614_0016 completed successfully
22/04/03 02:22:06 INFO mapreduce.Job: Counters: 30
	File System Counters
		FILE: Number of bytes read=0
		FILE: Number of bytes written=171894
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
		HDFS: Number of bytes read=87
		HDFS: Number of bytes written=161
		HDFS: Number of read operations=4
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=2
	Job Counters 
		Launched map tasks=1
		Other local map tasks=1
		Total time spent by all maps in occupied slots (ms)=3858
		Total time spent by all reduces in occupied slots (ms)=0
		Total time spent by all map tasks (ms)=3858
		Total vcore-milliseconds taken by all map tasks=3858
		Total megabyte-milliseconds taken by all map tasks=3950592
	Map-Reduce Framework
		Map input records=9
		Map output records=9
		Input split bytes=87
		Spilled Records=0
		Failed Shuffles=0
		Merged Map outputs=0
		GC time elapsed (ms)=54
		CPU time spent (ms)=700
		Physical memory (bytes) snapshot=138530816
		Virtual memory (bytes) snapshot=1512255488
		Total committed heap usage (bytes)=60751872
	File Input Format Counters 
		Bytes Read=0
	File Output Format Counters 
		Bytes Written=161
22/04/03 02:22:06 INFO mapreduce.ImportJobBase: Transferred 161 bytes in 16.4452 seconds (9.7901 bytes/sec)
22/04/03 02:22:06 INFO mapreduce.ImportJobBase: Retrieved 9 records.
[cloudera@quickstart Desktop]$
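
Notice the warning near the top of the log: ‘Setting your password on the command-line is insecure. Consider using -P instead.’ As a sketch, the same import can prompt for the password interactively with -P (or read it from a file with --password-file); the target directory /columns_demo_2 below is just a fresh example directory, since the import fails if the directory already exists.

sqoop import \
--connect "jdbc:mysql://quickstart.cloudera:3306/retail_db" \
--username "root" \
-P \
--table "customers" \
--columns customer_id,customer_fname,customer_email \
--where "customer_id < 10" \
--target-dir /columns_demo_2 \
-m 1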

List the folder ‘/columns_demo_1’ to confirm the import.

[cloudera@quickstart Desktop]$ hadoop fs -ls /columns_demo_1
Found 2 items
-rw-r--r--   1 cloudera supergroup          0 2022-04-03 02:22 /columns_demo_1/_SUCCESS
-rw-r--r--   1 cloudera supergroup        161 2022-04-03 02:22 /columns_demo_1/part-m-00000
[cloudera@quickstart Desktop]$ 
[cloudera@quickstart Desktop]$ hadoop fs -cat /columns_demo_1/*
1,Richard,XXXXXXXXX
2,Mary,XXXXXXXXX
3,Ann,XXXXXXXXX
4,Mary,XXXXXXXXX
5,Robert,XXXXXXXXX
6,Mary,XXXXXXXXX
7,Melissa,XXXXXXXXX
8,Megan,XXXXXXXXX
9,Mary,XXXXXXXXX
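
Re-running the same command fails because the target directory /columns_demo_1 already exists in HDFS. As a rough sketch, either remove the old output first, or add --delete-target-dir to the import so Sqoop drops the existing directory before writing.

hadoop fs -rm -r /columns_demo_1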

 

  
