EXP IMP

Exp and Imp are logical backup utilities.

They must connect to an open database to take the backup or perform the recovery.
They support cross-platform data movement, e.g. Linux to Windows or Solaris to Linux.
They are used for rectifying database fragmentation.
Export reads the contents of the database and stores them as SQL statements in the export dump.
A full export takes all schemas other than SYS.

 

Related roles

EXP_FULL_DATABASE
IMP_FULL_DATABASE

The types of backup levels are

1. User
2. Table
3. Full
4. Tablespace

Parameters

buffer : Specifies the size, in bytes, of the buffer used to fetch the rows.  If 0 is specified, only one row is fetched at a time. 

compress : If Y, compresses all the extents into a single extent for each table.

consistent : If Y, export internally issues a "SET TRANSACTION READ ONLY" statement and ensures data
consistency. It may lead to an "ORA-01555: snapshot too old" error if you do not have sufficient undo space.

direct : A performance parameter that bypasses the SQL evaluation buffer and writes directly to the export buffer.

file : The name of the export file; multiple files can be specified, separated by commas.

filesize : The maximum file size specified in bytes.

full : The entire database is exported.

help=y : Displays the parameters/options of the export command.

log : The log file name used by export to write messages.

object_consistent : If Y, the export is consistent at the object level.

owner : The schema(s) to be exported.

parfile :  The file that contains parameter list.

query : Exports only the set of records that satisfies the WHERE condition.

recordlength : May increase the performance of export when specified with the direct option.

rows : Specifies whether to export the rows or only the segment structures.

statistics : Specifies whether to export analyzer statistics; mostly NONE is specified.

tables      :  Takes backup of specific tables and partitions.

tablespaces : Takes tablespace-level backups. The user should have the EXP_FULL_DATABASE role.

transport_tablespace : Exports the metadata needed for transportable tablespaces.

tts_full_check : If TRUE, export verifies that a consistent set of objects is exported when creating a transportable tablespace.

userid :  userid/password of the user doing the export.

volsize  : Specifies the maximum number of bytes in an export file on each tape volume.
======================================================================

In most environments we give the following mandatory parameters.

For export

userid
file
log
direct or buffer (either one should be given to improve performance)
statistics=none (which will not take segment statistics)

and other parameters
full=y
(takes all schemas other than SYS, but it does pick up objects that have been created in the SYS schema).

tables = <table_name> 
With this option we can take backup of tables and partitions.

owner = <schema>
With this option you can export one or more schemas.
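A minimal sketch combining the mandatory parameters for a full export (paths and credentials here are illustrative, not fixed values):

$ exp system/manager file=/backup/exp/full_db.dmp log=/backup/exp/full_db.log full=y direct=y statistics=none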

 

Frequently Asked Questions

========================================================================================================
Given a full export dump from production, how will you create a database from the dump?

      Hint :  View the contents of the import dump using SHOW=Y
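For example, the contents of the dump can be listed (without actually importing anything) with a command along these lines; the file names are illustrative:

$ imp system/manager file=/backup/exp/full_db.dmp log=/backup/exp/contents.log full=y show=y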
========================================================================================================
How will you do schema refresh from PROD to UAT with the below specification?

  Schema_name=SSS
  PROD ipaddress  = 10.10.1.10
  UAT ipaddress   = 10.10.1.11

  Note : Make sure that  there is sufficient space for the export dump.

  Use export in PROD environment like
$ exp system/manager file=/backup/exp/sss.dmp log=/backup/exp/sss.log owner=sss direct=y statistics=none
Then we need to transfer the file from PROD to UAT. Before that, compress the dump file using
 
$ gzip /backup/exp/sss.dmp
which will create /backup/exp/sss.dmp.gz

Then move this file to the UAT server under the /backup directory using scp or ftp.

$ scp /backup/exp/sss.dmp.gz oracle@10.10.1.11:/backup
Now on UAT Server

 Note : Make sure that size of UNDO and archive log destination are sufficient.

There are two ways to import the data

1. Drop the schema and recreate it.
2. Truncate the tables and import into the existing schema. (If the user SSS has given grants to other schemas,
or the tablespace assignments have been changed, this truncate option is the better choice.)

1. Drop the user and recreate it with the same default tablespace name as in production.

SQL > conn sys/sys as sysdba
SQL > drop user sss cascade;

SQL > create user sss identified by sakthi default tablespace  USERS temporary tablespace temp;
SQL > grant connect,resource to sss;

then, do the import.

First unzip the dumpfile.

$ gunzip /backup/sss.dmp.gz
$ imp system/manager file=/backup/sss.dmp  log=/backup/sss.log fromuser=sss touser=sss buffer=10485760 commit=y

The import will complete successfully if you have sufficient space in the database. The storage that should be monitored is:
UNDO
USERS
archive log destination
and the other tablespaces related to the objects in SSS (including their indexes).

2. Truncate and import

If grants have been given from the SSS schema, you can use this option so that the grants do not need
to be given again.
Using TRUNCATE TABLE <tablename>;
you can delete all the rows, but you might get errors when truncating parent tables that are referenced by child tables.
We can generate the truncate script as shown below.

SQL> conn system/manager
SQL> set pagesize 2000 head off feedback off
SQL> spool /tmp/truncate_tables.sql
SQL> select 'truncate table '||owner||'.'||table_name||';' from dba_tables where owner='SSS';
SQL> spool off

Then execute /tmp/truncate_tables.sql in a SQL*Plus session.
If you get a child-table error, execute the script again.
Now,  import the dump using the below script.

$ imp system/manager file=/backup/sss.dmp log=/backup/sss.log fromuser=sss touser=sss buffer=10485760 commit=y  ignore=y

ignore=y will append the data if the table already exists.

Now we have imported the data, but we need to confirm that it was successful, i.e. carry out the post-refresh activities below.

POST REFRESH ACTIVITIES

Check that all the objects are copied into the SSS schema on UAT.

SQL> conn system/manager@PROD
SQL> select count(*) from dba_objects where owner='SSS';
SQL> conn system/manager@UAT
SQL> select count(*) from dba_objects where owner='SSS';

Note : The object counts should be equal. In case of any mismatch:

Find which type of object is missing.

SQL>  conn system/manager@PROD
SQL> select object_type,count(*) from dba_objects where owner='SSS' group by object_type;

SQL>  conn system/manager@UAT
SQL> select object_type,count(*) from dba_objects where owner='SSS' group by object_type;
If you still need to identify the exact missing objects, use a database link to compare the schemas.
 
Create a DBlink in UAT for PROD.

SQL> conn system/manager@UAT

SQL> create database link LINK_PRD connect to system identified by manager using 'PROD';
Note : 'PROD' is the connect string that is configured in tnsnames.ora in UAT environment.

SQL> select object_name, object_type from dba_objects@LINK_PRD where owner='SSS'
minus
select object_name, object_type from dba_objects where owner='SSS';

Import/create the missing objects.

Then,
 1. Recompile all the objects in the SSS schema of the UAT environment.
SQL> conn sys/sys@UAT as sysdba
SQL> @?/rdbms/admin/utlrp.sql

2. Analyze the schema objects. 
SQL> exec dbms_stats.gather_schema_stats('SSS');
 
================================================================================================
How will you monitor the imp command?

 

SQL> select sql_text, rows_processed
     from v$sqlarea
     where parsing_schema_name='TEST'
     and sql_text like 'INS%'
     order by last_load_time;

Note : The rows can be seen only if you specify commit=y option.
==========================================================================

How will you import a table into a different tablespace?
Create the table structure in the new tablespace in the target database, for example

create table sss(a number,b number) tablespace SSS_TBS;

then import the table using

$ imp sakthi/software file=/backup/table.dmp log=/backup/table.log fromuser=sakthi touser=sakthi buffer=10485760 commit=y  ignore=y

 

===========================================================================
What is Logical Backup?

What are the advantages of logical backup?

Move data between different hardware and operating-system environments.
Move data from one owner to another.
Move data from one tablespace or schema to another.
Save space or reduce fragmentation in your database.

Which SQL file creates the exp/imp dictionary information in the database?

CATEXP.SQL or CATALOG.SQL (which calls catexp.sql)

What roles/privileges do you need for exp and imp?

CREATE SESSION privilege : to export objects belonging to your own schema
EXP_FULL_DATABASE : to export objects owned by another user. The DBA role is granted this role.
IMP_FULL_DATABASE : to import objects owned by another user. The DBA role is granted this role.

What are the parameters you will use when doing export?
Userid
file
log
direct or buffer
statistics=none

What are the other important parameters you will use when doing exp?
compress - compresses multiple extents into a single extent
consistent - internally sets "set transaction read only"; using this option you may get an ORA-01555 error
full - exports the full database except SYS
tables - exports the specified tables
owner - exports the specified schema(s)
parfile - the parameters are specified in a file and called with this option; mostly used when many tables have to be listed, and with the QUERY option
query - exports a set of records from the specified tables; it is used with the TABLES option
rows - specifies whether the export is taken with or without rows
filesize - splits the dump across multiple files of the given maximum size

What are the parameters you will use when doing imp?
Userid
file
log
buffer
commit

What are the other important parameters you will use when doing imp?
fromuser - the schema the data was exported from
touser - the schema the data is to be imported into
full - full database
ignore - while importing, if the table already exists the data is appended; otherwise the table is created
parfile - Parameters are listed in it
query - import set of records from a table
indexfile

What are performance parameters in exp?
direct (optionally recordlength)
buffer

What are performance parameters in imp?
buffer with commit

How will you see the contents of the export dump file?
Use SHOW=y

 

Given a full database export dump and a new server with Linux and the Oracle software pre-installed, how will you import the database into it?
Hint
1. Create a database on the new server.
2. Use the imp command with full=y and show=y to see the contents of the export dump.
3. Create the required tablespaces.
4. Now import the database (see the sketch below).
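A minimal sketch of the final step, assuming the dump has been copied to /backup on the new server (the file name and buffer size are illustrative):

$ imp system/manager file=/backup/full_db.dmp log=/backup/full_imp.log full=y buffer=10485760 commit=y ignore=y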

What are the modes you can use in logical backup?
full
owner (schema/user)
tables (also partitions)
tablespace

How will you import a table into a different tablespace?

You need to take an export of a database where the dump file size will be 5 GB, but the mount point sizes are
/u01 - 2 GB
/u02 - 1 GB
/u03 - 3 GB
How will you take the export dump?
Use the FILESIZE option with multiple files (see the sketch below).
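One possible command, assuming roughly 900 MB per file so that each mount point stays within its free space (file names and the exact filesize value are illustrative):

$ exp system/manager full=y direct=y statistics=none filesize=943718400 file=/u03/full_1.dmp,/u03/full_2.dmp,/u03/full_3.dmp,/u01/full_4.dmp,/u01/full_5.dmp,/u02/full_6.dmp log=/u03/full_exp.log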

How will you export and import PLSQL TYPES?

Assignment :

Export a schema backup from 9i and import it in 10g
Export a schema backup from 9i and import it in 11g
Export a schema backup from 10g and import it in 11g
Export a schema backup from 11g and import it in 10g
Export a schema backup from 10g and import it in 9i
Export a schema backup from 11g and import it in 9i
Do a schema refresh without specifying a dump file on disk. Hint : Use mknod (see the sketch after this list)
Do a schema refresh from 9i to windows 10G
Character set conversion using exp and imp
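One possible approach to the mknod hint, assuming ssh connectivity between the two servers; all paths are illustrative. The dump never lands on disk because both utilities read and write named pipes.

On UAT (target):
$ mknod /tmp/imp_pipe p
$ imp system/manager file=/tmp/imp_pipe log=/tmp/sss_imp.log fromuser=sss touser=sss commit=y ignore=y &

On PROD (source):
$ mknod /tmp/exp_pipe p
$ cat /tmp/exp_pipe | ssh oracle@10.10.1.11 "cat > /tmp/imp_pipe" &
$ exp system/manager file=/tmp/exp_pipe owner=sss direct=y statistics=none log=/tmp/sss_exp.log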

 

DataPump

It is a new feature of 10g that performs high-speed data loading and unloading for the database.

It can be called using

1. DBMS_DATAPUMP (the PL/SQL API)
2. The command-line clients expdp/impdp
3. OEM

 

Tables

Master Table :

      This is the heart of every Data Pump operation. It is created in the schema of the user running the job. It contains :

current state of all objects
location of each object in the dump file set
parameters specified
status of every worker process
current set of dump files

 

It is created during the export, written out with the dump file set, and imported into the target schema/database.

 Process Structure :

Client Process : Calls the Data Pump API.

Shadow Process : Created when a client connects to the database. It creates the job, which consists of the master table, the AQ queues used for communication between the various processes, and the master control process. It goes away when the client detaches or logs out.

Master Control Process (MCP) :
It controls the execution and sequencing of a Data Pump job.
There is one MCP per Data Pump job.
It maintains the following in the master table:
job state
job description
restart information
dump file information

The MCP process name is <instance>_DMnn_<pid>.

Worker Process :
The number of worker processes equals the value of the PARALLEL parameter.
Workers support the MCP and do the work handed out by the MCP.

   They maintain the status of objects (pending, completed, failed and so on)
and the type completion rows for tables, indexes, views, etc.

The worker process name is <instance>_DWnn_<pid>.

 

Parallel Query Process (PQ) :

   PQ processes are created by a worker process to load and unload data in parallel;
the worker process then acts as the query coordinator for the parallel mechanism.

 

Data Movement :

1. Data file copying
2. Direct path load and unload
3. External tables
4. Conventional Path

Metadata movement

      For metadata loading and unloading, the Metadata API is used by the worker process. The Metadata API extracts the DDL and writes it into the dump file as XML documents.

XSLT (Extensible Stylesheet Language Transformations) is applied to the XML when importing. The dump file may take more space compared to a normal exp dump, but it gives more flexibility than exp and imp.

Interprocess Communication :

AQ is used for communication between the various Data Pump processes. There are two queues.
1. Command and control queue
All processes subscribe to this queue. All API commands, work requests and responses, file requests and log messages are processed on this queue.

2. Status queue
Only the MCP can write to this queue and only shadow processes can read from it.
It is used to receive work-in-progress and error messages queued by the MCP.

These queues are named : KUPC${C|S}_<job-unique-name>_<timestamp>.

 

Performance Parameters

 

expdp direct path unload is about 2 times faster than exp.
impdp load is 15-45 times faster than imp.
When there are 2 or more worker processes, data and metadata can be unloaded in parallel.

 

Initialization Parameters :

disk_asynch_io=true
db_block_checksum=false
streams_pool_size (for the AQ queues)
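If the Streams pool is not already sized, something along these lines could be used; the value is only illustrative and assumes an spfile is in use.

SQL> alter system set streams_pool_size=64M scope=both;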

Network mode Parameters

NETWORK_LINK

Uses the INSERT INTO ... SELECT concept over a database link.
Also uses hints to improve performance.

Uses the external table mechanism.
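For example, a schema could be refreshed directly over the database link created earlier, with no dump file at all; the link and directory names here are the illustrative ones assumed elsewhere on this page.

$ impdp system/manager network_link=LINK_PRD schemas=SSS directory=SSS logfile=sss_netimp.log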

Restart Jobs

 

ATTACH
START_JOB (START_JOB=SKIP_CURRENT on import)
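A restart might look roughly like this; the job name is illustrative and can be taken from dba_datapump_jobs.

$ expdp system/manager attach=SYS_EXPORT_SCHEMA_01
Export> start_job
Export> status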

Object Selection

EXCLUDE
= function|procedure|package:"like 'PAY%'"

Will not export the mentioned objects.

INCLUDE
= function|package:"like 'PAY%'"
Will export only the mentioned objects.

CONTENT
= METADATA_ONLY, DATA_ONLY or ALL (the default)

QUERY
= exports records based on a query condition.
SAMPLE
= exports a percentage of the rows.
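A parfile keeps the quoting of EXCLUDE/INCLUDE and QUERY out of the shell; a sketch like the following, where all names are illustrative and the SSS directory object is assumed to exist already.

$ cat sss_pay.par
userid=system/manager
schemas=SSS
directory=SSS
dumpfile=sss_pay.dmp
logfile=sss_pay.log
include=TABLE:"LIKE 'PAY%'"
content=DATA_ONLY

$ expdp parfile=sss_pay.par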

 

Monitoring Scripts

 

col owner_name format a10
col job_name format a25
col operation format a10
col job_mode format a10
col state format a10
set  linesize 130

select owner_name,
job_name,
operation,
job_mode,
state,
degree,
attached_sessions,
datapump_sessions
from dba_datapump_jobs
/

SQL> /

OWNER_NAME JOB_NAME             OPERATION JOB_MODE STATE     DEGREE ATTACHED_SESSIONS DATAPUMP_SESSIONS
---------- -------------------- --------- -------- --------- ------ ----------------- -----------------
SCOTT      SYS_EXPORT_SCHEMA_01 EXPORT    SCHEMA   EXECUTING      1                 1                 3
==============================================

Related Errors

[oracle@sss-oracle ~]$ expdp scott/tiger dumpfile=scott.dmp logfile=scott.log schemas=scott

Export: Release 10.2.0.4.0 - Production on Wednesday, 25 April, 2012 15:43:55

Copyright (c) 2003, 2007, Oracle.  All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
ORA-39002: invalid operation
ORA-39070: Unable to open the log file.
ORA-39145: directory object parameter must be specified and non-null

Solution : As SYS, create a directory object, grant access on it, and point the export at it with the DIRECTORY parameter.

SQL> create directory sss as '/sss_backup';
SQL> grant read, write on directory sss to public;
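The export can then be rerun against that directory (same illustrative names as above):

$ expdp scott/tiger directory=SSS dumpfile=scott.dmp logfile=scott.log schemas=scott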

 ============================================

 

FREQUENTLY ASKED QUESTIONS

 

Tuning Parameters
Unlike Original Export and Import, which used the BUFFER, COMMIT, COMPRESS, CONSISTENT, DIRECT, and RECORDLENGTH parameters, Data Pump needs no tuning to achieve maximum performance. Data Pump chooses the best method to ensure that data and metadata are exported and imported in the most efficient manner. Initialization parameters should be sufficient upon installation. 

 Why are directory objects needed?      
They are needed to ensure data security and integrity. Otherwise, users would be able to read data that they should not have access to and perform unwarranted operations on the server.

What makes Data Pump faster than original Export and Import?
There are three main reasons that Data Pump is faster than original Export and Import. First, the Direct Path data access method (which permits the server to bypass SQL and go right to the data blocks on disk) has been rewritten to be much more efficient and now supports Data Pump Import and Export. Second, because Data Pump does its processing on the server rather than in the client, much less data has to be moved between client and server. Finally, Data Pump was designed from the ground up to take advantage of modern hardware and operating system architectures in ways that original Export and Import cannot. These factors combine to produce significant performance improvements for Data Pump over original Export and Import.

How much faster is Data Pump than the original Export and Import utilities?
For a single stream, Data Pump Export is approximately 2 times faster than original Export and Data Pump Import is approximately 15 to 40 times faster than original Import. These performance improvements are for data movement only, not metadata. Speed can be dramatically improved using the PARALLEL parameter.

Why is Data Pump slower on small jobs?
Data Pump was designed for big jobs with lots of data. Each Data Pump job has a master table that has all the information about the job and is needed for restartability. The overhead of creating this master table makes small jobs take longer, but the speed in processing large amounts of data gives Data Pump a significant advantage in medium and larger jobs.

Are original Export and Import going away?
Original Export is being deprecated with the Oracle Database 11g release. Original Import will always be supported so that dump files from earlier releases (release 5.0 and later) will be able to be imported. Original and Data Pump dump file formats are not compatible.

Are Data Pump dump files and original Export and Import dump files compatible?
No, the dump files are not compatible or interchangeable. If you have original Export dump files, you must use original Import to load them.

How can I monitor my Data Pump jobs to see what is going on?
In interactive mode, you can get a lot of detail through the STATUS command. In SQL, you can query the following views:

• DBA_DATAPUMP_JOBS - all active Data Pump jobs and the state of each job
• USER_DATAPUMP_JOBS - summary of the user's active Data Pump jobs
• DBA_DATAPUMP_SESSIONS - all active user sessions that are attached to a Data Pump job
• V$SESSION_LONGOPS - shows the progress of each active Data Pump job
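For instance, the data-transfer progress of a running export job could be checked with a query along these lines (the OPNAME pattern assumes the default job naming):

SQL> select sid, serial#, opname, sofar, totalwork,
            round(sofar/totalwork*100,2) pct_done
     from v$session_longops
     where opname like 'SYS_EXPORT%'
     and totalwork > 0
     and sofar <> totalwork;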

Can I use Oracle Enterprise Manager with Data Pump?
Yes, OEM supports a fully functional interface to Data Pump.

Can I use gzip with Data Pump?
You can do a Data Pump Export and then run the dump files through gzip to compress them. You cannot compress the dump files prior to writing them. (The new COMPRESS parameter can be used to compress metadata, and metadata compression is enabled by default in Oracle Database 10g Release 2.)

How will you move data between versions?

The Data Pump method for moving data between different database versions is different from the method used by original Export and Import. With original Export, you had to run an older version of Export to produce a dump file that was compatible with an older database version. With Data Pump, you use the current Export version and simply use the VERSION parameter to specify the target database version. You cannot specify versions earlier than Oracle Database 10g (since Data Pump did not exist before 10g).
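For example, to produce a dump from an 11g database that a 10.2 database can import, the VERSION parameter might be used like this (names are illustrative):

$ expdp system/manager schemas=SSS directory=SSS dumpfile=sss_v102.dmp logfile=sss_v102.log version=10.2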

Reference : metalink ID 413965.1

           

Published on 17th April 2012

Published by Gopinathan D