Oracle DBA

Oracle DBA tasks for operations and infrastructure.

Move Database Files

In Oracle 12c, a data file can be relocated while it remains online. The command updates the control file and the data dictionary, and moves the file at the OS level. The procedures for relocating temp files, online redo logs, and control files remain unchanged.

-- move data file

SQL> alter database move datafile '/u01/oradata/db1/db1/system01.dbf' to '/u01/oradata/db1/system01.dbf';

Here are some noteworthy facts about relocating data files:
– The data file must be online when the command is issued.
– The file is first copied to the new location and then deleted from the old one; this temporarily requires twice the space.
– There is an option to keep the source file; the default is to delete it.
– If the destination file exists, you will get an error, unless the reuse option is specified.
– On Windows, the source file is sometimes not deleted, even if the keep option is not used.
– If you flash back the database, a moved file is not placed back in its original location.
– Moving a data file on a Data Guard primary does not move it on the standby.
– PDB data files cannot be moved from the root container.
– You can move files from/to ASM.
– There are no special considerations for RAC.
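For illustration, the keep and reuse options mentioned above extend the basic command as follows (paths are hypothetical):

-- copy the file to the new location, but keep the source file
alter database move datafile '/u01/oradata/db1/db1/users01.dbf' to '/u01/oradata/db1/users01.dbf' keep;

-- overwrite an existing file at the destination
alter database move datafile '/u01/oradata/db1/db1/users01.dbf' to '/u01/oradata/db1/users01.dbf' reuse;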


Demo

This example illustrates how to move database files. The demo environment consists of an Oracle 12cR1 database running on an Oracle Linux guest, using VirtualBox on Windows. The example below shows how to move data files online. It also shows one procedure for moving the other database files.

Example

The online move feature introduced in 12c applies to all data files, including those belonging to the SYSTEM tablespace. However, there is no change in the procedure for moving temp files, online redo logs, and control files. Temp files and online redo logs can be added and dropped while the database is running; moving control files requires a database bounce.

Move all datafiles, including system and undo.

SQL> select file_id,file_name,status from dba_data_files order by file_id;

FILE_ID    FILE_NAME                              STATUS
---------- -------------------------------------- ---------
1          /u01/oradata/db1/db1/system01.dbf      AVAILABLE
3          /u01/oradata/db1/db1/sysaux01.dbf      AVAILABLE
4          /u01/oradata/db1/db1/undotbs01.dbf     AVAILABLE
5          /u01/oradata/db1/db1/example01.dbf     AVAILABLE
6          /u01/oradata/db1/db1/users01.dbf       AVAILABLE

SQL> alter database move datafile '/u01/oradata/db1/db1/system01.dbf' to '/u01/oradata/db1/system01.dbf';

Database altered.

SQL> alter database move datafile '/u01/oradata/db1/db1/sysaux01.dbf' to '/u01/oradata/db1/sysaux01.dbf';

Database altered.

SQL> alter database move datafile '/u01/oradata/db1/db1/undotbs01.dbf' to '/u01/oradata/db1/undotbs01.dbf';

Database altered.

SQL> alter database move datafile '/u01/oradata/db1/db1/example01.dbf' to '/u01/oradata/db1/example01.dbf';

Database altered.

SQL> alter database move datafile '/u01/oradata/db1/db1/users01.dbf' to '/u01/oradata/db1/users01.dbf';

Database altered.

Verify dictionary has been updated.

SQL> select file_id,file_name,status from dba_data_files order by file_id;

FILE_ID    FILE_NAME                        STATUS
---------- -------------------------------- ---------
1          /u01/oradata/db1/system01.dbf    AVAILABLE
3          /u01/oradata/db1/sysaux01.dbf    AVAILABLE
4          /u01/oradata/db1/undotbs01.dbf   AVAILABLE
5          /u01/oradata/db1/example01.dbf   AVAILABLE
6          /u01/oradata/db1/users01.dbf     AVAILABLE

Verify data files have been relocated to destination directory at OS level.

SQL> !ls /u01/oradata/db1
db1 example01.dbf sysaux01.dbf system01.dbf undotbs01.dbf users01.dbf

… and deleted from source location.

SQL> !ls /u01/oradata/db1/db1
control01.ctl control02.ctl redo01.log redo02.log redo03.log temp01.dbf

Temp files cannot be moved online; to avoid downtime, you can add a new file and delete the old one.

SQL> alter tablespace temp add tempfile '/u01/oradata/db1/temp01.dbf' size 32m autoextend on next 16m maxsize 1024m;

Tablespace altered.

-- drop temp file; if there are active temp segments in it, you will get an error.

SQL> alter database tempfile '/u01/oradata/db1/db1/temp01.dbf' drop including datafiles;

Database altered.

Redo logs also cannot be moved online; to avoid downtime, you can add a new file and then delete the old one.

SQL> select f.group#,member,bytes,l.status from v$log l,v$logfile f where l.group#=f.group# order by 1;

GROUP#     MEMBER                          BYTES        STATUS
---------- ------------------------------- ------------ -----------
1          /u01/oradata/db1/db1/redo01.log   52,428,800 CURRENT
2          /u01/oradata/db1/db1/redo02.log   52,428,800 INACTIVE
3          /u01/oradata/db1/db1/redo03.log   52,428,800 INACTIVE

Drop the online redo logs (ORLs) in /u01/oradata/db1/db1 and recreate them in /u01/oradata/db1.

Start with group 2 which is inactive.

SQL> alter database drop logfile group 2;

Database altered.

SQL> alter database add logfile group 2 '/u01/oradata/db1/redo02.log' size 50m reuse;

Database altered.

Group 3 is also inactive and can be dropped & recreated.

SQL> alter database drop logfile group 3;

Database altered.

SQL> alter database add logfile group 3 '/u01/oradata/db1/redo03.log' size 50m reuse;

Database altered.

Switch logfile to make group 2 current.

SQL> alter system switch logfile;

System altered.

SQL> select f.group#,member,bytes,l.status from v$log l,v$logfile f where l.group#=f.group# order by 1;

GROUP#     MEMBER                             BYTES        STATUS
---------- ---------------------------------- ------------ ----------
1          /u01/oradata/db1/db1/redo01.log      52,428,800 ACTIVE
2          /u01/oradata/db1/redo02.log          52,428,800 CURRENT
3          /u01/oradata/db1/redo03.log          52,428,800 UNUSED

Group 1 is still active; perform a checkpoint to make it inactive.

SQL> alter system checkpoint;

System altered.

SQL> select f.group#,member,bytes,l.status from v$log l,v$logfile f where l.group#=f.group# order by 1;

GROUP#     MEMBER                             BYTES        STATUS
---------- ---------------------------------- ------------ ----------
1          /u01/oradata/db1/db1/redo01.log      52,428,800 INACTIVE
2          /u01/oradata/db1/redo02.log          52,428,800 CURRENT
3          /u01/oradata/db1/redo03.log          52,428,800 UNUSED

Now group 1 is inactive and can be dropped & recreated.

SQL> alter database drop logfile group 1;

Database altered.

SQL> alter database add logfile group 1 '/u01/oradata/db1/redo01.log' size 50m reuse;

Database altered.

Verify all ORLs are now in /u01/oradata/db1.

SQL> select f.group#,member,bytes,l.status from v$log l,v$logfile f where l.group#=f.group# order by 1;

GROUP#     MEMBER                        BYTES        STATUS
---------- ----------------------------- ------------ ----------
1          /u01/oradata/db1/redo01.log     52,428,800 UNUSED
2          /u01/oradata/db1/redo02.log     52,428,800 CURRENT
3          /u01/oradata/db1/redo03.log     52,428,800 UNUSED

Delete ORLs in /u01/oradata/db1/db1.

SQL> !rm /u01/oradata/db1/db1/redo0?.log

 

Moving control files requires downtime. They are currently in /u01/oradata/db1/db1 and will be moved to /u01/oradata/db1.

SQL> select name from v$controlfile;

NAME
-----------------------------------
/u01/oradata/db1/db1/control01.ctl
/u01/oradata/db1/db1/control02.ctl

Change their location in the spfile to /u01/oradata/db1.

SQL> alter system set control_files='/u01/oradata/db1/control01.ctl','/u01/oradata/db1/control02.ctl' scope=spfile;

System altered.

Shut down the database.

SQL> shu immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

Move the files using OS commands.

SQL> !mv /u01/oradata/db1/db1/control01.ctl /u01/oradata/db1/control01.ctl

SQL> !mv /u01/oradata/db1/db1/control02.ctl /u01/oradata/db1/control02.ctl

Start up the database.

SQL> startup 
ORACLE instance started.

Total System Global Area 3607101440 bytes
Fixed Size 2930608 bytes
Variable Size 419432528 bytes
Database Buffers 3170893824 bytes
Redo Buffers 13844480 bytes
Database mounted.
Database opened.

Verify control files are in /u01/oradata/db1/.

SQL> select name from v$controlfile;

NAME
-----------------------------------
/u01/oradata/db1/control01.ctl
/u01/oradata/db1/control02.ctl

 

Full Database Caching

By default, Oracle decides what to cache in the buffer cache; blocks read by full scans of large tables are not retained. This has changed in 12c – if the database is smaller than the buffer cache, then Oracle automatically caches everything, except NOCACHE LOBs.

It is also possible to force the database to cache everything, including NOCACHE LOBs, even if the database is larger than the buffer cache. This is called force full database caching mode. The setting is stored in the control file, not the parameter file. Enabling and disabling force full database caching requires a database bounce.

-- database must be in mounted state
startup mount

-- enable
alter database force full database caching;

-- disable
alter database no force full database caching;

-- open
alter database open;

-- verify
select force_full_db_caching from v$database;

Here are some noteworthy facts about force full database caching:
– Oracle recommends it should only be used when the buffer cache is larger than the logical db size
– setting stored in control file, not parameter file
– in RAC, it is enabled either for all instances, or none of them
– in RAC, buffer cache of each instance should be larger than db
– if RAC instances are ‘well partitioned’, then it will suffice if combined buffer is larger than db
– in a multi-tenant configuration, it applies to the CDB and all PDBs
– when using ASMM, the buffer size should be assumed to be 60% of sga_target
– when using AMM, the buffer size should be assumed to be 36% of memory_target
– performance is most likely to improve when db is I/O bound, and there are repeated large table scans, and LOB reads

Keep in mind that the above apply only to force full database caching mode. The default full database caching mode is automatic and is triggered when Oracle detects that the buffer cache is larger than the database.
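A quick sanity check for the size recommendation above is to compare the logical database size with the current buffer cache size (a sketch using the same views queried later in this demo):

-- logical database size in MB
select round(sum(bytes)/1024/1024) db_mb from dba_segments;

-- current buffer cache size in MB
select round(bytes/1024/1024) cache_mb from v$sgainfo where name = 'Buffer Cache Size';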

 

Demo

Two examples demonstrate the use of force full database caching. The first one uses a buffer cache which is larger than the logical database size. The second example explores the use of this feature when the buffer is a little smaller than the database.


Example 1

Force full database caching is enabled for a 4219 MB database with a 4500 MB buffer cache. A full table scan is run against a table with a 1309 MB data segment. As expected, a query against V$BH shows that table data is in the buffer cache.

Set environment and display contents of parameter file.

$ . oraenv
ORACLE_SID = [db1] ? db1
The Oracle base remains unchanged with value /u01/app/oracle
oracle@d12c1:/home/oracle [db1]

$ cat $ORACLE_HOME/dbs/initdb1.ora
audit_file_dest='/u01/app/oracle/admin/db1/adump'
audit_trail='db'
compatible='12.1.0.2.0'
control_files='/u01/oradata/db1/db1/control01.ctl','/u01/oradata/db1/db1/control02.ctl'
db_block_size=8192
db_domain=''
db_name='db1'
diagnostic_dest='/u01/app/oracle'
dispatchers='(PROTOCOL=TCP) (SERVICE=db1XDB)'
local_listener='LISTENER_DB1'
open_cursors=300
processes=300
remote_login_passwordfile='EXCLUSIVE'
undo_tablespace='UNDOTBS1'
db_cache_size=4500m
shared_pool_size=384m

Mount the database, enable force full database caching, and verify setting.

SQL> startup mount
ORACLE instance started.

Total System Global Area 5167382528 bytes
Fixed Size 2935080 bytes
Variable Size 419432152 bytes
Database Buffers 4731174912 bytes
Redo Buffers 13840384 bytes
Database mounted.

SQL> alter database force full database caching;

Database altered.

SQL> alter database open;

Database altered.

SQL> select force_full_db_caching from v$database;

FOR
---
YES

SQL>

Run a full table scan against table TEST.T.

SQL> select cust_income_level,count(*) from test.t group by cust_income_level order by 1;

CUST_INCOME_LEVEL                COUNT(*)
------------------------------ ----------
A: Below 30,000                    336640
B: 30,000 - 49,999                 353792
C: 50,000 - 69,999                 544640
D: 70,000 - 89,999                 667776
E: 90,000 - 109,999               1015808
F: 110,000 - 129,999              1348736
G: 130,000 - 149,999               699520
H: 150,000 - 169,999               699520
I: 170,000 - 189,999               584448
J: 190,000 - 249,999               384768
K: 250,000 - 299,999               247552
L: 300,000 and above               215552
                                     5248

13 rows selected.

Display data segment size of table TEST.T, against which a full table scan was just run.

SQL> select owner,segment_name,segment_type,round(bytes/1024/1024) from dba_segments where segment_name='T';

OWNER                 SEGMENT_NAME                   SEGMENT_TYPE       ROUND(BYTES/1024/1024)
--------------------- ------------------------------ ------------------ ----------------------
TEST                  T                              TABLE                                1306


Check contents of buffer cache by querying V$BH. Of the 202027 blocks in the buffer, 193381 are occupied by table TEST.T data.

SQL> SELECT o.OBJECT_NAME, o.OBJECT_TYPE, o.OWNER, COUNT(*) NUMBER_OF_BLOCKS, round(count(*) * 8/1024) Tot_mb
       FROM DBA_OBJECTS o, V$BH bh WHERE o.DATA_OBJECT_ID = bh.OBJD AND o.OBJECT_NAME = 'T'
      GROUP BY o.OBJECT_NAME, o.OWNER, o.OBJECT_TYPE;

OBJECT_NAME  OBJECT_TYPE     OWNER      NUMBER_OF_BLOCKS     TOT_MB
------------ --------------- ---------- ---------------- ----------
T            TABLE           TEST                 193381       1511


SQL> select count(*) from v$bh;

COUNT(*)
----------
202027

Example 2

Force full database caching is enabled for a 4219 MB database with a smaller buffer cache of only 3000 MB. A full table scan is run against a table with a 1309 MB data segment. As in the previous example, the scan populates the buffer, demonstrating that this feature can be enabled even if the buffer cache is smaller than the database. However, this is not a desirable situation, unless the large table can be excluded from the buffer. To do this, a small recycle pool is configured, and the table is modified to use it.

Display contents of parameter file.

$ cat $ORACLE_HOME/dbs/initdb1.ora
audit_file_dest='/u01/app/oracle/admin/db1/adump'
audit_trail='db'
compatible='12.1.0.2.0'
control_files='/u01/oradata/db1/db1/control01.ctl','/u01/oradata/db1/db1/control02.ctl'
db_block_size=8192
db_domain=''
db_name='db1'
diagnostic_dest='/u01/app/oracle'
dispatchers='(PROTOCOL=TCP) (SERVICE=db1XDB)'
local_listener='LISTENER_DB1'
open_cursors=300
processes=300
remote_login_passwordfile='EXCLUSIVE'
undo_tablespace='UNDOTBS1'
db_cache_size=3000m
db_recycle_cache_size=16M
shared_pool_size=384m

Mount the database, enable force full database caching, and verify setting

SQL> startup mount
ORACLE instance started.

Total System Global Area 3623878656 bytes
Fixed Size 2930656 bytes
Variable Size 419432480 bytes
Database Buffers 3187671040 bytes
Redo Buffers 13844480 bytes
Database mounted.

SQL> alter database force full database caching;

Database altered.

SQL> alter database open;

Database altered.

SQL> select force_full_db_caching from v$database;

FOR
---
YES

Display size of database.

SQL> select round(sum(bytes/1024/1024)) from dba_segments;

ROUND(SUM(BYTES/1024/1024))
---------------------------
                       4219

Display size of table TEST.T, and perform full table scan on it.

SQL> select owner,segment_name,segment_type,round(bytes/1024/1024) from dba_segments where segment_name='T';

OWNER      SEGMENT_NAME     SEGMENT_TYPE    ROUND(BYTES/1024/1024)
---------- ---------------- --------------- ----------------------
TEST       T                TABLE                             1306


SQL> select cust_income_level,count(*) from test.t group by cust_income_level order by 1;

CUST_INCOME_LEVEL                COUNT(*)
------------------------------ ----------
A: Below 30,000                    336640
B: 30,000 - 49,999                 353792
C: 50,000 - 69,999                 544640
D: 70,000 - 89,999                 667776
E: 90,000 - 109,999               1015808
F: 110,000 - 129,999              1348736
G: 130,000 - 149,999               699520
H: 150,000 - 169,999               699520
I: 170,000 - 189,999               584448
J: 190,000 - 249,999               384768
K: 250,000 - 299,999               247552
L: 300,000 and above               215552
                                     5248

13 rows selected.

Query V$BH to verify that the scan did populate the buffer, as a result of force full database caching.

SQL> SELECT o.OBJECT_NAME, o.OBJECT_TYPE, o.OWNER, COUNT(*) NUMBER_OF_BLOCKS, round(count(*) * 8/1024) Tot_mb
       FROM DBA_OBJECTS o, V$BH bh WHERE o.DATA_OBJECT_ID = bh.OBJD AND o.OBJECT_NAME = 'T'
      GROUP BY o.OBJECT_NAME, o.OWNER, o.OBJECT_TYPE;

OBJECT_NAME  OBJECT_TYPE     OWNER      NUMBER_OF_BLOCKS     TOT_MB
------------ --------------- ---------- ---------------- ----------
T            TABLE           TEST                 193383       1511

Modify table TEST.T so that it uses the 16 MB recycle pool, rather than crowding out the main buffer cache.

SQL> alter table test.t storage (buffer_pool recycle);

Table altered.

Bounce the database.

SQL> shu immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup pfile=?/dbs/initdb1.ora
ORACLE instance started.

Total System Global Area 3607101440 bytes
Fixed Size 2930608 bytes
Variable Size 419432528 bytes
Database Buffers 3170893824 bytes
Redo Buffers 13844480 bytes
Database mounted.
Database opened.
SQL>

Verify TEST.T is not in the buffer.

SQL> SELECT o.OBJECT_NAME, o.OBJECT_TYPE, o.OWNER, COUNT(*) NUMBER_OF_BLOCKS, round(count(*) * 8/1024) Tot_mb
       FROM DBA_OBJECTS o, V$BH bh WHERE o.DATA_OBJECT_ID = bh.OBJD AND o.OBJECT_NAME = 'T'
      GROUP BY o.OBJECT_NAME, o.OWNER, o.OBJECT_TYPE;

no rows selected

Perform full table scan on TEST.T.

SQL> select cust_income_level,count(*) from test.t group by cust_income_level order by 1;

CUST_INCOME_LEVEL                COUNT(*)
------------------------------ ----------
A: Below 30,000                    336640
B: 30,000 - 49,999                 353792
C: 50,000 - 69,999                 544640
D: 70,000 - 89,999                 667776
E: 90,000 - 109,999               1015808
F: 110,000 - 129,999              1348736
G: 130,000 - 149,999               699520
H: 150,000 - 169,999               699520
I: 170,000 - 189,999               584448
J: 190,000 - 249,999               384768
K: 250,000 - 299,999               247552
L: 300,000 and above               215552
                                     5248

13 rows selected.

Query V$BH and verify that the scan populated only the 16 MB recycle pool, and did not force out other residents of the main buffer.

SQL> SELECT o.OBJECT_NAME, o.OBJECT_TYPE, o.OWNER, COUNT(*) NUMBER_OF_BLOCKS, round(count(*) * 8/1024) Tot_mb
       FROM DBA_OBJECTS o, V$BH bh WHERE o.DATA_OBJECT_ID = bh.OBJD AND o.OBJECT_NAME = 'T'
      GROUP BY o.OBJECT_NAME, o.OWNER, o.OBJECT_TYPE;

OBJECT_NAME  OBJECT_TYPE     OWNER      NUMBER_OF_BLOCKS     TOT_MB
------------ --------------- ---------- ---------------- ----------
T            TABLE           TEST                   1967         15

SQL> select count(*) from v$bh;

COUNT(*)
----------
      9875

 

Enable Smart Flash Cache

 

Smart Flash Cache is enabled by setting two initialization parameters. The flash cache file can be on the operating system file system, or on an ASM disk group.

db_flash_cache_file='/flash1/db1','/flash2/db1'
db_flash_cache_size=256M,128M

After instance startup, a flash cache file can be disabled by dynamically changing its size to zero. Once disabled in this manner, it can also be re-enabled by dynamically resetting its value to the original; adjusting the original value is not permitted.

-- disable /flash2
alter system set db_flash_cache_size = 256M,0;

-- re-enable /flash2
alter system set db_flash_cache_size = 256M, 256M;

When the database needs a buffer, it can overwrite a clean one. If that block is needed later, it must be read again from magnetic disk. However, if flash cache is enabled, the clean buffer is written to it before being overwritten; if the block is needed later, it can then be read from the SSD rather than magnetic disk.

Note that only the body of the block is written to SSD; the header remains in the buffer cache, and takes up about 100 bytes. In a RAC instance, it takes 200 bytes in the buffer cache, plus another 208 bytes in the shared pool for GCS.

Smart Flash Cache usage statistics are available in v$flashfilestat. For each file, it shows whether the flash cache file is enabled, along with the number of reads and their latency.

select * from v$flashfilestat;

The storage clause of a table can be modified to alter the default algorithm for moving blocks from the buffer cache to the flash cache.

-- keep schema object blocks in flash cache, space permitting
alter table test.flashtab storage (flash_cache keep);

-- never keep schema object blocks in flash cache
alter table test.flashtab storage (flash_cache none);

-- let database decide whether to keep schema object blocks in flash cache
alter table test.flashtab storage (flash_cache default);

 

See MOS Doc ID 2123908.1 for information about unpublished Bug 19504946 – FLASH CACHE DOESN’T WORK IN OEL7. Apply Patch 19504946 to fix the issue; otherwise you will get ORA-01261 during startup.

 

Manage Instance Memory

 

The goals of instance memory management are:

  • Minimize I/O
  • Minimize hard parses
  • Minimize disk sorts

They are accomplished by setting initialization parameters which allocate memory to instance structures. This is an iterative process. The initial allocation can be made based on rules of thumb. Refinements can then be made based on results of memory monitoring during typical workloads. Note that instance memory tuning should be undertaken only after application tuning has been completed.

 

1.  Identify available memory

 

Let’s assume a Linux machine has 12 GB of memory. As a starting point, leave 3 GB (25%) for the operating system, and allocate the remaining 9 GB to Oracle. Monitor swapping and make further adjustments. The goal is to have little or no swapping. Reduce file system buffers to a minimum, as they are not useful to Oracle.

 

2.  Implement Initial Allocation

 

Using AMM (automatic memory management) is easier and provides more flexibility. However, it has some pitfalls. Under certain workloads, the PGA can balloon and squeeze the buffer and library caches down to ridiculously low levels! For this reason, some safeguards are required. First, set a minimum for sga_target. This should reflect the minimum memory required to eliminate or minimize library cache misses, and keep the buffer cache hit ratio above 90 percent. Second, set a maximum for the PGA. This may cause the database to terminate some processes, but that is preferable to the entire database hanging.

If an spfile is being used, create an init file from it. Edit the init file and set the following parameters, as a starting point. Delete all other memory parameters.

memory_target=9g             # total memory available for instance
sga_target=2304m             # minimum 25% for SGA
pga_aggregate_limit=6912m    # maximum 75% for PGA

 

3.  Shared Server Precautions

 

When using shared server, some PGA memory components are moved to the SGA. For this reason, setting a minimum for the SGA is not sufficient. To protect the library and dictionary caches, and the buffer cache, a minimum needs to be set for the shared pool and the buffer cache. In addition, the large pool needs to be configured. As a first step, set these additional memory parameters in the init file.

shared_pool_size=460m         # 20% of sga_target
db_cache_size=1382m           # 60% of sga target
large_pool_size=230m          # 10% of sga target
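The arithmetic behind the values in steps 2 and 3 can be verified with a quick query (a sketch; trunc keeps whole megabytes, matching the settings above). For 9 GB of instance memory it returns 2304, 6912, 460, 1382, and 230.

select trunc(9*1024*0.25)      sga_mb,
       trunc(9*1024*0.75)      pga_limit_mb,
       trunc(9*1024*0.25*0.20) shared_pool_mb,
       trunc(9*1024*0.25*0.60) db_cache_mb,
       trunc(9*1024*0.25*0.10) large_pool_mb
  from dual;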

 

4.  Monitor and Adjust

 

After running a typical workload, review diagnostic data to determine whether adjustments are required. When using AMM, v$memory_target_advice is the first stop; it contains tuning advice for memory_target. Note that this information covers everything since instance startup and may not be very useful: it can average out, and thus hide, peaks which need attention. To get data for specific time periods, use dba_hist_memory_target_advice.
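For example (a sketch; the column list is trimmed for readability):

select memory_size, memory_size_factor, estd_db_time
  from v$memory_target_advice
 order by memory_size;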

In most cases, this setup will provide satisfactory performance, as long as there is sufficient memory available to handle the workload. If performance is not satisfactory, or memory consumption needs to be reduced, then available memory needs to be manually divided into SGA and PGA, as described in the next step.

 

5.  Set SGA and PGA

 

Instead of using AMM as described above, an alternate approach is to manually allocate memory to the SGA and PGA. The first step is to determine the kind of workload. For OLTP or general-purpose applications, allocate 80% to the SGA, and 20% to the PGA. These applications tend to query the same data, and keeping it in memory helps reduce I/O. This requires more memory for the SGA.

For data warehouse applications, divide the available memory equally between the SGA and PGA. These applications have different characteristics. They perform sorts and merges on large amounts of data. These operations are performed in work areas, which are part of the PGA. The data they retrieve is not usually shared, and often uses direct path reads which bypass the SGA. For these reasons, they benefit from a larger PGA. The SGA can be reduced because it provides limited benefits.

Start by setting sga_target and pga_aggregate_target to the values calculated above. After running a typical workload, adjust the size of the SGA based on data in v$sga_target_advice and dba_hist_sga_target_advice. The PGA can be adjusted based on v$pga_target_advice and dba_hist_pga_target_advice. The procedure is similar to the one for adjusting memory target, as described in the previous step.
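The advice views can be queried directly, for example (a sketch):

-- SGA sizing advice
select sga_size, sga_size_factor, estd_db_time
  from v$sga_target_advice
 order by sga_size;

-- PGA sizing advice
select round(pga_target_for_estimate/1024/1024) target_mb,
       estd_pga_cache_hit_percentage,
       estd_overalloc_count
  from v$pga_target_advice
 order by 1;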

 

6.  Manual Approach

 

If setting SGA and PGA sizes does not yield acceptable results, then you can implement a fully manual approach. This requires detailed knowledge of the application. A variety of features is available to implement this approach. The following is by no means an exhaustive treatment of the subject; it merely identifies the most common options available to the DBA.

The first step is to configure the shared pool. Typically, this does not require much memory. However, an undersized shared pool can significantly impact performance. Set the initial size of shared_pool_size based upon application knowledge, and then monitor and make adjustments. Information for these adjustments is available in v$shared_pool_advice, dba_hist_shared_pool_advice, v$librarycache, dba_hist_librarycache, v$rowcache, and dba_hist_rowcache_summary. The goal is to eliminate or minimize cache misses by making sure that there is enough memory to store all required SQL, PL/SQL, and dictionary information.

The large pool should be configured if you are using shared server, parallel query, or RMAN. Information for sizing the large pool is available in v$sesstat, in the rows for ‘session uga memory’ and ‘session uga memory max’.
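A sketch of the corresponding query (sums are across all current sessions):

select n.name, sum(s.value) bytes
  from v$sesstat s, v$statname n
 where s.statistic# = n.statistic#
   and n.name in ('session uga memory', 'session uga memory max')
 group by n.name;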

The second step is to configure the data buffer, by setting the db_cache_size parameter. The initial value can be fine-tuned using information in v$db_cache_advice and dba_hist_db_cache_advice. The goal is a hit ratio above 90 percent. If you have multiple block sizes, then you need a separate pool for each of them, configured using the db_nk_cache_size parameters (for example, db_16k_cache_size). Information for tuning them is available in v$buffer_pool_statistics and dba_hist_buffer_pool_stat.
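For example, the default pool's advice can be queried like this (a sketch for an 8 KB block size):

select size_for_estimate, size_factor, estd_physical_reads
  from v$db_cache_advice
 where name = 'DEFAULT'
   and block_size = 8192
 order by size_for_estimate;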

Sometimes, the hit ratio can be improved by configuring the optional keep and recycle pools, and assigning database objects to them. Using the result cache can improve performance by eliminating unnecessary executions of queries and functions. The In-Memory column store is another feature which can improve performance for some queries; however, it takes up additional memory in the SGA.

The third and final step is to size the PGA. Oracle does not recommend manually sizing work areas, unless the instance is configured with the shared server option. This means pga_aggregate_target should be set to the value determined for the PGA, allowing Oracle to automatically determine work area sizes. The goal is to eliminate multi-pass executions. Ideally, all executions should run in optimal mode; if available memory is not sufficient, a one-pass execution is used, increasing response time. Tuning information is available in v$sql_workarea_histogram and dba_hist_sql_workarea_hstgrm.
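A sketch of the corresponding check; non-zero counts in the one-pass or multi-pass columns indicate work areas that could not run in optimal mode:

select low_optimal_size, high_optimal_size,
       optimal_executions, onepass_executions, multipasses_executions
  from v$sql_workarea_histogram
 where total_executions > 0;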

 


 

Demo

The examples below use a Linux guest on Oracle VirtualBox. The first example implements the Automatic Memory Management approach described in this post. The second one demonstrates how the memory required for an instance can be reduced to a bare minimum.

 

Example 1

The Linux host has 12 GB of memory, out of which 9 GB are initially allocated to the instance. All memory settings and precautions described above for using AMM are included in the init file.

 

$ uname -a
Linux d12c1.localdomain 4.1.12-37.5.1.el6uek.x86_64 #2 SMP Thu Jun 9 15:56:37 PDT 2016 x86_64 x86_64 x86_64 GNU/Linux

$ . oraenv
ORACLE_SID = [db1] ? db1
The Oracle base remains unchanged with value /u01/app/oracle

$ cd $ORACLE_HOME/dbs
oracle@d12c1:/u01/app/oracle/product/12.1.0/dbhome_1/dbs [db1]

$ ls -ltr *db1*.ora
-rw-r--r--. 1 oracle oinstall 791 Jul  4 11:18 initdb1.ora
$ cat initdb1.ora

audit_file_dest='/u01/app/oracle/admin/db1/adump'
audit_trail='db'
compatible='12.1.0.2.0'
control_files='/u01/oradata/db1/db1/control01.ctl','/u01/oradata/db1/db1/control02.ctl'
db_block_size=8192
db_domain=''
db_name='db1'
diagnostic_dest='/u01/app/oracle'
dispatchers='(PROTOCOL=TCP) (SERVICE=db1XDB)'
undo_tablespace='UNDOTBS1'
#
# memory management parameters
#
memory_target=9g              # total memory available for instance
sga_target=2304m              # minimum 25% for SGA
pga_aggregate_limit=6912m     # maximum 75% for PGA
shared_pool_size=460m         # 20% of sga_target
db_cache_size=1382m           # 60% of sga target
large_pool_size=230m          # 10% of sga target

 

$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Wed Jul 4 11:21:10 2018
Copyright (c) 1982, 2014, Oracle.  All rights reserved.
Connected to an idle instance.

 

SQL> startup

ORACLE instance started.

Total System Global Area 9663676416 bytes
Fixed Size                  2934168 bytes
Variable Size           8053066344 bytes
Database Buffers         1577058304 bytes
Redo Buffers               30617600 bytes
Database mounted.
Database opened.
SQL>

 


Example 2

This example demonstrates what happens when no memory parameters are set. The instance uses only 260 MB of SGA and a PGA target of 10 MB.

 

$ uname -a

Linux d12c1.localdomain 4.1.12-37.5.1.el6uek.x86_64 #2 SMP Thu Jun 9 15:56:37 PDT 2016 x86_64 x86_64 x86_64 GNU/Linux
$ . oraenv

ORACLE_SID = [db1] ? db1

The Oracle base remains unchanged with value /u01/app/oracle
$ cd $ORACLE_HOME/dbs

oracle@d12c1:/u01/app/oracle/product/12.1.0/dbhome_1/dbs [db1]

$ ls -ltr *db1*.ora

-rw-r--r--. 1 oracle oinstall 377 Jul  4 13:17 initdb1.ora
$ cat initdb1.ora

audit_file_dest='/u01/app/oracle/admin/db1/adump'
audit_trail='db'
compatible='12.1.0.2.0'
control_files='/u01/oradata/db1/db1/control01.ctl','/u01/oradata/db1/db1/control02.ctl'
db_block_size=8192
db_domain=''
db_name='db1'
diagnostic_dest='/u01/app/oracle'
dispatchers='(PROTOCOL=TCP) (SERVICE=db1XDB)'
undo_tablespace='UNDOTBS1'
#
# use defaults for all memory parameters
#
$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Wed Jul 4 13:18:59 2018
Copyright (c) 1982, 2014, Oracle.  All rights reserved.
Connected to an idle instance.
SQL> startup
ORACLE instance started.

Total System Global Area  272629760 bytes
Fixed Size                  2923336 bytes
Variable Size             213910712 bytes
Database Buffers           50331648 bytes
Redo Buffers                5464064 bytes
Database mounted.
Database opened.
SQL> select name,value/1024/1024 MB from v$parameter where name in ('memory_target','sga_target','pga_aggregate_target');

NAME                             MB
------------------------ ----------
sga_target                        0
memory_target                     0
pga_aggregate_target             10

 

SQL> select * from v$sgainfo;

NAME                                 BYTES  RES  CON_ID
-------------------------------  ---------- --- ----------
Fixed SGA Size                      2923336 No        0
Redo Buffers                        5464064 No        0
Buffer Cache Size                  50331648 Yes       0
In-Memory Area Size                       0 No        0
Shared Pool Size                  209715200 Yes       0
Large Pool Size                           0 Yes       0
Java Pool Size                      4194304 Yes       0
Streams Pool Size                         0 Yes       0
Shared IO Pool Size                 4194304 Yes       0
Data Transfer Cache Size                  0 Yes       0
Granule Size                        4194304 No        0
Maximum SGA Size                  272629760 No        0
Startup overhead in Shared Pool   145462768 No        0
Free SGA Memory Available                 0           0

14 rows selected.

SQL>

 

Setup CloneDB

 

Using Oracle's built-in copy-on-write technology, a consistent backup of a database can be used to create a clone. The clone database accesses the backup data files in read-only mode; a PL/SQL procedure creates pointers to them. When a clone changes a block, or creates a new one, an unshared copy of that block is written. Multiple clones can share a single source database backup.
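As an aside, the tiny on-disk footprint of clone files comes from sparse allocation at the filesystem level. The shell sketch below is not Oracle-specific (the file name is made up); it simply illustrates how a file's apparent size can far exceed its allocated blocks until data is actually written:

```shell
# Create a 100 MB sparse file; no data blocks are allocated yet.
truncate -s 100M sparse_demo.dat

apparent_kb=$(( $(stat -c %s sparse_demo.dat) / 1024 ))   # logical size
allocated_kb=$(du -k sparse_demo.dat | cut -f1)           # blocks on disk

echo "apparent=${apparent_kb}KB allocated=${allocated_kb}KB"

# Writing 1 MB into the file allocates only the blocks actually touched.
dd if=/dev/zero of=sparse_demo.dat bs=1M count=1 conv=notrunc 2>/dev/null
du -k sparse_demo.dat | cut -f1

rm -f sparse_demo.dat
```

The du outputs in the demos below show the same effect: clone data files occupy only a few KB until their blocks are modified.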

To set up a clone database, an Oracle-supplied Perl script can be used to generate three files: a pfile for the clone, a create-controlfile script, and a rename-datafile script. (These files can also be created manually.) After running them, the clone DB is ready for use.

1.  Enable dNFS (optional)

Oracle Direct NFS (dNFS) provides faster access to files stored on NFS. If your target backups or clone data files are on NFS and you want to take advantage of dNFS, shut down all Oracle processes running from the Oracle home, then use the following commands to enable it. (The documentation states that dNFS is required for CloneDB; however, as Example 2 below shows, CloneDB works without it.)

$ cd $ORACLE_HOME/rdbms/lib
$ make -f ins_rdbms.mk dnfs_on
2.  Create Backup

Create a consistent image backup of the target database. If you cannot shut down the target database, you can make a hot backup and apply archive logs to roll it forward.

$ rman target /
RMAN> startup mount
RMAN> backup as copy database format '/u01/fra/db1/rman/clone/db1_%U';
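If the backup is taken hot, the clone will need media recovery before it can be opened. A minimal sketch of the extra step, run later on the clone (after crtdb.sql and dbren.sql, in place of the plain resetlogs open); the archived logs covering the backup window must be available:

```sql
-- Apply archived logs generated during the hot backup, then open the clone.
SQL> recover database using backup controlfile until cancel;
SQL> alter database open resetlogs;
```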
3.  Generate Scripts

Set up environment variables and run the Perl script. The Oracle home should point to the target database, and all directories should be created before running the script. The full path of the target database parameter file must be passed to the Perl script (in the example below, it is /u01/oradata/clone1/initdb1.ora). The MASTER_COPY_DIR should contain only backups of the target database data files; remove all other files, including backups of control files and spfiles.

$ export MASTER_COPY_DIR='/u01/fra/db1/rman/clone'
$ export CLONE_FILE_CREATE_DEST='/u01/oradata/clone1'
$ export CLONEDB_NAME=clone1

$ perl /u01/app/oracle/product/12.1.0/dbhome_1/rdbms/install/clonedb.pl /u01/oradata/clone1/initdb1.ora crtdb.sql dbren.sql
4.  Create Clone

The Perl script creates three files in the CLONE_FILE_CREATE_DEST directory. Edit these files as needed, and make sure all required directories have been created. The Oracle home should now point to the clone DB. The generated script dbren.sql has a bug that causes errors when trying to drop and recreate the TEMP tablespace. You can work around this by adding a temp file, as shown below.

SQL> @crtdb
SQL> @dbren
SQL> alter tablespace temp add tempfile '/u01/oradata/clone1/clone1_temp_1.dbf' size 256M;

Demo

The examples below use a Linux guest on Oracle VBox. Both the target and clone DBs run on the same host, and use the same Oracle home. The target backups and clone data files are on the local file system. Both examples share the same target database backup.

 

Example 1

After creating a consistent image backup of target database db1, manually create the three files needed for cloning. Create clone database cp1 and verify space usage.

-- Log into the target database and create a consistent image backup

$ . oraenv
ORACLE_SID = [db1] ? db1
The Oracle base remains unchanged with value /u01/app/oracle
oracle@d12c1:/home/oracle/gs [db1]
$ rman target /

Recovery Manager: Release 12.1.0.2.0 - Production on Sun Jun 10 04:27:19 2018

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

connected to target database (not started)

RMAN> startup mount

Oracle instance started
database mounted

Total System Global Area 578813952 bytes

Fixed Size 2926856 bytes
Variable Size 536872696 bytes
Database Buffers 33554432 bytes
Redo Buffers 5459968 bytes

RMAN> backup as copy database format '/u01/fra/db1/rman/clone/db1_%U';

Starting backup at 2018/06/10 04:28:40
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=124 device type=DISK
channel ORA_DISK_1: starting datafile copy
input datafile file number=00005 name=/u01/oradata/db1/db1/example01.dbf
output file name=/u01/fra/db1/rman/clone/db1_data_D-DB1_I-1571084687_TS-EXAMPLE_FNO-5_13t52m7q tag=TAG20180610T042841 RECID=26 STAMP=978409730
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:15
channel ORA_DISK_1: starting datafile copy
input datafile file number=00001 name=/u01/oradata/db1/db1/system01.dbf
output file name=/u01/fra/db1/rman/clone/db1_data_D-DB1_I-1571084687_TS-SYSTEM_FNO-1_14t52m89 tag=TAG20180610T042841 RECID=27 STAMP=978409743
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting datafile copy
input datafile file number=00003 name=/u01/oradata/db1/db1/sysaux01.dbf
output file name=/u01/fra/db1/rman/clone/db1_data_D-DB1_I-1571084687_TS-SYSAUX_FNO-3_15t52m8g tag=TAG20180610T042841 RECID=28 STAMP=978409749
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:07
channel ORA_DISK_1: starting datafile copy
input datafile file number=00004 name=/u01/oradata/db1/db1/undotbs01.dbf
output file name=/u01/fra/db1/rman/clone/db1_data_D-DB1_I-1571084687_TS-UNDOTBS1_FNO-4_16t52m8o tag=TAG20180610T042841 RECID=29 STAMP=978409752
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting datafile copy
copying current control file
output file name=/u01/fra/db1/rman/clone/db1_cf_D-DB1_id-1571084687_17t52m8p tag=TAG20180610T042841 RECID=30 STAMP=978409753
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting datafile copy
input datafile file number=00006 name=/u01/oradata/db1/db1/users01.dbf
output file name=/u01/fra/db1/rman/clone/db1_data_D-DB1_I-1571084687_TS-USERS_FNO-6_18t52m8q tag=TAG20180610T042841 RECID=31 STAMP=978409754
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
including current SPFILE in backup set
channel ORA_DISK_1: starting piece 1 at 2018/06/10 04:29:15
channel ORA_DISK_1: finished piece 1 at 2018/06/10 04:29:16
piece handle=/u01/fra/db1/rman/clone/db1_19t52m8r_1_1 tag=TAG20180610T042841 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 2018/06/10 04:29:16

RMAN> exit

Recovery Manager complete.
oracle@d12c1:/home/oracle/gs [db1]

 

--  These are the five datafile copies we need; the controlfile and spfile backups can be deleted.

$ ls -ltr /u01/fra/db1/rman/clone/*_TS-*
-rw-r-----. 1 oracle oinstall 1304174592 Jun 10 04:28 /u01/fra/db1/rman/clone/db1_data_D-DB1_I-1571084687_TS-EXAMPLE_FNO-5_13t52m7q
-rw-r-----. 1 oracle oinstall 838868992 Jun 10 04:28 /u01/fra/db1/rman/clone/db1_data_D-DB1_I-1571084687_TS-SYSTEM_FNO-1_14t52m89
-rw-r-----. 1 oracle oinstall 660611072 Jun 10 04:29 /u01/fra/db1/rman/clone/db1_data_D-DB1_I-1571084687_TS-SYSAUX_FNO-3_15t52m8g
-rw-r-----. 1 oracle oinstall 83894272 Jun 10 04:29 /u01/fra/db1/rman/clone/db1_data_D-DB1_I-1571084687_TS-UNDOTBS1_FNO-4_16t52m8o
-rw-r-----. 1 oracle oinstall 5251072 Jun 10 04:29 /u01/fra/db1/rman/clone/db1_data_D-DB1_I-1571084687_TS-USERS_FNO-6_18t52m8q
oracle@d12c1:/home/oracle/gs [db1]
$

 

-- Create audit dump and datafile directories.

$ mkdir -p /u01/app/oracle/admin/cp1/adump
$ mkdir -p /u01/oradata/cp1

 

--  Review the three scripts required to create the clone; verify the backup file names are correctly entered in the create controlfile and rename datafile statements.

$ ls -ltr
total 12
-rw-r--r--. 1 oracle oinstall 225 Jun 9 18:13 initcp1.ora
-rw-r--r--. 1 oracle oinstall 653 Jun 10 04:47 crtdb.sql
-rw-r--r--. 1 oracle oinstall 927 Jun 10 04:53 dbren.sql
oracle@d12c1.localdomain:/u01/oradata/cp1 [db1]
$ cat initcp1.ora
control_files=/u01/oradata/cp1/cp1_ctl.dbf
clonedb=TRUE
compatible='12.1.0.2.0'
db_name=cp1
undo_tablespace='UNDOTBS1'
db_cache_size=32m
java_pool_size=32m
large_pool_size=96m
pga_aggregate_target=256m
shared_pool_size=384m

$ cat crtdb.sql
STARTUP NOMOUNT PFILE=/u01/oradata/cp1/initcp1.ora
CREATE CONTROLFILE REUSE SET DATABASE cp1 RESETLOGS
LOGFILE
GROUP 1 '/u01/oradata/cp1/cp1_log1.log' SIZE 50M,
GROUP 2 '/u01/oradata/cp1/cp1_log2.log' SIZE 50M
DATAFILE
'/u01/fra/db1/rman/clone/db1_data_D-DB1_I-1571084687_TS-SYSTEM_FNO-1_14t52m89',
'/u01/fra/db1/rman/clone/db1_data_D-DB1_I-1571084687_TS-SYSAUX_FNO-3_15t52m8g',
'/u01/fra/db1/rman/clone/db1_data_D-DB1_I-1571084687_TS-UNDOTBS1_FNO-4_16t52m8o',
'/u01/fra/db1/rman/clone/db1_data_D-DB1_I-1571084687_TS-USERS_FNO-6_18t52m8q',
'/u01/fra/db1/rman/clone/db1_data_D-DB1_I-1571084687_TS-EXAMPLE_FNO-5_13t52m7q'
CHARACTER SET WE8MSWIN1252;

$ cat dbren.sql
declare
begin
dbms_dnfs.clonedb_renamefile('/u01/fra/db1/rman/clone/db1_data_D-DB1_I-1571084687_TS-SYSTEM_FNO-1_14t52m89', '/u01/oradata/cp1/system01.dbf');
dbms_dnfs.clonedb_renamefile('/u01/fra/db1/rman/clone/db1_data_D-DB1_I-1571084687_TS-SYSAUX_FNO-3_15t52m8g', '/u01/oradata/cp1/sysaux01.dbf');
dbms_dnfs.clonedb_renamefile('/u01/fra/db1/rman/clone/db1_data_D-DB1_I-1571084687_TS-UNDOTBS1_FNO-4_16t52m8o', '/u01/oradata/cp1/undotbs01.dbf');
dbms_dnfs.clonedb_renamefile('/u01/fra/db1/rman/clone/db1_data_D-DB1_I-1571084687_TS-USERS_FNO-6_18t52m8q', '/u01/oradata/cp1/users01.dbf');
dbms_dnfs.clonedb_renamefile('/u01/fra/db1/rman/clone/db1_data_D-DB1_I-1571084687_TS-EXAMPLE_FNO-5_13t52m7q', '/u01/oradata/cp1/example01.dbf');
end;
/
show errors;
-- if there are no errors, run these commands manually
/*
alter database open resetlogs;
alter tablespace temp add tempfile '/u01/oradata/cp1/temp01.dbf' size 256M;
*/

oracle@d12c1.localdomain:/u01/oradata/cp1 [db1]
$

 

--  Create the clone

$ . oraenv
ORACLE_SID = [db1] ? cp1
The Oracle base remains unchanged with value /u01/app/oracle
oracle@d12c1.localdomain:/u01/oradata/cp1 [cp1]
$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Sun Jun 10 04:57:08 2018

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to an idle instance.

SQL> @crtdb
ORACLE instance started.

Total System Global Area 578813952 bytes
Fixed Size 2926856 bytes
Variable Size 536872696 bytes
Database Buffers 33554432 bytes
Redo Buffers 5459968 bytes

Control file created.

SQL> @dbren

PL/SQL procedure successfully completed.

No errors.
SQL> alter database open resetlogs;

Database altered.

SQL> alter tablespace temp add tempfile '/u01/oradata/cp1/temp01.dbf' size 256M;

Tablespace altered.

SQL>

-- verify clone files are just pointers and use less space than the target

SQL> !du -sk /u01/oradata/cp1/*
7920 /u01/oradata/cp1/cp1_ctl.dbf
51204 /u01/oradata/cp1/cp1_log1.log
51204 /u01/oradata/cp1/cp1_log2.log
4 /u01/oradata/cp1/crtdb.sql
4 /u01/oradata/cp1/dbren.sql
16 /u01/oradata/cp1/example01.dbf
4 /u01/oradata/cp1/initcp1.ora
604 /u01/oradata/cp1/sysaux01.dbf
372 /u01/oradata/cp1/system01.dbf
1024 /u01/oradata/cp1/temp01.dbf
500 /u01/oradata/cp1/undotbs01.dbf
16 /u01/oradata/cp1/users01.dbf




SQL> !du -sk /u01/fra/db1/rman/clone/*
1273612 /u01/fra/db1/rman/clone/db1_data_D-DB1_I-1571084687_TS-EXAMPLE_FNO-5_13t52m7q
645132 /u01/fra/db1/rman/clone/db1_data_D-DB1_I-1571084687_TS-SYSAUX_FNO-3_15t52m8g
819212 /u01/fra/db1/rman/clone/db1_data_D-DB1_I-1571084687_TS-SYSTEM_FNO-1_14t52m89
81928 /u01/fra/db1/rman/clone/db1_data_D-DB1_I-1571084687_TS-UNDOTBS1_FNO-4_16t52m8o
5128 /u01/fra/db1/rman/clone/db1_data_D-DB1_I-1571084687_TS-USERS_FNO-6_18t52m8q

SQL>

Example 2

Use the backup of target database db1 created in the previous example. Generate the required clone scripts by running the Oracle supplied perl script. Create clone database cp2 and verify space used.

 

-- These are the consistent image backups of all five datafiles in the target database; there are no other files in this directory.

$ ls -ltr /u01/fra/db1/rman/clone/
total 2825012
-rw-r-----. 1 oracle oinstall 1304174592 Jun 10 04:28 db1_data_D-DB1_I-1571084687_TS-EXAMPLE_FNO-5_13t52m7q
-rw-r-----. 1 oracle oinstall 838868992 Jun 10 04:28 db1_data_D-DB1_I-1571084687_TS-SYSTEM_FNO-1_14t52m89
-rw-r-----. 1 oracle oinstall 660611072 Jun 10 04:29 db1_data_D-DB1_I-1571084687_TS-SYSAUX_FNO-3_15t52m8g
-rw-r-----. 1 oracle oinstall 83894272 Jun 10 04:29 db1_data_D-DB1_I-1571084687_TS-UNDOTBS1_FNO-4_16t52m8o
-rw-r-----. 1 oracle oinstall 5251072 Jun 10 04:29 db1_data_D-DB1_I-1571084687_TS-USERS_FNO-6_18t52m8q
oracle@d12c1:/home/oracle/gs [db1]
$

--  Disable Oracle Direct NFS to demonstrate that CloneDB works without it. Both target and clone use the same Oracle home.

$ cd $ORACLE_HOME/rdbms/lib
oracle@d12c1:/u01/app/oracle/product/12.1.0/dbhome_1/rdbms/lib [db1]
$ make -f ins_rdbms.mk dnfs_off
rm -f /u01/app/oracle/product/12.1.0/dbhome_1/rdbms/lib/odm/libnfsodm12.so
oracle@d12c1:/u01/app/oracle/product/12.1.0/dbhome_1/rdbms/lib [db1]

 

-- Create a pfile of the target database; it will be used by the Perl script

$ . oraenv
ORACLE_SID = [db1] ? db1
The Oracle base remains unchanged with value /u01/app/oracle
oracle@d12c1:/home/oracle/gs [db1]
$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Sun Jun 10 05:38:13 2018

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to an idle instance.

SQL> create pfile='/u01/oradata/cp2/initdb1.ora' from spfile;

File created.

SQL> exit
Disconnected

 

--  Create directories for clone cp2

$ mkdir -p /u01/app/oracle/admin/cp2/adump
$ mkdir -p /u01/oradata/cp2

 

--  Set up variables for the Perl script

$ . oraenv
ORACLE_SID = [cp1] ? cp2
The Oracle base remains unchanged with value /u01/app/oracle
oracle@d12c1.localdomain:/u01/oradata/cp2 [cp2]

$ export MASTER_COPY_DIR='/u01/fra/db1/rman/clone'
$ export CLONE_FILE_CREATE_DEST='/u01/oradata/cp2'
$ export CLONEDB_NAME=cp2

 

--  Run the Perl script and review the generated files

$ cd /u01/oradata/cp2
$ perl /u01/app/oracle/product/12.1.0/dbhome_1/rdbms/install/clonedb.pl /u01/oradata/cp2/initdb1.ora crtdb.sql dbren.sql
$ ls -ltr
total 16
-rw-r--r--. 1 oracle oinstall 643 Jun 10 05:38 initdb1.ora
-rw-r--r--. 1 oracle oinstall 643 Jun 10 05:49 initcp2.ora
-rw-r--r--. 1 oracle oinstall 882 Jun 10 05:49 dbren.sql
-rw-r--r--. 1 oracle oinstall 868 Jun 10 05:49 crtdb.sql
$

 

-- Edit initcp2.ora: comment out four lines, as shown, and replace 'db1' with 'cp2' in audit_file_dest and dispatchers. Here is the final version.

oracle@d12c1.localdomain:/u01/oradata/cp2 [cp2]
$ cat initcp2.ora
##db1.__oracle_base='/u01/app/oracle'#ORACLE_BASE set from environment
*.audit_file_dest='/u01/app/oracle/admin/cp2/adump'
*.audit_trail='db'
*.compatible='12.1.0.2.0'
control_files=/u01/oradata/cp2/cp2_ctl.dbf
*.db_block_size=8192
*.db_cache_size=32m
*.db_domain=''
db_name=cp2
*.dispatchers='(PROTOCOL=TCP) (SERVICE=cp2XDB)'
*.java_pool_size=32m
*.large_pool_size=96m
##*.local_listener='LISTENER_DB1'
*.open_cursors=300
*.pga_aggregate_target=256m
*.processes=300
*.remote_login_passwordfile='EXCLUSIVE'
*.shared_pool_size=384m
*.undo_tablespace='UNDOTBS1'
##db_create_file_dest=/u01/oradata/cp2/
##log_archive_dest=/u01/oradata/cp2/
clonedb=TRUE

 

--  Create the clone using crtdb.sql and dbren.sql, as is.

oracle@d12c1.localdomain:/u01/oradata/cp2 [cp2]
$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Sun Jun 10 06:01:51 2018

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Connected to an idle instance.

SQL> @crtdb
SQL> SET FEEDBACK 1
SQL> SET NUMWIDTH 10
SQL> SET LINESIZE 80
SQL> SET TRIMSPOOL ON
SQL> SET TAB OFF
SQL> SET PAGESIZE 100
SQL>
SQL> STARTUP NOMOUNT PFILE=/u01/oradata/cp2/initcp2.ora
ORACLE instance started.

Total System Global Area 578813952 bytes
Fixed Size 2926856 bytes
Variable Size 536872696 bytes
Database Buffers 33554432 bytes
Redo Buffers 5459968 bytes
SQL> CREATE CONTROLFILE REUSE SET DATABASE cp2 RESETLOGS
2 MAXLOGFILES 32
3 MAXLOGMEMBERS 2
4 MAXINSTANCES 1
5 MAXLOGHISTORY 908
6 LOGFILE
7 GROUP 1 '/u01/oradata/cp2/cp2_log1.log' SIZE 100M BLOCKSIZE 512,
8 GROUP 2 '/u01/oradata/cp2/cp2_log2.log' SIZE 100M BLOCKSIZE 512
9 DATAFILE
10 '/u01/fra/db1/rman/clone/db1_data_D-DB1_I-1571084687_TS-EXAMPLE_FNO-5_13t52m7q',
11 '/u01/fra/db1/rman/clone/db1_data_D-DB1_I-1571084687_TS-SYSAUX_FNO-3_15t52m8g',
12 '/u01/fra/db1/rman/clone/db1_data_D-DB1_I-1571084687_TS-SYSTEM_FNO-1_14t52m89',
13 '/u01/fra/db1/rman/clone/db1_data_D-DB1_I-1571084687_TS-UNDOTBS1_FNO-4_16t52m8o',
14 '/u01/fra/db1/rman/clone/db1_data_D-DB1_I-1571084687_TS-USERS_FNO-6_18t52m8q'
15 CHARACTER SET WE8DEC;

Control file created.

SQL> @dbren
SQL> declare
2 begin
3 dbms_dnfs.clonedb_renamefile('/u01/fra/db1/rman/clone/db1_data_D-DB1_I-1571084687_TS-EXAMPLE_FNO-5_13t52m7q' , '/u01/oradata/cp2//ora_data_cp20.dbf');
4 dbms_dnfs.clonedb_renamefile('/u01/fra/db1/rman/clone/db1_data_D-DB1_I-1571084687_TS-SYSAUX_FNO-3_15t52m8g' , '/u01/oradata/cp2//ora_data_cp21.dbf');
5 dbms_dnfs.clonedb_renamefile('/u01/fra/db1/rman/clone/db1_data_D-DB1_I-1571084687_TS-SYSTEM_FNO-1_14t52m89' , '/u01/oradata/cp2//ora_data_cp22.dbf');
6 dbms_dnfs.clonedb_renamefile('/u01/fra/db1/rman/clone/db1_data_D-DB1_I-1571084687_TS-UNDOTBS1_FNO-4_16t52m8o' , '/u01/oradata/cp2//ora_data_cp23.dbf');
7 dbms_dnfs.clonedb_renamefile('/u01/fra/db1/rman/clone/db1_data_D-DB1_I-1571084687_TS-USERS_FNO-6_18t52m8q' , '/u01/oradata/cp2//ora_data_cp24.dbf');
8 end;
9 /

PL/SQL procedure successfully completed.

SQL> show errors;
No errors.
SQL> alter database open resetlogs;

Database altered.

SQL> drop tablespace TEMP;
drop tablespace TEMP
*
ERROR at line 1:
ORA-12906: cannot drop default temporary tablespace

SQL> create temporary tablespace TEMP;
create temporary tablespace TEMP
*
ERROR at line 1:
ORA-02199: missing DATAFILE/TEMPFILE clause

 

--  Ignore the TEMP errors, and add a temp file

SQL> alter tablespace temp add tempfile '/u01/oradata/cp2/temp01.dbf' size 256M;

Tablespace altered.

-- Verify clone space usage

SQL> !du -sk /u01/oradata/cp2/*
9072 /u01/oradata/cp2/cp2_ctl.dbf
102404 /u01/oradata/cp2/cp2_log1.log
102404 /u01/oradata/cp2/cp2_log2.log
4 /u01/oradata/cp2/crtdb.sql
4 /u01/oradata/cp2/dbren.sql
4 /u01/oradata/cp2/initcp2.ora
4 /u01/oradata/cp2/initdb1.ora
16 /u01/oradata/cp2/ora_data_cp20.dbf
620 /u01/oradata/cp2/ora_data_cp21.dbf
404 /u01/oradata/cp2/ora_data_cp22.dbf
548 /u01/oradata/cp2/ora_data_cp23.dbf
16 /u01/oradata/cp2/ora_data_cp24.dbf
1024 /u01/oradata/cp2/temp01.dbf

SQL>

SQL> !ls -ltr /u01/oradata/cp2/*
-rw-r--r--. 1 oracle oinstall 643 Jun 10 05:38 /u01/oradata/cp2/initdb1.ora
-rw-r--r--. 1 oracle oinstall 882 Jun 10 05:49 /u01/oradata/cp2/dbren.sql
-rw-r--r--. 1 oracle oinstall 868 Jun 10 05:49 /u01/oradata/cp2/crtdb.sql
-rw-r--r--. 1 oracle oinstall 650 Jun 10 05:58 /u01/oradata/cp2/initcp2.ora
-rw-r-----. 1 oracle oinstall 104858112 Jun 10 06:02 /u01/oradata/cp2/cp2_log2.log
-rw-r-----. 1 oracle oinstall 1304174592 Jun 10 06:02 /u01/oradata/cp2/ora_data_cp20.dbf
-rw-r-----. 1 oracle oinstall 5251072 Jun 10 06:02 /u01/oradata/cp2/ora_data_cp24.dbf
-rw-r-----. 1 oracle oinstall 268443648 Jun 10 06:05 /u01/oradata/cp2/temp01.dbf
-rw-r-----. 1 oracle oinstall 838868992 Jun 10 06:07 /u01/oradata/cp2/ora_data_cp22.dbf
-rw-r-----. 1 oracle oinstall 660611072 Jun 10 06:07 /u01/oradata/cp2/ora_data_cp21.dbf
-rw-r-----. 1 oracle oinstall 83894272 Jun 10 06:07 /u01/oradata/cp2/ora_data_cp23.dbf
-rw-r-----. 1 oracle oinstall 104858112 Jun 10 06:07 /u01/oradata/cp2/cp2_log1.log
-rw-r-----. 1 oracle oinstall 9289728 Jun 10 06:07 /u01/oradata/cp2/cp2_ctl.dbf

SQL>

Terminate Datapump Job

 

A data pump job runs on the server; it cannot be terminated by killing the client session that started it. Follow these steps to terminate a data pump export job.

1. Open a separate terminal window and start SQL*Plus. Find the name of the running job.

SQL> select owner_name, job_name from dba_datapump_jobs;

2. Start a new data pump session and attach to the job name retrieved in the previous step.

$ expdp system/mypwd attach=<job_name>

3. Terminate the job

Export> kill_job
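Before killing a job, it can be useful to see which database sessions (master and worker processes) belong to it. A sketch, joining dba_datapump_sessions to v$session on the session address:

```sql
-- Master and worker sessions attached to running data pump jobs
select s.sid, s.serial#, s.program, d.owner_name, d.job_name
from v$session s, dba_datapump_sessions d
where s.saddr = d.saddr;
```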

Demo

The examples below use a Linux guest on Oracle VBox. The first runs a data pump export in the background and attaches to it using the job name. The second runs the export in the foreground, terminates the client process with CTRL-C, and then kills the server-side job from data pump.

 

Example 1

Start a DP export job in the background. Query the dictionary to find its name. Attach to it and issue the kill_job command.

-- Start data pump export in the background.

$ nohup expdp system/blogtest directory=dpump full=y dumpfile=full.dmp logfile=full.log &

-- Log into SQL*Plus and find the job name.

$ sqlplus system/blogtest

SQL*Plus: Release 12.1.0.2.0 Production on Sun Jun 3 10:30:04 2018

SQL> select job_name,state from user_datapump_jobs;

JOB_NAME              STATE
--------------------- ------------------------------
SYS_EXPORT_FULL_01    EXECUTING

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production

-- Attach to the data pump job and kill it.

$ expdp system/blogtest attach=SYS_EXPORT_FULL_01

Export: Release 12.1.0.2.0 - Production on Sun Jun 3 10:30:48 2018

Job: SYS_EXPORT_FULL_01
Owner: SYSTEM
Operation: EXPORT
Creator Privs: TRUE
GUID: 6DBEA56196A71FBCE0536338A8C0DC89
Start Time: Sunday, 03 June, 2018 10:29:36
Mode: FULL
Instance: ocp12c
Max Parallelism: 1
Timezone: +00:00
Timezone version: 18
Endianness: LITTLE
NLS character set: AL32UTF8
NLS NCHAR character set: AL16UTF16
EXPORT Job Parameters:
Parameter Name Parameter Value:
CLIENT_COMMAND system/******** directory=dpump full=y dumpfile=full.dmp logfile=full.log
State: EXECUTING
Bytes Processed: 0
Current Parallelism: 1
Job Error Count: 0
Dump File: /u01/fra/db1/dp/full.dmp
bytes written: 4,096

Worker 1 Status:
Instance ID: 1
Instance name: ocp12c
Host name: d12c1.localdomain
Process Name: DW00
State: EXECUTING
Object Schema: SYS
Object Name: AUD$
Object Type: DATABASE_EXPORT/NORMAL_OPTIONS/TABLE
Completed Objects: 60
Worker Parallelism: 1

Export> kill_job
Are you sure you wish to stop this job ([yes]/no): yes

-- Verify the job has been terminated.

$ sqlplus system/blogtest

SQL*Plus: Release 12.1.0.2.0 Production on Sun Jun 3 10:31:07 2018
Copyright (c) 1982, 2014, Oracle. All rights reserved.
Last Successful login time: Sun Jun 03 2018 10:30:48 -04:00
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> select job_name,state from user_datapump_jobs;

no rows selected

Example 2

Start a DP export job and press CTRL-C. Issue the kill_job command to terminate it.

--  Start a data pump export job in the foreground and press CTRL-C to terminate the client session.

$ expdp system/blogtest directory=dpump full=y dumpfile=full.dmp logfile=full.log

Export: Release 12.1.0.2.0 - Production on Sun Jun 3 10:47:39 2018

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

Starting "SYSTEM"."SYS_EXPORT_FULL_01": system/******** directory=dpump full=y dumpfile=full.dmp logfile=full.log
Estimate in progress using BLOCKS method…
Processing object type DATABASE_EXPORT/EARLY_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA
Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/TABLE_DATA
Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 5.578 MB
Processing object type DATABASE_EXPORT/PRE_SYSTEM_IMPCALLOUT/MARKER
Processing object type DATABASE_EXPORT/PRE_INSTANCE_IMPCALLOUT/MARKER
^C

-- Pressing CTRL-C while the job is running brings up interactive command mode. Type HELP to see the options available.

Export> help
------------------------------------------------------------------------------

The following commands are valid while in interactive mode.
Note: abbreviations are allowed.

ADD_FILE
Add dumpfile to dumpfile set.

CONTINUE_CLIENT
Return to logging mode. Job will be restarted if idle.

EXIT_CLIENT
Quit client session and leave job running.

FILESIZE
Default filesize (bytes) for subsequent ADD_FILE commands.

HELP
Summarize interactive commands.

KILL_JOB
Detach and delete job.

PARALLEL
Change the number of active workers for current job.

REUSE_DUMPFILES
Overwrite destination dump file if it exists [NO].

START_JOB
Start or resume current job.
Valid keyword values are: SKIP_CURRENT.

STATUS
Frequency (secs) job status is to be monitored where
the default [0] will show new status when available.

STOP_JOB
Orderly shutdown of job execution and exits the client.
Valid keyword values are: IMMEDIATE.

Export> status

Job: SYS_EXPORT_FULL_01
Operation: EXPORT
Mode: FULL
State: EXECUTING
Bytes Processed: 0
Current Parallelism: 1
Job Error Count: 0
Dump File: /u01/fra/db1/dp/full.dmp
bytes written: 4,096

Worker 1 Status:
Instance ID: 1
Instance name: ocp12c
Host name: d12c1.localdomain
Process Name: DW00
State: EXECUTING
Object Name: +*
Object Type: DATABASE_EXPORT/TRUSTED_DB_LINK
Completed Objects: 1
Total Objects: 1
Worker Parallelism: 1

-- Issue the kill_job command

Export> kill_job
Are you sure you wish to stop this job ([yes]/no): yes

-- Verify the job is no longer running.

$ sqlplus / as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Sun Jun 3 10:48:34 2018

SQL> select owner_name, job_name,state from dba_datapump_jobs;

no rows selected

-- Check the export job log.

$ cd /u01/fra/db1/dp

$ cat full.log

Export: Release 12.1.0.2.0 - Production on Sun Jun 3 10:47:39 2018

Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_FULL_01": system/******** directory=dpump full=y dumpfile=full.dmp logfile=full.log
Estimate in progress using BLOCKS method...
Processing object type DATABASE_EXPORT/EARLY_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA
Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/TABLE_DATA
Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/VIEWS_AS_TABLES/TABLE_DATA
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 5.578 MB
Processing object type DATABASE_EXPORT/PRE_SYSTEM_IMPCALLOUT/MARKER
Processing object type DATABASE_EXPORT/PRE_INSTANCE_IMPCALLOUT/MARKER
Processing object type DATABASE_EXPORT/TABLESPACE
Processing object type DATABASE_EXPORT/PROFILE
Processing object type DATABASE_EXPORT/SYS_USER/USER
Processing object type DATABASE_EXPORT/RADM_FPTM
Processing object type DATABASE_EXPORT/GRANT/SYSTEM_GRANT/PROC_SYSTEM_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/DEFAULT_ROLE
Processing object type DATABASE_EXPORT/SCHEMA/ON_USER_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/TABLESPACE_QUOTA
Processing object type DATABASE_EXPORT/RESOURCE_COST
Processing object type DATABASE_EXPORT/TRUSTED_DB_LINK
Processing object type DATABASE_EXPORT/DIRECTORY/DIRECTORY
;;; Export> help
;;; Export> status
Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/PRE_SYSTEM_ACTIONS/PROCACT_SYSTEM
;;; Export> kill_job
Job "SYSTEM"."SYS_EXPORT_FULL_01" stopped due to fatal error at Sun Jun 3 10:48:19 2018 elapsed 0 00:00:38

$