Saturday, November 23, 2013

Insert Multi-million Rows into a Table

To load a large volume of data into a table quickly, we can turn off logging on the table to reduce redo generation. The steps below worked well:

SQL> alter table table_A nologging;
SQL> alter index table_A_ind nologging;

SQL> INSERT /*+ append */ INTO table_A
           SELECT /*+ parallel(T 4) */ col1, col2, col3
              FROM table_another_T T
           WHERE  ......

SQL> commit;
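
If the INSERT itself should run in parallel (not only the SELECT), the session must enable parallel DML first. A sketch, assuming the same table and column names as above:

SQL> ALTER SESSION ENABLE PARALLEL DML;
SQL> INSERT /*+ append parallel(A 4) */ INTO table_A A
           SELECT /*+ parallel(T 4) */ col1, col2, col3
              FROM table_another_T T;
SQL> commit;

Note that after a direct-path insert, the table cannot be queried in the same session until the transaction is committed.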

-- option: to turn logging back on
-- SQL> alter table table_A logging;
-- SQL> alter index table_A_ind logging;

Before running this, make sure enough space is allocated to the tablespace where table_A resides.
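
As a quick check, a query along these lines shows the free space available (the tablespace name USERS is just a placeholder):

SQL> SELECT tablespace_name, ROUND(SUM(bytes)/1024/1024) free_mb
           FROM dba_free_space
        WHERE tablespace_name = 'USERS'
        GROUP BY tablespace_name;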

======================================================
If the data are loaded by creating the table (CTAS), the code below will do it:

SQL> CREATE TABLE table_A TABLESPACE tablespace_name
           NOLOGGING
           AS SELECT /*+ parallel(T 4) */ * FROM table_another_T T;

Table table_A can be altered back to LOGGING, if needed, after table creation.
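
For example, assuming the table created by the CTAS above:

SQL> ALTER TABLE table_A LOGGING;
SQL> SELECT table_name, logging FROM user_tables WHERE table_name = 'TABLE_A';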

======================================================
I do not think an UPDATE statement's redo can be reduced by setting the NOLOGGING option on tables; NOLOGGING only affects direct-path operations such as direct-path INSERT and CTAS, and a conventional UPDATE always generates full redo. One option is to turn off archivelog mode in the database.
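
A common workaround for a very large UPDATE is to rewrite it as a NOLOGGING CTAS that applies the change in the SELECT, then swap the tables. A sketch with a hypothetical transformation (uppercasing col2); indexes, constraints, and grants would need to be recreated on the new table:

SQL> CREATE TABLE table_A_new NOLOGGING AS
           SELECT col1, UPPER(col2) col2, col3 FROM table_A;
SQL> DROP TABLE table_A;
SQL> RENAME table_A_new TO table_A;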

======================================================
For databases that mine the redo logs, such as Oracle Streams (CDC) or Data Guard, logging may be required. Two statements check the status at the database level and the tablespace level:
SQL> SELECT force_logging FROM v$database;
SQL> SELECT tablespace_name, force_logging FROM dba_tablespaces;
The default setting is "NO". Force logging can be turned on or off with the statements below:
SQL> ALTER DATABASE FORCE LOGGING;
SQL> ALTER TABLESPACE users FORCE LOGGING;

SQL> ALTER DATABASE NO FORCE LOGGING;
SQL> ALTER TABLESPACE users NO FORCE LOGGING;
