Sunday, February 17, 2013

Actual Cardinality

We have seen Extended Stats and a little about dynamic sampling, and both are aimed at obtaining the actual cardinality. Next in this series we have "Cardinality Feedback"; let's review it in Oracle version 11.2.0.1.
SQL> create table tbl_t1 (id number,id_1 number,id_val1 varchar2(10),id_val2 varchar2(10));

Table created.

Elapsed: 00:00:00.15
SQL> 
SQL> insert into tbl_t1
  2  select round(dbms_random.value(4,50000)) id
  3  ,rownum Id_1
  4  ,round(dbms_random.value(-100000,50)) id_val1
  5  ,round(dbms_random.value(4,50000)) id_val2 from dual connect by rownum < 10000;

9999 rows created.
SQL> commit;

Commit complete.

Elapsed: 00:00:00.00
SQL> exec dbms_stats.gather_table_stats('HARI','TBL_T1');

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.21


Let's try it with a simple query


SQL> select * from tbl_t1 where id_val1 > 18 and id < 99999;

        ID       ID_1 ID_VAL1    ID_VAL2                                                            
---------- ---------- ---------- ----------                                                         
     43725       7536 42         9958                                                               
     14761       6433 44         17751                                                              
      5992       8624 50         18125                                                              
      1512       9577 28         40933                                                              

Elapsed: 00:00:00.15


The query returns just 4 records; let's check what the estimated cardinality was for it.

SQL> 
SQL> SELECT * FROM table(DBMS_XPLAN.DISPLAY_CURSOR(null,null,'ALL'));

PLAN_TABLE_OUTPUT                                                                                   
----------------------------------------------------------------------------------------------------
SQL_ID  fk3bgjgnj0w3w, child number 0                                                               
-------------------------------------                                                               
select * from tbl_t1 where id_val1 > 18 and id < 99999                                              
                                                                                                    
Plan hash value: 3205898331                                                                         
                                                                                                    
----------------------------------------------------------------------------                        
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |                        
----------------------------------------------------------------------------                        
|   0 | SELECT STATEMENT  |        |       |       |    13 (100)|          |                        
|*  1 |  TABLE ACCESS FULL| TBL_T1 |   500 | 11000 |    13   (0)| 00:00:01 |                        

PLAN_TABLE_OUTPUT                                                                                   
----------------------------------------------------------------------------                        
                                                                                                    
Query Block Name / Object Alias (identified by operation id):                                       
-------------------------------------------------------------                                       
                                                                                                    
   1 - SEL$1 / TBL_T1@SEL$1                                                                         
                                                                                                    
Predicate Information (identified by operation id):                                                 
---------------------------------------------------                                                 
                                                                                                    
   1 - filter((TO_NUMBER("ID_VAL1")>18 AND "ID"<99999))
PLAN_TABLE_OUTPUT                                                                                   
----------------------------------------------------------------------------------------------------
                                                                                                    
Column Projection Information (identified by operation id):                                         
-----------------------------------------------------------                                         
                                                                                                    
   1 - "ID"[NUMBER,22], "TBL_T1"."ID_1"[NUMBER,22],                                                 
       "ID_VAL1"[VARCHAR2,10], "TBL_T1"."ID_VAL2"[VARCHAR2,10]                                      
                                                                                                    

29 rows selected.

Elapsed: 00:00:01.03
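
Where does the estimate of 500 come from? Most likely from the default 5% selectivity the optimizer falls back to for a range predicate on a column wrapped in a function (here the implicit TO_NUMBER on ID_VAL1). A hedged sketch of the arithmetic, using the gathered table statistics:

SQL> -- assumption: 5% default range selectivity, because TO_NUMBER() hides the
SQL> -- column statistics of ID_VAL1; with NUM_ROWS = 9999 this works out to 500
SQL> select num_rows, round(num_rows * 0.05) guessed_cardinality
  2  from   user_tables
  3  where  table_name = 'TBL_T1';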


As expected, the estimate is wrong. So let's apply the universal slogan "when something runs slow, run it again and it will run faster" :) and rerun the query.
SQL> select * from tbl_t1 where id_val1 > 18 and id < 99999;

        ID       ID_1 ID_VAL1    ID_VAL2                                                            
---------- ---------- ---------- ----------                                                         
     43725       7536 42         9958                                                               
     14761       6433 44         17751                                                              
      5992       8624 50         18125                                                              
      1512       9577 28         40933                                                              

Elapsed: 00:00:00.00
SQL> 
SQL> SELECT * FROM table(DBMS_XPLAN.DISPLAY_CURSOR(null,null,'ALL'));

PLAN_TABLE_OUTPUT                                                                                   
----------------------------------------------------------------------------------------------------
SQL_ID  fk3bgjgnj0w3w, child number 1                                                               
-------------------------------------                                                               
select * from tbl_t1 where id_val1 > 18 and id < 99999                                              
                                                                                                    
Plan hash value: 3205898331                                                                         
                                                                                                    
----------------------------------------------------------------------------                        
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |                        
----------------------------------------------------------------------------                        
|   0 | SELECT STATEMENT  |        |       |       |    13 (100)|          |                        
|*  1 |  TABLE ACCESS FULL| TBL_T1 |     4 |    88 |    13   (0)| 00:00:01 |                        

PLAN_TABLE_OUTPUT                                                                                   
----------------------------------------------------------------------------------------------------
                                                                                                    
Query Block Name / Object Alias (identified by operation id):                                       
-------------------------------------------------------------                                       
                                                                                                    
   1 - SEL$1 / TBL_T1@SEL$1                                                                         
                                                                                                    
Predicate Information (identified by operation id):                                                 
---------------------------------------------------                                                 
                                                                                                    
   1 - filter((TO_NUMBER("ID_VAL1")>18 AND "ID"<99999))
PLAN_TABLE_OUTPUT                                                                                   
----------------------------------------------------------------------------------------------------
                                                                                                    
Column Projection Information (identified by operation id):                                         
-----------------------------------------------------------                                         
                                                                                                    
   1 - "ID"[NUMBER,22], "TBL_T1"."ID_1"[NUMBER,22],                                                 
       "ID_VAL1"[VARCHAR2,10], "TBL_T1"."ID_VAL2"[VARCHAR2,10]                                      
                                                                                                    
Note                                                                                                
-----                                                                                               
   - cardinality feedback used for this statement                                                   
                                                                                                    

33 rows selected.

Elapsed: 00:00:00.08

Yes, this time the optimizer has got the correct number of rows. Look at the last section of the DBMS_XPLAN output: the Note section indicates "cardinality feedback used for this statement".
                It is cool, but what is the cost involved in doing this? The cost is a HARD PARSE. Though it is exactly the same query, it has gone for a hard parse so as to make use of the cardinality feedback. Is this feature a boon in all cases, and will it be used under all circumstances? If so, what happens
  1. When the base tables are highly volatile?
  2. When I opt for dynamic sampling?
  3. When extended stats are collected, i.e. I have already given the optimizer better information about the columns through extended stats?
  4. When bind variables are used?
The above are the conditions under which Cardinality Feedback is likely not to be used. It is more likely to kick in when you run the same query more than once against less volatile tables. If you expect Oracle to identify the correct cardinality on the very first run, Cardinality Feedback is not going to help, and the corrections are not persistent either. Is there a way to validate the hard parse? Check CFB in Oracle 11.2.0.3.
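
One hedged way to validate that extra hard parse and the feedback flag from the library cache (standard V$ views; this check is not part of the original demo):

SQL> select sql_id, child_number, executions
  2  from   v$sql
  3  where  sql_id = 'fk3bgjgnj0w3w';
SQL>
SQL> -- the child cursor built from cardinality feedback should report
SQL> -- USE_FEEDBACK_STATS = 'Y'
SQL> select child_number, use_feedback_stats
  2  from   v$sql_shared_cursor
  3  where  sql_id = 'fk3bgjgnj0w3w';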

Thursday, January 3, 2013

GoldenGate with CSN

  • I have to perform a zero downtime migration, i.e. migrate while the application is live,
  • My tables don't have primary keys or unique indexes,
  • Though not a perfect data model, uniqueness of the rows is maintained by the application.
        How am I supposed to migrate under these circumstances using Oracle GoldenGate?

              OK, let's do some blogging. Let me first create the environment as per the criteria given above, and migrate data from one schema to another.
/*
      DB Name       - OGG
      Source Schema - hari@OGG
      Target Schema - puthranv@OGG
     ----------------------------------------------------------------------
                           Setting the Source Env
     ----------------------------------------------------------------------

*/

SQL> set timing on
SQL> set linesize 150
SQL> 
SQL> REM CREATING SOURCE TABLE
SQL> create table hari.T5 as select * from dba_objects where 1=2;

Table created.

Elapsed: 00:00:00.21
SQL> exec dbms_stats.gather_table_stats('HARI','T5');

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.05
SQL> ALTER TABLE hari.T5 ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

Table altered.

Elapsed: 00:00:00.15
SQL>


Note: there is no primary key / unique index on table T5.
Procedures to manipulate data in T5:
SQL> 
SQL> create or replace PROCEDURE sp_ins_gg_source AS
  2  BEGIN
  3    INSERT
  4    INTO hari.T5(object_id,   object_type) WITH temp_max AS
  5      (SELECT nvl(MAX(object_id),1) AS
  6      max_id
  7       FROM hari.T5)
  8    SELECT max_id + rownum AS
  9    object_id,
 10      'ORACLE' ||(max_id + rownum) AS
 11    object_type
 12    FROM temp_max CONNECT BY LEVEL < 11;
 13  
 14    COMMIT;
 15  
 16  END sp_ins_gg_source;
 17  /

Procedure created.

Elapsed: 00:00:00.42
SQL> create or replace PROCEDURE sp_upd_gg_source AS
  2  BEGIN
  3  
  4    UPDATE t5
  5    SET subobject_name = 'OGG'
  6    WHERE subobject_name IS NULL
  7     AND rownum < 2;
  8    COMMIT;
  9  
 10  END sp_upd_gg_source;
 11  /

Procedure created.

Elapsed: 00:00:00.06
SQL> 
SQL> create or replace PROCEDURE sp_del_gg_source AS
  2  BEGIN
  3  
  4    DELETE FROM t5
  5    WHERE object_id =
  6      (SELECT min(object_id)
  7       FROM t5
  8       WHERE object_id is not null) ;
  9  
 10    COMMIT;
 11  
 12  END sp_del_gg_source;
 13  /

Procedure created.

Elapsed: 00:00:00.01
SQL>
SQL>
 
Retrieve the SCN at the source
SQL> select current_scn from v$database;

CURRENT_SCN                                                                                                                                           
-----------                                                                                                                                           
    1644171                                                                                                                                           

Elapsed: 00:00:00.07
SQL>
Check the Target Schema 
/*
      DB Name       - OGG      
      Target Schema - puthranv@OGG
     ----------------------------------------------------------------------
                           Check for Table T5 in the Target Env
     ----------------------------------------------------------------------
*/

SQL> desc t5;
ERROR:
ORA-04043: object t5 does not exist

Start the OGG Manager, Extract and Data Pump at the source
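
(The Extract and Data Pump groups were configured beforehand; the exact GGSCI commands are not shown in this post. A hedged sketch of that setup, using the group and trail names that appear in the parameter dumps below:)

GGSCI> DBLOGIN USERID hari@ogg, PASSWORD ********
GGSCI> ADD EXTRACT hext1, TRANLOG, BEGIN NOW
GGSCI> ADD EXTTRAIL I:\OGG\SOURCE\dirdat\EX, EXTRACT hext1
GGSCI> ADD EXTRACT hdpump1, EXTTRAILSOURCE I:\OGG\SOURCE\dirdat\EX
GGSCI> ADD RMTTRAIL I:\OGG\TARGET\dirdat\RT, EXTRACT hdpump1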

Starting Manager - 
***********************************************************************
                 Oracle GoldenGate Manager for Oracle
     Version 11.1.1.1.2 OGGCORE_11.1.1.1.2_PLATFORMS_111004.2100 
        Windows (optimized), Oracle 11g on Oct  5 2011 00:28:27
 
Copyright (C) 1995, 2011, Oracle and/or its affiliates. All rights reserved.


                    Starting at 2013-01-04 00:12:45
***********************************************************************

Operating System Version:
Microsoft Windows 7 , on x86
Version 6.1 (Build 7600: )

Process id: 3240

Parameters...

PORT 1350
DYNAMICPORTLIST 12010-12030,1350,1230


***********************************************************************
**                     Run Time Messages                             **
***********************************************************************


2013-01-04 00:12:45  INFO    OGG-00983  Manager started (port 1350).

2013-01-04 00:14:19  INFO    OGG-00963  Command received from GGSCI on host 192.168.1.2 (START EXTRACT HDPUMP1 ).

2013-01-04 00:14:20  INFO    OGG-00975  EXTRACT HDPUMP1 starting.

2013-01-04 00:14:43  INFO    OGG-00963  Command received from GGSCI on host 192.168.1.2 (START EXTRACT HEXT1 ).

2013-01-04 00:14:43  INFO    OGG-00975  EXTRACT HEXT1 starting.

Starting Data Pump


***********************************************************************
**            Running with the following parameters                  **
***********************************************************************
SETENV (ORACLE_SID=ogg)

Set environment variable (ORACLE_SID=ogg)
SETENV (NLS_LANG="AMERICAN_AMERICA.WE8MSWIN1252")

Set environment variable (NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252)
EXTRACT hdump1

USERID hari@ogg,PASSWORD  ********

RMTHOST localhost, MGRPORT 7809

RMTTRAIL I:\OGG\TARGET\dirdat\RT

NOPASSTHRU

TABLE hari.t5;


CACHEMGR virtual memory values (may have been adjusted)
CACHEBUFFERSIZE:                         64K
CACHESIZE:                                1G
CACHEBUFFERSIZE (soft max):               4M
CACHEPAGEOUTSIZE (normal):                4M
PROCESS VM AVAIL FROM OS (min):        1.74G
CACHESIZEMAX (strict force to disk):   1.54G

Database Version:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
PL/SQL Release 11.2.0.1.0 - Production
CORE 11.2.0.1.0 Production
TNS for 32-bit Windows: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production

Database Language and Character Set:
NLS_LANG = "AMERICAN_AMERICA.WE8MSWIN1252" 
NLS_LANGUAGE     = "AMERICAN" 
NLS_TERRITORY    = "AMERICA" 
NLS_CHARACTERSET = "WE8MSWIN1252" 

2013-01-04 00:25:43  INFO    OGG-01226  Socket buffer size set to 27985 (flush size 27985).

2013-01-04 00:25:43  INFO    OGG-01052  No recovery is required for target file I:\OGG\TARGET\dirdat\RT000000, at RBA 0 (file not opened).

2013-01-04 00:25:43  INFO    OGG-01478  Output file I:\OGG\TARGET\dirdat\RT is using format RELEASE 10.4/11.1.





Starting Extract


***********************************************************************
**            Running with the following parameters                  **
***********************************************************************
SETENV (ORACLE_SID=ogg)

Set environment variable (ORACLE_SID=ogg)
SETENV (NLS_LANG="AMERICAN_AMERICA.WE8MSWIN1252")

Set environment variable (NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252)
EXTRACT hext1

USERID hari@ogg,PASSWORD ********

EXTTRAIL I:\OGG\SOURCE\dirdat\EX

TABLE hari.t5, TOKENS (TKN1 = @GETENV("TRANSACTION", "CSN")),KEYCOLS (OBJECT_ID);


2013-01-04 00:25:53  INFO    OGG-01635  BOUNDED RECOVERY: reset to initial or altered checkpoint.

Bounded Recovery Parameter:
Options    = BRRESET
BRINTERVAL = 4HOURS
BRDIR      = I:\OGG\SOURCE

CACHEMGR virtual memory values (may have been adjusted)
CACHEBUFFERSIZE:                         64K
CACHESIZE:                                1G
CACHEBUFFERSIZE (soft max):               4M
CACHEPAGEOUTSIZE (normal):                4M
PROCESS VM AVAIL FROM OS (min):        1.74G
CACHESIZEMAX (strict force to disk):   1.54G

Database Version:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
PL/SQL Release 11.2.0.1.0 - Production
CORE 11.2.0.1.0 Production
TNS for 32-bit Windows: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production

Database Language and Character Set:
NLS_LANG = "AMERICAN_AMERICA.WE8MSWIN1252" 
NLS_LANGUAGE     = "AMERICAN" 
NLS_TERRITORY    = "AMERICA" 
NLS_CHARACTERSET = "WE8MSWIN1252" 

2013-01-04 00:25:53  WARNING OGG-01423  No valid default archive log destination directory found for thread 1.

2013-01-04 00:25:53  INFO    OGG-01515  Positioning to begin time Jan 4, 2013 12:20:11 AM.

2013-01-04 00:25:53  INFO    OGG-01052  No recovery is required for target file I:\OGG\SOURCE\dirdat\EX000000, at RBA 0 (file not opened).

2013-01-04 00:25:53  INFO    OGG-01478  Output file I:\OGG\SOURCE\dirdat\EX is using format RELEASE 10.4/11.1.

There is no primary key / unique index on the table, so KEYCOLS is used to identify the records retrieved from the redo logs; note that this does not by itself guarantee the uniqueness of a record.

RUN time messages from Extract 
***********************************************************************
**                     Run Time Messages                             **
***********************************************************************


2013-01-04 00:25:54  INFO    OGG-01515  Positioning to begin time Jan 4, 2013 12:20:11 AM.

2013-01-04 00:25:55  INFO    OGG-01516  Positioned to Sequence 27, RBA 39400464, Jan 4, 2013 12:20:11 AM.

2013-01-04 00:25:55  INFO    OGG-01517  Position of first record processed Sequence 27, RBA 39400464, SCN 0.1643323, Jan 4, 2013 12:20:15 AM.
TABLE resolved (entry HARI.T5):
  TABLE HARI.T5, TOKENS (TKN1 = @GETENV("TRANSACTION", "CSN")),KEYCOLS (OBJECT_ID);

Using the following key columns for source table HARI.T5: OBJECT_ID.

2013-01-04 00:26:11  INFO    OGG-01021  Command received from GGSCI: STATS.

The above run-time messages confirm that OBJECT_ID is being used as the key column for data retrieval from the redo logs.

Export data for the specified SCN

expdp hari/******@ogg tables=t5 content=all directory=exp_dir dumpfile=t5.dmp logfile=expdpt5.log flashback_scn=1644171
Export: Release 11.2.0.1.0 - Production on Fri Jan 4 00:27:27 2013

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
;;; 
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "HARI"."SYS_EXPORT_TABLE_01":  hari/********@ogg tables=t5 content=all directory=exp_dir dumpfile=t5.dmp logfile=expdpt5.log flashback_scn=1644171
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
. . exported "HARI"."T5"                                 16.23 KB     241 rows
Master table "HARI"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************


Importing the Dump to Target Schema 
impdp puthranv/*******@ogg REMAP_SCHEMA=hari:puthranv directory=exp_dir dumpfile=t5.dmp logfile=impdpt5.log
Import: Release 11.2.0.1.0 - Production on Fri Jan 4 00:28:13 2013

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
;;; 
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "PUTHRANV"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "PUTHRANV"."SYS_IMPORT_FULL_01":  puthranv/********@ogg REMAP_SCHEMA=hari:puthranv directory=exp_dir dumpfile=t5.dmp logfile=impdpt5.log 
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "PUTHRANV"."T5"                             16.23 KB     241 rows
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Job "PUTHRANV"."SYS_IMPORT_FULL_01" successfully completed at 00:28:21
241 records are now loaded into the target schema.
SQL> select count(*) from puthranv.t5;

  COUNT(*)                                                                                                                                            
----------                                                                                                                                            
       241                                                                                                                                            

Elapsed: 00:00:00.01
SQL>
Starting replicate process
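
(The GGSCI command used to start it is not shown in the post; judging from the run-time message "User requested start after CSN 1644171" below, the Replicat was presumably started from the export SCN along these lines - a hedged sketch:)

GGSCI> START REPLICAT hhr1, AFTERCSN 1644171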


***********************************************************************
**            Running with the following parameters                  **
***********************************************************************
SETENV (ORACLE_SID=ogg)

Set environment variable (ORACLE_SID=ogg)
SETENV (NLS_LANG="AMERICAN_AMERICA.WE8MSWIN1252")

Set environment variable (NLS_LANG=AMERICAN_AMERICA.WE8MSWIN1252)
REPLICAT hhr1

USERID puthranv@ogg, PASSWORD ********

ASSUMETARGETDEFS

MAP hari.t5, TARGET  puthranv.t5, KEYCOLS (OBJECT_ID);


CACHEMGR virtual memory values (may have been adjusted)
CACHEBUFFERSIZE:                         64K
CACHESIZE:                              512M
CACHEBUFFERSIZE (soft max):               4M
CACHEPAGEOUTSIZE (normal):                4M
PROCESS VM AVAIL FROM OS (min):           1G
CACHESIZEMAX (strict force to disk):    881M

Database Version:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
PL/SQL Release 11.2.0.1.0 - Production
CORE 11.2.0.1.0 Production
TNS for 32-bit Windows: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 - Production

Database Language and Character Set:
NLS_LANG = "AMERICAN_AMERICA.WE8MSWIN1252" 
NLS_LANGUAGE     = "AMERICAN" 
NLS_TERRITORY    = "AMERICA" 
NLS_CHARACTERSET = "WE8MSWIN1252" 

For further information on character set settings, please refer to user manual.
Run time messages of Replicat


***********************************************************************
**                     Run Time Messages                             **
***********************************************************************

Opened trail file I:\OGG\TARGET\dirdat\RT000000 at 2013-01-04 00:50:36

2013-01-04 00:50:36  INFO    OGG-01373  User requested start after CSN 1644171.

2013-01-04 00:50:36  INFO    OGG-01374  Transaction delivery commencing at position Seqno 0, RBA 28928, Transaction ID 7.23.849, CSN 1644183, 70 transaction(s) skipped.

MAP resolved (entry HARI.T5):
  MAP HARI.T5, TARGET  puthranv.t5, KEYCOLS (OBJECT_ID);

2013-01-04 00:50:48  WARNING OGG-00869  No unique key is defined for table T5. All viable columns will be used to represent the key, but may not guarantee uniqueness.  KEYCOLS may be used to define the key.
Using following columns in default map by name:
  OWNER, OBJECT_NAME, SUBOBJECT_NAME, OBJECT_ID, DATA_OBJECT_ID, 
  OBJECT_TYPE, CREATED, LAST_DDL_TIME, TIMESTAMP, STATUS, TEMPORARY, 
  GENERATED, SECONDARY, NAMESPACE, EDITION_NAME

Using the following key columns for target table PUTHRANV.T5: OBJECT_ID.


2013-01-04 00:50:48  INFO    OGG-01021  Command received from GGSCI: STATS.

Though KEYCOLS is specified in the Replicat MAP as well, it is only used during the extraction process. Let's check the data at the source and the target.

 At Source
SQL> show user
USER is "HARI"
SQL> 
SQL> 
SQL> select systimestamp from dual;

SYSTIMESTAMP                                                                                                                                          
---------------------------------------------------------------------------                                                                           
04-JAN-13 01.13.17.727000 AM +05:30                                                                                                                   

Elapsed: 00:00:00.01
SQL> REM CHECK FOR UPDATES
SQL> select count(*) from t5 where subobject_name = 'OGG';

  COUNT(*)                                                                                                                                            
----------                                                                                                                                            
       294                                                                                                                                            

Elapsed: 00:00:00.00
SQL> 
SQL> REM CHECK FOR INSERT AND DELETE 
SQL> select count(*) from t5;

  COUNT(*)                                                                                                                                            
----------                                                                                                                                            
       523                                                                                                                                            

Elapsed: 00:00:00.00
SQL> select count(*) from t5;

  COUNT(*)                                                                                                                                            
----------                                                                                                                                            
       523                                                                                                                                            

Elapsed: 00:00:00.00
SQL>
At Target
SQL> show user
USER is "PUTHRANV"
SQL> 
SQL> select systimestamp from dual;

SYSTIMESTAMP                                                                                        
---------------------------------------------------------------------------                         
04-JAN-13 01.13.18.160000 AM +05:30                                                                 

Elapsed: 00:00:00.00
SQL> REM CHECK FOR UPDATES
SQL> select count(*) from t5 where subobject_name = 'OGG';

  COUNT(*)                                                                                          
----------                                                                                          
       293                                                                                          

Elapsed: 00:00:00.00
SQL> select count(*) from t5 where subobject_name = 'OGG';

  COUNT(*)                                                                                          
----------                                                                                          
       294                                                                                          

Elapsed: 00:00:00.00
SQL> REM CHECK FOR INSERT AND DELETE
SQL> select count(*) from t5;

  COUNT(*)                                                                                          
----------                                                                                          
       524                                                                                          

Elapsed: 00:00:00.00
SQL> select count(*) from t5;

  COUNT(*)                                                                                          
----------                                                                                          
       523                                                                                          

Elapsed: 00:00:00.00
SQL>

Thus, using Oracle GoldenGate, we are able to migrate data while the application is running, even when there is no primary key / unique index on the table.

Monday, November 19, 2012

Cardinality Estimate

When it comes to SQL tuning, it always starts with an execution plan, and this post is no different. I normally suggest avoiding SQL and advise writing it only when required. The best way to tune a SQL statement is to avoid writing it, but in reality that can't happen.

Let's see how we can avoid writing SQL, or rather, avoid executing it.

Below is a small piece from one of my recent tuning projects.

SQL> CREATE TABLE tblchk
  2  (
  3    id number,
  4    id_val varchar2(50)
  5  );

Table created.

Elapsed: 00:00:00.30
SQL> 
SQL> 
SQL> declare
  2  type t1 is table of tblchk%rowtype;
  3  v_t1 t1:=t1();
  4  x number:= 65;
  5  begin
  6  
  7  for i in 1..20000 loop
  8  v_t1.extend;
  9  v_t1(i).id:=i;
 10  v_t1(i).id_val:=chr(x);
 11  x:=x+1;
 12  if x=68 then
 13  x:=65;
 14  end if;
 15  end loop;
 16  
 17  forall j in v_t1.first..v_t1.last
 18  insert into tblchk values v_t1(j);
 19  
 20  commit;
 21  end;
 22  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.27
SQL> 
SQL> exec dbms_stats.gather_table_stats('HARI','TBLCHK');

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.51
SQL> 
SQL> select count(*) from tblchk;

  COUNT(*)                                               
----------                                               
     20000                                               

Elapsed: 00:00:00.00

Lets query the table
SQL> alter system flush shared_pool;

System altered.

Elapsed: 00:00:00.34
SQL> 
SQL> variable a varchar2(10);
SQL> 
SQL> exec :a:='ORACLE';
SQL> set autotrace traceonly
SQL>
SQL> 
SQL> select * from tblchk where id_val = :a;

no rows selected

Elapsed: 00:00:00.11

Execution Plan
----------------------------------------------------------                  
Plan hash value: 729812915                                                  
                                                                            
----------------------------------------------------------------------------
| Id  | Operation         | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |        |  6667 | 40002 |    11   (0)| 00:00:01 |
|*  1 |  TABLE ACCESS FULL| TBLCHK |  6667 | 40002 |    11   (0)| 00:00:01 |
----------------------------------------------------------------------------
                                                                            
Predicate Information (identified by operation id):                         
---------------------------------------------------                         
                                                                            
   1 - filter("ID_VAL"=:A)                                                  


Statistics
----------------------------------------------------
        111  recursive calls                        
          0  db block gets                          
         51  consistent gets                        
         37  physical reads                         
          0  redo size                              
        342  bytes sent via SQL*Net to client       
        408  bytes received via SQL*Net from client 
          1  SQL*Net roundtrips to/from client      
          0  sorts (memory)                         
          0  sorts (disk)                           
          0  rows processed                         


"No rows selected" but why is the Rows column in execution plan showing it as 6667. Although we can get this fixed with a much similar approach as what we have seen in my earlier article on extended stats but now am looking for other options, 

Let's start with TKPROF.
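(The post does not show how the trace behind this TKPROF report was produced; a hedged sketch, with hypothetical trace and output file names:)

SQL> exec dbms_monitor.session_trace_enable(waits => TRUE, binds => TRUE);
SQL> select * from tblchk where id_val = :a;
SQL> exec dbms_monitor.session_trace_disable;

-- then from the OS prompt, against the generated trace file:
C:\> tkprof ora_1234.trc tblchk_tkprof.txt sys=no
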
select * 
from
 tblchk where id_val = :a1


call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.07          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      0.00       0.12         36         38          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.00       0.19         36         38          0           0

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 85  

Rows     Row Source Operation
-------  ---------------------------------------------------
      0  TABLE ACCESS FULL TBLCHK (cr=38 pr=36 pw=0 time=0 us cost=11 size=40002 card=6667)


As expected, the row source operation shows a cardinality of 6667. So how does Oracle arrive at 6667?


SQL> select round(count(*)/count(distinct id_val)) ROWS_VAL from tblchk;

  ROWS_VAL
----------
      6667

Elapsed: 00:00:00.02
SQL>
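
In other words, with a bind variable and no histogram that can be used for it, the optimizer falls back to num_rows / num_distinct. A hedged way to look at the column statistics behind that (standard dictionary view; not part of the original demo):

SQL> select column_name, num_distinct, density, histogram
  2  from   user_tab_col_statistics
  3  where  table_name  = 'TBLCHK'
  4  and    column_name = 'ID_VAL';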

Well, you can also cross-check the actual rows against the estimated rows:


SQL> set linesize 150
SQL> set pagesize 150
SQL> 
SQL> SELECT * FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR('96vjj2ncju5p5',null,'ALLSTATS LAST'));

PLAN_TABLE_OUTPUT                                                                               
------------------------------------------------------------------------------------------------
SQL_ID  96vjj2ncju5p5, child number 0                                                           
-------------------------------------                                                           
select * from tblchk where id_val = :a1                                                         
                                                                                                
Plan hash value: 729812915                                                                      
                                                                                                
----------------------------------------------------------------------------------------------- 
| Id  | Operation         | Name   | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  | 
----------------------------------------------------------------------------------------------- 
|   0 | SELECT STATEMENT  |        |      1 |        |      0 |00:00:00.03 |      38 |     36 | 
|*  1 |  TABLE ACCESS FULL| TBLCHK |      1 |   6667 |      0 |00:00:00.03 |      38 |     36 | 
----------------------------------------------------------------------------------------------- 
                                                                                                
Predicate Information (identified by operation id):                                             
---------------------------------------------------                                             
                                                                                                
   1 - filter("ID_VAL"=:A1)                                                                     
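
A side note: the A-Rows, Buffers and Reads columns of ALLSTATS LAST only appear when row-source statistics were collected for that execution, so presumably the session had them enabled, for example:

SQL> alter session set statistics_level = ALL;
SQL> -- or, per statement:
SQL> select /*+ gather_plan_statistics */ * from tblchk where id_val = :a1;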


Let's provide more information to the optimizer so as to get a better cardinality estimate.


SQL>
SQL> alter table tblchk add CONSTRAINT check_idval CHECK (id_val in ('A','B','C'
));

Table altered.

Elapsed: 00:00:00.91
SQL>
SQL>
SQL> exec dbms_stats.gather_table_stats('HARI','TBLCHK');

PL/SQL procedure successfully completed.

Elapsed: 00:00:01.27
SQL>
Will the check constraint help the optimizer find the correct cardinality? I will soon post more interesting facts on this.

Wednesday, October 3, 2012

Oracle 12c New Features

Oracle 12c New Features  

Compression - Auto compression with Heat Maps / Automatic Data Optimization

        For the first time in database management, we are going to have intelligent compression. With the metadata in hand, Oracle lets us know how our data is used by the applications. Based on this usage, heat maps are generated in which the data is classified as

  • Hot   - Active modifications - last modified less than a week ago.
  • Warm - Moderate modifications - last modified more than a week but less than a year ago.
  • Cold  - No modifications - last modified more than a year ago.


When I look into a tablespace holding the tables shown in the picture (heat maps), Oracle 12c provides a clear differentiation of data usage through heat maps. The EMP table is in the HOT section as it is subject to modifications, and when we drill down into the EMP table and look at its partitions, Oracle 12c provides data usage details across the partitions as well.

Using this information, Automatic Data Optimization lets us create policies that perform automatic data compression.
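
A hedged sketch of what such a policy could look like, based on the 12c syntax presented so far (the table name and the 7-day threshold are only examples):

SQL> -- heat map tracking has to be enabled for ADO policies to have data to work on
SQL> ALTER SYSTEM SET HEAT_MAP = ON;
SQL>
SQL> -- compress rows not modified for a week using Advanced Row Compression
SQL> ALTER TABLE emp ILM ADD POLICY
  2    ROW STORE COMPRESS ADVANCED
  3    ROW AFTER 7 DAYS OF NO MODIFICATION;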



The compression levels are
3x   - Advanced Row Compression
10x - Columnar Query Compression
15x - Columnar Archive Compression

        Now with these advanced levels of compression we can load more data into memory, so does that mean we are heading towards an in-memory database in Oracle 12c? There are lots more features to be explored - watch this space :)

Disclaimer - The views expressed are my own and not necessarily those of Oracle  and its affiliates or whom so ever.

Monday, October 1, 2012

Zero Downtime Migration using Oracle GoldenGate


               It's all about migrating data from one environment to another. This post is about one-way replication, the simplest of its kind: data is replicated from the source to the target database in a uni-directional fashion. All modifications are made only at the source and are then replicated to the target system. Only the migration is covered below; I will add fallback options in forthcoming posts.

Obtaining sufficient downtime is always a challenge, and Oracle GoldenGate (OGG) comes to our rescue here. The real challenge in a migration is when the end users are allowed to keep modifying the system right up to the migration cutover. The steps below explain the Oracle GoldenGate execution flow for a Zero Downtime Migration.

               As a first step, let's create a source environment that is subject to continuous data modifications.
SQL> REM CREATING SOURCE AND TARGET TABLE
SQL> 
SQL> create table GG_SRC as select * from dba_objects;

Table created.

Elapsed: 00:00:00.54
SQL> ALTER TABLE GG_SRC add CONSTRAINT PK_GG_SRC PRIMARY KEY (object_id);

Table altered.

Elapsed: 00:00:00.36
SQL> 
SQL> 
SQL> create table GG_TRGT as select * from dba_objects where 1 = 2;

Table created.

Elapsed: 00:00:00.03
SQL> 
SQL> ALTER TABLE GG_TRGT add CONSTRAINT PK_GG_TRGT PRIMARY KEY (object_id);

Table altered.

Elapsed: 00:00:00.01
SQL> 
SQL> exec dbms_stats.gather_table_stats('HARI','GG_SRC');

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.47
-------------

Creating procedures to perform data modifications; these will later be scheduled as an ongoing process.
SQL - INSERT - Stored procedure to insert 20 records, scheduled to execute every minute
create or replace
PROCEDURE sp_ins_gg AS
BEGIN
  INSERT
  INTO GG_SRC(object_id,   object_name,subobject_name,data_object_id,object_type,created) WITH temp_max AS
    (SELECT MAX(object_id) AS
    max_id
     FROM GG_SRC)
  SELECT max_id + rownum AS
  object_id,
  'ORACLE' ||(max_id + rownum)||chr(rownum+60) AS object_name,
  'ORACLE' ||(max_id + rownum)||chr(rownum+60) AS subobject_name,
  rownum AS data_object_id,
    'ORACLE' ||(max_id + rownum) AS object_type,
    sysdate AS created
  FROM temp_max CONNECT BY LEVEL < 21;
  COMMIT;
END sp_ins_gg;
/

Scheduling 20 INSERTs Every Minute
begin  
 -- Every minute - Every day  
 dbms_scheduler.create_schedule(  
 schedule_name  => 'MIN_BY_MIN',  
 start_date    => trunc(sysdate),  
 repeat_interval => 'freq=MINUTELY;interval=1',  
 comments     => 'Execute: ALL DAY EVERY MINUTE');     
end;  
/


begin  
-- Invoke the Procedure   
dbms_scheduler.create_program  
(program_name=> 'GG_INSERT',  
 program_type=> 'STORED_PROCEDURE',  
 program_action=> 'sp_ins_gg',  
 enabled=>true,  
 comments=>'Procedure to insert source data'  
 );  
end; 
/

begin  
-- Create the Job  
dbms_scheduler.create_job  
 (job_name => 'JOB_GG_INSERT',  
  program_name=> 'GG_INSERT',  
  schedule_name=>'MIN_BY_MIN',  
  enabled=>true,  
  auto_drop=>false,  
  comments=>'Job to insert source data every minute');  
end;  
/

SQL - UPDATE - Stored procedure to update two records, scheduled to execute every 10 seconds
create or replace
PROCEDURE sp_upd_gg AS
BEGIN

  UPDATE GG_SRC
  SET subobject_name = 'ORACLE'
  WHERE subobject_name IS NULL
  AND rownum < 2;
  
  UPDATE GG_SRC
  SET object_type = 'ORACLE'
  WHERE owner IS NULL
  AND rownum < 2;
  
  COMMIT;
END sp_upd_gg;
/

Scheduling 2 UPDATEs Every 10 Seconds

begin  
 -- Every 10 seconds - Every day  
 dbms_scheduler.create_schedule(  
 schedule_name  => 'SEC_10',  
 start_date    => trunc(sysdate),  
 repeat_interval => 'FREQ=SECONDLY; INTERVAL=10',  
 comments     => 'Execute: ALL DAY EVERY 10 SECONDS');     
end;  
/

begin  
-- Invoke the Procedure   
dbms_scheduler.create_program  
(program_name=> 'GG_UPDATE',  
 program_type=> 'STORED_PROCEDURE',  
 program_action=> 'sp_upd_gg',  
 enabled=>true,  
 comments=>'Procedure to update source data'  
 );  
end; 
/

begin  
-- Create the Job  
dbms_scheduler.create_job  
 (job_name => 'JOB_GG_UPDATE',  
  program_name=> 'GG_UPDATE',  
  schedule_name=>'SEC_10',  
  enabled=>true,  
  auto_drop=>false,  
  comments=>'Job to update source data every 10 seconds');  
end;  
/

SQL - DELETE - Stored procedure to delete two records, scheduled to execute every 15 seconds
create or replace
PROCEDURE sp_del_gg AS
BEGIN

  DELETE FROM GG_SRC
  WHERE object_id =
    (SELECT object_id
     FROM GG_SRC
     WHERE subobject_name != 'ORACLE'
     AND rownum < 2);

  DELETE FROM GG_SRC
  WHERE object_id =
    (SELECT object_id
     FROM GG_SRC
     WHERE object_type != 'TABLE'
     AND rownum < 2);

  COMMIT;

END sp_del_gg;
/

Scheduling 2 DELETEs Every 15 Seconds


begin  
 -- Every 15 seconds - Every day  
 dbms_scheduler.create_schedule(  
 schedule_name  => 'SEC_15',  
 start_date    => trunc(sysdate),  
 repeat_interval => 'FREQ=SECONDLY; INTERVAL=15',  
 comments     => 'Execute: ALL DAY EVERY 15 SECONDS');     
end;  
/

begin  
-- Invoke the Procedure   
dbms_scheduler.create_program  
(program_name=> 'GG_DELETE',  
 program_type=> 'STORED_PROCEDURE',  
 program_action=> 'sp_del_gg',  
 enabled=>true,  
 comments=>'Procedure to delete source data'  
 );  
end; 
/

begin  
-- Create the Job  
dbms_scheduler.create_job  
 (job_name => 'JOB_GG_DELETE',  
  program_name=> 'GG_DELETE',  
  schedule_name=>'SEC_15',  
  enabled=>true,  
  auto_drop=>false,  
  comments=>'Job to delete source data every 15 seconds');  
end;  
/


Let's check the number of records in the source and target tables.

SQL> SELECT SYSTIMESTAMP FROM DUAL;

SYSTIMESTAMP                                                                                 
---------------------------------------------------------------------------                  
05-OCT-12 12.34.43.028000 AM +05:30                                                          

Elapsed: 00:00:00.00
SQL> 
SQL> SELECT COUNT(*) as SOURCE_COUNT FROM GG_SRC;

SOURCE_COUNT                                                                                 
------------                                                                                 
       72287                                                                                 

Elapsed: 00:00:00.00
SQL> SELECT COUNT(*) as TARGET_COUNT FROM GG_TRGT;

TARGET_COUNT                                                                                 
------------                                                                                 
           0                                                                                 

Elapsed: 00:00:00.00
SQL>
SQL> --REM "Checking UPDATE RECORDS COUNT"---
SQL> SELECT COUNT(*) as Source_subobject_count FROM GG_SRC WHERE subobject_name = 'ORACLE';

SOURCE_SUBOBJECT_COUNT                                                                       
----------------------                                                                       
                    79                                                                       

Elapsed: 00:00:00.00
SQL> SELECT COUNT(*) as Target_subobject_count FROM GG_TRGT WHERE subobject_name = 'ORACLE';

TARGET_SUBOBJECT_COUNT                                                                       
----------------------                                                                       
                     0                                                                       

Elapsed: 00:00:00.00
SQL> 
SQL> SELECT COUNT(*) as Source_object_type FROM GG_SRC WHERE object_type = 'ORACLE';

SOURCE_OBJECT_TYPE                                                                           
------------------                                                                           
                 4                                                                           

Elapsed: 00:00:00.00
SQL> SELECT COUNT(*) as Target_object_type FROM GG_TRGT WHERE object_type = 'ORACLE';

TARGET_OBJECT_TYPE                                                                           
------------------                                                                           
                 0                                                                           

Elapsed: 00:00:00.00
SQL> 
SQL> REM--"Checking DELETE RECORDS COUNT"--
SQL> SELECT COUNT(*) FROM GG_SRC WHERE subobject_name != 'ORACLE';

  COUNT(*)                                                                                   
----------                                                                                   
       603                                                                                   

Elapsed: 00:00:00.00
SQL> SELECT COUNT(*) FROM GG_TRGT WHERE subobject_name != 'ORACLE';

  COUNT(*)                                                                                   
----------                                                                                   
         0                                                                                   

Elapsed: 00:00:00.00
SQL> SELECT COUNT(*) FROM GG_SRC WHERE object_type != 'TABLE';

  COUNT(*)                                                                                   
----------                                                                                   
     69487                                                                                   

Elapsed: 00:00:00.00
SQL> 
SQL> SELECT COUNT(*) FROM GG_TRGT WHERE object_type != 'TABLE';

  COUNT(*)                                                                                   
----------                                                                                   
         0                                                                                   

Elapsed: 00:00:00.00
SQL> 

Now that the environment is set, let's start Oracle GoldenGate. As the very first step, start the GoldenGate Manager process on both the source and the target.

Start GoldenGate Manager Process@SOURCE
Oracle GoldenGate Command Interpreter for Oracle
Version 11.1.1.1.2 OGGCORE_11.1.1.1.2_PLATFORMS_111004.2100
Windows (optimized), Oracle 11g on Oct  5 2011 00:50:57

Copyright (C) 1995, 2011, Oracle and/or its affiliates. All rights reserved.

GGSCI (HARI-PC) 1> start mgr

Manager started.

--------------------------------------------------------------------
                 /*  SOURCE MANAGER RUN TIME INFO */ 
--------------------------------------------------------------------
***********************************************************************
                 Oracle GoldenGate Manager for Oracle
     Version 11.1.1.1.2 OGGCORE_11.1.1.1.2_PLATFORMS_111004.2100
        Windows (optimized), Oracle 11g on Oct  5 2011 00:28:27

Copyright (C) 1995, 2011, Oracle and/or its affiliates. All rights reserved.


                    Starting at 2012-10-05 00:35:13
***********************************************************************

Operating System Version:
Microsoft Windows 7 , on x86
Version 6.1 (Build 7600: )

Process id: 3308

Parameters...

PORT 1350
DYNAMICPORTLIST 12010-12030,1350,1230


***********************************************************************
**                     Run Time Messages                             **
***********************************************************************

2012-10-05 00:35:13  INFO    OGG-00983  Manager started (port 1350)

Start GoldenGate Manager Process@TARGET
Oracle GoldenGate Command Interpreter for Oracle
Version 11.1.1.1.2 OGGCORE_11.1.1.1.2_PLATFORMS_111004.2100
Windows (optimized), Oracle 11g on Oct  5 2011 00:50:57
Copyright (C) 1995, 2011, Oracle and/or its affiliates. All rights reserved.

GGSCI (HARI-PC) 1> start mgr

Manager started.

--------------------------------------------------------------------
                 /*  TARGET MANAGER RUN TIME INFO */ 
--------------------------------------------------------------------
***********************************************************************
                 Oracle GoldenGate Manager for Oracle
     Version 11.1.1.1.2 OGGCORE_11.1.1.1.2_PLATFORMS_111004.2100
        Windows (optimized), Oracle 11g on Oct  5 2011 00:28:27
Copyright (C) 1995, 2011, Oracle and/or its affiliates. All rights reserved.

                    Starting at 2012-10-05 00:35:34
***********************************************************************
Operating System Version:
Microsoft Windows 7 , on x86
Version 6.1 (Build 7600: )

Process id: 3976

Parameters...

PORT 7809
***********************************************************************
**                     Run Time Messages                             **
***********************************************************************
2012-10-05 00:35:34  INFO    OGG-00983  Manager started (port 7809)

                     Normally the Initial Load would be the next step, but in a scenario like the above the key is to handle the continuous data modifications coming in. So in a Zero Downtime Replication (ZDR) the first step is to capture the current activity in the source database and only then start the initial load, i.e.

Step 1 - ZDR - Extract and Datapump
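
(The change-capture groups were created beforehand and the GGSCI commands are not shown; a hedged sketch of that setup, using the group and trail names that appear in the logs below:)

GGSCI> DBLOGIN USERID hari, PASSWORD ********
GGSCI> ADD TRANDATA hari.gg_src
GGSCI> ADD EXTRACT oext1, TRANLOG, BEGIN NOW
GGSCI> ADD EXTTRAIL I:\OGG\SOURCE\dirdat\EX, EXTRACT oext1
GGSCI> ADD EXTRACT odpump1, EXTTRAILSOURCE I:\OGG\SOURCE\dirdat\EX
GGSCI> ADD RMTTRAIL I:\OGG\TARGET\dirdat\RT, EXTRACT odpump1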

OGG- Extract and Datapump

GGSCI (HARI-PC) 2> START EXTRACT OEXT1

Sending START request to MANAGER ...
EXTRACT OEXT1 starting


GGSCI (HARI-PC) 3> INFO ALL

Program     Status      Group       Lag           Time Since Chkpt

MANAGER     RUNNING
EXTRACT     STOPPED     ODPUMP1     00:00:00      00:19:10
EXTRACT     RUNNING     OEXT1       00:19:16      00:00:03


GGSCI (HARI-PC) 4> START EXTRACT ODPUMP1

Sending START request to MANAGER ...
EXTRACT ODPUMP1 starting


GGSCI (HARI-PC) 5> INFO ALL

Program     Status      Group       Lag           Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     ODPUMP1     00:00:00      00:19:19
EXTRACT     RUNNING     OEXT1       00:00:00      00:00:02


Runtime Logs
                    /*  Extract  - Runtime Info */
***********************************************************************
**                     Run Time Messages                             **
***********************************************************************


2012-10-05 00:39:17  INFO    OGG-01517  Position of first record processed Sequence 160, RBA 19020816, SCN 0.4713843, Oct 3, 2012 12:56:36 AM.
TABLE resolved (entry HARI.GG_SRC):
  TABLE HARI.GG_SRC;

Using the following key columns for source table HARI.GG_SRC: OBJECT_ID.

                   /*  Datapump  - Runtime Info */
***********************************************************************
**                     Run Time Messages                             **
***********************************************************************

Opened trail file I:\OGG\SOURCE\dirdat\EX000014 at 2012-10-05 00:39:37

Switching to next trail file I:\OGG\SOURCE\dirdat\EX000015 at 2012-10-05 00:39:37 due to EOF, with current RBA 196334
Opened trail file I:\OGG\SOURCE\dirdat\EX000015 at 2012-10-05 00:39:37

TABLE resolved (entry HARI.GG_SRC):
  TABLE HARI.GG_SRC;
PASSTHRU mapping resolved for source table HARI.GG_SRC


OK, my Extract and Data Pump are running. But are they capturing the modifications currently occurring at the source? Let's check.

 
GGSCI (HARI-PC) 13> stats extract oext1

Sending STATS request to EXTRACT OEXT1 ...

Start of Statistics at 2012-10-05 00:40:06.

Output to I:\OGG\SOURCE\dirdat\EX:

Extracting from HARI.GG_SRC to HARI.GG_SRC:

*** Total statistics since 2012-10-05 00:39:27 ***
        Total inserts                              520.00
        Total updates                              301.00
        Total deletes                              202.00
        Total discards                               0.00
        Total operations                          1023.00

*** Daily statistics since 2012-10-05 00:39:27 ***
        Total inserts                              520.00
        Total updates                              301.00
        Total deletes                              202.00
        Total discards                               0.00
        Total operations                          1023.00

*** Hourly statistics since 2012-10-05 00:39:27 ***
        Total inserts                              520.00
        Total updates                              301.00
        Total deletes                              202.00
        Total discards                               0.00
        Total operations                          1023.00

*** Latest statistics since 2012-10-05 00:39:27 ***
        Total inserts                              520.00
        Total updates                              301.00
        Total deletes                              202.00
        Total discards                               0.00
        Total operations                          1023.00

End of Statistics.

Yes, the data modifications are being captured by the Extract process, and we have a sufficient number of DMLs by now. While this is running, let's start the next step.

Step 2 - ZDR - Initial Load

As part of the Initial Load, the target tables are synchronized with the source. OGG takes a snapshot of the transactions committed as of the start time of the Initial Load and migrates that data alone.

OGG - Initial Load

This is a special run, as it is executed only once.
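For reference, the parameter files behind such a one-time direct-load run would look roughly like the sketch below; the Replicat group name IREP1, the manager port and the credentials are placeholders I am assuming, not values taken from this setup.

/* Source: initial-load Extract - a minimal sketch */
-- registered once with: ADD EXTRACT IEXT1, SOURCEISTABLE
EXTRACT IEXT1
USERID hari, PASSWORD ********
RMTHOST 192.168.1.2, MGRPORT 7809
-- RMTTASK asks the target Manager to start (and later stop) the
-- initial-load Replicat task automatically
RMTTASK REPLICAT, GROUP IREP1
TABLE HARI.GG_SRC;

/* Target: initial-load Replicat - a minimal sketch */
-- registered once with: ADD REPLICAT IREP1, SPECIALRUN
REPLICAT IREP1
USERID hari, PASSWORD ********
-- source and target tables are structurally identical, so no SOURCEDEFS file
ASSUMETARGETDEFS
MAP HARI.GG_SRC, TARGET HARI.GG_TRGT;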

GGSCI (HARI-PC) 15> start extract iext1

Sending START request to MANAGER ...
EXTRACT IEXT1 starting

/* From the Logs */ 
GGSCI command (HARI): start extract iext1.
Command received from GGSCI on host 192.168.1.2 (START EXTRACT IEXT1 ).
EXTRACT IEXT1 starting.
Wildcard resolution set to IMMEDIATE because SOURCEISTABLE is used.
EXTRACT IEXT1 starting.
EXTRACT IEXT1 started.
EXTRACT IEXT1 stopped normally.

The initial-load Extract process starts and stops its corresponding Replicat process automatically. Now, while the initial load is running, let's check the target database to see whether any data has been loaded.


SQL> SELECT SYSTIMESTAMP FROM DUAL;

SYSTIMESTAMP                                      
--------------------------------------------------
05-OCT-12 12.41.03.718000 AM +05:30               

Elapsed: 00:00:00.00
SQL> SELECT COUNT(*) as SOURCE_COUNT FROM GG_SRC;

SOURCE_COUNT                                      
------------                                      
       72375                                      

Elapsed: 00:00:00.01
SQL> SELECT COUNT(*) as TARGET_COUNT FROM GG_TRGT
  2  ;

TARGET_COUNT                                      
------------                                      
       72359                                      

Elapsed: 00:00:00.01
SQL> 
SQL> 
SQL> SELECT COUNT(*) as TARGET_COUNT FROM GG_TRGT;

TARGET_COUNT                                      
------------                                      
       72359                                      

Elapsed: 00:00:00.01

Let's also check the initial-load statistics available in OGG.
GGSCI (HARI-PC) 15> info extract iext1

EXTRACT    IEXT1     Last Started 2012-10-05 00:40   Status STOPPED
Checkpoint Lag       Not Available
Log Read Checkpoint  Table HARI.GG_SRC
                     2012-10-05 00:41:01  Record 72359
Task                 SOURCEISTABLE


Since changes are continuously occurring in the source database, let's compare the record counts in source and target again.
SQL> SELECT SYSTIMESTAMP FROM DUAL;

SYSTIMESTAMP                                        
----------------------------------------------------
05-OCT-12 12.41.48.520000 AM +05:30                 

Elapsed: 00:00:00.00
SQL> 
SQL> SELECT COUNT(*) as SOURCE_COUNT FROM GG_SRC;

SOURCE_COUNT                                        
------------                                        
       72369                                        

Elapsed: 00:00:00.00
SQL> SELECT COUNT(*) as TARGET_COUNT FROM GG_TRGT;

TARGET_COUNT                                        
------------                                        
       72359                                        

Elapsed: 00:00:00.01
SQL> 

The last step is to start the Replicat process at the target so as to apply the ongoing changes.

Step 3 - ZDR - Start Replicate Process

The Replicat process reads the remote trail files and applies the captured changes to the target database.
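Because the online Extract was started before the Initial Load, the remote trail holds changes that overlap with rows already copied by the load, so this Replicat is expected to run with HANDLECOLLISIONS during the catch-up phase. A minimal sketch of such a parameter file follows; the credentials and discard-file path are placeholders, not values taken from this setup.

/* Target: online Replicat OREP1 - a minimal sketch */
REPLICAT OREP1
USERID hari, PASSWORD ********
-- identical table structures on both sides, so no SOURCEDEFS file is needed
ASSUMETARGETDEFS
-- resolve the duplicate inserts and missing updates/deletes caused by the
-- overlap between change capture and the initial load
HANDLECOLLISIONS
DISCARDFILE I:\OGG\TARGET\dirrpt\orep1.dsc, APPEND
MAP HARI.GG_SRC, TARGET hari.GG_TRGT;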

OGG - Replicate Process
GGSCI (HARI-PC) 2> start replicat orep1

Sending START request to MANAGER ...
REPLICAT OREP1 starting


/*-------------------------------------------------------------------*/
***********************************************************************
**                     Run Time Messages                             **
***********************************************************************

Opened trail file I:\OGG\TARGET\dirdat\RT000012 at 2012-10-05 00:42:30

Switching to next trail file I:\OGG\TARGET\dirdat\RT000013 at 2012-10-05 00:42:30 due to EOF, with current RBA 196899
Opened trail file I:\OGG\TARGET\dirdat\RT000013 at 2012-10-05 00:42:30

Processed extract process graceful restart record at seq 13, rba 1426.
Processed extract process graceful restart record at seq 13, rba 1488.

MAP resolved (entry HARI.GG_SRC):
  MAP HARI.GG_SRC, TARGET  hari.GG_TRGT;
Using following columns in default map by name:
  OWNER, OBJECT_NAME, SUBOBJECT_NAME, OBJECT_ID, DATA_OBJECT_ID,
  OBJECT_TYPE, CREATED, LAST_DDL_TIME, TIMESTAMP, STATUS, TEMPORARY,
  GENERATED, SECONDARY, NAMESPACE, EDITION_NAME

Using the following key columns for target table HARI.GG_TRGT: OBJECT_ID.

Applying Lagging Transactions
GGSCI (HARI-PC) 5> send replicat orep1 getlag

Sending GETLAG request to REPLICAT OREP1 ...
Last record lag: 7 seconds.
At EOF, no more records to process.


Did OGG handle collisions at the target, given that the Extract process was started prior to the Initial Load?

GGSCI (HARI-PC) 6> stats replicat orep1

Sending STATS request to REPLICAT OREP1 ...

Start of Statistics at 2012-10-05 00:46:08.

Replicating from HARI.GG_SRC to HARI.GG_TRGT:

*** Total statistics since 2012-10-05 00:42:31 ***
        Total inserts                              631.00
        Total updates                              373.00
        Total deletes                              250.00
        Total discards                               0.00
        Total operations                          1254.00
        Total insert collisions                    391.00
        Total update collisions                     40.00
        Total delete collisions                     77.00

*** Daily statistics since 2012-10-05 00:42:31 ***
        Total inserts                              631.00
        Total updates                              373.00
        Total deletes                              250.00
        Total discards                               0.00
        Total operations                          1254.00
        Total insert collisions                    391.00
        Total update collisions                     40.00
        Total delete collisions                     77.00

*** Hourly statistics since 2012-10-05 00:42:31 ***
        Total inserts                              631.00
        Total updates                              373.00
        Total deletes                              250.00
        Total discards                               0.00
        Total operations                          1254.00
        Total insert collisions                    391.00
        Total update collisions                     40.00
        Total delete collisions                     77.00

*** Latest statistics since 2012-10-05 00:42:31 ***
        Total inserts                              631.00
        Total updates                              373.00
        Total deletes                              250.00
        Total discards                               0.00
        Total operations                          1254.00
        Total insert collisions                    391.00
        Total update collisions                     40.00
        Total delete collisions                     77.00

End of Statistics.

Yes, OGG handled the collisions, and the collision counts are reported in the stats. Let's validate the data in the database.

SQL> 
SQL> EXEC DBMS_SCHEDULER.drop_job(job_name => 'JOB_GG_DELETE');

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01
SQL> EXEC DBMS_SCHEDULER.drop_job(job_name => 'JOB_GG_UPDATE');

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.01
SQL> EXEC DBMS_SCHEDULER.drop_job(job_name => 'JOB_GG_INSERT');

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.03
SQL> 


I have stopped the scheduler jobs, just as we would during a cutover/downtime window. Let's check both the source and target tables.

SQL> 
SQL> SELECT SYSTIMESTAMP FROM DUAL;

SYSTIMESTAMP                                                                                         
---------------------------------------------------------------------------                          
05-OCT-12 12.47.39.329000 AM +05:30                                                                  

Elapsed: 00:00:00.00
SQL> 
SQL> SELECT COUNT(*) as SOURCE_COUNT FROM GG_SRC;

SOURCE_COUNT                                                                                         
------------                                                                                         
       72445                                                                                         

Elapsed: 00:00:00.01
SQL> 
SQL> SELECT COUNT(*) as TARGET_COUNT FROM GG_TRGT;

TARGET_COUNT                                                                                         
------------                                                                                         
       72445                                                                                         

Elapsed: 00:00:00.00
SQL> 
SQL>  --REM "Checking UPDATE RECORDS COUNT"---
SQL> SELECT COUNT(*) as Source_subobject_count FROM GG_SRC WHERE subobject_name = 'ORACLE';

SOURCE_SUBOBJECT_COUNT                                                                               
----------------------                                                                               
                   155                                                                               

Elapsed: 00:00:00.01
SQL> 
SQL> SELECT COUNT(*) as Target_subobject_count FROM GG_TRGT WHERE subobject_name = 'ORACLE';

TARGET_SUBOBJECT_COUNT                                                                               
----------------------                                                                               
                   155                                                                               

Elapsed: 00:00:00.00
SQL> 
SQL> SELECT COUNT(*) as Source_object_type FROM GG_SRC WHERE object_type = 'ORACLE';

SOURCE_OBJECT_TYPE                                                                                   
------------------                                                                                   
                 4                                                                                   

Elapsed: 00:00:00.01
SQL> 
SQL> SELECT COUNT(*) as Target_object_type FROM GG_TRGT WHERE object_type = 'ORACLE';

TARGET_OBJECT_TYPE                                                                                   
------------------                                                                                   
                 4                                                                                   

Elapsed: 00:00:00.00
SQL> 
SQL> REM--"Checking DELETE RECORDS COUNT"--
SQL> SELECT COUNT(*) FROM GG_SRC WHERE subobject_name != 'ORACLE';

  COUNT(*)                                                                                           
----------                                                                                           
       761                                                                                           

Elapsed: 00:00:00.00
SQL> 
SQL> SELECT COUNT(*) FROM GG_TRGT WHERE subobject_name != 'ORACLE';

  COUNT(*)                                                                                           
----------                                                                                           
       761                                                                                           

Elapsed: 00:00:00.01
SQL> 
SQL> SELECT COUNT(*) FROM GG_SRC WHERE object_type != 'TABLE';

  COUNT(*)                                                                                           
----------                                                                                           
     69645                                                                                           

Elapsed: 00:00:00.01
SQL> 
SQL> SELECT COUNT(*) FROM GG_TRGT WHERE object_type != 'TABLE';

  COUNT(*)                                                                                           
----------                                                                                           
     69645                                                                                           

Elapsed: 00:00:00.01
SQL> 
SQL> spool off


Thus we were able to migrate the data to the target table while data modifications were still active at the source.
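One follow-up worth noting: HANDLECOLLISIONS is only meant for the catch-up window. Once the lag is zero and the counts match, as above, it would typically be switched off so that genuine data discrepancies surface as errors instead of being silently resolved. This can be done on the fly from GGSCI (and the parameter then removed from the Replicat parameter file); the command below is illustrative of that step rather than part of the run shown above.

GGSCI (HARI-PC) > SEND REPLICAT OREP1, NOHANDLECOLLISIONS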