Tuesday, December 3, 2019

Resolving Gaps in Data Guard Apply Using Incremental RMAN BACKUP

Arup Nanda wrote a blog post on this in 2009, and I highly recommend that all of you read it.

http://arup.blogspot.com/2009/12/resolving-gaps-in-data-guard-apply.html

The basic technique of taking the SCN from the Data Guard standby and using it for an incremental backup from the primary is well documented in his blog, and I am not going to repeat it here.
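For orientation only, the heart of that technique looks roughly like this (a sketch; the SCN value and backup path below are placeholders, refer to his post for the real steps):

```sql
-- On the standby: note the SCN the standby has applied up to
SELECT CURRENT_SCN FROM V$DATABASE;

-- On the primary, in RMAN: take an incremental backup starting from that SCN
BACKUP INCREMENTAL FROM SCN 1301571 DATABASE
  FORMAT '/tmp/ForStandby_%U' TAG 'FORSTANDBY';
```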

Unfortunately, the blog was written using normal filesystems and did not take ASM with Oracle Managed Files (OMF) into account.


For example, production might have all the files under

+DATA/<prod db name>/DATAFILE

whereas dataguard will have files under

+DATA/<DR db name>/DATAFILE

Since the paths do NOT match, you will have to tweak his technique a bit.

In this example, I will show my approach to fixing this part.

PLEASE NOTE TO REFER TO ARUP NANDA'S ORIGINAL BLOG FOR THE RECOVERY PORTION USING RMAN.



PRIMARY
++++++

oracle@prodhost orclpr1 trace: sqlplus '/ as sysdba'

SQL*Plus: Release 12.1.0.2.0 Production on Tue Mar 7 10:21:58 2017

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

SQL> alter database create standby controlfile as '/tmp/control01.ctl';

Database altered.

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options
oracle@prodhost orclpr1 trace: scp /tmp/control01.ctl gchandra@drhost:/tmp/.
This computer system is for authorized users only. Unauthorized users are
prohibited. Users (authorized or unauthorized) have no explicit or implicit
expectation of privacy. Discontinue access immediately if you do not agree to
the conditions stated in this notice
Password:
control01.ctl                                                                                                                                                  100%   23MB  23.3MB/s   00:01


Data Guard
++++++++++

[root@drhost AKA ws-dr-bmodb01 tmp]# chown oracle:oinstall control01.ctl

oracle@drhost orcldr1 trace: srvctl stop database -d orcldr
oracle@drhost orcldr1 trace: sqlplus '/ as sysdba'

SQL*Plus: Release 12.1.0.2.0 Production on Tue Mar 7 10:26:28 2017

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup nomount
ORACLE instance started.

Total System Global Area 4731174912 bytes
Fixed Size                  4506744 bytes
Variable Size            1291848584 bytes
Database Buffers         3422552064 bytes
Redo Buffers               12267520 bytes
SQL> exit


oracle@drhost orcldr1 trace: rman target /

Recovery Manager: Release 12.1.0.2.0 - Production on Tue Mar 7 10:26:49 2017

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: ORCLPR (not mounted)

RMAN> restore controlfile from '/tmp/control01.ctl';

Starting restore at 07-MAR-17
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=774 instance=orcldr1 device type=DISK

channel ORA_DISK_1: copied control file copy
output file name=+DATA/orcldr/control01.ctl
Finished restore at 07-MAR-17






RMAN> sql 'alter database mount';

sql statement: alter database mount
released channel: ORA_DISK_1

RMAN> catalog start with '+data/orcldr/datafile';

Starting implicit crosscheck backup at 07-MAR-17
allocated channel: ORA_DISK_1
Finished implicit crosscheck backup at 07-MAR-17

Starting implicit crosscheck copy at 07-MAR-17
using channel ORA_DISK_1
Finished implicit crosscheck copy at 07-MAR-17

searching for all files in the recovery area
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: +RECO/orcldr/ARCHIVELOG/2017_03_07/thread_1_seq_793.2753.937994585
File Name: +RECO/orcldr/ARCHIVELOG/2017_03_07/thread_2_seq_714.5482.937994587
File Name: +RECO/orcldr/ARCHIVELOG/2017_03_07/thread_2_seq_715.672.937994749
File Name: +RECO/orcldr/ARCHIVELOG/2017_03_07/thread_1_seq_794.2341.937994749

searching for all files that match the pattern +data/orcldr/datafile

List of Files Unknown to the Database
=====================================
File Name: +DATA/orcldr/DATAFILE/system.468.929733757
File Name: +DATA/orcldr/DATAFILE/sysaux.469.929733759
File Name: +DATA/orcldr/DATAFILE/undotbs2.470.929733765
File Name: +DATA/orcldr/DATAFILE/admin.471.929733767
File Name: +DATA/orcldr/DATAFILE/archive.472.929733769
File Name: +DATA/orcldr/DATAFILE/hubload.473.929733771
File Name: +DATA/orcldr/DATAFILE/hubsys.474.929733773
File Name: +DATA/orcldr/DATAFILE/intplat.475.929733779
File Name: +DATA/orcldr/DATAFILE/keystore.476.929733781
File Name: +DATA/orcldr/DATAFILE/undotbs1.477.929733787
File Name: +DATA/orcldr/DATAFILE/users.478.929733787
File Name: +DATA/orcldr/DATAFILE/webinit.479.929733789
File Name: +DATA/orcldr/DATAFILE/webrpt.480.929733789
File Name: +DATA/orcldr/DATAFILE/websys.481.929733797
File Name: +DATA/orcldr/DATAFILE/wpdata.482.929733797
File Name: +DATA/orcldr/DATAFILE/system.536.937132397
File Name: +DATA/orcldr/DATAFILE/sysaux.537.937132397
File Name: +DATA/orcldr/DATAFILE/undotbs2.538.937132413
File Name: +DATA/orcldr/DATAFILE/admin.539.937132417
File Name: +DATA/orcldr/DATAFILE/archive.540.937132425
File Name: +DATA/orcldr/DATAFILE/hubload.541.937132431
File Name: +DATA/orcldr/DATAFILE/hubsys.542.937132433
File Name: +DATA/orcldr/DATAFILE/intplat.543.937132441
File Name: +DATA/orcldr/DATAFILE/keystore.544.937132441
File Name: +DATA/orcldr/DATAFILE/undotbs1.545.937132449
File Name: +DATA/orcldr/DATAFILE/users.546.937132449
File Name: +DATA/orcldr/DATAFILE/webinit.547.937132453
File Name: +DATA/orcldr/DATAFILE/webrpt.548.937132453
File Name: +DATA/orcldr/DATAFILE/websys.549.937132461
File Name: +DATA/orcldr/DATAFILE/wpdata.550.937132461

Do you really want to catalog the above files (enter YES or NO)? yes
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: +DATA/orcldr/DATAFILE/system.468.929733757
File Name: +DATA/orcldr/DATAFILE/sysaux.469.929733759
File Name: +DATA/orcldr/DATAFILE/undotbs2.470.929733765
File Name: +DATA/orcldr/DATAFILE/admin.471.929733767
File Name: +DATA/orcldr/DATAFILE/archive.472.929733769
File Name: +DATA/orcldr/DATAFILE/hubload.473.929733771
File Name: +DATA/orcldr/DATAFILE/hubsys.474.929733773
File Name: +DATA/orcldr/DATAFILE/intplat.475.929733779
File Name: +DATA/orcldr/DATAFILE/keystore.476.929733781
File Name: +DATA/orcldr/DATAFILE/undotbs1.477.929733787
File Name: +DATA/orcldr/DATAFILE/users.478.929733787
File Name: +DATA/orcldr/DATAFILE/webinit.479.929733789
File Name: +DATA/orcldr/DATAFILE/webrpt.480.929733789
File Name: +DATA/orcldr/DATAFILE/websys.481.929733797
File Name: +DATA/orcldr/DATAFILE/wpdata.482.929733797
File Name: +DATA/orcldr/DATAFILE/system.536.937132397
File Name: +DATA/orcldr/DATAFILE/sysaux.537.937132397
File Name: +DATA/orcldr/DATAFILE/undotbs2.538.937132413
File Name: +DATA/orcldr/DATAFILE/admin.539.937132417
File Name: +DATA/orcldr/DATAFILE/archive.540.937132425
File Name: +DATA/orcldr/DATAFILE/hubload.541.937132431
File Name: +DATA/orcldr/DATAFILE/hubsys.542.937132433
File Name: +DATA/orcldr/DATAFILE/intplat.543.937132441
File Name: +DATA/orcldr/DATAFILE/keystore.544.937132441
File Name: +DATA/orcldr/DATAFILE/undotbs1.545.937132449
File Name: +DATA/orcldr/DATAFILE/users.546.937132449
File Name: +DATA/orcldr/DATAFILE/webinit.547.937132453
File Name: +DATA/orcldr/DATAFILE/webrpt.548.937132453
File Name: +DATA/orcldr/DATAFILE/websys.549.937132461
File Name: +DATA/orcldr/DATAFILE/wpdata.550.937132461

RMAN> switch database to copy;

datafile 1 switched to datafile copy "+DATA/orcldr/DATAFILE/system.536.937132397"
datafile 2 switched to datafile copy "+DATA/orcldr/DATAFILE/sysaux.537.937132397"
datafile 3 switched to datafile copy "+DATA/orcldr/DATAFILE/undotbs2.538.937132413"
datafile 4 switched to datafile copy "+DATA/orcldr/DATAFILE/admin.539.937132417"
datafile 5 switched to datafile copy "+DATA/orcldr/DATAFILE/archive.540.937132425"
datafile 6 switched to datafile copy "+DATA/orcldr/DATAFILE/hubload.541.937132431"
datafile 7 switched to datafile copy "+DATA/orcldr/DATAFILE/hubsys.542.937132433"
datafile 8 switched to datafile copy "+DATA/orcldr/DATAFILE/intplat.543.937132441"
datafile 9 switched to datafile copy "+DATA/orcldr/DATAFILE/keystore.544.937132441"
datafile 10 switched to datafile copy "+DATA/orcldr/DATAFILE/undotbs1.545.937132449"
datafile 11 switched to datafile copy "+DATA/orcldr/DATAFILE/users.546.937132449"
datafile 12 switched to datafile copy "+DATA/orcldr/DATAFILE/webinit.547.937132453"
datafile 13 switched to datafile copy "+DATA/orcldr/DATAFILE/webrpt.548.937132453"
datafile 14 switched to datafile copy "+DATA/orcldr/DATAFILE/websys.549.937132461"
datafile 15 switched to datafile copy "+DATA/orcldr/DATAFILE/wpdata.550.937132461"

RMAN> exit


Recovery Manager complete.


oracle@drhost orcldr1 trace: sqlplus '/ as sysdba'

SQL*Plus: Release 12.1.0.2.0 Production on Tue Mar 7 10:30:15 2017

Copyright (c) 1982, 2014, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options

SQL> shutdown immediate
ORA-01109: database not open


Database dismounted.
ORACLE instance shut down.

NOW WE HAVE SUCCESSFULLY GOT THE STANDBY CONTROLFILE FROM PRODUCTION INTO THE DATA GUARD STANDBY AND SUCCESSFULLY CHANGED THE PATHS OF ALL THE FILE NAMES.



Once the above is done, you can start from step 10 in his blog.
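To recap the ASM/OMF-specific portion shown above, the standby-side RMAN steps boil down to the following (disk group and paths are from this example; adjust for your environment):

```sql
RESTORE CONTROLFILE FROM '/tmp/control01.ctl';   -- standby controlfile copied from primary
SQL 'ALTER DATABASE MOUNT';
CATALOG START WITH '+data/orcldr/datafile';      -- register the OMF copies under the DR path
SWITCH DATABASE TO COPY;                         -- repoint the controlfile at those copies
```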


Note to my confused self
++++++++++++++++++

You might be wondering what happened to the redo logs under the +DATA/orcldr/onlinelog directory, since we never renamed them. Oracle will automatically clear them for all the groups once you open the database.

Example from the alert log -

ORA-17503: ksfdopn:2 Failed to open file +DATA/orcldr/onlinelog/group_3.414.929526809
ORA-15173: entry 'group_3.414.929526809' does not exist in directory 'onlinelog'
Deleted Oracle managed file +DATA/orcldr/onlinelog/group_3.414.929526809
Deleted Oracle managed file +RECO/orcldr/onlinelog/group_3.562.929526811

Clearing online redo logfile 3 complete
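Should the automatic clearing not happen for some reason, the standard SQL to clear a logfile group manually looks like this (the group number below is illustrative):

```sql
-- Clear one online redo log group; repeat for each group that needs it
ALTER DATABASE CLEAR LOGFILE GROUP 3;
```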



GoldenGate Password Encryption




The GoldenGate admin user is GGADMIN and the password is secretpassword.


GET ENCRYPTED PASSWORD for "secretpassword"
++++++++++++++++++++++++++++++++++++++++++
GGSCI (homelinux.local) 1> encrypt password secretpassword,encryptkey default
Using Blowfish encryption with DEFAULT key.
Encrypted password:  AACAAAAAAAAAAAMALFJHGAYIMJSFLYTVDHBLHDIEHVJSFSJVD

Algorithm used:  BLOWFISH


VERIFY MANUALLY USING THE ENCRYPTED PASSWORD GENERATED FROM ABOVE 
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
dblogin userid ggadmin@ORCL, password AACAAAAAAAAAAAMALFJHGAYIMJSFLYTVDHBLHDIEHVJSFSJVD

CHANGE THE CLEARTEXT PASSWORD IN YOUR PARAMETER FILE TO ENCRYPTED
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

The cleartext password 
userid ggadmin@ORCL, password secretpassword

can now be changed to
userid ggadmin@ORCL, password AACAAAAAAAAAAAMALFJHGAYIMJSFLYTVDHBLHDIEHVJSFSJVD, ENCRYPTKEY DEFAULT

Oracle export with where clause and parallelism





This pretty much sums up the way to export tables with a where clause.



export DATA_PUMP_DIR="EXPORT_DIR"
expdp \'/ as sysdba\'  tables=scott.emp query=scott.emp:'"where eno <>0"' dumpfile=emp%U.dmp parallel=5 logfile=emp.log compression=all



Parallelism is NOT mandatory and can be skipped.

The above can be put into a shell script and run from the UNIX prompt.
Tested only on UNIX systems.
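If the shell quoting around QUERY becomes a nuisance, the same export can be driven from a Data Pump parameter file instead; a sketch (the file name emp.par is arbitrary):

```
# emp.par - Data Pump parameter file (no shell quoting needed here)
TABLES=scott.emp
QUERY=scott.emp:"WHERE eno <> 0"
DUMPFILE=emp%U.dmp
PARALLEL=5
LOGFILE=emp.log
COMPRESSION=ALL
```

Then invoke it as: expdp \'/ as sysdba\' parfile=emp.par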


Saturday, August 9, 2014

Goldengate ALLOWNOOPUPDATES error - why does it happen?

Before you read this article, you might want to read my other three posts -
Goldengate supplemental logging, keycols, logallsupcols test cases 1/2/3

As I have explained, an update on the source can trigger either a GGSPKUpdate or a FieldComp record in the trail file.

Now, ALLOWNOOPUPDATES is specific to FieldComp updates.

A usual update statement consists of

update <table name> set <set clause> where <where clause> ;

The error happens when a replicat tries to construct the 'set clause' from a trail record after it has successfully constructed the 'where clause'.

Again, as explained in the earlier posts, if the trail file does not have the column information which the replicat needs (based on the PK/UK on the target table or KEYCOLS in the replicat parameter file), then the 'where clause' construction part of the update will itself fail. All those failures are documented in the other posts mentioned above.

Considering that it has gone past that part successfully, it has to construct the "set clause" of the update statement.

Usually, if a particular column is used in the where clause, that column is skipped in the set clause.

When the replicat sees that all the columns in the trail file are used by the where clause, it reaches a condition where there are no columns left to construct the 'set clause'. This is exactly why we encounter an ALLOWNOOPUPDATES error.


Let me start with a simple example with logdump views to explain more clearly.

1. Create a table in the source and make sure a PK is built. A PK is needed to ensure any update is written as a FieldComp record.



SQL> create table junk(col1 number,col2 number,col3 number,col4 number,col5 number,col6 number)
  2  /

Table created.

SQL> alter table junk add constraint pk_junk primary key(col1)
  2  /

Table altered.

2. Make sure you add supplemental logging for ALL columns -


GGSCI (titan.gautham.com) 239> add trandata scott.junk, allcols

Logging of supplemental redo data enabled for table SCOTT.JUNK.
TRANDATA for scheduling columns has been added on table 'SCOTT.JUNK'.
TRANDATA for all columns has been added on table 'SCOTT.JUNK'.
GGSCI (titan.gautham.com) 240> 

GGSCI (titan.gautham.com) 240> info trandata scott.junk

Logging of supplemental redo log data is enabled for table SCOTT.JUNK.

Columns supplementally logged for table SCOTT.JUNK: ALL.

3. Ensure the table has arrived on the target, then drop the PK on the target.

The reason we drop the PK on the target is that when the replicat constructs the 'where clause' of the update, we want all the columns in the trail file to be used, so that nothing is left for the 'set clause'.


SQL> desc junk
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------

 COL1                                      NOT NULL NUMBER
 COL2                                               NUMBER
 COL3                                               NUMBER
 COL4                                               NUMBER
 COL5                                               NUMBER
 COL6                                               NUMBER

SQL> alter table junk drop constraint pk_junk
  2  /

Table altered.

SQL> desc junk
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------

 COL1                                               NUMBER
 COL2                                               NUMBER
 COL3                                               NUMBER
 COL4                                               NUMBER
 COL5                                               NUMBER
 COL6                                               NUMBER

SQL>


4. Insert a record in the source and ensure the record arrives on the target


--source
SQL> insert into junk values(100,100,100,100,100,100);

1 row created.

SQL> commit;

Commit complete.

--target
SQL> select * from junk;

      COL1       COL2       COL3       COL4       COL5       COL6
---------- ---------- ---------- ---------- ---------- ----------
       100        100        100        100        100        100

We are not interested in the logdump output for the above insert, as it does not serve any purpose here.

5. Update the source -

SQL> update junk set col5=500 where col1=100;

1 row updated.

SQL> commit;

Commit complete.

Check the replicat -


GGSCI (titan.gautham.com) 242> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING                                           
JAGENT      STOPPED                                           
EXTRACT     RUNNING     xxxxx     00:00:09      00:00:01    
REPLICAT    ABENDED     yyyy     00:00:00      00:00:25    

Check the error from ggserr.log

2014-08-09 21:28:51  INFO    OGG-00987  Oracle GoldenGate Command Interpreter for Oracle:  GGSCI command (oracle): info all.
2014-08-09 21:29:08  ERROR   OGG-01168  Oracle GoldenGate Delivery for Oracle, rphvn1d.prm:  Encountered an update for target table SCOTT1.JUNK, which has no unique key defined.  KEYCOLS can be used to define a key.  Use ALLOWNOOPUPDATES to process the update without applying it to the target database.  Use APPLYNOOPUPDATES to force the update to be applied using all columns in both the SET and WHERE clause.
2014-08-09 21:29:17  ERROR   OGG-01668  Oracle GoldenGate Delivery for Oracle, rphvn1d.prm:  PROCESS ABENDING.

I hope the above explanation is clear.

Obviously, if we create a PK or UK on the target or use KEYCOLS in the replicat, we should be able to start the replicat with no issues.
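For illustration, a minimal sketch of those two fixes (object and column names follow this test case; treat them as examples):

```sql
-- Option 1: create a key on the target table
alter table scott1.junk add constraint pk_junk primary key (col1);

-- Option 2: define KEYCOLS in the replicat MAP statement instead
map scott.junk, target scott1.junk, keycols (col1);
```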






Part 3 - Goldengate supplemental logging, keycols, logallsupcols - With Final Comments

Same as in test case 1, i.e. NO PKs on source or target, no LOGALLSUPCOLS in the extract, no KEYCOLS in the replicat, except that the extract now has KEYCOLS defined on col1.


Extract parameter file
----------------------

--LOGALLSUPCOLS -->commented out
GETTRUNCATES
table SCOTT.junk, keycols(col1);


Replicat parameter file
-----------------------

Please don't be confused by the mapexception part; it is the exception handling routine I always add in dev/QA to trap 'unique key violated' and 'no data found' errors.
It has no bearing on this experiment.

DDL INCLUDE MAPPED &
EXCLUDE OBJNAME GGADM.EXCEPTIONS
..
ASSUMETARGETDEFS
GETTRUNCATES
REPERROR (-1, EXCEPTION)
REPERROR (-1403, EXCEPTION)

map scott.junk, target scott1.junk, mapexception (TARGET ggadm.exceptions
, INSERTALLRECORDS
-- , EXCEPTIONSONLY
, colmap ( rep_name = 'xxxxx'
, table_name = @GETENV ('GGHEADER', 'TABLENAME')
, errno = @GETENV ('LASTERR', 'DBERRNUM')
, dberrmsg = @GETENV ( 'LASTERR', 'DBERRMSG')
, optype = @GETENV ( 'LASTERR', 'OPTYPE')
, errtype = @GETENV ( 'LASTERR', 'ERRTYPE')
, logrba = @GETENV ( 'GGHEADER', 'LOGRBA')
, logposition = @GETENV ( 'GGHEADER', 'LOGPOSITION')
, TRAILRBA = @GETENV('RECORD', 'FILERBA')
, TRAILSEQUENCE = @GETENV('RECORD', 'FILESEQNO')
, committimestamp = @GETENV ( 'GGHEADER', 'COMMITTIMESTAMP')

) );

As always, create the table, add supplemental logging on col1, col2, col3, insert a row on the source, and ensure it travelled to the target.

Please see this relevant portion from test case 1.


Now the important update

SQL> update junk set col4=500 where col1=100;

1 row updated.



SQL> commit;

And here is the record from logdump in trail file
------------------------------------------------





Logdump 781 >n

___________________________________________________________________ 
Hdr-Ind    :     E  (x45)     Partition  :     .  (x04)  
UndoFlag   :     .  (x00)     BeforeAfter:     A  (x41)  
RecLength  :    44  (x002c)   IO Time    : 2014/08/09 16:57:33.000.000   
IOType     :    15  (x0f)     OrigNode   :   255  (xff) 
TransInd   :     .  (x03)     FormatType :     R  (x52) 
SyskeyLen  :     0  (x00)     Incomplete :     .  (x00) 
AuditRBA   :      10578       AuditPos   : 245809680 
Continued  :     N  (x00)     RecCount   :     1  (x01) 

2014/08/09 16:57:33.000.000 FieldComp            Len    44 RBA 5876 

Name: SCOTT.JUNK 
After  Image:                                             Partition 4   G  s   
 0000 0007 0000 0003 3130 3000 0100 0700 0000 0331 | ........100........1  
 3030 0002 0007 0000 0003 3130 3000 0300 0700 0000 | 00........100.......  
 0335 3030                                         | .500  
Column     0 (x0000), Len     7 (x0007)  
 0000 0003 3130 30                                 | ....100  
Column     1 (x0001), Len     7 (x0007)  
 0000 0003 3130 30                                 | ....100  
Column     2 (x0002), Len     7 (x0007)  
 0000 0003 3130 30                                 | ....100  
Column     3 (x0003), Len     7 (x0007)  
 0000 0003 3530 30                                 | ....500  
  
GGS tokens: 
TokenID x52 'R' ORAROWID         Info x00  Length   20 
 4141 4155 476a 4141 4641 4141 414b 3141 4141 0001 | AAAUGjAAFAAAAK1AAA..  
TokenID x4c 'L' LOGCSN           Info x00  Length    9 
 3539 3131 3131 3537 31                            | 591111571  
TokenID x36 '6' TRANID           Info x00  Length   12 

 3231 2e33 332e 3138 3830 3833                     | 21.33.188083  

See the huge difference that KEYCOLS made?

In test case 1, we saw a GGSPKUpdate, whereas now we are seeing a FieldComp.
Remember, supplemental logging was done on col1, col2 and col3, and KEYCOLS had col1.

The replicat has the same abend issue.


2014-08-09 16:57:14  WARNING OGG-06439  No unique key is defined for table JUNK. All viable columns will be used to represent the key, but may not guarantee uniqueness. KEYCOLS may be used to define the key.

Using following columns in default map by name:
  COL1, COL2, COL3, COL4, COL5, COL6

Using the following key columns for target table SCOTT1.JUNK: COL1, COL2, COL3, COL4, COL5, COL6.

Since the target has no PK/UK or KEYCOLS, the replicat wants to use all the columns in the where condition of the update, but it sees only col1, col2, col3 and col4 in the trail record; therefore it fails.


The resolution is simple: just add a primary key on any or all of the columns col1, col2, col3 and col4, and you are set.

SQL> alter table junk add constraint pk_junk primary key(col2);


Table altered.

--restart replicat and see the magic
SQL> /

      COL1       COL2       COL3       COL4       COL5       COL6
---------- ---------- ---------- ---------- ---------- ----------
       100        100        100        500        100        100


I am going to redo the same experiment, but this time I will set KEYCOLS to a column which is not covered by supplemental logging, and then update a totally different column.

1. Set supplemental logging on col1, col2, col3

2. Set KEYCOLS on col4 on the extract side
i.e. table SCOTT.junk, keycols(col4);

3. Update a column, say col5, not covered in 1 or 2.
update junk set col5=500 where col1=100;

SQL> create table junk(col1 number,col2 number,col3 number,col4 number,col5 number,col6 number);

Table created.

add trandata scott.junk, cols(col1,col2,col3),nokey

SQL> insert into junk values(100,100,100,100,100,100);

1 row created.

SQL> commit;

Commit complete.


SQL> update junk set col5=500 where col1=100;

1 row updated.

SQL> commit;

Commit complete.

logdump as always
---------------------

Logdump 794 >n
___________________________________________________________________ 
Hdr-Ind    :     E  (x45)     Partition  :     .  (x04)  
UndoFlag   :     .  (x00)     BeforeAfter:     A  (x41)  
RecLength  :    44  (x002c)   IO Time    : 2014/08/09 17:17:47.000.000   
IOType     :    15  (x0f)     OrigNode   :   255  (xff) 
TransInd   :     .  (x03)     FormatType :     R  (x52) 
SyskeyLen  :     0  (x00)     Incomplete :     .  (x00) 
AuditRBA   :      10578       AuditPos   : 248909840 
Continued  :     N  (x00)     RecCount   :     1  (x01) 

2014/08/09 17:17:47.000.000 FieldComp            Len    44 RBA 3031 
Name: SCOTT.JUNK 
After  Image:                                             Partition 4   G  s   
 0000 0007 0000 0003 3130 3000 0100 0700 0000 0331 | ........100........1  
 3030 0002 0007 0000 0003 3130 3000 0400 0700 0000 | 00........100.......  
 0335 3030                                         | .500  
Column     0 (x0000), Len     7 (x0007)  
 0000 0003 3130 30                                 | ....100  
Column     1 (x0001), Len     7 (x0007)  
 0000 0003 3130 30                                 | ....100  
Column     2 (x0002), Len     7 (x0007)  
 0000 0003 3130 30                                 | ....100  
Column     4 (x0004), Len     7 (x0007)  
 0000 0003 3530 30                                 | ....500  
  
GGS tokens: 
TokenID x52 'R' ORAROWID         Info x00  Length   20 
 4141 4155 476e 4141 4641 4141 414c 4e41 4141 0001 | AAAUGnAAFAAAALNAAA..  
TokenID x4c 'L' LOGCSN           Info x00  Length    9 
 3539 3131 3133 3834 32                            | 591113842  
TokenID x36 '6' TRANID           Info x00  Length   12 
 3231 2e33 302e 3138 3735 3938                     | 21.30.187598  

Very interesting!
Because KEYCOLS is mentioned in the parameter file, we see a FieldComp instead of a GGSPKUpdate operation (otherwise, I am sure we would see a GGSPKUpdate in the absence of KEYCOLS). Unfortunately, col4, which is set in KEYCOLS, is missing from the trail file, as the supplemental logging only covered col1, col2 and col3; and of course we see col5, as that was the column updated.
So please, please ensure that your supplemental logging covers the KEYCOLS.
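A sketch of fixing that mismatch in this experiment, reusing the same GGSCI trandata syntax shown earlier (the column list below is illustrative; it simply extends logging to cover the KEYCOLS column col4):

```
delete trandata scott.junk
add trandata scott.junk, cols(col1,col2,col3,col4), nokey
info trandata scott.junk
```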

Also, KEYCOLS or a PK/UK decides whether the update is written as a FieldComp or a GGSPKUpdate.

As always, we see the REPLICAT abended on the target, and I guess you all know what to do (if you have been reading my previous test cases).


Here are the golden rules on how the replicat constructs the where condition of the update,
based on whether it encounters a FieldComp or a GGSPKUpdate record.


GGSPKUpdate


A GGSPKUpdate record has both before and after images in the same record.
It is created only if the source has no PK/UK and no KEYCOLS defined in the extract.

Rules - (1) If the target has a PK/UK or KEYCOLS defined in the replicat, it will use those columns
for the where condition (if both are available, I am assuming KEYCOLS takes precedence over the PK/UK).
The before image in the GGSPKUpdate must then contain those columns; otherwise the replicat will abend, as it does not have enough information in the trail file to construct the where clause.


(2) If the target has no PK/UK and no KEYCOLS defined in the replicat -
the before image of the GGSPKUpdate must contain all the table columns; otherwise the replicat will abend, as it does not have enough information in the trail file to construct the where clause.



FieldComp

A FieldComp record has no before image; it has only an after image, as the basic assumption is that the PK is not changed. It is created only if the source has a PK/UK or has KEYCOLS defined in the extract parameter file.

Rules -
(1) If the target has a PK/UK or KEYCOLS defined in the replicat, it will use those columns for the where condition.
If both are available, I am assuming KEYCOLS takes precedence over the PK/UK.
The after image must then cover those columns; if not, the replicat will abend.
If it does cover them, the replicat uses them in the where condition and uses the rest of the columns in the set clause.

(2) If the target has no PK/UK and no KEYCOLS defined in the replicat -
the after image of the FieldComp must contain all the table columns; otherwise the replicat will abend.
Assuming the after image has all the columns, the replicat uses all of them for the where condition, and since it cannot reuse those columns in the set clause, the update effectively becomes

update <table name> set <nothing> where <column list from goldengate>;


When the above case is encountered, we get the ALLOWNOOPUPDATES error.

2014-08-09 20:58:36  INFO    OGG-00987  Oracle GoldenGate Command Interpreter for Oracle:  GGSCI command (oracle): info all.
2014-08-09 20:58:58  ERROR   OGG-01168  Oracle GoldenGate Delivery for Oracle, rphvn1d.prm:  Encountered an update for target table SCOTT1.JUNK, which has no unique key defined.  KEYCOLS can be used to define a key.  Use ALLOWNOOPUPDATES to process the update without applying it to the target database.  Use APPLYNOOPUPDATES to force the update to be applied using all columns in both the SET and WHERE clause.

2014-08-09 20:59:08  ERROR   OGG-01668  Oracle GoldenGate Delivery for Oracle, rphvn1d.prm:  PROCESS ABENDING.







Part 2 - Goldengate supplemental logging, keycols, logallsupcols test case

This is a continuation of my series of findings, and I recommend that you start from the first post
http://gauthamc.blogspot.com/2014/08/goldengate-supplemental-logging-keycols.html to understand my environment better.

TEST CASE 2
===========

This test case is pretty similar to test case 1, except that I have modified the extract parameter file to add LOGALLSUPCOLS.

Otherwise the setup is exactly the same as test case 1.


You might want to read more about LOGALLSUPCOLS (an extract parameter), which is a GG 12c feature and seems to be mandatory if you are using an INTEGRATED replicat (again a 12c feature) with parallelism > 1 (which should be true in most cases). Here we are going to look at what changes in the trail file when this parameter is set.


So here it goes again, the same steps as in test case 1.


--extract parameter file

..
..

DDL include Mapped

LOGALLSUPCOLS
table SCOTT.junk;

sequence SCOTT.*;

--end of extract parameter file

--create table in source with no PK/UK defined

SQL> create table junk(col1 number,col2 number,col3 number,col4 number,col5 number,col6 number);


Table created.

--make sure the table DDL has moved to target

SQL> select * from junk;


no rows selected

--set supplemental logging for col1,col2 and col3


GGSCI (titan.gautham.com) 140> add trandata scott.junk, cols(col1,col2,col3),nokey


2014-08-09 15:52:09  WARNING OGG-06439  No unique key is defined for table JUNK. All viable columns will be used to represent the key, but may not guarantee uniqueness. KEYCOLS may be used to define the key.


Logging of supplemental redo data enabled for table SCOTT.JUNK.


TRANDATA for scheduling columns has been added on table 'SCOTT.JUNK'.


--verify


GGSCI (titan.gautham.com) 142> info trandata scott.junk


Logging of supplemental redo log data is enabled for table SCOTT.JUNK.



Columns supplementally logged for table SCOTT.JUNK: COL1, COL2, COL3.

--perform an insert on source

SQL> insert into junk values(100,100,100,100,100,100);

1 row created.


SQL> commit;



Commit complete.

--verify insert has travelled to target

SQL> select * from junk;

      COL1       COL2       COL3       COL4       COL5       COL6

---------- ---------- ---------- ---------- ---------- ----------

       100        100        100        100        100        100


--perform an update of col4 on the source. Remember, only col1, col2 and col3 are added for supplemental logging


SQL> update junk set col4=500 where col1=100;


1 row updated.


SQL> commit;



Commit complete.

Look at the trailfile from logdump for the update

--------------------------------------------------------------------
The interesting thing is that you will see 2 records in the trail file for the above update:
a FieldComp (before image) followed by a GGSPKUpdate (before/after image).

It does look like LOGALLSUPCOLS is responsible for the first FieldComp record.
From my interactions with Oracle Support, an update operation always appears as either a FieldComp or a GGSPKUpdate (when there is a PK update). In most practical situations, where a table always has a PK and logging is set up either on the PK or on all columns, I have only seen a FieldComp when an update operation takes place.
I am assuming that if the source had a PK or KEYCOLS defined in the parameter file (assuming supplemental logging covered the PK or the KEYCOLS), we would see the GGSPKUpdate replaced with a FieldComp. That will be my test case 3.

Logdump 752 >n

___________________________________________________________________ 
Hdr-Ind    :     E  (x45)     Partition  :     .  (x04)  
UndoFlag   :     .  (x00)     BeforeAfter:     B  (x42)  
RecLength  :    44  (x002c)   IO Time    : 2014/08/09 16:17:18.000.000   
IOType     :    15  (x0f)     OrigNode   :   255  (xff) 
TransInd   :     .  (x00)     FormatType :     R  (x52) 
SyskeyLen  :     0  (x00)     Incomplete :     .  (x00) 
AuditRBA   :      10578       AuditPos   : 243605008 
Continued  :     N  (x00)     RecCount   :     1  (x01) 

2014/08/09 16:17:18.000.000 FieldComp            Len    44 RBA 8711 

Name: SCOTT.JUNK 
Before Image:                                             Partition 4   G  b   
 0000 0007 0000 0003 3130 3000 0100 0700 0000 0331 | ........100........1  
 3030 0002 0007 0000 0003 3130 3000 0300 0700 0000 | 00........100.......  
 0331 3030                                         | .100  
Column     0 (x0000), Len     7 (x0007)  
 0000 0003 3130 30                                 | ....100  
Column     1 (x0001), Len     7 (x0007)  
 0000 0003 3130 30                                 | ....100  
Column     2 (x0002), Len     7 (x0007)  
 0000 0003 3130 30                                 | ....100  
Column     3 (x0003), Len     7 (x0007)  
 0000 0003 3130 30                                 | ....100  
  
GGS tokens: 
TokenID x52 'R' ORAROWID         Info x00  Length   20 
 4141 4155 4763 4141 4641 4141 414b 4641 4141 0001 | AAAUGcAAFAAAAKFAAA..  
TokenID x4c 'L' LOGCSN           Info x00  Length    9 
 3539 3131 3038 3032 37                            | 591108027  
TokenID x36 '6' TRANID           Info x00  Length   12 
 3130 2e32 392e 3637 3134 3533                     | 10.29.671453  
   
Important findings from the above record -
As expected, we see col1, col2, and col3 because of supplemental logging.
Since we updated col4 (the 4th column of the table, or column 3 in GoldenGate, as GG numbers columns from 0),
the record also stores the before image of col4, which is 100.

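To make the hex dump above easier to follow, here is a minimal sketch that decodes the 44-byte FieldComp before image. The layout is my reading of the dump, not a documented Oracle format: each column appears to be a 2-byte column index, a 2-byte length, then the value, where the value itself carries a 4-byte header before the ASCII digits.

```python
def parse_columns(hex_str):
    """Decode logdump column data (layout assumed from the dump above)."""
    buf = bytes.fromhex(hex_str.replace(" ", ""))
    cols, pos = {}, 0
    while pos < len(buf):
        idx = int.from_bytes(buf[pos:pos + 2], "big")         # column index
        length = int.from_bytes(buf[pos + 2:pos + 4], "big")  # value length
        value = buf[pos + 4:pos + 4 + length]
        cols[idx] = value[4:].decode("ascii")                 # skip the 4-byte value header
        pos += 4 + length
    return cols

# The 44-byte before image from the FieldComp record above
before_image = (
    "0000 0007 0000 0003 3130 30"   # column 0 (COL1) = 100
    "0001 0007 0000 0003 3130 30"   # column 1 (COL2) = 100
    "0002 0007 0000 0003 3130 30"   # column 2 (COL3) = 100
    "0003 0007 0000 0003 3130 30"   # column 3 (COL4) = 100, the before image
)
print(parse_columns(before_image))  # -> {0: '100', 1: '100', 2: '100', 3: '100'}
```

The hex digits 31 30 30 at the end of each value are simply ASCII "100", which matches the pipe-separated text column logdump prints on the right.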



Logdump 753 >n

___________________________________________________________________ 
Hdr-Ind    :     E  (x45)     Partition  :     .  (x04)  
UndoFlag   :     .  (x00)     BeforeAfter:     A  (x41)  
RecLength  :    90  (x005a)   IO Time    : 2014/08/09 16:17:18.000.000   
IOType     :   115  (x73)     OrigNode   :   255  (xff) 
TransInd   :     .  (x02)     FormatType :     R  (x52) 
SyskeyLen  :     0  (x00)     Incomplete :     .  (x00) 
AuditRBA   :      10578       AuditPos   : 243605008 
Continued  :     N  (x00)     RecCount   :     1  (x01) 

2014/08/09 16:17:18.000.000 GGSPKUpdate          Len    90 RBA 8873 

Name: SCOTT.JUNK 
After  Image:                                             Partition 4   G  e   
 002c 0000 0007 0000 0003 3130 3000 0100 0700 0000 | .,........100.......  
 0331 3030 0002 0007 0000 0003 3130 3000 0300 0700 | .100........100.....  
 0000 0331 3030 0000 0007 0000 0003 3130 3000 0100 | ...100........100...  
 0700 0000 0331 3030 0002 0007 0000 0003 3130 3000 | .....100........100.  
 0300 0700 0000 0335 3030                          | .......500  
Before Image          Len    46 (x0000002e) 
KeyLen    44 (x0000002c) 
KeyCol     0 (x0000), Len     7 (x0007)  
 0000 0003 3130 30                                 | ....100  
KeyCol     1 (x0001), Len     7 (x0007)  
 0000 0003 3130 30                                 | ....100  
KeyCol     2 (x0002), Len     7 (x0007)  
 0000 0003 3130 30                                 | ....100  
KeyCol     3 (x0003), Len     7 (x0007)  
 0000 0003 3130 30                                 | ....100  

After Image           Len    44 (x0000002c) 

Column     0 (x0000), Len     7 (x0007)  
 0000 0003 3130 30                                 | ....100  
Column     1 (x0001), Len     7 (x0007)  
 0000 0003 3130 30                                 | ....100  
Column     2 (x0002), Len     7 (x0007)  
 0000 0003 3130 30                                 | ....100  
Column     3 (x0003), Len     7 (x0007)  
 0000 0003 3530 30                                 | ....500  
  
GGS tokens: 
TokenID x52 'R' ORAROWID         Info x00  Length   20 
 4141 4155 4763 4141 4641 4141 414b 4641 4141 0001 | AAAUGcAAFAAAAKFAAA..  
   
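The same decoding idea extends to the GGSPKUpdate record: its first two bytes hold the key length (x002c = 44 here, matching KeyLen in the dump), followed by the key (before-image) columns and then the after-image columns. Again a hedged sketch based on my reading of the dump, not a documented format:

```python
def parse_ggspkupdate(hex_str):
    """Split a GGSPKUpdate body into key (before) and after-image columns."""
    buf = bytes.fromhex(hex_str.replace(" ", ""))
    keylen = int.from_bytes(buf[0:2], "big")  # x002c = 44 in the record above

    def parse_columns(chunk):
        cols, pos = {}, 0
        while pos < len(chunk):
            idx = int.from_bytes(chunk[pos:pos + 2], "big")
            length = int.from_bytes(chunk[pos + 2:pos + 4], "big")
            cols[idx] = chunk[pos + 8:pos + 4 + length].decode("ascii")
            pos += 4 + length
        return cols

    key = parse_columns(buf[2:2 + keylen])    # key / before-image columns
    after = parse_columns(buf[2 + keylen:])   # after-image columns
    return key, after

# The 90-byte GGSPKUpdate body from the dump above (RecLength 90)
record = (
    "002c"
    "0000 0007 0000 0003 3130 30" "0001 0007 0000 0003 3130 30"
    "0002 0007 0000 0003 3130 30" "0003 0007 0000 0003 3130 30"
    "0000 0007 0000 0003 3130 30" "0001 0007 0000 0003 3130 30"
    "0002 0007 0000 0003 3130 30" "0003 0007 0000 0003 3530 30"
)
key, after = parse_ggspkupdate(record)
print(key[3], "->", after[3])  # -> 100 -> 500
```

Note how col4 (column 3 to GG) appears in the key portion with its before image 100, because all viable columns are being used as the key, and in the after image with the new value 500.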
As with test case 1, the replicat abended here too -
REPLICAT    ABENDED     RPHVN1D     00:00:00      00:02:35    

report file

---------------
...
...
2014-08-09 16:16:54  WARNING OGG-06439  No unique key is defined for table JUNK. All viable columns will be used to represent the key, but may not guarantee uniqueness. KEYCOLS may be used to define the key.
Using following columns in default map by name:
  COL1, COL2, COL3, COL4, COL5, COL6

Using the following key columns for target table SCOTT1.JUNK: COL1, COL2, COL3, COL4, COL5, COL6.

..
..

It does seem like we hit the same abend as test case 1, and we can fix this issue either by putting KEYCOLS in the replicat parameter file or by simply adding a PK (on the target table) on one of the columns col1, col2, col3, or col4.

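For the KEYCOLS route, the change is one clause in the replicat MAP statement. A sketch only - the replicat name matches the one above, but the USERID credentials and the rest of the parameter file are placeholders, not taken from this test setup:

```
-- hypothetical replicat parameter file sketch; credentials are placeholders
REPLICAT rphvn1d
USERID gguser, PASSWORD ggpassword
ASSUMETARGETDEFS
MAP SCOTT.JUNK, TARGET SCOTT1.JUNK, KEYCOLS (COL1, COL2, COL3);
```

KEYCOLS here tells the replicat which target columns to build the WHERE clause on, so it stops trying to match on every column including the changed col4.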

Just out of curiosity, instead of adding a PK on col1, col2, or col3 (the unchanged columns), let me add the PK on the changed column, col4.


SQL> alter table junk add constraint pk_junk primary key(col4);



Table altered.

--restart the replicat and check the target


SQL> select * from junk;


      COL1       COL2       COL3       COL4       COL5       COL6

---------- ---------- ---------- ---------- ---------- ----------

       100        100        100        500        100        100


Looks like we are all good now.