Friday, December 23, 2011

Golden Gate RMAN Interaction Causing RMAN-08137

**6/3/2013 Update: The following applies to Classic Capture. Things have changed a bit for Integrated Capture; I hope to address that in a new blog post.**
We are test-running GG on our development instance, and for that we had to put the database in ARCHIVELOG mode.
Until this incident happened, I used to think the only reason the database needed to be in ARCHIVELOG mode was so that GoldenGate could read the archived log files if it lagged behind the online redo logs.

We ran an RMAN job via cron every 4 hours which would simply execute the following command:

delete noprompt archivelog all;

We did not back up the archive logs since this was a DEV instance; the database was in ARCHIVELOG mode only for the GG capture process. We were also aware that GG would prevent deletion of any archive logs it still had to read (if, because of a lag, it could not keep pace with the redo generation of the database).

Lately the archive destination had been filling to 100%, and when we ran the job manually we would see archive files as old as 8-10 hours still sitting in the destination.
The RMAN delete would also fail for those files, complaining as follows:

RMAN> delete noprompt archivelog all;

using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=8 device type=DISK
RMAN-08137: WARNING: archived log not deleted, needed for standby or upstream capture process
archived log file name=/oracle/arch/test123/arch/1_3071_767059108.dbf thread=1 sequence=3071
..
..
..
..
..
Then we checked which redo log the capture process was currently reading (info ext <ext name>, detail), and it showed the sequence number of the current online redo log file; in other words, the extract was not lagging.

So the big question was:

WHY IS GOLDEN GATE PREVENTING ARCHIVE LOGS FROM BEING DELETED IF IT IS NOT LAGGING BEHIND?

So today I am going to share a few things I learnt while working with Oracle Support.

The first thing I came to know was that the capture process writes only committed transactions into a trail file. This turned out to be the root cause of the issue.

So technically, if it is 3pm now and there is an active transaction that has been running since 7am, all the changes between 7am and 3pm have to be recorded somewhere, right? They obviously cannot go into the trail file, as only committed transactions get into the trail file (I am still not sure why it was architected this way; any thoughts from anybody?).

So it looks like the GoldenGate capture process stores the details of a transaction in memory, and it is written to the trail file only when committed. There is also something called Bounded Recovery (BR), whose interval is set to 4 hours by default. That means if an active transaction runs beyond 4 hours, the transaction is copied from memory into a BR file under the $GG_HOME/BR/<extract name> directory.

This interval can be changed by putting the BR parameter into the extract parameter file.
The first line below sets it to 1 hour, the second to 20 minutes:

BR BRINTERVAL 1H
or
BR BRINTERVAL 20M

Now assume BR is at the default 4 hours, and somebody stops the extract process at, say, 7pm; as of 7pm there are many active transactions, the oldest of which started at 8am.

Without the BR feature, when the extract is restarted it has to read redo information starting from 8am, which would most probably already be archived. The extract simply will not start if it cannot access that archived file. This seems to be the most important reason why a database must be in ARCHIVELOG mode for the GG capture process to function reliably.

Please note that even reducing BR down to 1 hour or so is not going to resolve the RMAN-08137 issue.

The golden rule is:
However old an archive file is, if it contains a record from a still-active transaction, it cannot be deleted. Neither can any of the subsequent archive log files.



For example, if you wrote code like

BEGIN
  insert into t1 values (...);    -- redo sequence is 3500 on day 1
  dbms_lock.sleep(15*24*60*60);   -- sleep for 15 days (the argument is seconds)
  insert into t2 values (...);    -- redo sequence is 6000 on day 16
END;
/

RMAN will be unable to delete anything from sequence 3500 onwards, even though sequences 3501..5999 have nothing to do with this transaction (I am testing this out next week and will update if I see any anomalies).
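Before deciding how to deal with this, it helps to see how far back the capture process actually needs redo. A quick sketch (assuming the extract is registered for log retention, so it appears in DBA_CAPTURE) maps the capture's required checkpoint SCN to the oldest archive sequence RMAN must keep:

select c.capture_name, c.required_checkpoint_scn,
       min(a.sequence#) oldest_needed_seq
from dba_capture c, v$archived_log a
where a.next_change# > c.required_checkpoint_scn
group by c.capture_name, c.required_checkpoint_scn;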

And since it is not practical to ask people to commit their transactions frequently, I am inclined to decouple extract and RMAN:

ggsci> stop ext <extract name>
ggsci> dblogin userid <id> password <pw>
ggsci> UNREGISTER EXTRACT <extract name> LOGRETENTION

and then adding the following parameter to the extract parameter file:

TRANLOGOPTIONS LOGRETENTION DISABLED

Again, this could lead to other issues, like the extract abending because it had to read from archive logs (it was lagging) and could not find them since RMAN had already deleted them.

Therefore a good compromise is to strike a balance by only deleting archive log files older than, say, sysdate - 4 hours.
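For example, the cron job could run something like this to keep a four-hour safety window on disk (adjust the window to how far your extract typically lags):

RMAN> delete noprompt archivelog all completed before 'sysdate - 4/24';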
Let me know if anybody has found a better way than decoupling extract and RMAN.

*********IMPORTANT 6/7/2012 UPDATE ***************
PLEASE NOTE THAT THE DEFAULT BEHAVIOUR HAS CHANGED SINCE GOLDEN GATE VERSION 11.1.1.1.2. FROM THAT VERSION ON, GG AND RMAN ARE DE-COUPLED BY DEFAULT AND THEREFORE YOU HAVE TO COUPLE THEM MANUALLY (IF YOU WANT THEM TO INTERACT WITH EACH OTHER). A GOOD WAY TO CHECK IS TO QUERY DBA_CAPTURE.
HERE IS A STATEMENT FROM ORACLE SUPPORT

                            ++++++++
I have checked again and the fix for Bug 12648838 went into 11.1.1.1.2 (the note I looked at is not correct), so the default behaviour is to not create an implicit capture process unless you have explicitly specified the LOGRETENTION parameter. In your case you have not, therefore you do not have one.

So what you observe is correct in your version.


                         +++++++++++++++ 
*************************************************************************** 

A few useful commands

To check the time of the oldest transaction
============================

select vt.xidusn, vt.xidslot, vt.xidsqn, vs.sid, vs.username, vs.program, vs.machine,
       vt.used_ublk, vt.start_time,
       to_char(sysdate, 'MM-DD-YYYY HH24:MI:SS') current_time
from   gv$transaction vt, gv$session vs
where  vt.addr = vs.taddr
order  by vt.start_time;

To find the oldest dependent archive file
============================

send <extract name>, showtrans

Decouple RMAN and GoldenGate
===========================================

ggsci> stop ext <extract name>

Modify the extract param file as follows

Extract XTATHDV
----------------------------------------------------
--Extract Process for testdb Database
--List of Schemas
--SCOTT
----------------------------------------------------
SETENV (ORACLE_HOME="/oracle/software/rdbms/11.2.0.3")
SETENV (ORACLE_SID="testdb")
userid ggddlusr, password xxxx
TRANLOGOPTIONS ASMUSER "sys@asm",asmpassword "xxx" LOGRETENTION DISABLED
exttrail /oracle/goldengatedata/dirdat/at
Reportcount every 30 Minutes, Rate
Report at 11:00
ReportRollover at 11:15
DiscardFile /oracle/goldengatedata/dirrpt/xtathdv.dsc, Append
DiscardRollover at 06:00 ON MONDAY
DDL include Mapped
table SCOTT.*;

Make a backup of the checkpoint file (./dirchk/<extract>.cpe)

ggsci> dblogin userid <id> password <pw>
ggsci> UNREGISTER EXTRACT <extract name> LOGRETENTION

If the above returns that the extract was not registered, you can run the following to see whether the extract still has an entry in the dba_capture view that has been disconnected from Oracle GoldenGate.

select capture_name, queue_owner, capture_user, start_scn, status from dba_capture;

This entry could be stopping RMAN from removing the archives. To remove the entry, use the following command:

exec DBMS_CAPTURE_ADM.DROP_CAPTURE ('OGG$_LGHTYC15666903B');

ggsci> start ext <extract name>

 Couple GG and RMAN
=======================

Please note that from GG version 11.1.1.1.2 onwards, GG and RMAN are decoupled by default.
So if you want to couple them, here are the steps:



ggsci> stop ext <extract name>

Modify the extract param file as follows

Extract XTATHDV
----------------------------------------------------
--Extract Process for ATHENADV Database
--List of Schemas
--PUBS
----------------------------------------------------
SETENV (ORACLE_HOME="/oracle/software/rdbms/11.2.0.3")
SETENV (ORACLE_SID="xxx")
userid ggddlusr, password xxx
TRANLOGOPTIONS ASMUSER "sys@asm",asmpassword "xxx" LOGRETENTION ENABLED
exttrail /oracle/goldengatedata/dirdat/at
Reportcount every 30 Minutes, Rate
Report at 11:00
ReportRollover at 11:15
DiscardFile /oracle/goldengatedata/dirrpt/xtathdv.dsc, Append
DiscardRollover at 06:00 ON MONDAY
DDL include Mapped
table PUBS.*;

Make a backup of the checkpoint file (./dirchk/<extract>.cpe)

ggsci> dblogin userid <id> password <pw>
ggsci> REGISTER EXTRACT <extract name> LOGRETENTION

Wednesday, November 16, 2011

Change refresh schedules for an Oracle Materialized View (MV)

Here is an automated script to change the refresh schedule for all MVs in a particular schema (in our case the schema name is GGAMDADM).
Please run this as a DBA user (or change dba_mviews to user_mviews).

The basic syntax is

ALTER MATERIALIZED VIEW abc
     REFRESH COMPLETE
     START WITH trunc(sysdate+1) +(10/24)
     NEXT case when to_char( sysdate,'hh24' ) between '10' and '11' then trunc(sysdate)+(14/24) else trunc(sysdate+1)+(10/24) end
/

The above command refreshes the MV at 10am and 2pm: when the 10am refresh runs, TO_CHAR(SYSDATE,'hh24') falls between '10' and '11', so NEXT evaluates to 2pm the same day; when the 2pm refresh runs, NEXT evaluates to 10am the next day.





declare
str2 varchar2(4000);
cursor c1
is select mview_name from dba_mviews where owner='GGAMDADM';
begin
for c2 in c1
loop

str2:= 'ALTER MATERIALIZED VIEW ggamdadm.'|| c2.mview_name||
     ' REFRESH COMPLETE
     START WITH trunc(sysdate+1) +(10/24)
     NEXT case when to_char( sysdate,''hh24'' ) between ''10'' and ''11'' then trunc(sysdate)+(14/24) else trunc(sysdate+1)+(10/24) end';
dbms_output.put_line(str2);
execute immediate str2;
end loop;
end;
/
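After the loop runs, you can sanity-check the new schedule. MVs scheduled with START WITH/NEXT are driven by classic job-queue jobs (assuming the old DBMS_JOB queue, which is what these refreshes use), so something like this shows each MV refresh job's next run time and interval:

select job, next_date, interval, what
from dba_jobs
where what like '%dbms_refresh.refresh%';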
  

Friday, September 30, 2011

Golden Gate Database Setup Part 2

In the first post, we did a bit of work setting up the schemas for GG replication on both the source and target databases.
Now we have to do a few things at the source database.

Connect as sqlplus '/ as sysdba'

--enable supplemental logging
alter database add supplemental log data
/
--switch logfile so that the next logfile has supplemental logging
alter system switch logfile
/
--check whether supplemental logging was really enabled (in case you did not read the output of step 1)
select supplemental_log_data_min from v$database
/

Now the fun part: log in to GoldenGate on the source side
./ggsci


dblogin userid ggddlusr,password welcome123

add schematrandata SCOTT

The above enables logging at the schema level and makes your life easier compared to ADD TRANDATA <table name>, as new tables in the SCOTT schema are picked up automatically.
By the way, the above command is supported on 11gR2 and needs PATCH 10423000.
To verify whether the above command worked
select * from table(logmnr$always_suplog_columns('owner in upper case','table in upper case'))

Or use the script I have put together below, which checks every table in a schema:


set serveroutput on

declare
l_schema varchar2(20) := upper('SCOTT') ;
l_table_name varchar2(200);
x_ctr number(5);
x_refcur sys_refcursor;
begin
open x_refcur for 'select table_name from dba_tables where owner='''||l_schema||''' order by table_name';
loop
fetch x_refcur into l_table_name;
exit when x_refcur%NOTFOUND;

execute immediate 'select count(*) from table(logmnr$always_suplog_columns(:1,:2))'
into x_ctr using l_schema,l_table_name ;
if x_ctr =0 then
dbms_output.put_line('Analyzing  '||l_table_name||' ..**ERROR**');
else
dbms_output.put_line('Analyzing  '||l_table_name||' ..OK');
end if;

end loop;
close x_refcur;
end;
/



As of now, this patch is not available on our platform (HP-UX Itanium) and we still have to use ADD TRANDATA, which is very painful.

But here is the script to generate the script

set echo off
set verify off
set pagesize 2000
set linesize 250
set trim on
set heading off
set feedback off
spool /tmp/scott.obey
select 'add trandata SCOTT.'||table_name
from dba_tables where owner = 'SCOTT' ;
spool off


Once the above file is generated, go back into ggsci and type

obey /tmp/scott.obey

By the way, a small note on the above step:

1. ADD TRANDATA does something like: alter table <name> add supplemental log group ggs_<table name>_<object id> (comma-separated columns) ALWAYS.
This is just to give you an idea; the actual "alter table" SQL executed behind the scenes might differ depending on the DB version etc.
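For instance, for a table SCOTT.EMP it would run something like the following (a purely illustrative example; the generated group name and column list will differ on your system):

alter table scott.emp add supplemental log group ggs_emp_74523 (empno) always;

You can then see what was actually created by querying DBA_LOG_GROUPS and DBA_LOG_GROUP_COLUMNS for that table.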

2. To drop table-level supplemental logging you can do it either from ggsci or from sqlplus:

ggsci> dblogin userid <id> password <pw>
ggsci> info trandata schema.table_name
ggsci> DELETE TRANDATA schema.table_name

or

sql> select * from dba_log_groups where table_name=upper('&table_name') and owner=upper('&owner');
sql> alter table <table name> drop supplemental log group <log group name>;

By the way, if you want to add table-level logging for all columns, you use 'supplemental log data' instead of 'supplemental log group':

ALTER TABLE tab1 ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

The valid options are ALL, PRIMARY, UNIQUE and FOREIGN KEY.
The above SQL creates a supplemental log group starting with SYS_C instead of GGS_xxxx.

To drop, you can use
ALTER TABLE tab1 DROP SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
OR
ALTER TABLE tab1 DROP SUPPLEMENTAL LOG GROUP SYS_Cxxxxxx;



**PLEASE NOTE: IF YOU USED ADD SCHEMATRANDATA THERE WILL BE NO ENTRIES IN DBA_LOG_GROUPS. YOU WILL HAVE TO USE THE QUERY BELOW INSTEAD**

select * from table(logmnr$always_suplog_columns('owner in upper case','table in upper case'))


We are now all done; on to the next step.


Golden Gate Schema Setup Part 1

The idea behind this post is not simply to write one more post about GG schema setup. There is a ton of information out there, from the official Oracle documentation (which is very good) to some excellent blogs, especially the Gavinsoorma blogs, from which I learnt a lot myself.

This post is just to fill the gaps in between, and I still suggest you go to the other resources if you are a newbie.

To be honest, I like to keep things simple and prefer to run a standard script at both the source and destination, even though a few privileges are specific to the source and some to the destination.

There are a few things which I do differently. One of them is creating a checkpoint table (obviously at the destination) for each REPLICAT process instead of a common checkpoint table per destination. This is just to ensure that one DBA does not mess up the checkpoint information of another DBA in environments where there is a common reporting database and multiple people set up their own replicat processes.


Without further ado, let me jump into the gory syntax stuff.

COMMON STUFF AT BOTH SOURCE AND DESTINATION
=============================================

create tablespace ggdata          
datafile '+DATA' size 100m
autoextend on maxsize 4096m
extent management local uniform size 1m
/
create user ggddlusr identified by welcome123
/

alter user ggddlusr
default tablespace ggdata
temporary tablespace temp
quota unlimited on ggdata
/
grant dba to ggddlusr
/

--The below is needed at the source only - read my note below
--but I run it at source and target - keep-things-simple policy.
exec dbms_goldengate_auth.grant_admin_privilege('GGDDLUSR')

That is it. I just skip stuff like execute on flashback_database etc.
I just grant the DBA privilege and that is all. As I said, I keep things simple, since you wouldn't expect anybody other than the DBA team to use ggddlusr.

Also please note that:

Recovery Manager (RMAN) works with Extract to retain the archive logs that Extract needs for recovery. The special privileges are required for interaction with an underlying Oracle Streams capture process and with RMAN.

For this to happen, you need to do the following at the source, depending on the Oracle version:

Oracle EE 10.2:
1. Run the package to grant the Oracle Streams admin privilege:
exec dbms_streams_auth.grant_admin_privilege('<user>')
2. Grant INSERT on logmnr_restart_ckpt$:
grant insert on system.logmnr_restart_ckpt$ to <user>;
3. Grant UPDATE on streams$_capture_process:
grant update on sys.streams$_capture_process to <user>;
4. Grant the 'become user' privilege:
grant become user to <user>;

Oracle EE 11.1 and 11.2.0.1:
1. Run the package to grant the Oracle Streams admin privilege:
exec dbms_streams_auth.grant_admin_privilege('<user>')
2. Grant the 'become user' privilege:
grant become user to <user>;

Oracle EE 11.2.0.2 and later:
Run the package to grant the Oracle Streams admin privilege:
exec dbms_goldengate_auth.grant_admin_privilege('<user>');

Since I am running the above on the 11.2.0.2 release, I just had to run

exec dbms_goldengate_auth.grant_admin_privilege('GGDDLUSR')

Please make sure that you do what is appropriate for your version.







Now go to the GoldenGate home, open the file called GLOBALS (if it is not there, feel free to create a new file called GLOBALS) and put in the following entry:

GGSCHEMA <schema name>

In our case the <schema name> is GGDDLUSR.
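So, concretely, our GLOBALS file contains the single line:

GGSCHEMA GGDDLUSR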






Now that we have the common user GGDDLUSR set up in both source and target, let us go into some source- and target-specific actions.


STEPS AT SOURCE DATABASE
==========================
This step adds DDL support to make sure that DDL changes like new tables and modified tables (ALTER TABLE...) at the source are pushed to the destination automatically. This is a common requirement for all my replication needs; nowadays I just do it as a matter of habit, and I recommend you do it too.
Just go to the GoldenGate home directory, invoke

sqlplus '/ as sysdba'

and run the following:

--SETUP DDL USER
@marker_setup.sql
--when prompted type in ggddlusr
@ddl_setup.sql
--when prompted type in ggddlusr AND INITIALSETUP
@role_setup.sql
--when prompted type in ggddlusr
grant GGS_GGSUSER_ROLE to ggddlusr
/
@ddl_enable.sql
@ddl_pin ggddlusr

It is a very good idea to check whether all the above commands really set up DDL replication by running the following script:

@ddl_status.sql

Whew! We are almost done, except that we will create a checkpoint table at our target database so that REPLICAT can use it to know where it left off.


STEPS AT TARGET DATABASE
=========================

Login to GoldenGate by typing ggsci and do the following:

dblogin userid ggddlusr,password welcome123
add checkpointtable ggddlusr.ckptpubs

Let me jump ahead and show the command that tells the replicat process to use the above checkpoint table:

add replicat pubsrep, exttrail /oracle/software/goldengate/11.1.1.1/dirdat/cc, CHECKPOINTTABLE ggddlusr.ckptpubs

There you go! We are all done, except that we still have to prepare the source database by adding supplemental logging and a few other things.

I am going to discuss that in the next post. Hope you find things easy.

Monday, April 4, 2011

Automate Startup and Shutdown of Oracle Database on Linux Machines

This post explains the scripts I use to start up and shut down a database server.
I don't use the Oracle-provided scripts under $ORACLE_HOME/bin (dbstart/dbshut); I do it completely on my own.
The following script can be used to start an Oracle database, a listener (provided the listener name is the same as the Oracle DB name) and a management agent. The script does not start or shut down an ASM instance.
If anybody wants one, I will write a script for you. Please feel free to email me, as I will gather some extra motivation if somebody contacts me :-) .

But before using my 2 scripts, please make sure the following 2 files are present:

1. /etc/oratab - This file should be familiar, as it is created during Oracle installation and the above-mentioned dbstart and dbshut use it. I find it convenient to use the same file. It is used to derive the ORACLE_SID and ORACLE_HOME environment variables.
A typical /etc/oratab looks like this:


orcl:/oracle/software/rdbms/10.2.0.4.0:Y -->entry for database and a listener with the same name
agent:/oracle/software/agent/10.1.0.5:N --> entry for oracle agent

Please note that I install the Management Agent on all my database servers and therefore create an entry for the agent Oracle home too. The commands to start and stop the agent are (the script already handles this, but I mention it for anyone who is not aware of it):

emctl start agent
emctl stop agent

Please note that even if you don't have an agent installed, you can still use the script below.

2. The presence of a script called sid under /usr/local/bin. Note that /usr/local/bin is in the default PATH of most unix users, so I prefer to put the sid script in this directory.

Here are the contents of the sid script:
$ cat /usr/local/bin/sid
#!/bin/sh
isthere=`cat /etc/oratab | grep -v ^#|grep -i $1|wc -l`
if  [ $isthere -eq 1 ] ; then

export TNS_ADMIN=/etc
export NLS_LANG=AMERICAN_AMERICA.UTF8
export ORACLE_SID=`cat /etc/oratab | grep -v ^#|grep -i $1|awk 'BEGIN{FS=":"} {print $1}'`
export ORACLE_HOME=`cat /etc/oratab | grep -v ^#|grep -i $1|awk 'BEGIN{FS=":"} {print $2}'`
export PATH=$ORACLE_HOME/bin:$PATH:/usr/local/bin:.
echo "Oracle SID set to $ORACLE_SID"
echo "Oracle Home set to $ORACLE_HOME"

elif [ $isthere -eq 0 ] ; then
echo "Unable to find any matching rows in /etc/oratab"
else
echo "Too many matching rows in /etc/oratab"
fi


The above script is invoked as
$ . /usr/local/bin/sid orcl
Oracle SID set to orcl
Oracle Home set to /oracle/software/rdbms/10.2.0.4.0
$ . /usr/local/bin/sid nothing
Unable to find any matching rows in /etc/oratab
$


After this we are all set to go to the main shell script. I name the script dbora. Please feel free to name it anything you want.

Do the following

1) Copy this file to the /etc/init.d directory as dbora
2) chgrp dba dbora
3) chmod 750 dbora
4) /sbin/chkconfig --add dbora
5) Create a directory called /oracle/admin/logs, as the script below writes to it. If you plan to change the directory, please edit DB_LOG_DEST in the shell script.
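To confirm the service got registered at the expected runlevels (assuming a sysvinit-style Linux where chkconfig is available), you can run:

/sbin/chkconfig --list dbora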


Here are the contents of the shell script

++++++++++++++++++++++++++++++++++++++++++++++++++++++
#!/bin/ksh
# chkconfig: 35 99 10
ORA_OWNER=oracle
DB_LOG_DEST="/oracle/admin/logs"


case "$1" in
'start')
rm /tmp/dbstart.sh 2>/dev/null

cat /etc/oratab|grep -v ^#|grep -v ^$|while read LINE ; do
AUTO_RS=`echo $LINE | awk 'BEGIN {FS=":"} {print $3}'`
#echo $AUTO_RS

if [ ${AUTO_RS} = "Y" ]; then



ORA_SID=`echo $LINE | awk 'BEGIN {FS=":"} {print $1}'`



if [ $ORA_SID = "agent" ]; then
echo ". /usr/local/bin/sid $ORA_SID" >>/tmp/dbstart.sh
echo  "emctl start agent" >>/tmp/dbstart.sh
else
echo ". /usr/local/bin/sid $ORA_SID" >>/tmp/dbstart.sh
echo "lsnrctl start $ORA_SID" >>/tmp/dbstart.sh
echo ". /usr/local/bin/sid $ORA_SID" >>/tmp/dbstart.sh
cat<<EOF1 >>/tmp/dbstart.sh
sqlplus "/ as sysdba" <<EOF
startup;
EOF
EOF1
fi



fi

done

echo "##################################">> $DB_LOG_DEST/DBSTART_STOP.log
echo "Starting up databases on `date`" >>   $DB_LOG_DEST/DBSTART_STOP.log
echo "##################################">> $DB_LOG_DEST/DBSTART_STOP.log
chown oracle:dba /tmp/dbstart.sh
chmod 700 /tmp/dbstart.sh
su - $ORA_OWNER -c /tmp/dbstart.sh >> $DB_LOG_DEST/DBSTART_STOP.log

;;
'stop')
rm /tmp/dbstart.sh 2>/dev/null

cat /etc/oratab|grep -v ^#|grep -v ^$|while read LINE ; do
AUTO_RS=`echo $LINE | awk 'BEGIN {FS=":"} {print $3}'`
#echo $AUTO_RS

if [ ${AUTO_RS} = "Y" ]; then



ORA_SID=`echo $LINE | awk 'BEGIN {FS=":"} {print $1}'`



if [ $ORA_SID = "agent" ]; then
echo ". /usr/local/bin/sid $ORA_SID" >>/tmp/dbstart.sh
echo  "emctl stop agent" >>/tmp/dbstart.sh
else
echo ". /usr/local/bin/sid $ORA_SID" >>/tmp/dbstart.sh
echo "lsnrctl stop $ORA_SID" >>/tmp/dbstart.sh
echo ". /usr/local/bin/sid $ORA_SID" >>/tmp/dbstart.sh
cat<<EOF1 >>/tmp/dbstart.sh
sqlplus "/ as sysdba" <<EOF
shutdown immediate;
EOF
EOF1
fi



fi

done

echo "##################################">> $DB_LOG_DEST/DBSTART_STOP.log
echo "Starting up databases on `date`" >>   $DB_LOG_DEST/DBSTART_STOP.log
echo "##################################">> $DB_LOG_DEST/DBSTART_STOP.log
chown oracle:dba /tmp/dbstart.sh
chmod 700 /tmp/dbstart.sh
su - $ORA_OWNER -c /tmp/dbstart.sh >> $DB_LOG_DEST/DBSTART_STOP.log


;;
esac

++++++++++++++++++++++++++++++++++++++++++


Please note that I have put

# chkconfig: 35 99 10

in the script, which means this service will start at runlevels 3 and 5.
99 indicates the service will be started near the end of init processing, and 10 signifies it will be stopped near the beginning of init processing.
Please feel free to make changes.
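Once installed, the script can also be driven by hand like any other init script (assuming your distribution ships the standard service wrapper):

service dbora start
service dbora stop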

Tuesday, March 15, 2011

Multiset Union ALL operation on NESTED tables built on OBJECT Types

This post shares some difficulties we hit when using the MULTISET UNION ALL operator on NESTED TABLES built on OBJECT TYPES. In our example the nested table type test_obj_array is built on a type called test_obj.

I will then mention a small workaround, which is not perfect but workable: creating a MAP member function on the object type.

SQL> select col1 from junk_tab;

COL1
--------------------------------------------------------------------------------
gautham
siddhu
surya

SQL> create or replace type test_obj as object(col1 varchar2(100),col2 varchar2(100), col3 varchar2(100));
  2  /

Type created.

SQL> create type test_obj_array as table of test_obj;
  2  /

Type created.



SQL> select cast(multiset(select test_obj(col1,col1,col1) from junk_tab) as test_obj_array) from dual;

CAST(MULTISET(SELECTTEST_OBJ(COL1,COL1,COL1)FROMJUNK_TAB)ASTEST_OBJ_ARRAY)(COL1,
--------------------------------------------------------------------------------
TEST_OBJ_ARRAY(TEST_OBJ('gautham', 'gautham', 'gautham'), TEST_OBJ('siddhu', 'si
ddhu', 'siddhu'), TEST_OBJ('surya', 'surya', 'surya'))

It gets more complicated when we use the MULTISET UNION ALL operator while dealing with NESTED TABLES built on OBJECT TYPES:

SQL> DECLARE
  2   x test_obj_array := test_obj_array();
  3   y test_obj_array := test_obj_array();
  4   z test_obj_array := test_obj_array();
  5  BEGIN
  6     SELECT cast( multiset(select test_obj(col1,col1,col1) from junk_tab) as test_obj_array )
  7     INTO x from dual;
  8  y:=x;
  9  z:=x multiset union all y;
 10 
 11  END;
 12  /
z:=x multiset union all y;
*
ERROR at line 9:
ORA-06550: line 9, column 1:
PLS-00801: internal error [*** ASSERT at file pdw4.c, line 2079; Type
0x87ffffffbf3eb9a0 has no MAP method.; _anon__c0000000381e5be8__AB[9, 1]]

SQL> drop type test_obj_array;

Type dropped.

SQL> drop type test_obj;

Type dropped.

create or replace type test_obj as object(col1 varchar2(100),col2 varchar2(100), col3 varchar2(100),map member function test_function return varchar2);
/

create type test_obj_array as table of test_obj;
/


SQL> create or replace type body test_obj
  2  as
  3  map member function test_function return varchar2 is
  4  begin
  5   return col1||col2||col3;
  6  end;
  7  end;
  8  /

Type body created.


SQL> select cast(multiset(select test_obj(col1,col1,col1) from junk_tab) as test_obj_array) from dual;

CAST(MULTISET(SELECTTEST_OBJ(COL1,COL1,COL1)FROMJUNK_TAB)ASTEST_OBJ_ARRAY)(COL1,
--------------------------------------------------------------------------------
TEST_OBJ_ARRAY(TEST_OBJ('gautham', 'gautham', 'gautham'), TEST_OBJ('siddhu', 'si
ddhu', 'siddhu'), TEST_OBJ('surya', 'surya', 'surya'))

SQL> DECLARE
  2   x test_obj_array := test_obj_array();
  3   y test_obj_array := test_obj_array();
  4   z test_obj_array := test_obj_array();
  5  BEGIN
  6     SELECT cast( multiset(select test_obj(col1,col1,col1) from junk_tab) as test_obj_array )
  7     INTO x from dual;
  8  y:=x;
  9  z:=x multiset union all y;
 10  for i in 1 .. z.count() loop dbms_output.put_line(z(i).col1); end loop;
 11 
 12  END;
 13  /
gautham
siddhu
surya
gautham
siddhu
surya

PL/SQL procedure successfully completed.

SQL>

Monday, March 14, 2011

Shell Script to map datafiles on source to target filesystem before doing RMAN duplicate/restore

Many times we DBAs are asked to refresh the DEV/QA database from PROD.
The biggest headache is that the file system names rarely match, and even when they do, the sizes vary, so we can rarely do a 1-to-1 mapping.

For example, a DB file /oracle/oradata01/proddb/data/users01.dbf might have to go to the /oracle/oradata02 filesystem, either because there is no filesystem called /oracle/oradata01 on the dev/qa box or because it is too small to accommodate this file after the other datafiles go there.

Therefore I have developed a utility called mapper.sh (a shell script written in ksh) which does this very nicely.

This utility has 2 arguments
Parameter 1 (mandatory): the name of a file describing the file systems on the destination server - the 1st field is the file system, the 2nd field is its free size in MB, and the optional 3rd field is a utilization factor in %.
For example:
/oracle/oradata01/devdb/data 10000 72
/oracle/oradata02/devdb/data 5000
This means our candidates on the dev/qa box are:
1. /oracle/oradata01/devdb/data, which has 10000M of free space, but we want to use only 72% of it.
2. /oracle/oradata02/devdb/data, which has 5000M but no utilization factor. For such entries the tool uses a default utilization factor of 90%, unless it is overridden by the 2nd parameter to the shell script, explained just below.

Parameter 2 (optional): takes effect for file systems (inside the parameter-1 file) that have the 3rd field missing. In the above file, /oracle/oradata02/devdb/data will use this parameter if passed, or the default 90% utilization factor if not.

Things might be a bit confusing, so let us jump to the demo shown below, which will make things clearer.
**caution** PLEASE CHECK THIS SCRIPT IN A TEST ENV BEFORE IMPLEMENTING
At the production DB server, log in as the DBA account and check the datafile sizes:

SQL> select file_name||'->'||(bytes/1024/1024)bytes_in_mb from dba_data_files;

/oracle/oradata01/prodpr/data/system01.dbf->542
/oracle/oradata02/prodpr/data/undotbs01.dbf->1094
/oracle/oradata02/prodpr/data/sysaux01.dbf->500
/oracle/oradata01/prodpr/data/drsys01.dbf->1024
/oracle/oradata01/prodpr/data/maxdata01.dbf->5049.5
/oracle/oradata02/prodpr/data/users01.dbf->1024

SQL> select sum(bytes/1024/1024) from dba_data_files;

SUM(BYTES/1024/1024)
--------------------
              9233.5

SQL>

Assuming we already know the destination server's file systems, here is my input file to the shell script (/tmp/destfx.txt).

contents of /tmp/destfx.txt
==================cut from below==============
/oracle/oradata01/testdv/data 100000 2
/oracle/oradata02/testdv/data 5000 80
/oracle/oradata03/testdv/data 8000
=================cut from above===============

What we are telling the tool is:

1. Utilize just 2% of /oracle/oradata01/testdv/data, which is almost 100G (approx), i.e. use just 2G.
2. Use 80% of /oracle/oradata02/testdv/data, i.e. use 4G out of 5G.
3. Since we don't mention anything for the third entry, the tool takes the 2nd parameter to the script (if passed) or, if nothing is passed, the default 90% of the available 8G.

Demo:

homelinux<oracle>/tmp> cat /tmp/destfx.txt
/oracle/oradata01/testdv/data 100000 2
/oracle/oradata02/testdv/data 5000 80
/oracle/oradata03/testdv/data 8000



Before running, please make sure mapper.sh has execute privs and all the Oracle environment variables have been set, as the script below tries to connect as sysdba using the unix account it is run from.
Therefore I recommend running this as the oracle user or a unix account in the DBA group.

homelinux<oracle>/tmp> ./mapper.sh /tmp/destfx.txt 87
Processing ...
...


Utilization Percentage specified by user as 87
This might change at individual file system level if specified in /tmp/destfx.txt
.....
Utilization limit for /oracle/oradata01/testdv/data is 2
....
.....
Utilization limit for /oracle/oradata02/testdv/data is 80
....
.....
Utilization limit for /oracle/oradata03/testdv/data is 87
....
Successfully allocated destination file system to all data files
SET NEWNAME FOR DATAFILE 5 TO '/oracle/oradata03/testdv/data/maxdata0101.dbf';
SET NEWNAME FOR DATAFILE 2 TO '/oracle/oradata03/testdv/data/undotbs0101.dbf';
SET NEWNAME FOR DATAFILE 6 TO '/oracle/oradata02/testdv/data/users0101.dbf';
SET NEWNAME FOR DATAFILE 4 TO '/oracle/oradata02/testdv/data/drsys0101.dbf';
SET NEWNAME FOR DATAFILE 1 TO '/oracle/oradata03/testdv/data/system0101.dbf';
SET NEWNAME FOR DATAFILE 3 TO '/oracle/oradata02/testdv/data/sysaux0101.dbf';
****
Above information is stored in /tmp/datafile_map.log
Please refer to tables - file_placement and dest_fsize_tab for other useful information
homelinux<oracle>/tmp>


Let us look at the output file, which can be copied and pasted into our RMAN duplicate/restore script:
homelinux<oracle>/tmp> cat /tmp/datafile_map.log
SET NEWNAME FOR DATAFILE 5 TO '/oracle/oradata03/testdv/data/maxdata0101.dbf'; 
SET NEWNAME FOR DATAFILE 2 TO '/oracle/oradata03/testdv/data/undotbs0101.dbf'; 
SET NEWNAME FOR DATAFILE 6 TO '/oracle/oradata02/testdv/data/users0101.dbf';   
SET NEWNAME FOR DATAFILE 4 TO '/oracle/oradata02/testdv/data/drsys0101.dbf';   
SET NEWNAME FOR DATAFILE 1 TO '/oracle/oradata03/testdv/data/system0101.dbf';  
SET NEWNAME FOR DATAFILE 3 TO '/oracle/oradata02/testdv/data/sysaux0101.dbf';  
homelinux<oracle>/tmp>


Please note that 2 tables called file_placement and dest_fsize_tab are created in the DB account. They can be dropped, or left around to be re-used the next time the script is run.
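If you prefer to clean them up after a run, it is just:

drop table file_placement;
drop table dest_fsize_tab;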


SOURCE CODE FOR MAPPER.SH
*************************************************

#!/bin/ksh
#+++++++++++++++++++++++++++++++++++++++++++++++++++++++
#+++++++ AUTHOR :GAUTHAM CHANDRASEKARAN
#File name: mapper.sh
#Usage: mapper.sh <file name> [ <optional utilization pct> ]
#+++++++++++++++++++++++++++++++++++++++++++++++++++++++

echo "Processing ..."
echo "..."
echo
echo
if [ $# -eq 2 ]
then
 DEF_UTIL_PCT=$2
 echo "Utlization Percentage specified by user as $DEF_UTIL_PCT"
 echo "This might change at individual file system level if specified in $1"
elif [ $# -eq 1 ]
then
 DEF_UTIL_PCT="90"
 echo "Utlization Percentage assumed as default 90%"
 echo "This might change at individual file system level if specified in $1"
else
 echo "ERROR *** Wrong number of arguments to $0"
 echo "Usage : $0 <filename>  [ <utilization_pct> ]"
 exit 1
fi

if [ ! -f "$1" ]
then
 echo "ERROR *** File $1 does not exist"
 exit 1
fi

sqlplus -s /nolog <<EOF
connect / as sysdba;
set echo off;
set feedback off;
DROP TABLE dest_fsize_tab
/
CREATE TABLE dest_fsize_tab
(dir_name VARCHAR2(200) primary key,
full_size NUMBER(10),
avbl_size NUMBER(10),
utilization_pct NUMBER(3))
/
DROP TABLE file_placement
/
CREATE TABLE file_placement
(file_id NUMBER(5),
file_name VARCHAR2(50),
dir_name VARCHAR2(200),
sequence_num NUMBER(3))
/

CREATE OR REPLACE PACKAGE pk_genutilitypkg
  AS
     /*
     Generic String Parser: provide a delimiter and it returns an
     index-by table of the individual elements of the string that are
     separated by the specified delimiter.
     Author: "GAUTHAM CHANDRASEKARAN" <gautha@hotmail.com>
     */
    TYPE t_string IS TABLE OF VARCHAR2(2000)
       INDEX BY BINARY_INTEGER;
    m_ctr NUMBER(5);
    m_pos NUMBER(5);
    PROCEDURE sp_parsestring (
       p_string IN VARCHAR2,
       delimiter IN VARCHAR2,
       p_t_string OUT t_string);
 END pk_genutilitypkg;
 /
 CREATE OR REPLACE PACKAGE BODY pk_genutilitypkg
 AS
    PROCEDURE sp_parsestring (
         p_string IN VARCHAR2,
         delimiter IN VARCHAR2,
         p_t_string OUT t_string)
      IS
         m_string VARCHAR2(4000);
      BEGIN
         /* Raise a Error if the length of the delimiter is not 1 */
         IF LENGTH (delimiter) != 1
         THEN
            raise_application_error (-20001,
                'Delimiter should be of only one character');
            RETURN;
         END IF;
         m_string := p_string;
         m_ctr := 1;
         LOOP
            m_pos := INSTR (m_string, delimiter);
            IF m_pos > 1
            THEN
               p_t_string (m_ctr) := SUBSTR (m_string, 1, m_pos - 1);
               IF (m_pos < LENGTH (m_string))
               THEN
                  m_string := SUBSTR (
                                 m_string,
                                 m_pos + 1,
                                 LENGTH (m_string) - m_pos
                              );
               ELSIF m_pos = LENGTH (m_string)
               THEN
                  m_ctr := m_ctr + 1;
                  p_t_string (m_ctr) := NULL;
                  EXIT;
               END IF;
            ELSIF m_pos = 1
            THEN
               p_t_string (m_ctr) := NULL;
               IF m_pos < LENGTH (m_string)
               THEN
                  m_string := SUBSTR (
                                 m_string,
                                 m_pos + 1,
                                 LENGTH (m_string) - m_pos
                              );
               ELSIF m_pos = LENGTH (m_string)
               THEN
                  m_ctr := m_ctr + 1;
                  p_t_string (m_ctr) := NULL;
                  EXIT;
               END IF;
            ELSIF m_pos = 0
            THEN
               p_t_string (m_ctr) := m_string;
               EXIT;
            END IF;
            m_ctr := m_ctr + 1;
         END LOOP;
      END;
END pk_genutilitypkg;
 /
exit;
EOF

cat $1| while read INPUT_LINE
do
FIELD_CTR=`echo $INPUT_LINE|awk '{print NF}'`
if [ $FIELD_CTR -eq 3 ]
then
  DEST_NAME=`echo $INPUT_LINE|awk '{print $1}'`
  DEST_SIZE=`echo $INPUT_LINE|awk '{print $2}'`
  UTIL_PCT=`echo $INPUT_LINE|awk '{print $3}'`
  echo "....."
  echo "Utilization limit for $DEST_NAME is $UTIL_PCT"
  echo "...."
sqlplus -s /nolog<<EOF
connect / as sysdba;
set feedback off
whenever sqlerror exit sql.sqlcode;
INSERT INTO dest_fsize_tab  values ('$DEST_NAME', $DEST_SIZE,$DEST_SIZE, $UTIL_PCT);
exit;
EOF

if [ $? -ne 0 ]
then
 echo "***Error while inserting into configuration tables from $1"
 echo "Check entries in $1 : $DEST_NAME $DEST_SIZE $UTIL_PCT"
 exit 1
fi
elif [ $FIELD_CTR -eq 2 ]
then

DEST_NAME=`echo $INPUT_LINE|awk '{print $1}'`
  DEST_SIZE=`echo $INPUT_LINE|awk '{print $2}'`
  echo "....."
    echo "Utilization limit for $DEST_NAME is $DEF_UTIL_PCT"
  echo "...."
 
sqlplus -s /nolog<<EOF
connect / as sysdba;
set feedback off
whenever sqlerror exit sql.sqlcode;
INSERT INTO dest_fsize_tab  values ('$DEST_NAME', $DEST_SIZE,$DEST_SIZE, $DEF_UTIL_PCT);
exit;
EOF
if [ $? -ne 0 ]
then
 echo "***Error while inserting into configuration tables from $1"
 echo "Check entries in $1 : $DEST_NAME $DEST_SIZE $DEF_UTIL_PCT"
 exit 1
fi

else
 echo "Please check the contents of $1 file"
 echo "Wrong number of entries inside file"
 exit 1
fi

done


sqlplus -s /nolog <<EOF
connect / as sysdba;
set feedback off;

set serverout on
whenever sqlerror exit ;
DECLARE
x pk_genutilitypkg.t_string;
v_sequence_num file_placement.sequence_num%TYPE;
formatted_fname pk_genutilitypkg.t_string;
v_dir_name dest_fsize_tab.dir_name%TYPE ;

cursor c1
is select file_id, file_name, (bytes/1024)/1024 fsize from dba_data_files
order by fsize desc;

BEGIN

FOR c2 in c1
LOOP
   BEGIN
   SELECT t1.dir_name
   INTO v_dir_name
   FROM
   (SELECT dir_name
   FROM   dest_fsize_tab
   WHERE  ( (avbl_size-c2.fsize)*100/full_size) >=(100-utilization_pct)
   order by avbl_size) t1
   where rownum<2 ;
  
   EXCEPTION
   WHEN no_data_found THEN
   raise_application_error(-20001,'No File System capable of accommodating '||c2.file_name);
   END;
   pk_genutilitypkg.sp_parsestring (c2.file_name,'/',x) ;
   pk_genutilitypkg.sp_parsestring(x(x.count),'.',formatted_fname);
   SELECT NVL(max(sequence_num),0)+1
   INTO v_sequence_num
   FROM file_placement
   WHERE file_name =formatted_fname(1);

   INSERT INTO file_placement
   VALUES
   (c2.file_id,
   formatted_fname(1),
   v_dir_name,
   v_sequence_num);
  
   UPDATE dest_fsize_tab
   SET  avbl_size=avbl_size-c2.fsize
   WHERE dir_name=v_dir_name;
   commit;
  
  
END LOOP;
dbms_output.put_line('Successfully allocated destination file system to all data files');

END;
/
whenever sqlerror exit 1;
EOF

if [ $? -ne 0 ]
then
 echo "Please allocate more space or try increasing Utilization Percentage"
 exit 1
fi

sqlplus -s /nolog<<EOF
connect / as sysdba;
set feedback off
set verify off
set pagesize 0
set heading off
spool /tmp/datafile_map.log
select 'SET NEWNAME FOR DATAFILE '||file_id||' TO '''||dir_name||'/'||file_name||trim(to_char(sequence_num,'00'))||'.dbf'';' from file_placement;
spool off
EOF

#cat /tmp/datafile_map.log
echo "****"
echo "Above information is stored in /tmp/datafile_map.log"
echo "Please refer to tables - file_placement and dest_fsize_tab for other useful information"

Break comma-separated values in a table into rows

In this post, I am going to talk about an interesting way to split a comma-separated list into individual rows.

To achieve this, consider a table called junk_tab with a column called names.

SQL> select names from junk_tab;

NAMES
--------------------------------------------------------------------------------
gautham,siddhu,surya
frank,shriram,vimal

SQL> desc junk_tab
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 NAMES                                              VARCHAR2(4000 CHAR)

What I intend to do is split the individual names (delimited by commas) into separate rows.

Therefore at the end of the exercise we will have a select query returning 6 rows as
gautham
siddhu
..
..
..
vimal

6 rows selected

There are many ways to achieve this end result.

I am going to walk you step by step through the approach I took.

Step 1
======

For every row in the table I want to get the number of comma-separated values.

To get this, the logic is: subtract the length of the string with the commas stripped out from the original length of the string; the result gives you the count of the commas. Adding 1 to it gives you the number of comma-separated values.

SQL> select names,length(names)-length(replace(names,',',''))+1 xxx from junk_tab;

NAMES
----------------------------------------------------------------------------------------------------
       XXX
----------
gautham,siddhu,surya
         3

frank,shriram,vimal
         3

Step 2
=======
The SQL below generates a sequence of numbers starting from 1 up to whatever we specify.

SQL> select level num from dual connect by level <=5;

       NUM
----------
         1
         2
         3
         4
         5
        
In the below query, note the "with tmp as" I have used to include the above query as a temporary view.
        
Now do the following

SQL> with tmp as ( select level num from dual connect by level <= 100 )
  2  select test123.names,tmp.num
  3  from
  4  (select names,length(names)-length(replace(names,',',''))+1 xxx from junk_tab) test123,tmp
  5  where tmp.num <=test123.xxx
  6  order by test123.names,tmp.num
  7  /

NAMES
----------------------------------------------------------------------------------------------------
       NUM
----------
frank,shriram,vimal
         1

frank,shriram,vimal
         2

frank,shriram,vimal
         3


NAMES
----------------------------------------------------------------------------------------------------
       NUM
----------
gautham,siddhu,surya
         1

gautham,siddhu,surya
         2

gautham,siddhu,surya
         3
        
Step 3
======

This is easy, as we just have to extract the 1st string from row 1, the 2nd string from row 2, etc.



SQL> with tmp as ( select level num from dual connect by level <= 100 )
  2  select
  3  substr(test123.names ,
  4    decode(tmp.num,1,1,(instr(test123.names,',',1,(tmp.num-1))+1)),
  5    decode(instr(test123.names,',',1,tmp.num),0,length(test123.names)+1,instr(test123.names,',',1
,tmp.num)) - decode(tmp.num,1,1,(instr(test123.names,',',1,(tmp.num-1))+1))
  6    )
  7  from
  8  (select names,length(names)-length(replace(names,',',''))+1 xxx from junk_tab) test123,tmp
  9  where tmp.num <=test123.xxx
 10  order by test123.names,tmp.num
 11  /

SUBSTR(TEST123.NAMES,DECODE(TMP.NUM,1,1,(INSTR(TEST123.NAMES,',',1,(TMP.NUM-1))+1)),DECODE(INSTR(TES
----------------------------------------------------------------------------------------------------
frank
shriram
vimal
gautham
siddhu
surya

6 rows selected.

SQL>
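As an aside, on 10g and later the SUBSTR/INSTR arithmetic can be replaced by REGEXP_SUBSTR, which is much easier on the eyes. A sketch using the same tmp trick ('[^,]+' grabs the n-th comma-free chunk):

with tmp as ( select level num from dual connect by level <= 100 )
select regexp_substr(test123.names, '[^,]+', 1, tmp.num) name
from
(select names,length(names)-length(replace(names,',',''))+1 xxx from junk_tab) test123, tmp
where tmp.num <= test123.xxx
order by test123.names, tmp.num
/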



Pipelined-Functions - Practical Use

Pipelined functions can be called from a SELECT statement and can return more than one row.
The example below should make this clear.
Say we have a table called employee whose fields (or rather the most significant fields for our discussion) are ename and dno.

SQL> select substr(ename,1,25),dno from employee;

SUBSTR(ENAME,1,25)               DNO
------------------------- ----------
JACK                              10
JILL                              10
MASON                             20
MARK                              20


If you are asked to write a small piece of PL/SQL to return the enames of a particular department, the normal way would be to write a procedure like:

CREATE OR REPLACE TYPE vcarray AS TABLE OF VARCHAR2(500)
/

CREATE or replace procedure p1(dno_p IN number, ename_p OUT vcarray)
IS
CURSOR c1 is select ename from employee where dno=dno_p;
BEGIN
ename_p := vcarray();
for c2 in c1
loop
 ename_p.extend;
 ename_p(ename_p.count) := c2.ename;
end loop;
END;
/

To call the above procedure, we are forced to write a PL/SQL block like:

SQL> declare
  2  ename vcarray ;
  3  begin
  4  p1(10,ename);
  5  for i in 1..ename.COUNT
  6  LOOP
  7  dbms_output.put_line(ename(i));
  8  end loop;
  9  end;
 10  /
JACK
JILL

PL/SQL procedure successfully completed.

So in the above example we are forced to write a PL/SQL block just to read from the procedure.

The example below uses a pipelined function, which makes it possible for the same job to be done via a function (even though we think of a function as returning only a single value), and the icing on the cake is that the function can be referenced in a simple SELECT from the sqlplus prompt itself.




CREATE OR REPLACE TYPE vcarray AS TABLE OF VARCHAR2(500)
/

CREATE OR REPLACE FUNCTION get_employees (dno_p IN number)
RETURN vcarray
PIPELINED
AS

CURSOR c1 is select ename from employee where dno=dno_p;
BEGIN

FOR c2 in c1
LOOP
   pipe row (c2.ename);
END LOOP;
RETURN;
END;
/


SQL> select * from TABLE (get_employees(10) );

COLUMN_VALUE
----------------------------------------------------------------------------------------------------
JACK
JILL
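And because the TABLE() operator turns the function into an ordinary row source, you can filter and sort it like any table (COLUMN_VALUE is the implicit column name for a collection of scalars). A quick sketch against the sample data above:

select column_value ename
from table( get_employees(20) )
where column_value like 'M%'
order by 1;

Given the employee rows shown earlier, this should return MARK and MASON.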