The Database File System (DBFS) creates a shared file system whose files are stored inside the database. Because the data lives in the database, the file system inherits all of the HA/DR capabilities provided by the database.
When running on Exadata or any RAC database, best practice is to store the Oracle GoldenGate trail files, checkpoint files, bounded recovery files, and configuration files in DBFS to provide the best performance, scalability, recoverability, and failover capabilities in the event of a system failure. With the files in DBFS:
* the checkpoint and trail files remain available in the event of a node failure
* one of the surviving database instances can act as the source for the Extract process or the destination for the Replicat processes
* after a failure, the Extract process can continue mining from the last known archived redo log file position
* Replicat processes can resume applying from the same trail file position they had reached before the failure
To enable DBFS, you must have the FUSE RPMs installed. Filesystem in Userspace (FUSE) is a loadable kernel module for Unix-like operating systems that lets non-privileged users create their own file systems without editing kernel code.
The following configuration was implemented on an Oracle Enterprise Linux 6.2 environment.
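Before going further, it can help to confirm the FUSE pieces are in place. This is a sketch of an informational pre-flight check (nothing in it is DBFS-specific); it reports what is present rather than failing, so it is safe to run anywhere.

```shell
#!/bin/sh
# Report whether the FUSE prerequisites this setup relies on are present.
# Prints one status line per item instead of exiting non-zero.
for f in /etc/fuse.conf /dev/fuse; do
  if [ -e "$f" ]; then
    echo "$f: present"
  else
    echo "$f: missing"
  fi
done
```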
Let’s set it up and configure GoldenGate to use DBFS.
FUSE Setup
Required RPM Packages
Ensure that the following packages are installed.
$ rpm -qa --queryformat="%{name}-%{version}-%{release}.%{arch}\n" | grep -i fuse
fuse-2.8.3-4.el6.x86_64
fuse-libs-2.8.3-4.el6.x86_64
gvfs-fuse-1.4.3-12.el6.x86_64
Add the oracle user to the fuse group
$ usermod -a -G fuse oracle
Create the /etc/fuse.conf file with the user_allow_other option.
$ echo user_allow_other > /etc/fuse.conf
$ chmod 644 /etc/fuse.conf
Create an empty directory that will be used as the mount point for the DBFS filesystem.
$ mkdir -p /oracle/dbfs_direct
$ chown oracle:dba /oracle/dbfs_direct
To pick up the additional group (fuse) membership for the oracle user on Linux, Clusterware must be restarted.
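As a sketch of that restart, the commands below wrap the usual per-node crsctl stop/start sequence. GRID_HOME and the exact sequence are assumptions for a typical 11.2 Grid Infrastructure install; DRY_RUN=1 (the default here) only prints the commands instead of executing them, since this must be run as root on each node in turn.

```shell
#!/bin/sh
# Restart Clusterware so the oracle user's new fuse group membership is
# picked up. DRY_RUN=1 echoes the commands instead of running them.
GRID_HOME=${GRID_HOME:-/u01/app/11.2.0.3/grid}
DRY_RUN=${DRY_RUN:-1}

run() {
  if [ "${DRY_RUN}" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

# crsctl stop/start crs must be run as root, one node at a time
run "$GRID_HOME/bin/crsctl" stop crs
run "$GRID_HOME/bin/crsctl" start crs
```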
Database Configuration
As the RDBMS software owner, create the DBFS repository inside the database. To create the repository, create a new tablespace to hold the DBFS objects and a database user that will own the objects.
I have the following disk groups defined in ASM, and will be using the DBFS disk group.
GRP NAME         STATE       TYPE   FREE_GB USABLE_GB TOTAL_GB PERCENT_FREE COMPATIBILITY DATABASE_COMP
--- ------------ ----------- ------ ------- --------- -------- ------------ ------------- -------------
  1 DATA         MOUNTED     EXTERN    5.01      5.01       10           50 11.2.0.3.0    11.2.0.3.0
  2 DBFS         MOUNTED     EXTERN    7.74      7.74        8           97 11.2.0.3.0    11.2.0.3.0
  3 RECO         MOUNTED     EXTERN    8.01      8.01       15           53 11.2.0.3.0    11.2.0.3.0
DBFS Repository Creation
> sqlplus / as sysdba

SQL> create bigfile tablespace dbfs_ts datafile '+DBFS' size 200M autoextend on
     next 100M maxsize 3G NOLOGGING EXTENT MANAGEMENT LOCAL AUTOALLOCATE
     SEGMENT SPACE MANAGEMENT AUTO;

SQL> create user dbfs_user identified by dbfs_user
     default tablespace dbfs_ts quota unlimited on dbfs_ts;

SQL> grant create session, create table, create view, create procedure, dbfs_role to dbfs_user;
Create the database objects that will hold DBFS
> cd $ORACLE_HOME/rdbms/admin
> sqlplus dbfs_user/dbfs_user

SQL> start dbfs_create_filesystem dbfs_ts FS1
DBFS Storage Options
The LOB segment used by DBFS should be configured with the storage options NOCACHE LOGGING, which is the default.
SQL> SELECT owner, table_name, segment_name, logging, cache
     FROM dba_lobs WHERE tablespace_name='DBFS_TS';

OWNER       TABLE_NAME             SEGMENT_NAME           LOGGING CACHE
----------- ---------------------- ---------------------- ------- -----
DBFS_USER   T_FS1                  LOB_SFS$_FST_12        YES     NO
If the LOB segment is not using NOCACHE LOGGING, alter it:
SQL> ALTER TABLE DBFS_USER.T_FS1 MODIFY LOB (FILEDATA) (NOCACHE LOGGING);
Prepare the FUSE library links
$ echo "/usr/local/lib" >> /etc/ld.so.conf.d/usr_local_lib.conf
$ cd /usr/local/lib
$ ln -s $ORACLE_HOME/lib/libclntsh.so.11.1
$ ln -s $ORACLE_HOME/lib/libnnz11.so
$ locate libfuse.so
/lib/libfuse.so.2
/lib/libfuse.so.2.8.3
/lib64/libfuse.so.2
/lib64/libfuse.so.2.8.3
$ ln -s /lib64/libfuse.so.2 libfuse.so
$ ldconfig
$ ln -s $ORACLE_HOME/bin/dbfs_client /sbin/mount.dbfs
Here’s what your links should look like when done.
> ls -lhF
total 0
lrwxrwxrwx 1 root root 59 May 11  2012 libclntsh.so.11.1 -> /u01/app/oracle/product/11.2.0.3/db_1/lib/libclntsh.so.11.1*
lrwxrwxrwx 1 root root 19 Feb 25 03:07 libfuse.so -> /lib64/libfuse.so.2*
lrwxrwxrwx 1 root root 19 May 11  2012 libfuse.so.2 -> /lib64/libfuse.so.2*
lrwxrwxrwx 1 root root 53 May 11  2012 libnnz11.so -> /u01/app/oracle/product/11.2.0.3/db_1/lib/libnnz11.so

> ls -lhF /sbin/mount.dbfs
lrwxrwxrwx 1 root root 38 Feb 25 03:08 /sbin/mount.dbfs -> /u01/app/11.2.0.3/grid/bin/dbfs_client*
Register with Oracle Cluster Resources
Click here to get a copy of the mount-dbfs.sh script.
$ cp mount-dbfs.sh $GRID_HOME/crs/script/mount-dbfs.sh
$ chown oracle:dba $GRID_HOME/crs/script/mount-dbfs.sh
$ chmod 750 $GRID_HOME/crs/script/mount-dbfs.sh
Create the following file and execute it to create the resource.
> cat add-dbfs-resource.sh
#!/bin/bash
ACTION_SCRIPT=/u01/app/11.2.0.3/grid/crs/script/mount-dbfs.sh
RESNAME=dbfs_mount
DBNAME=GG
DBNAMEL=`echo $DBNAME | tr A-Z a-z`
ORACLE_HOME=/u01/app/oracle/product/11.2.0.3/db_1
PATH=$ORACLE_HOME/bin:$PATH
export PATH ORACLE_HOME
crsctl add resource $RESNAME \
  -type cluster_resource \
  -attr "ACTION_SCRIPT=$ACTION_SCRIPT, \
         CHECK_INTERVAL=30,RESTART_ATTEMPTS=10, \
         START_DEPENDENCIES='hard(ora.$DBNAMEL.db)pullup(ora.$DBNAMEL.db)',\
         STOP_DEPENDENCIES='hard(ora.$DBNAMEL.db)',\
         SCRIPT_TIMEOUT=300"
##### end script add-dbfs-resource.sh
exit 0

> ./add-dbfs-resource.sh
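The dependency attributes in add-dbfs-resource.sh are derived from the database name; a small sketch of just that derivation is below. The ora.&lt;db&gt;.db resource name is always lowercase, which is why the script pipes DBNAME through tr.

```shell
#!/bin/sh
# Sketch: derive the CRS dependency attribute values from the database name.
DBNAME=GG
DBNAMEL=$(echo "$DBNAME" | tr 'A-Z' 'a-z')
echo "START_DEPENDENCIES='hard(ora.$DBNAMEL.db)pullup(ora.$DBNAMEL.db)'"
echo "STOP_DEPENDENCIES='hard(ora.$DBNAMEL.db)'"
```

The hard dependency ensures the mount resource only runs while the database is up; pullup makes starting the database pull the mount up with it.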
Start the database, then start the DBFS resource, which mounts the filesystem:
> srvctl start database -d GG
> crsctl start resource dbfs_mount
CRS-2672: Attempting to start 'dbfs_mount' on 'ol6u2'
CRS-2676: Start of 'dbfs_mount' on 'ol6u2' succeeded

> crsctl stat res dbfs_mount -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
dbfs_mount
      1        ONLINE  ONLINE       ol6u2
You should now be able to see the mount using "df -h":
> df -h
Filesystem                    Size  Used Avail Use% Mounted on
/dev/mapper/vg_ol6u2-lv_root   26G   22G  2.6G  90% /
tmpfs                        1004M  212M  792M  22% /dev/shm
/dev/sda1                     485M  115M  346M  25% /boot
/dev/sdb1                      20G  8.7G   11G  47% /oracle
dbfs-dbfs_user@GG:/           199M  400K  199M   1% /oracle/dbfs_direct
Now for the GoldenGate configuration
Install GoldenGate if you haven’t already done so; click here to see a previous blog post covering the installation.
Create the shared directories on the DBFS
> cd /oracle/dbfs_direct/FS1/golden_gate
> mkdir dirchk
> mkdir dirpcs
> mkdir dirprm
> mkdir dirdat
> mkdir dirrpt
> mkdir BR
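The same mkdir sequence can be written as a single loop. In this sketch BASE defaults to a throwaway path so it can be run anywhere; point it at your DBFS mount (e.g. /oracle/dbfs_direct/FS1/golden_gate) for real use.

```shell
#!/bin/sh
# Create the GoldenGate working directories on the DBFS mount in one loop.
# BASE defaults to a demo path; override it with your real DBFS path.
BASE=${BASE:-/tmp/dbfs_gg_demo/FS1/golden_gate}
for d in dirchk dirpcs dirprm dirdat dirrpt BR; do
  mkdir -p "$BASE/$d"
done
ls "$BASE"
```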
Change to the $GG_HOME directory and create the following soft links:
> ls -lhF | egrep "drwx|lrwx"
lrwxrwxrwx 1 oracle oinstall 38 Mar  2 08:20 BR -> /oracle/dbfs_direct/FS1/golden_gate/BR/
lrwxrwxrwx 1 oracle oinstall 47 Mar 11 08:54 dirchk -> /oracle/dbfs_direct/FS1/golden_gate/dirchk/
lrwxrwxrwx 1 oracle oinstall 42 Mar  2 08:20 dirdat -> /oracle/dbfs_direct/FS1/golden_gate/dirdat/
lrwxrwxrwx 1 oracle oinstall 42 Mar 11 08:56 dirpcs -> /oracle/dbfs_direct/FS1/golden_gate/dirpcs/
lrwxrwxrwx 1 oracle oinstall 42 Mar  2 08:19 dirprm -> /oracle/dbfs_direct/FS1/golden_gate/dirprm/
lrwxrwxrwx 1 oracle oinstall 42 Mar  2 08:19 dirrpt -> /oracle/dbfs_direct/FS1/golden_gate/dirrpt/
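Those links can also be created in one loop. GG_HOME and DBFS_GG below default to throwaway demo paths so the sketch is runnable as-is; substitute your real GoldenGate home and DBFS path.

```shell
#!/bin/sh
# Link the GoldenGate working directories in GG_HOME to their DBFS copies.
GG_HOME=${GG_HOME:-/tmp/gg_home_demo}
DBFS_GG=${DBFS_GG:-/tmp/dbfs_gg_demo/FS1/golden_gate}
mkdir -p "$GG_HOME"
for d in dirchk dirpcs dirprm dirdat dirrpt BR; do
  mkdir -p "$DBFS_GG/$d"
  ln -sfn "$DBFS_GG/$d" "$GG_HOME/$d"   # -n: replace an existing link in place
done
ls -l "$GG_HOME"
```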
GoldenGate is now ready to use.
You can follow the above procedure to configure the target environment as well, with one exception. There should be two separate DBFS file systems: one for the dirchk directory, which contains the frequently written checkpoint file, and one for the remaining directories. This separates the I/O requirements, since the checkpoint file is continuously rewritten. Due to the heavy write activity, performance is best when the checkpoint file is stored in DBFS with the CACHE LOGGING storage option.
Create a new tablespace to hold the DBFS Checkpoint Directory
SQL> create tablespace dbfs_chkpt_ts datafile '+DBFS' size 100M autoextend on
     next 100M maxsize 500M NOLOGGING EXTENT MANAGEMENT LOCAL AUTOALLOCATE
     SEGMENT SPACE MANAGEMENT AUTO;

SQL> alter user dbfs_user quota unlimited on dbfs_chkpt_ts;
Create the database objects that will hold DBFS
> cd $ORACLE_HOME/rdbms/admin
> sqlplus dbfs_user/dbfs_user

SQL> start dbfs_create_filesystem dbfs_chkpt_ts FS_CHKPT
No errors.
--------
CREATE STORE:
begin dbms_dbfs_sfs.createFilesystem(store_name => 'FS_FS_CHKPT',
tbl_name => 'T_FS_CHKPT', tbl_tbs => 'dbfs_chkpt_ts', lob_tbs => 'dbfs_chkpt_ts',
do_partition => false, partition_key => 1, do_compress => false, compression => '',
do_dedup => false, do_encrypt => false); end;
--------
REGISTER STORE:
begin dbms_dbfs_content.registerStore(store_name=> 'FS_FS_CHKPT',
provider_name => 'sample1', provider_package => 'dbms_dbfs_sfs'); end;
--------
MOUNT STORE:
begin dbms_dbfs_content.mountStore(store_name=>'FS_FS_CHKPT',
store_mount=>'FS_CHKPT'); end;
--------
CHMOD STORE:
declare m integer;
begin m := dbms_fuse.fs_chmod('/FS_CHKPT', 16895); end;
No errors.
Connect to the DBFS database and change the LOB storage option
> sqlplus / as sysdba

SQL> SELECT owner, table_name, segment_name, logging, cache
     FROM dba_lobs WHERE tablespace_name like 'DBFS_%';

OWNER           TABLE_NAME                     SEGMENT_NAME                   LOGGING CACHE
--------------- ------------------------------ ------------------------------ ------- ----------
DBFS_USER       T_FS_CHKPT                     LOB_SFS$_FST_528               NO      NO
DBFS_USER       T_FS1                          LOB_SFS$_FST_1                 YES     NO

SQL> ALTER TABLE DBFS_USER.T_FS_CHKPT MODIFY LOB (FILEDATA) (CACHE LOGGING);

SQL> SELECT owner, table_name, segment_name, logging, cache
     FROM dba_lobs WHERE tablespace_name like 'DBFS_%';

OWNER           TABLE_NAME                     SEGMENT_NAME                   LOGGING CACHE
--------------- ------------------------------ ------------------------------ ------- ----------
DBFS_USER       T_FS_CHKPT                     LOB_SFS$_FST_528               YES     YES
DBFS_USER       T_FS1                          LOB_SFS$_FST_1                 YES     NO
Create the same soft links here as before, pointing to the DBFS mount, but with the dirchk directory pointing at the FS_CHKPT file system.
> ls -lhF | egrep "drwx|lrwx"
lrwxrwxrwx 1 oracle oinstall 38 Mar  2 08:20 BR -> /oracle/dbfs_direct/FS1/golden_gate/BR/
lrwxrwxrwx 1 oracle oinstall 47 Mar 11 08:54 dirchk -> /oracle/dbfs_direct/FS_CHKPT/golden_gate/dirchk/
lrwxrwxrwx 1 oracle oinstall 42 Mar  2 08:20 dirdat -> /oracle/dbfs_direct/FS1/golden_gate/dirdat/
lrwxrwxrwx 1 oracle oinstall 42 Mar 11 08:56 dirpcs -> /oracle/dbfs_direct/FS1/golden_gate/dirpcs/
lrwxrwxrwx 1 oracle oinstall 42 Mar  2 08:19 dirprm -> /oracle/dbfs_direct/FS1/golden_gate/dirprm/
lrwxrwxrwx 1 oracle oinstall 42 Mar  2 08:19 dirrpt -> /oracle/dbfs_direct/FS1/golden_gate/dirrpt/
Hi
Create the shared directories on the DBFS
> cd /mnt/dbfs_source/goldengate
> mkdir dirchk
> …..
Changed directory to the $GG_HOME directory and make the following soft links
> ls -lhF | egrep "drwx|lrwx"
lrwxrwxrwx 1 oracle oinstall 38 Mar  2 08:20 BR -> /oracle/dbfs_direct/FS1/golden_gate/BR/
lrwxrwxrwx 1 oracle oinstall 47 Mar
In the previous article, GG is installed on /u01/app/oracle/product/golden_gate plus subdirectories.
So why, in our case, are you using cd /mnt/dbfs?
We have created /oracle/dbfs_direct .
Thanks !!!!!
David Itshak
Hi
Should we instead do a soft link from
/u01/app/oracle/product/golden_gate to /oracle/dbfs_direct and not to /mnt/dbfs_source/goldengate ?
Thanks
David Itshak
Yes, thanks David. I have updated the directory to /oracle/dbfs_direct.
Hi
After failover in a hub & spoke configuration where the hub is Data Guard, I have to restart all extract, pump, and replicat processes in all hubs and spokes. So only changing the IP of the server name alias in DNS is not enough.
Should I restart only these processes on the hub, or do I have to proceed as discussed in OGG_Best_Practice_-_GoldenGate_and_Oracle_Data_Guard_v11.1-ID1322547.1:
Source Database failure to Standby DB
Non shared-storage failover
In our case it can be very complicated, since there are > 20 spokes.
11.2 Appendix B – gg_ext_non_shared.sh
#Failover steps
v_host=`hostname`
if [ "$v_host" = "$FAL_NODE1" ]
then
#Remote connection to kill mgr/pump in the failed node
ssh "$FAL_NODE2" >/dev/null 2>&1 "mgr_proc_id=\`ps -ef|grep $OGG_HOME/dirprm/mgr.prm|grep -v grep|awk '{print \$2}'\`; pump_proc_id=\`ps -ef|grep $OGG_HOME/dirprm/${pump}.prm|grep -v grep|awk '{print \$2}'\`;kill -9 \$mgr_proc_id;kill -9 \$pump_proc_id;sudo fusermount -u -z $DBFS_MNT"
$OGG_HOME/ggsci </dev/null 2>&1 "mgr_proc_id=\`ps -ef|grep $OGG_HOME/dirprm/mgr.prm|grep -v grep|awk '{print \$2}'\`; pump_proc_id=\`ps -ef|grep $OGG_HOME/dirprm/${pump}.prm|grep -v
Thanks
David YITZHAK
In case of failover, I start GG on the standby DB, which becomes the primary DB.
Here are my tests
9.0 Test to check that the Hub & Spoke configuration and bidirectional replication work
===================================================================================
Shut down the primary server hub_primary_server to simulate failover, as root:
1. init 0
2. On the standby DB barda 4, do the failover:
conn sys/Admin01@ORCL223st as sysdba
-- Failing over to a Physical Standby
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
Database altered.
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.
Total System Global Area 1636814848 bytes
Fixed Size 2228864 bytes
Variable Size 1140854144 bytes
Database Buffers 486539264 bytes
Redo Buffers 7192576 bytes
Database mounted.
SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY WITH SESSION SHUTDOWN;
Database altered.
SQL> ALTER DATABASE OPEN;
Database altered.
SQL> alter system switch logfile;
System altered.
# Query the DB to get the resetlogs_id from last incarnation
SQL> select max(resetlogs_id) from v$database_incarnation where resetlogs_id not in (select max(resetlogs_id) from v$database_incarnation);
MAX(RESETLOGS_ID)
-----------------
817979583
3. Connect to DBFS on standby DB :
[root@standby_server tmp]# su - oracle
[oracle@standby_server ~]$ dbfs_client dbfs_user@ORCL223 /oracle/dbfs_direct
Password
Check that the DBFS file systems are available:
[oracle@standby_server dbfs_direct]$ ls
FS1 FS_CHKPT
[oracle@standby_server dbfs_direct]$ cd FS_CHKPT/
[oracle@standby_server FS_CHKPT]$ ls
dirchk
[oracle@standby_server FS_CHKPT]$ cd ..
[oracle@standby_server dbfs_direct]$ ls
FS1 FS_CHKPT
[oracle@standby_server dbfs_direct]$ cd FS1
[oracle@standby_server FS1]$ ls
golden_gate
[oracle@standby_server FS1]$ cd golden_gate/
[oracle@standby_server golden_gate]$ ls
BR dirchk dirdat diroby dirpcs dirprm dirrpt
[oracle@standby_server golden_gate]$
[oracle@standby_server golden_gate]$ cd dirprm
[oracle@standby_server dirprm]$ ls
E00.prm mgr1.prm mgr.prm MGR.rpt P01.prm P03.prm R01.prm R03.prm
4. Change BP22 on the standby in C:\oracle\product\11.2.0\dbhome_1\NETWORK\ADMIN\tnsnames.ora from
BP22 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = hub_primary_server)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = BP22)
)
)
to
BP22 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = standby_server)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = BP22)
)
)
Optional :
=========
in DNS, change the alias of hub_primary_server to standby_server and run
ipconfig /flushdns
to purge the DNS resolver cache.
On each spoke, restart the extract processes.
On spoke3:
C:\Users\Administrator>cd C:\ogg01\
C:\ogg01>ls
‘ls’ is not recognized as an internal or external command,
operable program or batch file.
C:\ogg01>ggsci
Oracle GoldenGate Command Interpreter for Oracle
Version 11.2.1.0.6 16211226 OGGCORE_11.2.1.0.6_PLATFORMS_130418.1829
Windows x64 (optimized), Oracle 11g on Apr 19 2013 17:38:40
Copyright (C) 1995, 2013, Oracle and/or its affiliates. All rights reserved.
GGSCI (spoke3) 1> info all
Program Status Group Lag at Chkpt Time Since Chkpt
MANAGER RUNNING
EXTRACT RUNNING E3 00:00:00 00:00:09
EXTRACT ABENDED P3 00:00:00 21:35:04
REPLICAT RUNNING R3 00:00:00 00:00:05
GGSCI (spoke3) 2> stop er *
Sending STOP request to EXTRACT E3 …
Request processed.
EXTRACT P3 is already stopped.
Sending STOP request to REPLICAT R3 …
Request processed.
GGSCI (spoke3) 3> start er *
Sending START request to MANAGER …
EXTRACT E3 starting
Sending START request to MANAGER …
EXTRACT P3 starting
Sending START request to MANAGER …
REPLICAT R3 starting
GGSCI (spoke3) 4> info all
Program Status Group Lag at Chkpt Time Since Chkpt
MANAGER RUNNING
EXTRACT RUNNING E3 00:00:00 00:00:01
EXTRACT RUNNING P3 00:00:00 21:35:31
REPLICAT RUNNING R3 00:00:00 00:00:10
on spoke1:
GGSCI (spoke1) 22> info all
Program Status Group Lag at Chkpt Time Since Chkpt
MANAGER RUNNING
EXTRACT RUNNING E1 00:00:00 00:00:05
EXTRACT ABENDED P1 00:00:00 21:36:57
REPLICAT RUNNING R1 00:00:00 00:00:00
In summary, my question: should I restart the pump process on the spoke?
— File name : P1.prm
— Data pump from Spoke 1 to Hub
P1.prm is as follows
EXTRACT P1
userid OGG_ADMIN_1@BP11, Password Admin01
RmtHost hub , MgrPort 7809, Compress
rmttrail ./dirdat/rA
TABLE scott_1.emp;
Note that RmtHost hub should be changed after failover to
stop er *
start er *
info all
The p1.prm file is attached, and Spoke1_Config.oby to build the P1 pump process on the spoke is attached.
Thanks
Hi
So on the spoke, in the following p1.prm:
— File name : P1.prm
— Data pump from Spoke 1 to Hub
P1.prm is as follows
EXTRACT P1
userid OGG_ADMIN_1@ORCL11, Password pass
RmtHost hub , MgrPort 7809, Compress
rmttrail ./dirdat/rA
TABLE scott_1.emp;
The RmtHost hub should be changed in p1.prm after failover.
But should I restart the pump after updating p1.prm, as follows?
stop er *
start er *
info all
Thanks
Hi
The Q here is:
if I do not change p1.prm, in the line:
RmtHost hub , MgrPort 7809, Compress
The hub is the primary DB in Data Guard. Now suppose there is a failover and we only change the IP of the hub in DNS (to the standby IP) after the failover.
Thus p1.prm is not changed.
Should I restart the pump after changing the IP of the hub in DNS?
Thanks
By the way the support answer us :
For failover it is always better to stop and restart all the processes after switching to the new setup. The documentation recommends: "Stop the Manager process on the original node before starting Oracle GoldenGate processes from another node."
Thanks
Glad you got your answer. I have run into similar situations in an active-active environment with Data Guard failover; all the processes on source and target should be restarted. Also, in an active-active configuration without failover, it is necessary to restart the processes when certain DDL occurs, such as adding a column. The issue was that even with supplemental logging being added automatically, as well as metadata updates in the replicat, if a row was inserted or updated on a node where the DDL did not originate, the data for the additional column was not included. After restarting the processes, the new column data is included.
The OGG Best Practices white papers recommend not issuing altering DDL in an active-active configuration, but DDL that does not alter existing tables is okay from one node.