In the previous article we explained how to upgrade Grid Infrastructure from 12c to 19c. Now we will explain the steps to downgrade a 2-node Oracle 19c GRID back to Oracle 12.1.0.2 GRID.
ENVIRONMENT DETAILS:
2 NODES – localhost1 and localhost2
Grid owner – oracle
19C ORACLE_HOME (current) -> /sharearea/crs/grid19c
12C ORACLE_HOME (old) -> /crs/app/oracle/product/grid12c
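Throughout the steps below, make sure your session points at the correct grid home for the command you are running. The export lines here are only a sketch using the paths above; adapt them to your own environment.

# as the grid owner (oracle) - point the session at the 19c home for the pre-downgrade checks
export ORACLE_HOME=/sharearea/crs/grid19c
export PATH=$ORACLE_HOME/bin:$PATH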
1. Check the current grid version (check on both the nodes):
oracle@localhost1:/$ crsctl query crs softwareversion
Oracle Clusterware version on node [dbhost1] is [19.0.0.0.0]

oracle@localhost1:$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [19.0.0.0.0]
2. Remove the mgmt database:
oracle@localhost1:$ srvctl status mgmtdb
Database is enabled
Instance -MGMTDB is running on node localhost2
Currently the MGMTDB is running on node 2 (localhost2), so run the delete command from node 2 only.
oracle@localhost2:~$ dbca -silent -deleteDatabase -sourceDB -MGMTDB
[WARNING] [DBT-19202] The Database Configuration Assistant will delete the Oracle instances and datafiles for your database. All information in the database will be destroyed.
Prepare for db operation
32% complete
Connecting to database
35% complete
39% complete
42% complete
45% complete
48% complete
52% complete
65% complete
Updating network configuration files
68% complete
Deleting instance and datafiles
84% complete
100% complete
Database deletion completed.
Look at the log file "/sharearea/orabase/cfgtoollogs/dbca/_mgmtdb/_mgmtdb.log" for further details.
NOTE:
If we try to delete the MGMTDB from node 1, it throws the error below.
oracle@dbhost1:$ dbca -silent -deleteDatabase -sourceDB -MGMTDB
[FATAL] [DBT-10003] Delete operation for Oracle Grid Infrastructure Management Repository (GIMR) cannot be performed on the current node (dbhost1).
CAUSE: Oracle GIMR is running on a remote node (dbhost2).
ACTION: Invoke DBCA on the remote node (dbhost2) to delete Oracle GIMR.
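Alternatively, instead of invoking DBCA on the remote node, the MGMTDB can usually be relocated to the node you are working from and then deleted locally. This is an optional variation, not part of the run shown above:

oracle@localhost1:~$ srvctl relocate mgmtdb -node localhost1   # move -MGMTDB to node 1
oracle@localhost1:~$ srvctl status mgmtdb                      # confirm where it now runs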
3. Downgrade script execution:
Now we will run the downgrade script, first on the local node and then on the remote node. It needs to be run as the root user.
Log in as root and go to a path where the grid owner has write permission. In our case the grid owner is oracle.
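Before running the downgrade script, it can help to capture the current state of the cluster for later comparison. This is an optional sanity check run as the grid owner; the output depends on your configuration:

oracle@localhost1:$ crsctl check cluster -all     # stack status on all nodes
oracle@localhost1:$ crsctl stat res -t            # snapshot of registered resources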
Downgrade on node 1 (localhost1):
Log in as root and move to a path where oracle has write permission.
root$ cd /export/home/oracle
/sharearea/crs/grid19c/crs/install/rootcrs.sh -downgrade
root@localhost1:/export/home/oracle# /sharearea/crs/grid19c/crs/install/rootcrs.sh -downgrade
Using configuration parameter file: /sharearea/crs/grid19c/crs/install/crsconfig_params
The log of current session can be found at:
/sharearea/orabase/crsdata/dbhost1/crsconfig/crsdowngrade_dbhost1_2019-09-16_10-01-30AM.log
2019/09/16 10:04:12 CLSRSC-4006: Removing Oracle Trace File Analyzer (TFA) Collector.
2019/09/16 10:05:16 CLSRSC-4007: Successfully removed Oracle Trace File Analyzer (TFA) Collector.
2019/09/16 10:05:18 CLSRSC-591: successfully downgraded Oracle Clusterware stack on this node
root@dbhost1:/export/home/oracle# 2019/09/16 10:05:55 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
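After CLSRSC-591 is reported, the Clusterware stack should be down on node 1. An optional quick check before moving on to node 2 (it should report that Oracle High Availability Services is not running; exact messages may differ):

root@localhost1:/export/home/oracle# /sharearea/crs/grid19c/bin/crsctl check crs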
Downgrade on node 2 (localhost2):
Log in as root and move to a path where oracle has write permission.
root$ cd /export/home/oracle
/sharearea/crs/grid19c/crs/install/rootcrs.sh -downgrade
root@localhost2:# /sharearea/crs/grid19c/crs/install/rootcrs.sh -downgrade
Using configuration parameter file: /sharearea/crs/grid19c/crs/install/crsconfig_params
The log of current session can be found at:
/sharearea/orabase/crsdata/dbhost2/crsconfig/crsdowngrade_dbhost2_2019-09-16_10-10-47AM.log
2019-09-16 10:11:04.631 [1] gipcmodClsaBind: Clsa bind
2019-09-16 10:11:04.631 [1] gipcmodClsaBind: Clsa bind, endp 103ee91e0 [00000000000001e9] { gipcEndpoint : localAddr 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=)(GIPCID=00000000-00000000-0))', remoteAddr 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=OHASD_UI_SOCKET)(GIPCID=00000000-00000000-0))', numPend 0, numReady 0, numDone 0, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 0, readyRef 0, ready 0, wobj 103eeb0a0, sendp 103f9e820 status 13flags 0x21080710, flags-2 0x0, usrFlags 0x0 }
2019-09-16 10:11:04.631 [1] gipcmodClsaSetFast: IPC Clsa with fast clsa, endp 103ee91e0 [00000000000001e9] { gipcEndpoint : localAddr 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=)(GIPCID=00000000-00000000-0))', remoteAddr 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=OHASD_UI_SOCKET)(GIPCID=00000000-00000000-0))', numPend 0, numReady 0, numDone 0, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 0, readyRef 0, ready 0, wobj 103eeb0a0, sendp 103f9e820 status 13flags 0xa1080710, flags-2 0x0, usrFlags 0x0 }
2019-09-16 10:11:04.631 [1] gipcmodClsaCompleteRequest: [clsa] stared for req 103eea7b0 [00000000000001ee] { gipcConnectRequest : addr 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=OHASD_UI_SOCKET)(GIPCID=8a08364e-459c1999-52758))', parentEndp 103ee91e0, ret gipcretSuccess (0), objFlags 0x0, reqFlags 0x2 }
2019-09-16 10:11:04.631 [1] gipcmodClsaCompleteConnect: [clsa] completed connect on endp 103ee91e0 [00000000000001e9] { gipcEndpoint : localAddr 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=)(GIPCID=459c1999-8a08364e-31594))', remoteAddr 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=OHASD_UI_SOCKET)(GIPCID=8a08364e-459c1999-52758))', numPend 4, numReady 1, numDone 2, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 52758, readyRef 0, ready 0, wobj 103eeb0a0, sendp 103f9e820 status 13flags 0xa1082712, flags-2 0x100, usrFlags 0x0 }
2019-09-16 10:11:04.631 [1] gipcmodClsaCheckCompletion: username, state 4, endp 103ee91e0 [00000000000001e9] { gipcEndpoint : localAddr 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=)(GIPCID=459c1999-8a08364e-31594))', remoteAddr 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=OHASD_UI_SOCKET)(GIPCID=8a08364e-459c1999-52758))', numPend 4, numReady 0, numDone 3, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 52758, readyRef 0, ready 0, wobj 103eeb0a0, sendp 103f9e820 status 0flags 0xa1002716, flags-2 0x100, usrFlags 0x0 }
2019-09-16 10:11:04.631 [1] gipcmodClsaCheckCompletion: username CLSA, modendp 4, userData 103f2afd0
2019-09-16 10:11:04.632 [1] gipcmodClsaCompleteRequest: [clsa] stared for req 103d6f570 [00000000000001fe] { gipcSendRequest : addr '', data 103eea7b0, len 627, olen 627, parentEndp 103ee91e0, ret gipcretSuccess (0), objFlags 0x0, reqFlags 0x2 }
2019-09-16 10:11:04.654 [1] gipcmodClsaCompleteRequest: [clsa] stared for req 103d6f570 [0000000000000200] { gipcReceiveRequest : peerName 'clsc_ipc', data 103eea8f8, len 502, olen 502, off 0, parentEndp 103ee91e0, ret gipcretSuccess (0), objFlags 0x0, reqFlags 0x2 }
2019-09-16 10:11:04.655 [1] gipcmodClsaDisconnect: [clsa] disconnect issued on endp 103ee91e0 [00000000000001e9] { gipcEndpoint : localAddr 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=)(GIPCID=459c1999-8a08364e-31594))', remoteAddr 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=OHASD_UI_SOCKET)(GIPCID=8a08364e-459c1999-52758))', numPend 5, numReady 0, numDone 0, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 52758, readyRef 0, ready 0, wobj 103eeb0a0, sendp 103f9e820 status 0flags 0xa1002716, flags-2 0x100, usrFlags 0x0 }
2019-09-16 10:11:04.655 [1] gipcmodClsaDisconnect: [clsa] disconnect issued on endp 103ee91e0 [00000000000001e9] { gipcEndpoint : localAddr 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=)(GIPCID=459c1999-8a08364e-31594))', remoteAddr 'clsc://(ADDRESS=(PROTOCOL=ipc)(KEY=OHASD_UI_SOCKET)(GIPCID=8a08364e-459c1999-52758))', numPend 0, numReady 5, numDone 0, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 52758, readyRef 0, ready 0, wobj 103eeb0a0, sendp 103f9e820 status 0flags 0xa700271e, flags-2 0x100, usrFlags 0x0 }
CRS-4123: Oracle High Availability Services has been started.
CRS-2672: Attempting to start 'ora.evmd' on 'dbhost2'
CRS-2672: Attempting to start 'ora.mdnsd' on 'dbhost2'
CRS-2676: Start of 'ora.mdnsd' on 'dbhost2' succeeded
CRS-2676: Start of 'ora.evmd' on 'dbhost2' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'dbhost2'
CRS-2676: Start of 'ora.gpnpd' on 'dbhost2' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'dbhost2'
CRS-2672: Attempting to start 'ora.gipcd' on 'dbhost2'
CRS-2676: Start of 'ora.cssdmonitor' on 'dbhost2' succeeded
CRS-2676: Start of 'ora.gipcd' on 'dbhost2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'dbhost2'
CRS-2672: Attempting to start 'ora.diskmon' on 'dbhost2'
CRS-2676: Start of 'ora.diskmon' on 'dbhost2' succeeded
CRS-2676: Start of 'ora.cssd' on 'dbhost2' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'dbhost2'
CRS-2672: Attempting to start 'ora.ctssd' on 'dbhost2'
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'dbhost2'
CRS-2676: Start of 'ora.crf' on 'dbhost2' succeeded
CRS-2676: Start of 'ora.ctssd' on 'dbhost2' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'dbhost2' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'dbhost2'
CRS-2676: Start of 'ora.asm' on 'dbhost2' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'dbhost2'
CRS-2676: Start of 'ora.storage' on 'dbhost2' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'dbhost2'
CRS-2676: Start of 'ora.crsd' on 'dbhost2' succeeded
2019/09/16 10:26:12 CLSRSC-338: Successfully downgraded OCR to version 12.1.0.2.0
CRS-2672: Attempting to start 'ora.crf' on 'dbhost2'
CRS-2676: Start of 'ora.crf' on 'dbhost2' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'dbhost2'
CRS-2676: Start of 'ora.crsd' on 'dbhost2' succeeded
2019/09/16 10:27:09 CLSRSC-4006: Removing Oracle Trace File Analyzer (TFA) Collector.
2019/09/16 10:27:52 CLSRSC-4007: Successfully removed Oracle Trace File Analyzer (TFA) Collector.
2019/09/16 10:27:54 CLSRSC-591: successfully downgraded Oracle Clusterware stack on this node
2019/09/16 10:27:55 CLSRSC-640: To complete the downgrade operation, ensure that the node inventory on all nodes points to the configured Grid Infrastructure home '/crs/app/oracle/product/grid12c'.
2019/09/16 10:27:56 CLSRSC-592: Run 'crsctl start crs' from home /crs/app/oracle/product/grid12c on each node to complete downgrade.
root@dbhost2:/export/home/oracle# 2019/09/16 10:28:38 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
4. Remove the 19c grid_home from active cluster inventory (only from one node):
Run this as the oracle user from the 19c GRID HOME. It needs to be run from only one node.
cd /sharearea/crs/grid19c/oui/bin
oracle$ ./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList -silent CRS=false ORACLE_HOME=/sharearea/crs/grid19c "CLUSTER_NODES=dbhost1,dbhost2" -doNotUpdateNodeList
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB.   Actual 470964 MB    Passed
The inventory pointer is located at /var/opt/oracle/oraInst.loc
You can find the log of this install session at:
/crs/app/oraInventory/logs/UpdateNodeList2019-09-16_10-33-44AM.log
'UpdateNodeList' was successful.
5. Update the active cluster inventory with the ORACLE 12C grid home (only from one node):
Run this as the oracle user from the 12c GRID HOME. It needs to be run from only one node.
cd /crs/app/oracle/product/grid12c/oui/bin
oracle$ ./runInstaller -nowait -waitforcompletion -ignoreSysPrereqs -updateNodeList -silent CRS=true ORACLE_HOME=/crs/app/oracle/product/grid12c "CLUSTER_NODES=dbhost1,dbhost2"
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB.   Actual 470673 MB    Passed
The inventory pointer is located at /var/opt/oracle/oraInst.loc
'UpdateNodeList' was successful.
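To confirm the inventory now flags the 12c home (and no longer the 19c home) as the CRS home, you can inspect inventory.xml. The oraInventory location below is inferred from the log path above, so adjust it if yours differs:

oracle$ grep -i "CRS=" /crs/app/oraInventory/ContentsXML/inventory.xml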
6. START THE CRS FROM ORACLE 12C GRID HOME
--- node 1 :
root@localhost1:/crs/app/oracle/product/grid12c/bin# ./crsctl start crs
CRS-4123: Oracle High Availability Services has been started.

--- node 2 :
root@localhost2:/crs/app/oracle/product/grid12c/bin# ./crsctl start crs
CRS-4123: Oracle High Availability Services has been started.
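Once CRS is started from the 12c home on both nodes, it is worth confirming that the stack and the cluster resources come up cleanly. These are standard checks; the output depends on your configuration:

root@localhost1:/crs/app/oracle/product/grid12c/bin# ./crsctl check cluster -all
root@localhost1:/crs/app/oracle/product/grid12c/bin# ./crsctl stat res -t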
7. Remove the MGMTDB service from the cluster:
REMOVE MGMT SERVICE:
oracle@localhost1:~$ srvctl remove mgmtdb
Remove the database _mgmtdb? (y/[n]) y
8. Check the CRS active version on both nodes:
oracle@localhost1:~$ crsctl query crs softwareversion
Oracle Clusterware version on node [dbhost1] is [12.1.0.2.0]

oracle@localhost1:~$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [12.1.0.2.0]
9. Create the MGMTDB CONTAINER DB:
Here the MGMTDB will be created inside the +MGMT diskgroup. Make sure the +MGMT diskgroup is mounted.
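A quick way to confirm the diskgroup is mounted before running DBCA is to query it from ASM as the grid owner; this is just an illustrative check for this environment:

oracle@localhost1:~$ asmcmd lsdg MGMT        # the diskgroup should show state MOUNTED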
/crs/app/oracle/product/grid12c/bin/dbca -silent -createDatabase -createAsContainerDatabase true -templateName MGMTSeed_Database.dbc -sid -MGMTDB -gdbName _mgmtdb -storageType ASM -diskGroupName +MGMT -datafileJarLocation /crs/app/oracle/product/grid12c/assistants/dbca/templates -characterset AL32UTF8 -autoGeneratePasswords -skipUserTemplateCheck
Registering database with Oracle Grid Infrastructure
5% complete
Copying database files
7% complete
9% complete
16% complete
23% complete
30% complete
41% complete
Creating and starting Oracle instance
43% complete
48% complete
49% complete
50% complete
55% complete
60% complete
61% complete
64% complete
Completing Database Creation
68% complete
79% complete
89% complete
100% complete
Look at the log file "/crs/app/grid/cfgtoollogs/dbca/_mgmtdb/_mgmtdb1.log" for further details.
10. Create the MGMTDB PDB:
/crs/app/oracle/product/grid12c/bin/dbca -silent -createPluggableDatabase -sourceDB -MGMTDB -pdbName cluster_name -createPDBFrom RMANBACKUP -PDBBackUpfile /crs/app/oracle/product/grid12c/assistants/dbca/templates/mgmtseed_pdb.dfb -PDBMetadataFile /crs/app/oracle/product/grid12c/assistants/dbca/templates/mgmtseed_pdb.xml -createAsClone true -internalSkipGIHomeCheck
Creating Pluggable Database
4% complete
12% complete
21% complete
38% complete
55% complete
85% complete
Completing Pluggable Database Creation
100% complete
Look at the log file "/crs/app/grid/cfgtoollogs/dbca/_mgmtdb/cluster_name/_mgmtdb.log" for further details.

oracle@localhost1:...app/oracle/product/grid12c/bin$ srvctl status mgmtdb
Database is enabled
Instance -MGMTDB is running on node localhost1
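Finally, you can confirm the management repository is registered correctly and that Cluster Health Monitor is pointing at it again. These are standard checks; the exact output depends on your cluster name and configuration:

oracle@localhost1:~$ srvctl config mgmtdb
oracle@localhost1:~$ oclumon manage -get reppath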