In some situations, FS_CLONE fails on one node and you need to make it work on the other node(s).
Note the FS_CLONE parameter force=yes/no [default: no]:
- Use force=yes to restart a previously failed fs_clone command from the beginning.
- By default, fs_clone will resume from where it left off.
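For example, the two modes look like this (a minimal sketch; the other options used in the steps below still apply):
$ adop phase=fs_clone              <== resumes the failed fs_clone from where it stopped
$ adop phase=fs_clone force=yes    <== restarts fs_clone from the beginning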
$ adop -status
... ...
Node Name          Node Type  Phase     Status       Started              Finished             Elapsed
-----------------  ---------  --------  -----------  -------------------  -------------------  --------
node2Name          slave      FS_CLONE  FAILED       2022/04/26 08:10:47                       3:18:01
nodeName           master     FS_CLONE  COMPLETED    2022/04/26 08:10:47  2022/04/26 08:54:44  0:43:57
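To find the root cause on the failed node, review the fs_clone logs under $NE_BASE/EBSapps/log/adop on that node. The adopscanlog utility can scan the latest adop session's log directories for errors (a minimal sketch, run from the run edition environment on the failed node):
$ adopscanlog -latest=yes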
After the problem was fixed, one way is to simply run fs_clone again on the primary node; it will resume from where it failed.
$ adop phase=fs_clone
... ...
Checking if adop can continue with available nodes in the configuration.
Log: $NE_BASE/EBSapps/log/adop/.../fs_clone/node1Name
txkADOPEvalSrvStatus.pl returned SUCCESS
Skipping configuration validation on admin node: [nodeName]
Validating configuration on node(s): [node2Name].
Output: $NE_BASE/EBSapps/log/adop/.../fs_clone/validate/remote_execution_result_level2.xml
Log: $NE_BASE/EBSapps/log/adop/.../fs_clone/nodeName
txkADOPEvalSrvStatus.pl returned SUCCESS
Starting admin server on patch file system.
Running fs_clone on node(s): [node2Name].
Output: $NE_BASE/EBSapps/log/adop/.../fs_clone/remote_execution_result_level2.xml
Log: $NE_BASE/EBSapps/log/adop/.../fs_clone/nodeName
txkADOPEvalSrvStatus.pl returned SUCCESS
... ...
The other way is to run it only on the failed node. Here are the steps to run FS_CLONE separately on each node, using allnodes=no with action=db on the master node and action=nodb on the slave node(s).
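In outline, the sequence is (each step is shown in detail below):
- Primary node, run edition:    adop phase=fs_clone allnodes=no action=db
- Primary node, patch edition:  adadminsrvctl.sh start forcepatchfs
- 2nd node, run edition:        adop phase=fs_clone allnodes=no action=nodb force=yes
- Primary node, patch edition:  adadminsrvctl.sh stop (and stop the Node Manager)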
- Run fs_clone on the primary node
$ adop -status
Enter the APPS password:
Connected.
====================================================
ADOP (C.Delta.12)
Session Id: 12
Command: status
Output: $NE_BASE/EBSapps/log/adop/12/.../adzdshowstatus.out
====================================================
Node Name          Node Type  Phase     Status       Started              Finished             Elapsed
-----------------  ---------  --------  -----------  -------------------  -------------------  --------
nodeName           master     APPLY     ACTIVE       2022/03/07 09:09:36  2022/03/09 11:42:38  50:33:02
                              CLEANUP   NOT STARTED
node2Name          slave      APPLY     ACTIVE       2022/03/07 09:47:44  2022/03/09 11:47:34  49:59:50
                              CLEANUP   NOT STARTED
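adop must be run from the run edition environment; the same FILE_EDITION check shown later for the 2nd node applies here (a minimal sketch; the EBSapps.env location below is a placeholder for your installation's base directory):
$ . <EBS base directory>/EBSapps.env run     <== source the RUN edition environment
$ echo $FILE_EDITION
run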
$ adop phase=fs_clone allnodes=no action=db
... ...
Checking for pending cleanup actions.
No pending cleanup actions found.
Blocking managed server ports.
Log: $NE_BASE/EBSapps/log/adop/.../fs_clone/nodeName/txkCloneAcquirePort.log
Performing CLONE steps.
Log: $NE_BASE/EBSapps/log/adop/.../fs_clone/nodeName
Beginning application tier FSCloneStage - Thu May 18 13:54:06 2022
... ...
Log file located at $INST_TOP/admin/log/clone/FSCloneStageAppsTier_05181354.log
Completed FSCloneStage...
Thu May 18 14:01:38 2022
Beginning application tier FSCloneApply - Thu May 18 14:04:11 2022
... ...
Log file located at $INST_TOP/admin/log/clone/FSCloneApplyAppsTier_05181404.log
Target System Fusion Middleware Home set to $PATCH_BASE/FMW_Home
Target System Web Oracle Home set to $PATCH_BASE/FMW_Home/webtier
Target System Appl TOP set to $PATCH_BASE/EBSapps/appl
Target System COMMON TOP set to $PATCH_BASE/EBSapps/comn
Target System Instance Top set to $PATCH_BASE/inst/apps/$CONTEXT_NAME
Report file located at $PATCH_BASE/inst/apps/$CONTEXT_NAME/temp/portpool.lst
The new APPL_TOP context file has been created : $CONTEXT_FILE on /fs2/
contextfile=$CONTEXT_FILE on /fs2/
Completed FSCloneApply...
Thu May 18 14:28:29 2022
Resetting FARM name...
runDomainName: EBS_domain
patchDomainName: EBS_domain
targets_xml_loc: $PATCH_BASE/FMW_Home/user_projects/domains/EBS_domain/sysman/state/targets.xml
Patch domain is not updated, no need to reset FARM name.
Releasing managed server ports.
Log: $NE_BASE/EBSapps/log/adop/.../fs_clone/nodeName/txkCloneAcquirePort.log
Synchronizing snapshots.
Generating log report.
Output: $NE_BASE/EBSapps/log/adop/.../fs_clone/nodeName/adzdshowlog.out
The fs_clone phase completed successfully.
adop exiting with status = 0 (Success)
$ adop -status
... ...
Node Name          Node Type  Phase     Status       Started              Finished             Elapsed
-----------------  ---------  --------  -----------  -------------------  -------------------  --------
node2Name          slave      FS_CLONE  NOT STARTED  2022/05/18 13:24:35                       1:06:38
nodeName           master     FS_CLONE  COMPLETED    2022/05/18 13:24:35  2022/05/18 14:28:39  1:04:04
- On the primary node, confirm the WLS Admin Server is running on the PATCH file system. If it is not, start it, because FS_CLONE on the 2nd node needs it; otherwise it fails with the error:
[UNEXPECTED]The admin server for the patch file system is not running
Note: you cannot start/stop the WLS Admin Server on a non-primary node; attempting to do so gives the message:
adadminsrvctl.sh should be run only from the primary node nodeName
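In this example the Admin Server commands are run from the patch edition environment; if your shell is currently in the run edition, switch by re-sourcing the environment file (a sketch; the EBSapps.env location is a placeholder):
$ . <EBS base directory>/EBSapps.env patch     <== source the PATCH edition environment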
$ echo $FILE_EDITION
patch
$ cd $ADMIN_SCRIPTS_HOME
$ ./adadminsrvctl.sh start forcepatchfs
$ ./adadminsrvctl.sh status
... ...
The AdminServer is running
$ ./adnodemgrctl.sh status
... ...
The Node Manager is not up.
- Run fs_clone on the 2nd node (in most cases, use the "force=yes" option so it restarts from the beginning)
$ echo $FILE_EDITION
run
$ export CONFIG_JVM_ARGS="-Xms1024m -Xmx2048m"    (optional: gives the Java processes used during cloning a larger heap)
$ adop phase=fs_clone allnodes=no action=nodb force=yes
... ...
Releasing managed server ports.
Synchronizing snapshots.
Generating log report.
Output: $NE_BASE/EBSapps/log/adop/.../fs_clone/node2Name/adzdshowlog.out
The fs_clone phase completed successfully.
adop exiting with status = 0 (Success)
$ adop -status
... ...
Node Name          Node Type  Phase     Status       Started              Finished             Elapsed
-----------------  ---------  --------  -----------  -------------------  -------------------  --------
node2Name          slave      FS_CLONE  COMPLETED    2022/05/18 13:24:35  2022/05/18 15:02:02  1:37:27
nodeName           master     FS_CLONE  COMPLETED    2022/05/18 13:24:35  2022/05/18 14:28:39  1:04:04
- On the primary node, stop the WLS Admin Server and the Node Manager on the patch file system
$ echo $FILE_EDITION
patch
$ ./adadminsrvctl.sh stop
$ ./adnodemgrctl.sh stop       <== stop the Node Manager
$ ps -ef | grep fs2            <== check that no patch file system processes remain, if PATCH is on directory fs2
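As a final check on the primary node, verify that nothing from the patch file system is still running (assuming, as above, that the PATCH file system is on fs2):
$ ps -ef | grep fs2 | grep -v grep     <== expect no output once the Admin Server and Node Manager are down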