Saturday, June 18, 2022

FS_CLONE failed with directory error

I ran "adop phase=fs_clone" and it failed on the 2nd node of a two-node instance. The error is

$ adopscanlog -latest=yes
... ...
$NE_BASE/EBSapps/log/adop/.../fs_clone/node2Name/TXK_SYNC_create/txkADOPPreparePhaseSynchronize.log:
-------------------------------------------------------------------------------------------------------------
Lines #(345-347):
... ...
FUNCTION: main::removeDirectory [ Level 1 ]
ERRORMSG: Failed to delete the directory $PATCH_BASE/EBSapps/comn.

When that happened, some folders had already been deleted from the PATCH file system by FS_CLONE, which is kind of scary. The only way to get past it is to address the root cause and then re-run FS_CLONE.

The error matches the description in Doc ID 2690029.1 (ADOP: Fs_clone fails with error Failed to delete the directory). The root cause is that files under the directory are owned by OS users other than applMgr — typically because developers copied files in under their own accounts, or concurrent jobs wrote logs to folders under CUSTOM TOPs (where this happens most often). As a result, applMgr has no permission to remove them. The fix is to ask the OS system administrator to find those files and change their owner to applMgr, or to delete them while logged in as the file owner.

$ cd $PATCH_BASE/EBSapps/comn
$ find . ! -user applMgr                    (then log in as the file owner to delete them)
$ ls -lR | grep -v applMgr | more           (optional: to see the detailed list)
$ find . -user wrong_userID -exec chown applMgr:userGroup {} \;

After the fix at the OS level, I tried to run "adop phase=fs_clone allnodes=no force=yes" on the 2nd node directly and got this error:
[UNEXPECTED]The admin server for the patch file system is not running.        
Start the patch file system admin server from the admin node and then rerun fs_clone.

There are two options to make it work. One is to run "adop phase=fs_clone force=yes" on the primary node; adop seems to understand that fs_clone already succeeded on the 1st node and quickly progresses to running it on the 2nd node. The other is to start the WLS Admin Server on the primary node and then run "adop phase=fs_clone allnodes=no force=yes" on the 2nd node.
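
In command form, the two options look like this (a minimal sketch; starting the patch file system Admin Server is covered step by step in the June 5 post below):

Option 1, on the primary node:
$ adop phase=fs_clone force=yes

Option 2, start the patch file system Admin Server on the primary node first, then run fs_clone on the 2nd node:
$ $ADMIN_SCRIPTS_HOME/adadminsrvctl.sh start forcepatchfs      (on the primary node, from the patch edition)
$ adop phase=fs_clone allnodes=no force=yes                    (on the 2nd node, from the run edition)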

FS_CLONE normal log:

$ adop phase=fs_clone
... ...
Running fs_clone on admin node: [node1Name].
    Output: $NE_BASE/EBSapps/log/adop/.../fs_clone/remote_execution_result_level1.xml
    Log: $NE_BASE/EBSapps/log/adop/.../fs_clone/node1Name
        txkADOPEvalSrvStatus.pl returned SUCCESS

Running fs_clone on node(s): [node2Name].
    Output: $NE_BASE/EBSapps/log/adop/.../fs_clone/remote_execution_result_level2.xml
    Log: $NE_BASE/EBSapps/log/adop/.../fs_clone/node2Name
        txkADOPEvalSrvStatus.pl returned SUCCESS

Stopping services on patch file system.

    Stopping admin server.
You are running adadminsrvctl.sh version 120.10.12020000.11
Stopping WLS Admin Server...
Refer $PATCH_BASE/inst/apps/$CONTEXT_NAME/logs/appl/admin/log/adadminsrvctl.txt for details
AdminServer logs are located at $PATCH_BASE/FMW_Home/user_projects/domains/EBS_domain/servers/AdminServer/logs
adadminsrvctl.sh: exiting with status 0
adadminsrvctl.sh: check the logfile $PATCH_BASE/inst/apps/$CONTEXT_NAME/logs/appl/admin/log/adadminsrvctl.txt for more information ...

    Stopping node manager.
You are running adnodemgrctl.sh version 120.11.12020000.12
The Node Manager is already shutdown
NodeManager log is located at $PATCH_BASE/FMW_Home/wlserver_10.3/common/nodemanager/nmHome1
adnodemgrctl.sh: exiting with status 2
adnodemgrctl.sh: check the logfile $PATCH_BASE/inst/apps/$CONTEXT_NAME/logs/appl/admin/log/adnodemgrctl.txt for more information ...

Summary report for current adop session:
    Node node1Name:
       - Fs_clone status:   Completed successfully
    Node node2Name:
       - Fs_clone status:   Completed successfully
    For more details, run the command: adop -status -detail
adop exiting with status = 0 (Success)

NOTES:
In one instance, FS_CLONE took 8 hours on the first node, during which there were no log entries or updates. It just stayed frozen for hours!

Sunday, June 5, 2022

Run FS_CLONE on each node separately

In some situations, FS_CLONE fails on one node and you have to make it work on the other node(s).

Note on the FS_CLONE parameter: force=yes/no [default: no]
       - Use force=yes to restart a previously failed fs_clone command from the beginning.
       - By default, fs_clone will restart from where it left off.
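
For example:

$ adop phase=fs_clone              (resumes from where the failed run left off)
$ adop phase=fs_clone force=yes    (restarts fs_clone from the beginning)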

$ adop -status
... ...
Node Name    Node Type  Phase     Status     Started              Finished             Elapsed
-----------  ---------  --------  ---------  -------------------  -------------------  --------
node2Name    slave      FS_CLONE  FAILED     2022/04/26 08:10:47                       3:18:01
primaryName  master     FS_CLONE  COMPLETED  2022/04/26 08:10:47  2022/04/26 08:54:44  0:43:57

After the problem was fixed, one way is to simply run it again on the primary node; fs_clone will restart from where it failed.

$ adop phase=fs_clone
... ...
Checking if adop can continue with available nodes in the configuration.
    Log: $NE_BASE/EBSapps/log/adop/.../fs_clone/node1Name
        txkADOPEvalSrvStatus.pl returned SUCCESS
Skipping configuration validation on admin node: [primaryNode]

Validating configuration on node(s): [node2Name].
    Output: $NE_BASE/EBSapps/log/adop/.../fs_clone/validate/remote_execution_result_level2.xml
    Log: $NE_BASE/EBSapps/log/adop/.../fs_clone/primaryName
        txkADOPEvalSrvStatus.pl returned SUCCESS

Starting admin server on patch file system.

Running fs_clone on node(s): [node2Name].
    Output: $NE_BASE/EBSapps/log/adop/.../fs_clone/remote_execution_result_level2.xml
    Log: $NE_BASE/EBSapps/log/adop/.../fs_clone/primaryName
        txkADOPEvalSrvStatus.pl returned SUCCESS
... ...

The other way is to run it only on the failed node. Here are the steps to run FS_CLONE on each node separately.

1. Run fs_clone on the primary node

$ adop -status

Enter the APPS password:
Connected.
====================================================
ADOP (C.Delta.12)
Session Id: 12
Command: status
Output: $NE_BASE/EBSapps/log/adop/12/.../adzdshowstatus.out
====================================================
Node Name    Node Type  Phase    Status       Started              Finished             Elapsed
-----------  ---------  -------  -----------  -------------------  -------------------  --------
primaryName  master     APPLY    ACTIVE       2022/03/07 09:09:36  2022/03/09 11:42:38  50:33:02
                        CLEANUP  NOT STARTED
node2Name    slave      APPLY    ACTIVE       2022/03/07 09:47:44  2022/03/09 11:47:34  49:59:50
                        CLEANUP  NOT STARTED

$ adop phase=fs_clone allnodes=no action=db
... ...
Checking for pending cleanup actions.
    No pending cleanup actions found.

Blocking managed server ports.
    Log: $NE_BASE/EBSapps/log/adop/.../fs_clone/primaryName/txkCloneAcquirePort.log

Performing CLONE steps.
    Log: $NE_BASE/EBSapps/log/adop/.../fs_clone/primaryName

Beginning application tier FSCloneStage - Thu May 18 13:54:06 2022
... ...
Log file located at $INST_TOP/admin/log/clone/FSCloneStageAppsTier_05181354.log
Completed FSCloneStage...
Thu May 18 14:01:38 2022

Beginning application tier FSCloneApply - Thu May 18 14:04:11 2022
... ...
Log file located at $INST_TOP/admin/log/clone/FSCloneApplyAppsTier_05181404.log
Target System Fusion Middleware Home set to $PATCH_BASE/FMW_Home
Target System Web Oracle Home set to $PATCH_BASE/FMW_Home/webtier
Target System Appl TOP set to $PATCH_BASE/EBSapps/appl
Target System COMMON TOP set to $PATCH_BASE/EBSapps/comn

Target System Instance Top set to $PATCH_BASE/inst/apps/$CONTEXT_NAME
Report file located at $PATCH_BASE/inst/apps/$CONTEXT_NAME/temp/portpool.lst
The new APPL_TOP context file has been created : $CONTEXT_FILE on /fs2/
contextfile=$CONTEXT_FILE on /fs2/
Completed FSCloneApply...
Thu May 18 14:28:29 2022

Resetting FARM name...
runDomainName: EBS_domain
patchDomainName: EBS_domain
targets_xml_loc: $PATCH_BASE/FMW_Home/user_projects/domains/EBS_domain/sysman/state/targets.xml
Patch domain is not updated, no need to reset FARM name.

Releasing managed server ports.
    Log: $NE_BASE/EBSapps/log/adop/.../fs_clone/primaryName/txkCloneAcquirePort.log

Synchronizing snapshots.

Generating log report.
    Output: $NE_BASE/EBSapps/log/adop/.../fs_clone/primaryName/adzdshowlog.out

The fs_clone phase completed successfully.
adop exiting with status = 0 (Success)

$ adop -status
... ...
Node Name    Node Type  Phase     Status       Started              Finished             Elapsed
-----------  ---------  --------  -----------  -------------------  -------------------  --------
node2Name    slave      FS_CLONE  NOT STARTED  2022/05/18 13:24:35                       1:66:38
primaryName  master     FS_CLONE  COMPLETED    2022/05/18 13:24:35  2022/05/18 14:28:39  1:64:04

2. On the primary node, confirm the WLS Admin Server is running on the PATCH file system and its WebLogic Console page is accessible from a browser. If it is not, start it, because FS_CLONE on the 2nd node needs it and otherwise fails with:
[UNEXPECTED] The admin server for the patch file system is not running
Note: you cannot start/stop the WLS Admin Server on a non-primary node; attempting that gives the message:
adadminsrvctl.sh should be run only from the primary node primaryName

$ echo $FILE_EDITION
patch
$ cd $ADMIN_SCRIPTS_HOME

$ ./adadminsrvctl.sh start forcepatchfs
$ ./adadminsrvctl.sh status
... ...
 The AdminServer is running

$ ./adnodemgrctl.sh status
... ...
The Node Manager is not up.

$ grep s_wls_adminport $CONTEXT_FILE
<wls_adminport oa_var="s_wls_adminport" oa_type="PORT" base="7001" step="1" range="-1" label="WLS Admin Server Port">7032</wls_adminport>

Test WebLogic page from a browser: primaryNode.domain.com:7032/console
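
Optionally, the same check can be done from the shell (a sketch, assuming curl is available; a 200, or a 3xx redirect to the login page, means the Admin Server is responding):

$ curl -sk -o /dev/null -w "%{http_code}\n" http://primaryNode.domain.com:7032/console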

3. Run fs_clone on the 2nd node (use the "force=yes" option in most cases to start from the beginning)

$ echo $FILE_EDITION
run
$ export CONFIG_JVM_ARGS="-Xms1024m -Xmx2048m"   (optional)
$ adop phase=fs_clone allnodes=no action=nodb force=yes
... ...
Releasing managed server ports.
Synchronizing snapshots.
Generating log report.
    Output: $NE_BASE/EBSapps/log/adop/.../fs_clone/node2Name/adzdshowlog.out

The fs_clone phase completed successfully.
adop exiting with status = 0 (Success)

$ adop -status
... ...
Node Name    Node Type  Phase     Status     Started              Finished             Elapsed
-----------  ---------  --------  ---------  -------------------  -------------------  --------
node2Name    slave      FS_CLONE  COMPLETED  2022/05/18 13:24:35  2022/05/18 15:02:02  1:97:27
primaryName  master     FS_CLONE  COMPLETED  2022/05/18 13:24:35  2022/05/18 14:28:39  1:64:04

4. On the primary node, stop the WLS Admin Server

$ echo $FILE_EDITION
patch
$ ./adadminsrvctl.sh stop
$ ./adnodemgrctl.sh stop           (stop the Node Manager if it is running)

$ ps -ef | grep fs2        <== check for leftover processes, if PATCH is on directory fs2

Thursday, May 19, 2022

Apache (OHS) in R12.2 failed to stop and refused to start

After Linux server patching and a reboot, alerts went out that the httpd process for OHS (Oracle HTTP Server) in a production instance was not running on the server. I tried to stop/start it without luck. Strangely, its status was tied to a PID owned by root (or to a PID that does not exist).

$ ps -ef | grep httpd         <== No httpd running

$ adapcctl.sh start
$ adopmnctl.sh status
Processes in Instance: EBS_web_EBSPROD_OHS1
--------------------------------+--------------------+---------+---------
ias-component                   | process-type       |     pid | status
--------------------------------+--------------------+---------+---------
EBS_web_EBSPROD                 | OHS                |     919 | Stop

$ ps -ef | grep 919 (or, process 919 does not exist)
root       919     2  0 09:32 ?        00:00:00 [xxxxxx]

$ iName=$(tr < $CONTEXT_FILE '<>' '  ' | awk '/"s_ohs_instance"/ {print $(NF-1)}')     (OHS instance name from the context file)
$ SUBiName=${iName%?????}              (strips the last 5 characters, e.g. "_OHS1")
$ cd $FMW_HOME/webtier/instances/$iName/diagnostics/logs/OHS/$SUBiName
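
With the instance name shown in the opmnctl output above, those variables resolve to:

$ echo $iName
EBS_web_EBSPROD_OHS1
$ echo $SUBiName
EBS_web_EBSPROD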

The log file shows many lines of message:
--------
22/0X/05 02:47:03 Stop process
--------
$FMW_HOME/webtier/ohs/bin/apachectl stop: httpd (no pid file) not running
--------
22/0X/05 02:48:03 Stop process
--------
$FMW_HOME/webtier/ohs/bin/apachectl hardstop: httpd (no pid file) not running

The httpd.pid file should reside in this log folder; its location is defined in httpd.conf under $FMW_HOME/webtier/instances/$iName/config/OHS/$SUBiName (or $IAS_ORACLE_HOME/instances/$iName/config/OHS/$SUBiName) in R12.2. I believe the problem was that httpd.pid was removed BEFORE "adapcctl.sh stop" fully completed, perhaps due to a Linux server crash or power-off. Normally, "adapcctl.sh stop" checks the file and then removes it. Because the file was already gone, adapcctl.sh failed its status check and refused to start Apache.
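
A quick way to confirm the mismatch is to compare the pid file against what is actually running (a sketch; paths as defined above):

$ cd $FMW_HOME/webtier/instances/$iName/diagnostics/logs/OHS/$SUBiName
$ ls -l httpd.pid              (missing here, which matches the "no pid file" messages)
$ ps -ef | grep httpd          (no httpd process running either)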

Additionally, opmn logs can be found in $FMW_HOME/webtier/instances/$iName/diagnostics/logs/OPMN/opmn

The workaround:

1. Stop/kill all opmn processes  (keeping WLS-related processes running is fine)
$ sh $ADMIN_SCRIPTS_HOME/adopmnctl.sh stop
$ ps -ef | grep opmn

2. Create an empty file
$ cd $FMW_HOME/webtier/instances/$iName/diagnostics/logs/OHS/$SUBiName
$ touch httpd.pid

3. Clear the OPMN states folder (important step)
$ cd $FMW_HOME/webtier/instances/$iName/config/OPMN/opmn
$ ls -al states
-rw-r----- 1 user group 19 Jun 21 18:57 .opmndat
-rw-r----- 1 user group 579 Jun 21 18:54 p1878855085
$ mv states states_BK
$ mkdir states
$ ls -al states

4. Now, starting Apache should work
$ ./adapcctl.sh start
$ ./adopmnctl.sh status
$ ps -ef | grep httpd | wc -l
4                  <== 3 httpd.worker processes running, plus the grep itself

5. Make sure everything works
$ ./adstpall.sh apps/appsPWD
$ ./adstrtal.sh apps/appsPWD
$ ./adopmnctl.sh status

When Apache (OHS) starts up, it writes the process ID (PID) of the parent httpd process to the httpd.pid file. While Apache is running, httpd.pid should exist and not be empty.
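
To verify after a successful startup (a sketch; the PID shown is hypothetical):

$ cat httpd.pid
12345
$ ps -p 12345 -o pid,user,cmd          (should show the parent httpd.worker process)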

Wednesday, May 4, 2022

FRM-40735 on some custom Forms & Forms trace

Some users (but not all) could not open EBS forms due to a pop-up error:
FRM-40735: ON-ERROR trigger raised unhandled exception ORA-06508.

Before that happened, we got alerts that the disk space holding Forms temp files (defined by forms_tmpdir) was briefly full, while "deleted" files were still held open by processes and kept consuming space.
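
A common way to spot such "deleted" files that still hold disk space (assuming lsof is installed):

$ lsof +L1             (lists open files with link count 0: deleted, but still held open by a process)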

Logs from the Forms processes are at $FMW_HOME/user_projects/domains/EBS_domain_${TWO_TASK}/servers/forms_server1/logs. But an individual form's error may not be written there. The only way to get a lead on the root cause is to turn on FRD trace. See Oracle Doc ID 2796573.1 (EBS: FRD Trace in EBS 12.2).

1. Log into EBS > Profile > System
 Select "User" > enter the username (who is going to reproduce the issue)
 Profile: ICX: Forms Launcher > click Find
 Set the profile for the user to https://hostname:[port_number]/forms/frmservlet?record=collect
 Log out of EBS

2. Go to Control Panel > Java > Advanced
Enable logging and show the console

Now log in to EBS with that username (who is going to reproduce the issue)
(in the Java Console, search for "record=collect" to confirm it is being used)
Open forms > reproduce the issue
 
3. Log into the EBS server as the OS owner of the application (PuTTY session) and get the file from the trace path:
 
$ echo $FORMS_TRACE_DIR
$ cd $FORMS_TRACE_DIR
$ ls -lrt *collect*

In my case, I saw these messages in one of the "collect" files:
Opened file: $CUSTOM_TOP/12.0.0/forms/US/XXARXTWM.fmx
Error Message: FRM-40039: Cannot attach library ARXCOQIT while opening form ARXTWMAI.

ON-ERROR Trigger Fired:
Form: ARXTWMAI

State Delta:
ARXTWMAI, 21, Trigger, Entry, 758240456, ON-ERROR

ARXTWMAI, 22, Prog Unit, Entry, 758567456, /ARXTWMAI-6/P120_26_SEP_202102_21_28

Unhandled Exception ORA-06508
State Delta:

Error Message: FRM-40735: ON-ERROR trigger raised unhandled exception ORA-06508.
ARXTWMAI, 22, Trigger, Exit, 765768456, ON-ERROR

# 16 - ARXTWMAI:<null>.<null>.1648228229694425000

Nothing showed the true problem. But when I looked around the folders, I saw that file $AU_TOP/resource/ARXCWWIN.plx had a new timestamp and 0 bytes. After I copied the same file over from another node to replace it, all the Forms errors went away.
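
The repair amounted to something like this (a sketch; it assumes the same $AU_TOP path on both nodes and backs up the bad file first):

$ cd $AU_TOP/resource
$ ls -l ARXCWWIN.plx                               (0 bytes, recent timestamp)
$ mv ARXCWWIN.plx ARXCWWIN.plx.bad
$ scp node2Name:$AU_TOP/resource/ARXCWWIN.plx .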

The really strange thing is how that file became 0 bytes! One possibility is that when the folder holding the Forms temp files was full and the Linux admin removed some deleted-but-open temp files by file ID, something unexpected happened.

Friday, April 29, 2022

Rebuild R12.2 Central Inventory

- The problem: adop failures on a server that hosts multiple EBS instances

ADOP reported unexpected errors on a UAT instance. "adop -validate" also failed when running script $AD_TOP/patch/115/bin/txkADOPEvalSrvStatus.pl:
    AdopValidate failed or are incomplete on node(s): Node2name
    [UNEXPECTED]Unable to continue processing on other available nodes: Node1name
    [UNEXPECTED]Error running "AdopValidate" on node(s): Node2name.
    [UNEXPECTED]Remote action failed.

It seemed the problem was on the 2nd node/server Node2name. Logs on Node2name gave more details on the error:

$ cd $NE_BASE/EBSapps/log/adop/4/20220425_174046/validate/Node2name
$ egrep -i 'erro|fail' *.*
AdopValidate.log:    [PROCEDURE] [START 2022/04/25 17:42:49] Validating failed hotpatch cycle
AdopValidate.log:    [PROCEDURE] [END   2022/04/25 17:42:49] Validating failed hotpatch cycle
AdopValidate.log:    [UNEXPECTED]Error calling TXK validations procedure
ValidationResults.log: Validate failed hotpatch cycle
ValidationResults.log:SUCCESS: No failed hotpatch cycle present for this Node2name node.
ValidationResults.log:  ERROR: The value of inventory_loc in $FMW_HOME/oracle_common/oraInst.loc is not the same as /u03/app/oraQA2Inventory present in other files
ValidationResults.log:  ERROR: The value of inventory_loc in $FMW_HOME/webtier/oraInst.loc is not the same as /u03/app/oraQA2Inventory present in other files
ValidationResults.log:  ERROR: The value of inventory_loc in $FMW_HOME/Oracle_EBS-app1/oraInst.loc is not the same as /u03/app/oraQA2Inventory present in other files

It indicates confusion about the Central Inventory location. After checking the pointer file /etc/oraInst.loc, I realized that when Node2name was added to the instance by cloning, a new Central Inventory was not created for it, and cloning reused the inventory for QA2, which is a different instance on the same host.
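
A quick way to compare the pointer files named in the validation errors (a sketch; only the /etc/oraInst.loc value is shown here, as found on this host):

$ grep inventory_loc /etc/oraInst.loc $FMW_HOME/oracle_common/oraInst.loc $FMW_HOME/webtier/oraInst.loc $FMW_HOME/Oracle_EBS-app1/oraInst.loc
/etc/oraInst.loc:inventory_loc=/u03/app/oraQA2Inventory
... ...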

"opatch apply" also reported a similar error if file /etc/oraInst.loc does not point its Central Inventory location to the one for QA2. 

The Oracle Home /u02/app/EBSUAT/fs1/EBSapps/10.1.2 is not registered with the Central Inventory.  OPatch was not able to get details of the home from the inventory.
ERROR: OPatch failed because of Inventory problem.

I saw that file /u03/app/oraQA2Inventory/ContentsXML/inventory.xml had many entries for the UAT instance. That confirmed the inventory of ORACLE_HOMEs was messed up.

- Rebuild the Central Inventory for the ORACLE_HOMEs

The only permanent fix is to follow Doc ID 1586607.1 (R12.2 How To Re-attach Oracle Homes To The Central Inventory) to rebuild its Central Inventory with the steps below:

$ $FMW_HOME/oracle_common/oui/bin/runInstaller -silent -attachHome -invPtrLoc /etc/oraInst.loc ORACLE_HOME="/u02/app/EBSUAT/fs1/FMW_Home/oracle_common" ORACLE_HOME_NAME="fs1_FMW_common" CLUSTER_NODES="{}"

Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB.   Actual 14566 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2022-04-26_12-58-12PM. Please wait ...
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u02/app/oraEBSUATinventory
'AttachHome' was successful.

$ $FMW_HOME/oracle_common/oui/bin/runInstaller -silent -attachHome -invPtrLoc /etc/oraInst.loc ORACLE_HOME="/u02/app/EBSUAT/fs2/FMW_Home/oracle_common" ORACLE_HOME_NAME="fs2_FMW_common" CLUSTER_NODES="{}"

$ $FMW_HOME/oracle_common/oui/bin/runInstaller -silent -attachHome -invPtrLoc /etc/oraInst.loc ORACLE_HOME="/u02/app/EBSUAT/fs1/FMW_Home/webtier" ORACLE_HOME_NAME="fs1_FMW_webtier" CLUSTER_NODES="{}"

$ $FMW_HOME/oracle_common/oui/bin/runInstaller -silent -attachHome -invPtrLoc /etc/oraInst.loc ORACLE_HOME="/u02/app/EBSUAT/fs2/FMW_Home/webtier" ORACLE_HOME_NAME="fs2_FMW_webtier" CLUSTER_NODES="{}"

$ $FMW_HOME/oracle_common/oui/bin/runInstaller -silent -attachHome -invPtrLoc /etc/oraInst.loc ORACLE_HOME="/u02/app/EBSUAT/fs1/FMW_Home/Oracle_EBS-app1" ORACLE_HOME_NAME="fs1_FMW_app1" CLUSTER_NODES="{}"

$ $FMW_HOME/oracle_common/oui/bin/runInstaller -silent -attachHome -invPtrLoc /etc/oraInst.loc ORACLE_HOME="/u02/app/EBSUAT/fs2/FMW_Home/Oracle_EBS-app1" ORACLE_HOME_NAME="fs2_FMW_app1" CLUSTER_NODES="{}"

$ $RUN_BASE/EBSapps/10.1.2/oui/bin/runInstaller -silent -attachHome -invPtrLoc /etc/oraInst.loc ORACLE_HOME="/u02/app/EBSUAT/fs1/EBSapps/10.1.2" ORACLE_HOME_NAME="fs1_tool_EBSapps_10_1_2" CLUSTER_NODES="{}"

Starting Oracle Universal Installer...
No pre-requisite checks found in oraparam.ini, no system pre-requisite checks will be executed.
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2022-04-26_01-13-13PM. Please wait ...

Java HotSpot(TM) Server VM warning: You have loaded library /tmp/OraInstall2022-04-26_01-13-13PM/oui/lib/linux/liboraInstaller.so which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
'AttachHome' was successful.

$ $PATCH_BASE/EBSapps/10.1.2/oui/bin/runInstaller -silent -attachHome -invPtrLoc /etc/oraInst.loc ORACLE_HOME="/u02/app/EBSUAT/fs2/EBSapps/10.1.2" ORACLE_HOME_NAME="fs2_tool_EBSapps_10_1_2" CLUSTER_NODES="{}"

$ cd /u02/app/oraEBSUATinventory/ContentsXML
$ more inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 2009 Oracle Corporation. All rights Reserved -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
<VERSION_INFO>
   <SAVED_WITH>10.1.0.6.0</SAVED_WITH>
   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="EBSUAT_WEBOH__u02_app_EBSUAT_apps_tech_st_10_1_3" LOC="/u02/app/EBSUAT/apps/tech_st/10.1.3" TYPE="O" IDX="1"/>
<HOME NAME="fs1_FMW_common" LOC="/u02/app/EBSUAT/fs1/FMW_Home/oracle_common" TYPE="O" IDX="2"/>
<HOME NAME="fs1_FMW_webtier" LOC="/u02/app/EBSUAT/fs1/FMW_Home/webtier" TYPE="O" IDX="3"/>
<HOME NAME="fs1_FMW_app1" LOC="/u02/app/EBSUAT/fs1/FMW_Home/Oracle_EBS-app1" TYPE="O" IDX="4"/>
<HOME NAME="fs1_tool_EBSapps_10_1_2" LOC="/u02/app/EBSUAT/fs1/EBSapps/10.1.2" TYPE="O" IDX="5">
   <NODE_LIST>
      <NODE NAME="{}"/>
   </NODE_LIST>
</HOME>
<HOME NAME="fs2_FMW_common" LOC="/u02/app/EBSUAT/fs2/FMW_Home/oracle_common" TYPE="O" IDX="6"/>
<HOME NAME="fs2_FMW_webtier" LOC="/u02/app/EBSUAT/fs2/FMW_Home/webtier" TYPE="O" IDX="7"/>
<HOME NAME="fs2_FMW_app1" LOC="/u02/app/EBSUAT/fs2/FMW_Home/Oracle_EBS-app1" TYPE="O" IDX="8"/>
<HOME NAME="fs2_tool_EBSapps_10_1_2" LOC="/u02/app/EBSUAT/fs2/EBSapps/10.1.2" TYPE="O" IDX="9">
   <NODE_LIST>
      <NODE NAME="{}"/>
   </NODE_LIST>
</HOME>
</HOME_LIST>
</INVENTORY> 
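
With the homes attached, each one should now be visible to OPatch (a quick check; a sketch assuming OPatch is installed under each Oracle Home):

$ export ORACLE_HOME=/u02/app/EBSUAT/fs1/EBSapps/10.1.2
$ $ORACLE_HOME/OPatch/opatch lsinventory -invPtrLoc /etc/oraInst.loc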

After the Central Inventory was rebuilt, ADOP and OPatch worked on the node using the new and correct Central Inventory.