A Case Study in Upgrading Oracle Applications and Migrating to Linux


A Case Study in Upgrading Oracle Applications
and Migrating to Linux
Jeff Slavitz
Oracle Applications DBA
[email protected]
Overview
 Last September Camelbak upgraded from 11.5.4 to
11.5.10 and migrated database and application from
Solaris to Linux.
 This presentation will focus on migration and not on
the 11.5.10 upgrade (we only have an hour)
 Goal is to give DBAs a road map of the documents and
steps to migrate database and application
 Discuss migration problems and lessons learned
 IT Manager will review business issues related to the
Big Bang
Camelbak and Oracle Apps
 100 gig database
 Financial applications (GL, AP, PO, FA)
 Order Entry, Inventory, Shipping, WIP
 Custom materialized views used as data warehouse
 Approximately 300 employees in Petaluma, San Diego
and the Philippines
Technical Issues
 Running 11.5.4 / 8.1.7 / Solaris since 2002
 Database, application and operating system
unchanged since then
 Extremely poor performance – Nightly materialized
view refresh takes 8 hours, user complaints
 Difficulty getting patches from Support
 Want to upgrade to 11.5.10 and migrate to Linux
 Customer wants to do a Big Bang, not a big explosion
Business Issues
Jeannine Sarragossa, Camelbak IT Manager
 Applications upgrade issues
 Hardware Migration:
 Constraints
 Cost
 Licensing impacts
 Big Bang vs. a stepped project for each component
Old Hardware
 Database and concurrent manager server
 Sun E3500 running Solaris
 4 processors
 4 gig memory
 Forms server
 Sun E3500 running Solaris
 2 processors
 2 gig memory
New Hardware
 Database server
 Dell PowerEdge 2950 running Red Hat Linux 64 bit
 Two Dual Core 2.33 ghz processors
 16 gb memory
 Application server
 Dell PowerEdge 2950 running Red Hat Linux 32 bit
 Two Dual Core 2.33 ghz processors
 16 gb memory
 Note to self:
 Buy as much horsepower as you can budget.
 Buy more disk space than you think you will need.
The Big Bang Overview
 Process took 8 months from initial planning to Go Live
 Four full CRP runs to test the master task list
 Several mini-CRP runs to test which upgrade/migration
path was fastest:
 Experimented with staged APPL_TOP
 Tested upgrade and migration time on different machines
 Go Live downtime estimate reduced from 14 to 7 days
 Normally would have pre-applied as many patches to
PROD as possible, but didn’t due to system age and testing
requirements
The Big Bang Overview
 First step in the upgrade/migration was to clone PROD to a loaner Solaris system. This machine was faster, and we cloned from two tiers to one. This allowed the original PROD to be used as a fallback system.
 Set all PROD tablespaces to read-only so the company could run reports during the upgrade downtime.
 While on 11.5.4, upgraded the database from 8.1.7 to 10.1. Skipped 9i; 10.2 not supported with 11.5.4.
 Upgrade the application from 11.5.4 to 11.5.10 on Solaris; 11.5.4 not supported on Linux.
 Upgrade the database from 10.1 to 10.2 on Solaris. Datapump faster in 10.2.
 Migrate the database and application to Linux.
Database Migration Requirements
 On OTN, refer to the installation guide for your platform, e.g. the 10GR2 Linux x86-64 Install Guide
 VERY important to install all operating system patches and packages. You may need many RPMs to get all of the required kernel packages and versions.
 Need to set kernel parameters at least as high as indicated in the installation guide (a quick pre-install check is sketched below)
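A quick way to sanity-check those prerequisites before launching the installer is sketched below. This is only an illustration: the package names shown are examples, so substitute the exact RPM and kernel-parameter lists from the install guide for your platform.

# Sketch: verify prerequisite RPMs and kernel settings before installing
rpm -q gcc glibc-devel libaio sysstat make binutils || echo "Missing packages!"
# Compare kernel parameters against the guide's minimum values
sysctl -a 2>/dev/null | egrep 'kernel.shm|kernel.sem|fs.file-max|net.ipv4.ip_local_port_range'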
App Migration Requirements
 Read note 316806.1, Oracle Applications Installation Update Notes Release 11i (11.5.10.2) for Linux x86 (or the appropriate note for your operating system)
 Apply all RPMs indicated to get the proper packages
 Need gcc and g++ version 3.2.3. To find your version type: gcc -v (your version is likely much higher). For Red Hat, apply patch 4198954 to get the RPMs for the older versions of gcc and g++ (see the sketch at the end of this slide).
 You will also need:
 JDK 1.3.1 or 1.4.2
 AD.I.5
 Autoconfig enabled
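A minimal check of the compiler setup on a Red Hat box; the rpm query is just one example of how to list what is installed:

# Sketch: confirm compiler versions before relinking
gcc -v 2>&1 | tail -1     # 11.5.10 relinking expects gcc 3.2.3
g++ -v 2>&1 | tail -1
rpm -qa | grep -i gcc     # list installed gcc / compat-gcc packages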
Migration Thoughts
 Consider upgrading the source to the highest RDBMS supported BEFORE migration to get improved DataPump performance and cool new features, e.g. attaching to and detaching from running jobs
 9i export/import slow
 10GR1 DataPump 2x faster than 9i
 10GR2 DataPump 3x faster than 9i
 Need to trade off expdp/impdp performance against the time needed for the upgrade if the source machine is slower than the target
 10GR2 100gb export on Solaris – 7 hours
 10GR2 100gb import on Linux – 5 hours
 Your mileage will vary based on your hardware configuration
Migration Thoughts
 If using ssh, set up trusted (passwordless) file transfer between all computers involved in the upgrade (sketch below):
 Run "ssh-keygen -t rsa1" on source
 Copy source $HOME/.ssh/identity.pub to target
$HOME/.ssh/authorized_keys
 Run "ssh-keygen -t dsa" on source
 Concatenate source $HOME/.ssh/id_dsa.pub to target
$HOME/.ssh/authorized_keys
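As a concrete illustration, the DSA half of that setup can be scripted roughly as follows; run it on the source, and treat the host name target as a placeholder:

# Sketch: passwordless ssh/scp from source to target (protocol 2, DSA key)
ssh-keygen -t dsa                     # accept defaults, empty passphrase
cat ~/.ssh/id_dsa.pub | ssh target 'cat >> ~/.ssh/authorized_keys'
ssh target hostname                   # should now connect without a password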
Database Migration
 Many documents - 3 binders, a 25-page master task list and notes - organization is important
 Main document - Note 362205.1 – Export/Import Process for 11i Using 10GR2. The major steps in the process are:
1. Clear existing network topology (the application's understanding of which nodes belong to the application)
 Deregister the source database server as shown in note 362203.1 steps 1-4.
 An easier way is to clear the entire network topology using FND_NET_SERVICES.remove_server as shown in note 218089.1, Autoconfig FAQ
 Create the new topology by running autoconfig first on the RDBMS tier and then on the application tier (sketch below).
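The autoconfig runs themselves are the standard scripts; a sketch using the typical 11i locations follows, where CONTEXT_NAME stands in for your <SID>_<hostname> value:

# Sketch: rebuild the network topology by running autoconfig on each tier
# Database tier first:
$ORACLE_HOME/appsutil/scripts/$CONTEXT_NAME/adautocfg.sh
# Then the application tier:
$COMMON_TOP/admin/scripts/$CONTEXT_NAME/adautocfg.sh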
Database Migration
2. Apply patches to the source system.
3. Minimize invalid objects. Run
$ORACLE_HOME/rdbms/admin/utlirp.sql to invalidate
all database objects, then run utlrp.sql to compile all
database objects.
4. Run scripts to extract settings, mainly advanced
queue settings which aren’t recreated by the import
process.
Database Migration
6. Use expdp to export the source. The export log may
show warning messages for the export of application
triggers; apply database patch 5459871 on source and
target to prevent this problem. (A combined
expdp/impdp sketch appears after step 10.)
7. Time saver - Prepare target while export is running.
Install 10GR2 as shown in note 362203.1 steps 6-12.
Create target database and schemas and prepare for
import.
8. Time saver - Backup target before import!
Database Migration
9. Use impdp to import the export file into the target.
Expect many messages in the log file. (Sketch below.)
10. Perform post-import updates to set up advanced
queues, create context and spatial objects, and set up
the database context file.
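For orientation, the export (step 6) and import (step 9) commands themselves look roughly like the sketch below. Note 362205.1 supplies its own parameter-file templates, so the directory object, dump file names and parallel setting here are illustrative only.

# On the source (10gR2): full database export with Data Pump
# (assumes a directory object, e.g. create directory dmpdir as '/u01/dpump';)
expdp system/$SYSTEM_PWD full=y directory=dmpdir \
      dumpfile=aexp%U.dmp logfile=expdp_full.log parallel=4

# Copy the dump files to the target, then run the import there
impdp system/$SYSTEM_PWD full=y directory=dmpdir \
      dumpfile=aexp%U.dmp logfile=impdp_full.log parallel=4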
Split Configuration
 During test upgrades you can bring up the application at this point (application on old hardware, database on new hardware) to confirm the database migration was successful.
 In the application context file change s_dbhost, s_dbdomain and s_dbport to reflect the new database configuration.
 Remove the old database tier or clear the network topology if you haven't already.
 Run autoconfig and start the application (sketch below).
 See note 369693.1 for more details.
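A rough sketch of that sequence, using a hypothetical context file name (PROD_appshost.xml) and the standard service scripts; adjust paths, SID and host names to your environment:

# Sketch: point the apps tier at the new database host, then reconfigure
CTX=$APPL_TOP/admin/PROD_appshost.xml         # hypothetical context file name
grep -E 's_dbhost|s_dbdomain|s_dbport' $CTX   # check current values, then edit them
$COMMON_TOP/admin/scripts/PROD_appshost/adautocfg.sh                 # run autoconfig
$COMMON_TOP/admin/scripts/PROD_appshost/adstrtal.sh apps/$APPS_PWD   # start the application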
Migrating Application
 Much easier than database migration
 Main document - Note 238276.1 – Migrating to Linux with Oracle Applications Release 11i. The major steps in the process are:
 Enable autoconfig on the source if it isn't already
 Copy APPL_TOP, OA_HTML, OA_JAVA, COMMON_TOP/util, COMMON_TOP/pages to the target
 Run adgenpsf.pl to generate a manifest of customer-specific files. Upload the file to Oracle
 Within 30 minutes a patch file is ready for download. This patch contains the Linux library files.
Migrating Application
 Common problem with manifest upload - a bad header that reads:
Release: 11.5.10
.2
Correct it to read: Release: 11.5.10.2
 Run the clone context tool, adclonectx.pl, to generate the application context file on the target.
 Run rapidinstall -techstack to install the iAS technology stack.
 Apply interoperability patches
 Run autoconfig
 Apply the customer-specific patch downloaded above
 Regenerate forms, reports, etc.
 Start the application
Post-Migration Issue
Recompile All Database Objects
 Post-Go-Live P1: When querying orders, users received these errors:
ORA-4068: existing state of packages () has been discarded
ORA-4065: not executed, altered or dropped stored procedure APPS.OE_HEAEDER_ADJ_UTIL
ORA-6508: PL/SQL: could not find program unit being called APPS.OE_HEAEDER_ADJ_UTIL
ORA-6512: at APPS.OE_OE_TOTALS_SUMMARY, line 18
ORA-6512: at APPS.OE_OE_TOTALS_SUMMARY, line 486
 To avoid this, after the upgrade and migration are complete, run $ORACLE_HOME/rdbms/admin/utlirp.sql to invalidate all database objects, then utlrp.sql to recompile all objects (sketch below).
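A minimal sketch of that recompile pass; in 10g, utlirp.sql expects the database to be started in UPGRADE mode:

sqlplus "/ as sysdba" <<EOF
shutdown immediate
startup upgrade
@?/rdbms/admin/utlirp.sql
shutdown immediate
startup
@?/rdbms/admin/utlrp.sql
exit
EOF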
Post-Migration Issue
Confirm Concurrent Mgr Nodes
 After the application migration the concurrent manager administration screen may show nodes from the source system. This causes the concurrent managers to error on startup.
Confirm Concurrent Mgr Nodes
[Screenshot: concurrent manager administration screen showing nodes from the source system]
Confirm Concurrent Mgr Nodes
 To change the nodes to point to the target system (sketch below):
 As apps, exec fnd_conc_clone.setup_clean
 Run autoconfig on the database node, then the application node
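Sketched below, with APPS_PWD standing in for the APPS password and the typical autoconfig script locations (CONTEXT_NAME is your <SID>_<hostname>):

# Sketch: clean the concurrent manager node information, then rebuild it
sqlplus apps/$APPS_PWD <<EOF
exec fnd_conc_clone.setup_clean;
commit;
exit
EOF
# Re-run autoconfig: database tier first, then the application tier
$ORACLE_HOME/appsutil/scripts/$CONTEXT_NAME/adautocfg.sh
$COMMON_TOP/admin/scripts/$CONTEXT_NAME/adautocfg.sh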
Post Migration Issue
RMAN Bug
 Bug 5247609 - With Oracle 10G, RMAN has severe performance degradation when using the cost-based optimizer.
 Switch to rule-based optimization at the start of all RMAN scripts (example below):
 sql "alter session set optimizer_mode=RULE";
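For example, a nightly backup script might start like the sketch below; the catalog connect string, password variable and backup commands are placeholders for your own:

# Sketch: force rule-based optimization before RMAN does any catalog work
rman target / catalog rman/$RMAN_PWD@rmancat <<EOF
sql 'alter session set optimizer_mode=RULE';
backup database plus archivelog;
delete noprompt obsolete;
exit
EOF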
RMAN Bug
 Set RBO before registering the database in RMAN or you will get errors like these:
Starting full resync of recovery catalog
RMAN-03014: implicit resync of recovery catalog failed
RMAN-03009: failure of full resync
ORA-01652: unable to extend temp segment by in tablespace
 Causes the temp segment in the SYSTEM tablespace to grow to 50+ gb
 If your database is already registered in RMAN you will need to unregister and then re-register the database
Post-Migration Idea
Use an SPFILE
 Use spfile instead of init.ora to start database
 Allows you to maintain database initialization
parameters across instance shutdown and startup
 Allows you to dynamically change system parameters,
e.g. alter system set xxx scope=[spfile,memory,both]
 To create the initial spfile, generate it from init.ora (the CREATE SPFILE statement runs in SQL*Plus):
 cd $ORACLE_HOME/dbs
 create spfile='spfilePROD.ora' from pfile='initPROD.ora';
 Restart database – spfile will be used automatically
 UNLESS …
Using an SPFILE with Oracle Apps
 The autoconfig-generated addbctl.sh used to start the database is hardcoded to start up using an init.ora (via adstrtdb.sql). To change to the spfile:
 Change the autoconfig template:
$ORACLE_HOME/appsutil/template/adstrtdb.sql
 OR, since autoconfig creates init.ora but never overwrites it, change your init.ora to a single line:
spfile=$ORACLE_HOME/dbs/spfilePROD.ora
 Periodically copy spfile to init.ora as a backup.
 create pfile='initDB1test.ora' from spfile='spfileDB1.ora';
Using an SPFILE with Oracle Apps
 If you want to recreate the SPFILE as part of running autoconfig (sketch at the end of this slide):
 Delete init.ora
 Run autoconfig to create new init.ora
 Shutdown database and restart using new init.ora
 Create new spfile from new init.ora
 Shutdown database
 Replace init.ora with custom init.ora that specifies spfile
(or change adstrtdb.sql template)
 Restart database using new spfile
 Note 249664.1 - Pfile vs SPfile
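The same sequence as a shell sketch, assuming ORACLE_SID=PROD, the typical database-tier autoconfig location, and the one-line pointer init.ora approach; adjust names and paths for your instance:

cd $ORACLE_HOME/dbs
rm initPROD.ora
# Run autoconfig on the database tier to regenerate initPROD.ora
$ORACLE_HOME/appsutil/scripts/$CONTEXT_NAME/adautocfg.sh
sqlplus "/ as sysdba" <<EOF
shutdown immediate
startup pfile=$ORACLE_HOME/dbs/initPROD.ora
create spfile='spfilePROD.ora' from pfile='initPROD.ora';
shutdown immediate
exit
EOF
# Replace init.ora with a one-line pointer to the spfile, then restart
echo "spfile=$ORACLE_HOME/dbs/spfilePROD.ora" > initPROD.ora
sqlplus "/ as sysdba" <<EOF
startup pfile=$ORACLE_HOME/dbs/initPROD.ora
exit
EOF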
Performance
Before
 Database server frequently running at 100% capacity during business hours
 Users frequently complain about slowness
 Custom materialized view refresh took 8 hours to complete
After
 Database server load rarely above 25% even at peak times
 Users never complain about performance
 Custom materialized view refresh takes 5 minutes to complete
Lessons Learned
 Required a great deal of organization with documents
 Buy more CPUs and memory than you think you need
 Buy more disk space than you think you can use
 Disk drive failed during migration – have spare
hardware on-site
 Much of the work was done remotely. At one point my PC at work was unplugged. Consider using a console with remote console software
 Use more than one DBA. Have all DBAs practice the entire master task list so they can be interchangeable.
Lessons Learned
 As a DBA I was concerned about the Big Bang
 The business dictates the timing of the project.
 Alternatives would have been to upgrade now/migrate
later, migrate database now/application later …
 The cost to the business of not doing the Big Bang was
more downtime, more user testing of each stage and
delay of final implementation.
 The more you are going to do at one time the more you
need a strong technical team.
Questions?
Jeff Slavitz
[email protected]
(415) 388 - 3003