Tuesday, December 21, 2010

RAC ASM with EVA Business Copy

Install Oracle Clusterware and RAC
To create an additional disk group using ASMCA:
1. Prepare the disks or devices for use with ASM, as described in "Configuring
Installation Directories and Shared Storage" in the Oracle installation guide.
2. Start the Oracle Automatic Storage Management Configuration Assistant (ASMCA)
from the Grid home:
/u01/grid/bin/asmca
The ASM Configuration Assistant starts, and displays the Disk Groups window.
3. Click the Create button at the bottom left-hand side of the window to create a new
disk group.
The Create Disk Group window appears.
4. Provide the following information:
■ In the Disk Group Name field, enter a name for the new disk group, for
example, FRA.
■ Choose a Redundancy level, for example, Normal.
■ Select the disks to include in the new disk group.
If you used ASMLIB to configure the disks for use with ASM, then the
available disks are displayed if you have the Show Eligible option selected,
and they have a Header Status of PROVISIONED.
After you have provided all the information, click OK. A progress window titled
DiskGroup: Creation appears. After a few minutes, a message appears indicating
the disk group was created successfully. Click OK to continue.
5. Repeat Steps 3 and 4 to create additional disk groups, or click Exit, then select
Yes to exit the utility. (A SQL*Plus alternative is sketched below.)
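For reference, a disk group can also be created from SQL*Plus on an ASM instance instead of through ASMCA. This is only a sketch: the disk group name matches the FRA example above, but the device paths and SIDs are placeholders that must match your own environment and ASM discovery string:

fgodb03m$ export ORACLE_SID=+ASM1
fgodb03m$ sqlplus / as sysasm
SQL> -- device paths are examples only; normal redundancy needs at
SQL> -- least two failure groups (one per disk by default)
SQL> create diskgroup FRA normal redundancy
  2  disk '/dev/rdisk/disk90', '/dev/rdisk/disk91';

fgodb04m$ export ORACLE_SID=+ASM2
fgodb04m$ sqlplus / as sysasm
SQL> -- mount the new disk group on the second node
SQL> alter diskgroup FRA mount;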

Using Oracle Universal Installer to Install Oracle RAC
After you have configured the operating system environment, you can use Oracle
Universal Installer to install the Oracle Database software and create an Oracle RAC
database.
To install Oracle Database software on your cluster and create a clustered
database:
1. As the oracle user, use the following commands to start OUI, where
staging_area is the location of the staging area on disk, or the location of the
mounted installation disk:
cd /staging_area
./runInstaller
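If no graphical display is available, OUI can also be driven non-interactively with a response file. A minimal sketch, assuming a response file edited from the template shipped with the installation media (the file name and path here are examples):

cd /staging_area
./runInstaller -silent -responseFile /staging_area/response/db_install.rsp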


<Case I> Direct recovery from EVA/BC S-Vols to P-Vols
1. Shut down ASM/RAC/CRS on fgodb03m and fgodb04m (commands sketched after this list).
2. Unpresent all BC S-Vols from fgodb03m and fgodb04m.
3. Perform the EVA/BC mirrorclone restore from the S-Vols to the P-Vols.
4. After the restore completes, fracture all S-Vols.
5. Start up ASM/RAC/CRS on fgodb03m and fgodb04m.
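A minimal sketch of the Oracle-side commands for steps 1 and 5, run as root from the Grid home used later in Case II (/opt/oracle/grid); the unpresent, restore, and fracture steps are performed on the EVA side (for example in HP Command View EVA), not on the cluster nodes:

# step 1: stop the full stack on both nodes
root@fgodb03m:/opt/oracle/grid/bin# ./crsctl stop crs
root@fgodb04m:/opt/oracle/grid/bin# ./crsctl stop crs

# ... unpresent S-Vols, restore mirrorclones, fracture S-Vols ...

# step 5: start the stack again on both nodes
root@fgodb03m:/opt/oracle/grid/bin# ./crsctl start crs
root@fgodb04m:/opt/oracle/grid/bin# ./crsctl start crs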

<Case II> Start ASM/RAC/CRS directly from the EVA/BC S-Vols
1. With ASM/RAC/CRS online, run an EVA/BC resync from the P-Vols to the S-Vols.
2. fgodb03m# su - oracle
3. fgodb03m> run_begin_backup.sh (puts the database into begin backup mode; a sketch of this script follows the procedure)
4. Fracture all S-Vols.
5. fgodb03m> run_end_backup.sh (takes the database out of backup mode)
6. Present all BC S-Vols to fgodb03m and fgodb04m.
7. # ioscan -f (on fgodb03m and fgodb04m)
8. # insf -e (on fgodb03m and fgodb04m)
Vdisk WWN            fgodb03m diskname           fgodb04m diskname           Old ASM diskname
00dd (in use by ASM) disk66 (c?t1d3) (new ASM)   disk61 (c?t1d3) (new ASM)   disk38 (grid:asmoper)
00c9                 disk71 (c?t1d4)             disk66 (c?t1d4)             disk33
00cc                 disk75 (c?t1d5)             disk71 (c?t1d5)             disk34
00d1                 disk79 (c?t1d6)             disk75 (c?t1d6)             disk35
00d4                 disk86 (c?t1d7)             disk79 (c?t1d7)             disk36
(The WWN-to-device mapping can be verified as shown in the sketch after this table.)
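To confirm that each new persistent device file really maps to the expected EVA Vdisk WWN, the HP-UX 11i v3 scsimgr utility can be queried per device. A minimal sketch (the device name is one example from the table above):

# print the LUN's World Wide Identifier for a persistent device file
fgodb03m# scsimgr get_info -D /dev/rdisk/disk66 | grep -i wwid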
9. fgodb03m# cd /dev/rdisk/
10. fgodb03m# chown grid:asmoper disk66 disk71 disk75 disk79 disk86
11. fgodb03m# chmod 664 disk66 disk71 disk75 disk79 disk86
12. fgodb04m# cd /dev/rdisk/
13. fgodb04m# chown grid:asmoper disk61 disk66 disk71 disk75 disk79
14. fgodb04m# chmod 664 disk61 disk66 disk71 disk75 disk79
15. Shut down ASM/RAC/CRS on fgodb03m and fgodb04m.
16. chown bin:sys /dev/rdisk/disk38 (the old ASM device) on fgodb03m and fgodb04m.
17. Log in as root on each node (fgodb03m, fgodb04m).
18. Change directory to $GRID_HOME/bin (/opt/oracle/grid/bin).
19. Run "# ./crsctl start crs -excl" and then "# ./crsctl stop crs -f" on each node (run on fgodb03m first; when it is done, run on fgodb04m). Starting CRS in exclusive mode helps ASM locate the OCR and voting disks.
20. root@fgodb03m:/opt/oracle/grid/bin# ./crsctl query css votedisk
21. Log in as oracle and connect to the idle database instance.
22. SQL> startup mount
23. SQL> alter database end backup;
24. SQL> shutdown immediate
25. Start all CRS and database resources:
26. # ./crsctl start crs (on each node: fgodb03m, fgodb04m)
27. # ./crsctl start resource -all
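The two helper scripts called in steps 3 and 5 are not listed in this note. A hypothetical sketch of what they might contain, assuming the oracle user's environment already points at the RAC instance:

# run_begin_backup.sh -- hypothetical sketch: put the database into hot backup mode
sqlplus -s "/ as sysdba" <<EOF
alter system archive log current;
alter database begin backup;
EOF

# run_end_backup.sh -- hypothetical sketch: take the database out of hot backup mode
sqlplus -s "/ as sysdba" <<EOF
alter database end backup;
alter system archive log current;
EOF

After step 27, resource status on the cluster can be confirmed with "# ./crsctl stat res -t".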
