In an Oracle RAC 10.2.0.4 two-node environment running on Linux, all of the local disks on one node's server suddenly failed and that node went down. The surviving node kept working normally and continued to provide database service.
The problem is clear: after the operating system is reinstalled on the server whose disks failed, how do we add it back into the RAC cluster?
A search on Google and MetaLink turned up exactly the same problem, but not the answer I wanted.
Among the results was a thread on the discussion forum on Oracle's site describing a situation essentially the same as mine:
Hello all. I have a two-node 10g R2 RAC with ASM patched to 10.2.0.4 running on RedHat AS4 x86_64. We recently had an accidental release of an Inergen fire suppression system at our colocation facility. This release caused many of our disks to fail, causing issues for some of our systems. For the most part, we were very lucky having built-in redundancy across LUNs; however, we lost all 4 disks of local storage on Node1 of our two-node RAC.
...............
I appreciate any help, and I'm grateful for your time.
(One nice thing about posters abroad: they always close with a thank-you.)
Someone offered this solution:
[root@webrac1 crs_1]# more root.sh
#!/bin/sh
/u01/app/oracle/product/10.2.0/crs_1/install/rootinstall
/u01/app/oracle/product/10.2.0/crs_1/install/rootconfig
[root@webrac1 crs_1]# ./root.sh
WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node
node 1: webrac1 webrac1-priv webrac1
node 2: webrac2 webrac2-priv webrac2
clscfg: Arguments check out successfully.
NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster
configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 30 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
webrac1
webrac2
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
Creating VIP application resource on (0) nodes.
Creating GSD application resource on (0) nodes.
Creating ONS application resource on (0) nodes.
Starting VIP application resource on (2) nodes...
Starting GSD application resource on (2) nodes...
Starting ONS application resource on (2) nodes...
Done.
Step 6: edit the configuration file /etc/oratab.
Copy this file over from the surviving node, then adjust its ownership and contents.
[root@webrac2 archivelog]# scp /etc/oratab webrac1:/etc/
root@webrac1's password:
oratab
100% 766 0.8KB/s 00:00
[root@webrac2 archivelog]#
[root@webrac1 etc]# chown -R oracle:root oratab
[root@webrac1 etc]# ls -ltr oratab
-rw-r--r-- 1 oracle root 766 05-08 17:12 oratab
[root@webrac1 etc]# vi oratab
#
+ASM1:/u01/app/oracle/product/10.2.0/db_1:N
webdb:/u01/app/oracle/product/10.2.0/db_1:N
~
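The only per-node part of the copied oratab is the ASM instance name. As a minimal sketch, the same change can be made with sed; here it runs on a scratch copy (the paths and instance names simply follow the example above):

```shell
# Work on a scratch copy of the oratab brought over from the surviving node.
# The ASM SID (+ASM2 -> +ASM1) is the only node-specific entry; the database
# entry (webdb) is identical on both nodes.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
+ASM2:/u01/app/oracle/product/10.2.0/db_1:N
webdb:/u01/app/oracle/product/10.2.0/db_1:N
EOF

# Rewrite the ASM instance name for node 1.
sed -i 's/^+ASM2:/+ASM1:/' "$tmp"
cat "$tmp"
```

On the real system the edit is applied to /etc/oratab itself rather than a temporary file.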
Step 7: run root.sh under the RDBMS home.
[root@webrac1 db_1]# ./root.sh
Running Oracle10 root.sh script...
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/product/10.2.0/db_1
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
Step 8: edit the configuration file $ORACLE_HOME/network/admin/listener.ora.
The listener file copied over was configured for node 2, so it has to be changed to match node 1. This edit is straightforward.
LISTENER_WEBRAC1 =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = webrac1-vip)(PORT = 1521)(IP = FIRST))
)
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.10.42)(PORT = 1521)(IP = FIRST))
)
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
)
)
)
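Since the file is just the node-2 copy with hostnames swapped, a sketch of the mechanical part of the edit might look like this, run here against an abridged scratch sample (the real file lives in $ORACLE_HOME/network/admin, and the static IP address still needs changing by hand to node 1's address):

```shell
# Abridged scratch copy of the listener entry as copied from node 2.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
LISTENER_WEBRAC2 =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = webrac2-vip)(PORT = 1521)(IP = FIRST))
      )
    )
  )
EOF

# Swap every node-2 reference for node 1, in both cases.
sed -i -e 's/WEBRAC2/WEBRAC1/g' -e 's/webrac2/webrac1/g' "$tmp"
cat "$tmp"
```

After the edit, the listener is started on node 1 as usual (for example with lsnrctl).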
Step 9: fix up the spfile and password file under $ORACLE_HOME/dbs, for both the ASM instance and the database instance.
Renaming the files is enough, for example:
cp orapw+ASM2 orapw+ASM1
cp spfile+ASM2.ora spfile+ASM1.ora
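The same renaming can be done in one loop. This sketch runs in a scratch directory with empty placeholder files so it is self-contained; on the real node it would run in $ORACLE_HOME/dbs against the files copied from node 2:

```shell
# Scratch directory standing in for $ORACLE_HOME/dbs, with placeholder
# copies of the node-2 ASM password file and spfile.
dir=$(mktemp -d)
touch "$dir/orapw+ASM2" "$dir/spfile+ASM2.ora"

# Copy each node-2 file to its node-1 name; the originals are kept.
for f in "$dir"/*ASM2*; do
  cp "$f" "${f//+ASM2/+ASM1}"
done
ls "$dir"
```

The database instance files (e.g. for webdb1) are handled the same way if they are instance-named.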
Step 10: start all resources with crs_start -all.
After these ten steps, the rebuilt system was quickly back in the RAC. Throughout the whole process, the database service never had to be stopped.
[oracle@webrac1 ~]$ crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.webdb.db application ONLINE ONLINE webrac2
ora....ebdb.cs application ONLINE ONLINE webrac2
ora....db1.srv application ONLINE ONLINE webrac2
ora....b1.inst application ONLINE ONLINE webrac1
ora....b2.inst application ONLINE ONLINE webrac2
ora....SM1.asm application ONLINE ONLINE webrac1
ora....C1.lsnr application ONLINE ONLINE webrac1
ora....ac1.gsd application ONLINE ONLINE webrac1
ora....ac1.ons application ONLINE ONLINE webrac1
ora....ac1.vip application ONLINE ONLINE webrac1
ora....SM2.asm application ONLINE ONLINE webrac2
ora....C2.lsnr application ONLINE ONLINE webrac2
ora....ac2.gsd application ONLINE ONLINE webrac2
ora....ac2.ons application ONLINE ONLINE webrac2
ora....ac2.vip application ONLINE ONLINE webrac2
[oracle@webrac1 ~]$
2. Summary
The key to this problem is the root.sh script in the $CRS home: it creates a number of files under /etc and elsewhere. If you know exactly what those files are, you could also create them by hand.
The RAC environment as a whole was healthy and the OCR on shared storage remained accessible, so the problem essentially boils down to rebuilding the configuration links that point at it.
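For reference, a quick existence check against the surviving node can show which of those pieces root.sh laid down. The file list below is an assumption based on a typical 10.2 CRS install on Linux (CRS entries are also appended inside /etc/inittab); verify it against your own healthy node:

```shell
# Files that 10.2 CRS root.sh/rootconfig typically creates on Linux.
# NOTE: this list is an assumption -- compare against the surviving node.
for f in /etc/oracle/ocr.loc /etc/inittab /etc/init.d/init.crs \
         /etc/init.d/init.cssd /etc/init.d/init.crsd /etc/init.d/init.evmd; do
  [ -e "$f" ] && echo "present: $f" || echo "missing: $f"
done
```

Running the same loop on the rebuilt node before and after root.sh makes it easy to see what the script actually restored.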