[Oracle 12c Flex Cluster] Converting Node Roles


In my previous translated article on Leaf Nodes, I noted that
although a leaf node is not required to have direct access to shared storage, it is still a good idea to connect it anyway, because the node may well be converted to a hub node one day.
That statement is not entirely accurate. In 12cR1, a leaf node could not run a read-only database instance, so leaving it disconnected from shared storage had no impact at all. In 12cR2, however, a leaf node can run a read-only database instance, and once a database runs on it (strictly speaking, such a leaf node should then be called a reader node), the leaf node must be connected to shared storage.

This article shows how to convert a node's role between hub node and leaf node. Since my test environment already contains a leaf node, I will start with the leaf-to-hub conversion.

Initial state:

    [root@rac1 ~]# crsctl get cluster mode status
    Cluster is running in "flex" mode

    [root@rac1 ~]# srvctl status srvpool -detail
    Server pool name: Free
    Active servers count: 0
    Active server names:
    Server pool name: Generic
    Active servers count: 0
    Active server names:
    Server pool name: RF1POOL
    Active servers count: 1
    Active server names: rac3
    NAME=rac3 STATE=ONLINE
    Server pool name: ztp_pool
    Active servers count: 2
    Active server names: rac1,rac2
    NAME=rac1 STATE=ONLINE
    NAME=rac2 STATE=ONLINE

    [root@rac1 ~]# crsctl get node role config -all
    Node 'rac1' configured role is 'hub'
    Node 'rac2' configured role is 'hub'
    Node 'rac3' configured role is 'leaf'

    [root@rac1 ~]# crsctl get node role status -all
    Node 'rac1' active role is 'hub'
    Node 'rac2' active role is 'hub'
    Node 'rac3' active role is 'leaf'

Converting leaf to hub

A database named orcl runs on this cluster. Before converting the role, let's look at the status of the orcl database:

    ora.orcl.db
          1        ONLINE  ONLINE       rac3       Open,Readonly,HOME=/u01/app/oracle/product/12.2.0/dbhome_1,STABLE
          2        ONLINE  ONLINE       rac2       Open,HOME=/u01/app/oracle/product/12.2.0/dbhome_1,STABLE
          3        ONLINE  ONLINE       rac1       Open,STABLE

As expected, since rac3 is currently a leaf node, the database instance on rac3 can only be opened read-only.

Running the following command converts rac3 from a leaf node to a hub node. The syntax is:

crsctl set node role {hub | leaf}

    [root@rac3 ~]# crsctl set node role hub
    CRS-4408: Node 'rac3' configured role successfully changed; restart Oracle High Availability Services for new role to take effect.

Check each node's role:

    [root@rac1 ~]# crsctl get node role config -all
    Node 'rac1' configured role is 'hub'
    Node 'rac2' configured role is 'hub'
    Node 'rac3' configured role is 'hub', but active role is 'leaf'.
    Restart Oracle High Availability Services for the new role to take effect.

    [root@rac1 ~]# crsctl get node role status -all
    Node 'rac1' active role is 'hub'
    Node 'rac2' active role is 'hub'
    Node 'rac3' active role is 'leaf', but configured role is 'hub'.
    Restart Oracle High Availability Services for the new role to take effect.

As the output shows, the CRS stack on the node must be restarted before the new role takes effect; in other words, the role conversion cannot be performed online.
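This intermediate state, where the configured role differs from the active role, can also be detected in a script, for example when automating a rolling conversion. A minimal sketch, with the sample output hard-coded from the transcript above (in practice you would feed it the live output of `crsctl get node role config -all`):

```shell
#!/bin/sh
# Sample output of `crsctl get node role config -all`, hard-coded for
# illustration; replace with the live command's output on a real cluster.
sample="Node 'rac1' configured role is 'hub'
Node 'rac2' configured role is 'hub'
Node 'rac3' configured role is 'hub', but active role is 'leaf'."

# Nodes whose configured role has not taken effect yet, i.e. nodes that
# still need a CRS restart.
pending=$(printf '%s\n' "$sample" \
  | grep 'but active role' \
  | sed "s/^Node '\([^']*\)'.*/\1/")
printf '%s\n' "$pending"
```

With the sample above, the sketch prints rac3, the one node still awaiting a restart.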

Stop the CRS stack on rac3:

    [root@rac3 ~]# crsctl stop crs
    CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac3'
    CRS-2673: Attempting to stop 'ora.crsd' on 'rac3'
    CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on server 'rac3'
    CRS-2673: Attempting to stop 'ora.orcl.db' on 'rac3'
    CRS-2677: Stop of 'ora.orcl.db' on 'rac3' succeeded
    CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'rac3'
    CRS-2673: Attempting to stop 'ora.LISTENER_LEAF.lsnr' on 'rac3'
    CRS-2677: Stop of 'ora.LISTENER_LEAF.lsnr' on 'rac3' succeeded
    CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'rac3' succeeded
    CRS-2673: Attempting to stop 'ora.rac3.vip' on 'rac3'
    CRS-2677: Stop of 'ora.rac3.vip' on 'rac3' succeeded
    CRS-2672: Attempting to start 'ora.rac3.vip' on 'rac2'
    CRS-2676: Start of 'ora.rac3.vip' on 'rac2' succeeded
    CRS-2673: Attempting to stop 'ora.net1.network' on 'rac3'
    CRS-2677: Stop of 'ora.net1.network' on 'rac3' succeeded
    CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac3' has completed
    CRS-2677: Stop of 'ora.crsd' on 'rac3' succeeded
    CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac3'
    CRS-2673: Attempting to stop 'ora.crf' on 'rac3'
    CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac3'
    CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac3'
    CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac3' succeeded
    CRS-2677: Stop of 'ora.gpnpd' on 'rac3' succeeded
    CRS-2673: Attempting to stop 'ora.ctssd' on 'rac3'
    CRS-2673: Attempting to stop 'ora.evmd' on 'rac3'
    CRS-2677: Stop of 'ora.mdnsd' on 'rac3' succeeded
    CRS-2677: Stop of 'ora.evmd' on 'rac3' succeeded
    CRS-2677: Stop of 'ora.crf' on 'rac3' succeeded
    CRS-2677: Stop of 'ora.ctssd' on 'rac3' succeeded
    CRS-2673: Attempting to stop 'ora.cssd' on 'rac3'
    CRS-2677: Stop of 'ora.cssd' on 'rac3' succeeded
    CRS-2673: Attempting to stop 'ora.driver.afd' on 'rac3'
    CRS-2673: Attempting to stop 'ora.gipcd' on 'rac3'
    CRS-2677: Stop of 'ora.driver.afd' on 'rac3' succeeded
    CRS-2677: Stop of 'ora.gipcd' on 'rac3' succeeded
    CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac3' has completed
    CRS-4133: Oracle High Availability Services has been stopped.

Check the node roles again (rac3 no longer appears while its CRS stack is down):

    [root@rac1 ~]# crsctl get node role config -all
    Node 'rac1' configured role is 'hub'
    Node 'rac2' configured role is 'hub'

    [root@rac1 ~]# crsctl get node role status -all
    Node 'rac1' active role is 'hub'
    Node 'rac2' active role is 'hub'

Start the CRS stack on rac3:

    [root@rac3 ~]# crsctl start crs -wait
    CRS-4123: Starting Oracle High Availability Services-managed resources
    CRS-2672: Attempting to start 'ora.evmd' on 'rac3'
    CRS-2672: Attempting to start 'ora.mdnsd' on 'rac3'
    CRS-2676: Start of 'ora.mdnsd' on 'rac3' succeeded
    CRS-2676: Start of 'ora.evmd' on 'rac3' succeeded
    CRS-2672: Attempting to start 'ora.gpnpd' on 'rac3'
    CRS-2676: Start of 'ora.gpnpd' on 'rac3' succeeded
    CRS-2672: Attempting to start 'ora.gipcd' on 'rac3'
    CRS-2676: Start of 'ora.gipcd' on 'rac3' succeeded
    CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac3'
    CRS-2676: Start of 'ora.cssdmonitor' on 'rac3' succeeded
    CRS-2672: Attempting to start 'ora.cssd' on 'rac3'
    CRS-2672: Attempting to start 'ora.diskmon' on 'rac3'
    CRS-2676: Start of 'ora.diskmon' on 'rac3' succeeded
    CRS-2676: Start of 'ora.cssd' on 'rac3' succeeded
    CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rac3'
    CRS-2672: Attempting to start 'ora.ctssd' on 'rac3'
    CRS-2676: Start of 'ora.ctssd' on 'rac3' succeeded
    CRS-2672: Attempting to start 'ora.crf' on 'rac3'
    CRS-2676: Start of 'ora.crf' on 'rac3' succeeded
    CRS-2672: Attempting to start 'ora.crsd' on 'rac3'
    CRS-2676: Start of 'ora.crsd' on 'rac3' succeeded
    CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rac3' succeeded
    CRS-2672: Attempting to start 'ora.drivers.acfs' on 'rac3'
    CRS-2676: Start of 'ora.drivers.acfs' on 'rac3' succeeded
    CRS-2672: Attempting to start 'ora.asm' on 'rac3'
    CRS-2676: Start of 'ora.asm' on 'rac3' succeeded
    CRS-6017: Processing resource auto-start for servers: rac3
    CRS-2672: Attempting to start 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac3'
    CRS-2673: Attempting to stop 'ora.rac3.vip' on 'rac2'
    CRS-2673: Attempting to stop 'ora.LISTENER_SCAN2.lsnr' on 'rac2'
    CRS-2672: Attempting to start 'ora.ons' on 'rac3'
    CRS-2677: Stop of 'ora.rac3.vip' on 'rac2' succeeded
    CRS-2672: Attempting to start 'ora.rac3.vip' on 'rac3'
    CRS-2677: Stop of 'ora.LISTENER_SCAN2.lsnr' on 'rac2' succeeded
    CRS-2673: Attempting to stop 'ora.scan2.vip' on 'rac2'
    CRS-2677: Stop of 'ora.scan2.vip' on 'rac2' succeeded
    CRS-2672: Attempting to start 'ora.scan2.vip' on 'rac3'
    CRS-2676: Start of 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac3' succeeded
    CRS-2676: Start of 'ora.rac3.vip' on 'rac3' succeeded
    CRS-2672: Attempting to start 'ora.LISTENER.lsnr' on 'rac3'
    CRS-2676: Start of 'ora.ons' on 'rac3' succeeded
    CRS-2676: Start of 'ora.scan2.vip' on 'rac3' succeeded
    CRS-2672: Attempting to start 'ora.LISTENER_SCAN2.lsnr' on 'rac3'
    CRS-2676: Start of 'ora.LISTENER.lsnr' on 'rac3' succeeded
    CRS-2679: Attempting to clean 'ora.asm' on 'rac3'
    CRS-2676: Start of 'ora.LISTENER_SCAN2.lsnr' on 'rac3' succeeded
    CRS-2681: Clean of 'ora.asm' on 'rac3' succeeded
    CRS-2672: Attempting to start 'ora.asm' on 'rac3'
    CRS-2676: Start of 'ora.asm' on 'rac3' succeeded
    CRS-2672: Attempting to start 'ora.DATA.dg' on 'rac3'
    CRS-2672: Attempting to start 'ora.FLEXDG.dg' on 'rac3'
    CRS-2676: Start of 'ora.FLEXDG.dg' on 'rac3' succeeded
    CRS-2676: Start of 'ora.DATA.dg' on 'rac3' succeeded
    CRS-2672: Attempting to start 'ora.orcl.db' on 'rac3'
    CRS-2672: Attempting to start 'ora.prod1.db' on 'rac3'
    CRS-2676: Start of 'ora.orcl.db' on 'rac3' succeeded
    CRS-2676: Start of 'ora.prod1.db' on 'rac3' succeeded
    CRS-6016: Resource auto-start has completed for server rac3
    CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
    CRS-4123: Oracle High Availability Services has been started.

After startup completes, check the node roles once more:

    [root@rac1 ~]# crsctl get node role config -all
    Node 'rac1' configured role is 'hub'
    Node 'rac2' configured role is 'hub'
    Node 'rac3' configured role is 'hub'

    [root@rac1 ~]# crsctl get node role status -all
    Node 'rac1' active role is 'hub'
    Node 'rac2' active role is 'hub'
    Node 'rac3' active role is 'hub'

Now observe the status of the whole cluster:

    [root@rac1 ~]# crsctl status res -t
    --------------------------------------------------------------------------------
    Name           Target  State        Server                   State details
    --------------------------------------------------------------------------------
    Local Resources
    --------------------------------------------------------------------------------
    ora.ASMNET1LSNR_ASM.lsnr
                   ONLINE  ONLINE       rac1                     STABLE
                   ONLINE  ONLINE       rac2                     STABLE
                   ONLINE  ONLINE       rac3                     STABLE
    ora.DATA.dg
                   ONLINE  ONLINE       rac1                     STABLE
                   ONLINE  ONLINE       rac2                     STABLE
                   ONLINE  ONLINE       rac3                     STABLE
    ora.FLEXDG.dg
                   ONLINE  ONLINE       rac1                     STABLE
                   ONLINE  ONLINE       rac2                     STABLE
                   ONLINE  ONLINE       rac3                     STABLE
    ora.LISTENER.lsnr
                   ONLINE  ONLINE       rac1                     STABLE
                   ONLINE  ONLINE       rac2                     STABLE
                   ONLINE  ONLINE       rac3                     STABLE
    ora.OCR.dg
                   ONLINE  ONLINE       rac1                     STABLE
                   ONLINE  ONLINE       rac2                     STABLE
                   ONLINE  ONLINE       rac3                     STABLE
    ora.net1.network
                   ONLINE  ONLINE       rac1                     STABLE
                   ONLINE  ONLINE       rac2                     STABLE
                   ONLINE  ONLINE       rac3                     STABLE
    ora.ons
                   ONLINE  ONLINE       rac1                     STABLE
                   ONLINE  ONLINE       rac2                     STABLE
                   ONLINE  ONLINE       rac3                     STABLE
    ora.proxy_advm
                   OFFLINE OFFLINE      rac1                     STABLE
                   OFFLINE OFFLINE      rac2                     STABLE
                   OFFLINE OFFLINE      rac3                     STABLE
    --------------------------------------------------------------------------------
    Cluster Resources
    --------------------------------------------------------------------------------
    ora.LISTENER_SCAN1.lsnr
          1        ONLINE  ONLINE       rac1                     STABLE
    ora.LISTENER_SCAN2.lsnr
          1        ONLINE  ONLINE       rac3                     STABLE
    ora.LISTENER_SCAN3.lsnr
          1        ONLINE  ONLINE       rac2                     STABLE
    ora.MGMTLSNR
          1        OFFLINE OFFLINE                               STABLE
    ora.asm
          1        ONLINE  ONLINE       rac1                     Started,STABLE
          2        ONLINE  ONLINE       rac2                     Started,STABLE
          3        ONLINE  ONLINE       rac3                     Started,STABLE
    ora.cvu
          1        ONLINE  ONLINE       rac2                     STABLE
    ora.gns
          1        ONLINE  ONLINE       rac1                     STABLE
    ora.gns.vip
          1        ONLINE  ONLINE       rac1                     STABLE
    ora.orcl.db
          1        ONLINE  ONLINE       rac3       Open,HOME=/u01/app/oracle/product/12.2.0/dbhome_1,STABLE
          2        ONLINE  ONLINE       rac2       Open,HOME=/u01/app/oracle/product/12.2.0/dbhome_1,STABLE
          3        ONLINE  ONLINE       rac1       Open,STABLE
    ora.prod1.db
          1        ONLINE  ONLINE       rac1                     Open,STABLE
          2        ONLINE  ONLINE       rac2                     Open,STABLE
          3        ONLINE  ONLINE       rac3                     Open,STABLE
    ora.qosmserver
          1        OFFLINE OFFLINE                               STABLE
    ora.rac1.vip
          1        ONLINE  ONLINE       rac1                     STABLE
    ora.rac2.vip
          1        ONLINE  ONLINE       rac2                     STABLE
    ora.rac3.vip
          1        ONLINE  ONLINE       rac3                     STABLE
    ora.scan1.vip
          1        ONLINE  ONLINE       rac1                     STABLE
    ora.scan2.vip
          1        ONLINE  ONLINE       rac3                     STABLE
    ora.scan3.vip
          1        ONLINE  ONLINE       rac2                     STABLE
    --------------------------------------------------------------------------------

The orcl instance on rac3 is now in Open state, rather than the earlier Open,Readonly.

Converting hub to leaf

In 12cR2, to set a node's role to leaf node, the cluster's SCAN must be resolved through GNS.
The cluster status output above shows that my test environment does have GNS configured. If GNS is not configured, crsctl set node role leaf will fail with an error.
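If you are not sure whether GNS is in place, it can be checked before attempting the demotion. A minimal sketch, assuming `srvctl` and `crsctl` are on the PATH when run on a cluster node; the guard makes the script a harmless no-op anywhere else:

```shell
#!/bin/sh
# Pre-check sketch: only demote the node to leaf if GNS is configured.
check_gns_then_demote() {
    if ! command -v srvctl >/dev/null 2>&1; then
        echo "srvctl not found; run this on a cluster node" >&2
        return 1
    fi
    if srvctl config gns >/dev/null 2>&1; then
        # GNS is configured; the demotion should be accepted.
        crsctl set node role leaf
    else
        echo "GNS is not configured; leaf role will be rejected" >&2
        return 1
    fi
}
check_gns_then_demote || true
```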

    [root@rac3 ~]# crsctl set node role leaf
    CRS-4408: Node 'rac3' configured role successfully changed; restart Oracle High Availability Services for new role to take effect.

As before, the CRS stack on rac3 must be restarted for the change to take effect.

(Restart steps omitted.)
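For completeness, the omitted restart can be sketched end-to-end with the same commands used in the leaf-to-hub direction (run as root on rac3; the guard makes the script a no-op on machines without Grid Infrastructure):

```shell
#!/bin/sh
# Restart the CRS stack so the new configured role becomes the active role.
restart_crs_for_new_role() {
    if ! command -v crsctl >/dev/null 2>&1; then
        echo "crsctl not found; run this on the node being converted" >&2
        return 1
    fi
    crsctl stop crs &&
    crsctl start crs -wait &&
    crsctl get node role status -all   # confirm the new active role
}
restart_crs_for_new_role || true
```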

After the restart, the node roles are as follows:

    [root@rac1 ~]# crsctl get node role status -all
    Node 'rac1' active role is 'hub'
    Node 'rac2' active role is 'hub'
    Node 'rac3' active role is 'leaf'

    [root@rac1 ~]# crsctl get node role config -all
    Node 'rac1' configured role is 'hub'
    Node 'rac2' configured role is 'hub'
    Node 'rac3' configured role is 'leaf'

The cluster status is now:

    [root@rac1 ~]# crsctl status res -t
    --------------------------------------------------------------------------------
    Name           Target  State        Server                   State details
    --------------------------------------------------------------------------------
    Local Resources
    --------------------------------------------------------------------------------
    ora.ASMNET1LSNR_ASM.lsnr
                   ONLINE  ONLINE       rac1                     STABLE
                   ONLINE  ONLINE       rac2                     STABLE
    ora.DATA.dg
                   ONLINE  ONLINE       rac1                     STABLE
                   ONLINE  ONLINE       rac2                     STABLE
    ora.FLEXDG.dg
                   ONLINE  ONLINE       rac1                     STABLE
                   ONLINE  ONLINE       rac2                     STABLE
    ora.LISTENER.lsnr
                   ONLINE  ONLINE       rac1                     STABLE
                   ONLINE  ONLINE       rac2                     STABLE
                   ONLINE  ONLINE       rac3                     STABLE
    ora.LISTENER_LEAF.lsnr
                   OFFLINE OFFLINE      rac3                     STABLE
    ora.OCR.dg
                   ONLINE  ONLINE       rac1                     STABLE
                   ONLINE  ONLINE       rac2                     STABLE
    ora.net1.network
                   ONLINE  ONLINE       rac1                     STABLE
                   ONLINE  ONLINE       rac2                     STABLE
                   ONLINE  ONLINE       rac3                     STABLE
    ora.ons
                   ONLINE  ONLINE       rac1                     STABLE
                   ONLINE  ONLINE       rac2                     STABLE
    ora.proxy_advm
                   OFFLINE OFFLINE      rac1                     STABLE
                   OFFLINE OFFLINE      rac2                     STABLE
    --------------------------------------------------------------------------------
    Cluster Resources
    --------------------------------------------------------------------------------
    ora.LISTENER_SCAN1.lsnr
          1        ONLINE  ONLINE       rac1                     STABLE
    ora.LISTENER_SCAN2.lsnr
          1        ONLINE  ONLINE       rac1                     STABLE
    ora.LISTENER_SCAN3.lsnr
          1        ONLINE  ONLINE       rac2                     STABLE
    ora.MGMTLSNR
          1        OFFLINE OFFLINE                               STABLE
    ora.asm
          1        ONLINE  ONLINE       rac1                     Started,STABLE
          2        ONLINE  ONLINE       rac2                     Started,STABLE
          3        ONLINE  OFFLINE                               Instance Shutdown,STABLE
    ora.cvu
          1        ONLINE  ONLINE       rac2                     STABLE
    ora.gns
          1        ONLINE  ONLINE       rac1                     STABLE
    ora.gns.vip
          1        ONLINE  ONLINE       rac1                     STABLE
    ora.orcl.db
          1        ONLINE  ONLINE       rac3                     Open,STABLE
          2        ONLINE  ONLINE       rac2                     Open,STABLE
          3        ONLINE  ONLINE       rac1                     Open,STABLE
    ora.prod1.db
          1        ONLINE  ONLINE       rac1                     Open,STABLE
          2        ONLINE  ONLINE       rac2                     Open,STABLE
    ora.qosmserver
          1        OFFLINE OFFLINE                               STABLE
    ora.rac1.vip
          1        ONLINE  ONLINE       rac1                     STABLE
    ora.rac2.vip
          1        ONLINE  ONLINE       rac2                     STABLE
    ora.rac3.vip
          1        ONLINE  ONLINE       rac3                     STABLE
    ora.scan1.vip
          1        ONLINE  ONLINE       rac1                     STABLE
    ora.scan2.vip
          1        ONLINE  ONLINE       rac1                     STABLE
    ora.scan3.vip
          1        ONLINE  ONLINE       rac2                     STABLE
    --------------------------------------------------------------------------------

Notice that after rac3 was switched back to a leaf node, a new resource, ora.LISTENER_LEAF.lsnr, appeared.
Also, the ASM instance on rac3 is no longer started, and the database instance is once again opened read-only.

One thing to note: the read-only database instance on a leaf node registers its services with the LISTENER_LEAF listener, not with LISTENER,
so lsnrctl status for the default listener never shows any registered services.

    [root@rac3 ~]# srvctl start listener -listener LISTENER_LEAF

    [grid@rac3 ~]$ lsnrctl status

    LSNRCTL for Linux: Version 12.2.0.1.0 - Production on 27-JUL-2017 16:46:01
    Copyright (c) 1991, 2016, Oracle. All rights reserved.
    Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
    STATUS of the LISTENER
    ------------------------
    Alias                     LISTENER
    Version                   TNSLSNR for Linux: Version 12.2.0.1.0 - Production
    Start Date                27-JUL-2017 16:24:27
    Uptime                    0 days 0 hr. 21 min. 34 sec
    Trace Level               off
    Security                  ON: Local OS Authentication
    SNMP                      OFF
    Listener Parameter File   /u01/app/12.2.0/grid/network/admin/listener.ora
    Listener Log File         /u01/app/grid/diag/tnslsnr/rac3/listener/alert/log.xml
    Listening Endpoints Summary...
      (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER)))
      (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.10.70.103)(PORT=1521)))
      (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.10.70.186)(PORT=1521)))
    The listener supports no services
    The command completed successfully

    [grid@rac3 ~]$ lsnrctl status listener_leaf

    LSNRCTL for Linux: Version 12.2.0.1.0 - Production on 27-JUL-2017 16:46:02
    Copyright (c) 1991, 2016, Oracle. All rights reserved.
    Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER_LEAF)))
    STATUS of the LISTENER
    ------------------------
    Alias                     LISTENER_LEAF
    Version                   TNSLSNR for Linux: Version 12.2.0.1.0 - Production
    Start Date                27-JUL-2017 16:44:31
    Uptime                    0 days 0 hr. 1 min. 31 sec
    Trace Level               off
    Security                  ON: Local OS Authentication
    SNMP                      OFF
    Listener Parameter File   /u01/app/12.2.0/grid/network/admin/listener.ora
    Listener Log File         /u01/app/grid/diag/tnslsnr/rac3/listener_leaf/alert/log.xml
    Listening Endpoints Summary...
      (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=LISTENER_LEAF)))
      (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.10.70.103)(PORT=1525)))
    Services Summary...
    Service "5491bed1838610f0e05366460a0a5736" has 1 instance(s).
      Instance "orcl_1", status READY, has 1 handler(s) for this service...
    Service "5507ca8c0abd4747e05365460a0a8d01" has 1 instance(s).
      Instance "orcl_1", has 1 handler(s) for this service...
    Service "orcl" has 1 instance(s).
      Instance "orcl_1", has 1 handler(s) for this service...
    Service "orclXDB" has 1 instance(s).
      Instance "orcl_1", has 1 handler(s) for this service...
    Service "orclpdb" has 1 instance(s).
      Instance "orcl_1", has 1 handler(s) for this service...
    Service "ztp" has 1 instance(s).
      Instance "orcl_1", has 1 handler(s) for this service...
    The command completed successfully

One last point: the default listener port on a leaf node is 1525, not 1521.
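If a script needs the leaf listener's port, it can be scraped from the endpoint summary. A minimal sketch, with the endpoint line hard-coded from the lsnrctl output above:

```shell
#!/bin/sh
# TNS endpoint as printed by `lsnrctl status listener_leaf`, hard-coded
# here for illustration; in practice parse the live lsnrctl output.
endpoint="(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.10.70.103)(PORT=1525)))"

# Extract the port number from the endpoint description.
port=$(printf '%s' "$endpoint" | sed 's/.*(PORT=\([0-9]*\)).*/\1/')
printf '%s\n' "$port"
```

With the endpoint above, the sketch prints 1525.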

Conclusion

  • Converting a node's role requires restarting the CRS stack on that node.
  • In 12cR2, converting a node to a leaf node requires GNS to be configured.
  • The ASM instance on a leaf node does not start, and database instances there can only be opened read-only.
  • In 12cR1 the inventory also had to be updated manually; 12cR2 no longer requires this, so the role-change procedure is much simpler.
