Step by Step Create ACFS File System Using RAC Commands


ACFS (Oracle ASM Cluster File System) is a cluster file system service used for high-availability deployments.
Example – To achieve high availability for Oracle GoldenGate in Oracle RAC, we can place the GoldenGate-related files on ACFS.
In this article, we will show how to create an ACFS file system in Oracle RAC using the command line.

ENVIRONMENT DETAILS

ORACLE GRID VERSION – 12.1.0.2
NODES – NODE3 , NODE4
OS – Oracle Solaris (the disk paths and mkfs syntax below are Solaris; the Linux variant is noted where it differs)

1. Create an ASM diskgroup (on NODE3)

echo $ORACLE_HOME
/crsapp/app/oracle/product/grid12c

echo $ORACLE_SID
+ASM1

sqlplus / as sysasm

SQL> CREATE DISKGROUP ACFSPOC EXTERNAL REDUNDANCY
DISK '/dev/rdsk/c0t514F0C5785C00A0Bd0s6' SIZE 269G
ATTRIBUTE 'compatible.asm' = '12.1.0.0.0',
'compatible.rdbms'='12.1.0.0.0' ,
'compatible.advm' = '12.1.0.0.0';

Diskgroup created.
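ADVM volumes can only be created on a diskgroup whose compatible.advm attribute is high enough (11.2.0.0.0 or later), which is why the CREATE DISKGROUP above sets it explicitly. As an illustration only (this is plain bash, not an Oracle tool), a dotted version string can be sanity-checked before running the DDL:

```shell
#!/bin/bash
# Compare two dotted version strings, e.g. "12.1.0.0.0" vs "11.2.0.0.0".
# Succeeds (exit 0) when $1 >= $2. Illustration only -- not an Oracle utility.
version_ge() {
  [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

required="11.2.0.0.0"   # minimum compatible.advm needed for ADVM volumes
setting="12.1.0.0.0"    # the value used in the CREATE DISKGROUP above

if version_ge "$setting" "$required"; then
  echo "compatible.advm $setting is sufficient for ADVM"
else
  echo "compatible.advm $setting is too low; need at least $required"
fi
```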

2. Mount the diskgroup from the other node (on NODE4)

echo $ORACLE_HOME
/crsapp/app/oracle/product/grid12c

echo $ORACLE_SID
+ASM2

asmcmd

ASMCMD> mount ACFSPOC

3. Check the diskgroup status (from any node)

SQL> set lines 299
SQL> select GROUP_NUMBER,NAME,COMPATIBILITY,DATABASE_COMPATIBILITY from gv$asm_diskgroup where NAME='ACFSPOC';

GROUP_NUMBER NAME COMPATIBILITY DATABASE_COMPATIBILITY
------------ ------------------------------ ------------------------------------------------------------ ------------------------------------------------------------
13 ACFSPOC 12.1.0.0.0 12.1.0.0.0
13 ACFSPOC 12.1.0.0.0 12.1.0.0.0

$ crsctl stat res ora.ACFSPOC.dg -t
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ACFSPOC.dg
ONLINE ONLINE node3 STABLE
ONLINE ONLINE node4 STABLE
--------------------------------------------------------------------------------

4. Create a new ADVM volume in the diskgroup ACFSPOC (on NODE3)

ASMCMD> volcreate -G ACFSPOC -s 50G SHAREDVOL1
ASMCMD> volinfo --all
Diskgroup Name: ACFSPOC

Volume Name: SHAREDVOL1
Volume Device: /dev/asm/sharedvol1-201   <<-- this is the volume device
State: ENABLED
Size (MB): 51200
Resize Unit (MB): 512
Redundancy: UNPROT
Stripe Columns: 8
Stripe Width (K): 1024
Usage:
Mountpath:
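The volume device path from volinfo is needed in every later step (mkfs, srvctl add/start filesystem), so it is worth capturing it reliably rather than copying it by hand. A small sketch of pulling it out with awk (the volinfo output is embedded here as a sample; on a live cluster you would pipe `asmcmd volinfo --all` into the same awk):

```shell
#!/bin/bash
# Extract the "Volume Device:" value from asmcmd volinfo output.
# The sample below mirrors the output shown above.
sample='Diskgroup Name: ACFSPOC

         Volume Name: SHAREDVOL1
         Volume Device: /dev/asm/sharedvol1-201
         State: ENABLED'

# Split on ": " and print the value after the "Volume Device" label.
device=$(printf '%s\n' "$sample" | awk -F': ' '/Volume Device:/ {print $2}')
echo "$device"
```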

oracle@NODE4:~$ crsctl stat res ora.ACFSPOC.SHAREDVOL1.advm -t
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ACFSPOC.SHAREDVOL1.advm
ONLINE ONLINE NODE3 STABLE
ONLINE ONLINE NODE4 STABLE

5. Create the mount point with proper permissions on both nodes

-- On NODE3 (as root):

mkdir /acfspoc
chown oracle:oinstall /acfspoc

-- On NODE4 (as root):

mkdir /acfspoc
chown oracle:oinstall /acfspoc
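On a larger cluster, repeating the mkdir/chown on every node by hand is error-prone. A minimal dry-run sketch that only prints the commands to run on each node (the node list and mount point are taken from this article; wrap the body in ssh to execute remotely):

```shell
#!/bin/bash
# Generate the mount-point setup commands for every node in the cluster.
# Dry run: this only prints the commands; run them as root on each node.
NODES="node3 node4"
MOUNTPOINT="/acfspoc"

for node in $NODES; do
  echo "# on $node (as root):"
  echo "mkdir -p $MOUNTPOINT"
  echo "chown oracle:oinstall $MOUNTPOINT"
done
```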

6. Create the ACFS file system on the ADVM volume (on NODE3, as the grid owner)

$ /sbin/mkfs -F acfs /dev/asm/sharedvol1-201
mkfs: version = 12.1.0.2.0
mkfs: on-disk version = 39.0
mkfs: volume = /dev/asm/sharedvol1-201
mkfs: volume size = 53687091200 ( 50.00 GB )
mkfs: Format complete.

NOTE – On Linux the command is /sbin/mkfs -t acfs /dev/asm/sharedvol1-201

7. Register the ACFS file system with CRS

export ORACLE_HOME=/crsapp/app/oracle/product/grid12c

-- Run from root

root@NODE3:~# $ORACLE_HOME/bin/srvctl add filesystem -d /dev/asm/sharedvol1-201 -m /acfspoc -u oracle -fstype ACFS -autostart ALWAYS

-- Check the resource status

$ crsctl stat res ora.acfspoc.sharedvol1.acfs -t
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.acfspoc.sharedvol1.acfs
OFFLINE OFFLINE NODE3 STABLE
OFFLINE OFFLINE NODE4 STABLE
--------------------------------------------------------------------------------

8. Start the ACFS file system resource (on NODE3, as root)

root@NODE3:~# $ORACLE_HOME/bin/srvctl start filesystem -d /dev/asm/sharedvol1-201

---- Check the status

$ crsctl stat res ora.acfspoc.sharedvol1.acfs -t
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.acfspoc.sharedvol1.acfs
ONLINE ONLINE NODE3 mounted on /acfspoc,
STABLE
ONLINE ONLINE NODE4 mounted on /acfspoc,
STABLE
--------------------------------------------------------------------------------

$ srvctl config filesystem
Volume device: /dev/asm/sharedvol1-201
Canonical volume device: /dev/asm/sharedvol1-201
Auxiliary volume devices:
Mountpoint path: /acfspoc
User: oracle
Type: ACFS
Mount options:
Description:
ACFS file system is enabled
ACFS file system is individually enabled on nodes:
ACFS file system is individually disabled on nodes:

9. Now validate the ACFS mount points

oracle@NODE3$ df -kh /acfspoc
Filesystem Size Used Available Capacity Mounted on
/dev/asm/sharedvol1-201
50G 178M 50G 1% /acfspoc

oracle@NODE4:/acfspoc$ df -kh /acfspoc
Filesystem Size Used Available Capacity Mounted on
/dev/asm/sharedvol1-201
50G 178M 50G 1% /acfspoc

Try creating a test file on NODE3 and check whether the same file is visible on NODE4.

oracle@NODE3:/acfspoc$ touch test.log

oracle@NODE3:/acfspoc$ ls -ltr
total 128
drwx------ 2 root root 65536 Oct 30 10:34 lost+found
-rw-r--r-- 1 oracle oinstall 0 Oct 30 10:39 test.log

oracle@NODE4:~$ cd /acfspoc/
oracle@NODE4:/acfspoc$ ls -ltr
total 128
drwx------ 2 root root 65536 Oct 30 10:34 lost+found
-rw-r--r-- 1 oracle oinstall 0 Oct 30 10:39 test.log
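The df check above can also be scripted, so that each node confirms the mount point is backed by the expected ADVM device. A sketch using an embedded df sample (on a live node you would feed it `df -k /acfspoc` instead):

```shell
#!/bin/bash
# Verify that a mount point is backed by the expected ADVM volume device
# by parsing df output. The sample mirrors the Solaris df output above,
# where the device name sits alone on the second line.
sample='Filesystem           Size  Used Available Capacity Mounted on
/dev/asm/sharedvol1-201
                      50G  178M       50G       1% /acfspoc'

expected="/dev/asm/sharedvol1-201"
device=$(printf '%s\n' "$sample" | awk 'NR==2 {print $1}')

if [ "$device" = "$expected" ]; then
  echo "/acfspoc is backed by $expected"
else
  echo "unexpected device: $device"
fi
```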

We have successfully created an ACFS file system in a two-node RAC.
