Last Updated: December 10, 2025
Tested On: Oracle Enterprise Linux 9.5, Oracle ASM 19c Patch 25

Oracle Automatic Storage Management (ASM) provides a robust storage management solution for Oracle databases. In this comprehensive guide, we’ll walk through ASM 19c Installation (patch set 25) in a standalone “Oracle Restart” grid infrastructure configuration on Oracle Enterprise Linux (OEL) 9.5.
Why Use Oracle ASM for Database Storage?
Before diving into the installation, it helps to understand why ASM is the standard choice for managing Oracle database storage:
Key Benefits of Oracle ASM:
Performance Advantages:
- Automatic I/O balancing – Distributes load evenly across all available disks
- Stripe and mirror everywhere (SAME) – Optimizes performance without manual tuning
- Direct asynchronous I/O – Eliminates file system overhead
Management Simplification:
- No file system needed – Direct disk management reduces complexity
- Automatic rebalancing – Dynamically redistributes data when disks are added/removed
- Online disk operations – Add, remove, or replace disks without downtime
High Availability:
- Built-in redundancy – Normal or high redundancy options
- Automatic failure handling – Detects and recovers from disk failures
- Fast mirror resync – Quickly rebuilds mirrors after failures
Enterprise Features:
- Integrated with Oracle Database – No third-party software needed
- Scalability – Supports petabyte-scale databases
- Oracle RAC compatibility – Seamless cluster storage
When to Use ASM vs. File System:
| Storage Type | Best For | Avoid When |
|---|---|---|
| Oracle ASM | Production databases, RAC environments, large databases (>100GB) | Small dev databases, non-Oracle workloads |
| File System | Development/test databases, small databases, mixed workloads | Production critical systems |
Prerequisites
Before starting the installation process, ensure you have:
System Requirements:
- Oracle Enterprise Linux (OEL) 9.x installed
- At least 100GB of storage mounted as /u01
- Minimum 8GB RAM (16GB+ recommended for production)
- At least 2 CPU cores (4+ recommended)
Software Requirements:
- Oracle 19c Grid Infrastructure software from Oracle eDelivery
- Required patches from Oracle Support
- OPatch utility (version 12.2.0.1.45 or higher)
Network Requirements:
- Static IP address configured
- Hostname properly set in /etc/hosts
- DNS resolution working (if using DNS)
Storage Requirements:
- Physical disks or LUNs for ASM disk groups
- Minimum 3 disks recommended for production
- Each disk should be at least 10GB
System Preparation
Begin by updating your system and installing the Oracle database preinstall package:
# Login as root
sudo su -
# Update the system
yum update
# Install Oracle database preinstall package
yum install oracle-database-preinstall-19c
What the preinstall package does:
- Sets kernel parameters (shmmax, shmall, file-max, etc.)
- Creates necessary system groups
- Configures resource limits in /etc/security/limits.conf
- Sets up required packages and dependencies
The preinstall package automatically configures many kernel parameters required for Oracle installations, saving you time and reducing the risk of configuration errors.
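To confirm what was configured, you can spot-check a few of the values it manages. A quick sanity check along these lines (the exact limits file name varies between preinstall releases, so treat the path as an assumption):
# Check a few of the kernel parameters the preinstall RPM configures
sysctl kernel.shmmax kernel.shmall fs.file-max fs.aio-max-nr kernel.sem
# The RPM drops its resource-limit settings into /etc/security/limits.d/
grep -r oracle /etc/security/limits.d/ 2>/dev/null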
Creating Users and Groups
Next, create the necessary user groups for Oracle ASM and database operations:
# Create Oracle database groups
groupadd oinstall
groupadd dba
groupadd oper
groupadd backupdba
groupadd dgdba
groupadd kmdba
# Create ASM-specific groups
groupadd asmadmin
groupadd asmdba
groupadd asmoper
groupadd osoper
# Update oracle user and create grid user
usermod -a -G dba,oper,backupdba,dgdba,kmdba oracle
usermod -a -G asmdba,osoper oracle
useradd -u 54323 -g oinstall -G asmadmin,asmdba,asmoper,racdba grid
Group Purpose Explanation:
| Group | Purpose | Used By |
|---|---|---|
| oinstall | Software installation ownership | grid, oracle |
| asmadmin | ASM administration (SYSASM) | grid |
| asmdba | ASM database access | grid, oracle |
| asmoper | ASM limited operations | grid |
| dba | Database administration (SYSDBA) | oracle |
| backupdba | RMAN backup operations | oracle |
Note: You might see “group already exists” errors for some commands, which is normal if the preinstall package has already created these groups.
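Before moving on, confirm the memberships with id (expected groups per the commands above):
id oracle   # expect oinstall plus dba, oper, backupdba, dgdba, kmdba, asmdba, osoper
id grid     # expect oinstall plus asmadmin, asmdba, asmoper, racdba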
Directory Structure Setup
Create the necessary directory structure and set appropriate permissions:
# Create directory structure for Oracle user
mkdir -p /u01/app/oracle/product/19.0.0/db_1
# Create directory structure for Grid user
mkdir -p /u01/app/grid
mkdir -p /u01/app/19.0.0/grid
# Set ownership and permissions
chown -R grid:oinstall /u01
chown -R oracle:oinstall /u01/app/oracle
chmod -R 775 /u01
Directory Structure Explained:
/u01/
├── app/
│   ├── grid/                      # Grid ORACLE_BASE
│   ├── 19.0.0/
│   │   └── grid/                  # Grid ORACLE_HOME
│   └── oracle/
│       └── product/
│           └── 19.0.0/
│               └── db_1/          # Database ORACLE_HOME
The directory structure follows Oracle’s Optimal Flexible Architecture (OFA) standard, which organizes Oracle software and database files in a logical, consistent manner.
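A quick check that ownership and permissions match the intent above:
ls -ld /u01 /u01/app /u01/app/grid /u01/app/19.0.0/grid /u01/app/oracle
# The grid paths should show grid:oinstall; /u01/app/oracle and below should show oracle:oinstall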
Why OFA Matters:
- Simplifies administration across multiple databases
- Prevents disk I/O bottlenecks
- Facilitates backups and disaster recovery
- Makes upgrades and patching easier
Environment Configuration
Configure the environment for Grid users:
Grid User Environment
Set up the environment for the Grid user:
su - grid
vi .bash_profile
Add the following content to the Grid user’s .bash_profile:
ORACLE_SID=+ASM; export ORACLE_SID
ORACLE_BASE=/u01/app/grid; export ORACLE_BASE
ORACLE_HOME=/u01/app/19.0.0/grid; export ORACLE_HOME
LD_LIBRARY_PATH=$ORACLE_HOME/lib; export LD_LIBRARY_PATH
TNS_ADMIN=$ORACLE_HOME/network/admin; export TNS_ADMIN
PATH=.:${PATH}:$HOME/bin:$ORACLE_HOME/bin
PATH=${PATH}:/usr/bin:/bin:/usr/local/bin
export PATH
export TEMP=/tmp
export TMPDIR=/tmp
umask 022
Save and activate:
source ~/.bash_profile
Verify environment:
echo $ORACLE_HOME
echo $ORACLE_SID
Expected output:
/u01/app/19.0.0/grid
+ASM
Oracle ASM 19c Installation (Grid Software)
Now we’ll install the Grid Infrastructure software:
Extracting and Patching the Software
# Login as the grid user
su - grid
# Move to the staging directory that holds the downloaded zip files
cd /u01/software
# Extract the Grid Infrastructure software into the grid home
unzip V982068-01.zip -d /u01/app/19.0.0/grid
# Extract the patch
unzip p36916690_190000_Linux-x86-64.zip
# Extract the OPatch utility
unzip p6880880_190000_Linux-x86-64.zip
# Check the existing OPatch version
cd $ORACLE_HOME
./OPatch/opatch version
The output should show:
OPatch Version: 12.2.0.1.17
OPatch succeeded.
Now replace the OPatch utility with the new version:
mv OPatch/ OPatch.12.2.0.1.17
mv /u01/software/OPatch/ .
./OPatch/opatch version
The output should show the updated version:
OPatch Version: 12.2.0.1.45
OPatch succeeded.
Why Update OPatch?
- Newer OPatch versions support latest patches
- Fixes bugs in older OPatch versions
- Required for applying patch 36916690
Configure Additional Swap Space
Oracle ASM installation requires adequate swap space. Create additional swap if needed:
# Login as root
sudo su -
# Create a swap file
dd if=/dev/zero of=/swapfile bs=1M count=6144
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
# Add the swap file to /etc/fstab for persistence
vi /etc/fstab
# Add: /swapfile swap swap defaults 0 0
# Verify the swap space
free -g
You should see output similar to:
total used free shared buff/cache available
Mem: 28 0 0 0 28 27
Swap: 5 0 5
Swap Space Guidelines:
| RAM Size | Recommended Swap |
|---|---|
| 1-2 GB | 1.5x RAM |
| 2-16 GB | Equal to RAM |
| 16 GB+ | 16 GB |
Running the Grid Setup
Create a response file named ResponseFileGridSetup.rsp with the appropriate configuration parameters for your environment, then run the Grid Infrastructure setup:
Reference: Response file for Oracle ASM blog
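The exact contents depend on your environment; the referenced post has the full file used for this install. As a rough illustration only, the key entries look something like the sketch below (parameter names are taken from the gridSetup response file template shipped under install/response in the unzipped grid home; the values are assumptions to adapt):
oracle.install.responseFileVersion=/oracle/install/rspfmt_crsinstall_response_schema_v19.0.0
INVENTORY_LOCATION=/u01/app/oraInventory
# HA_CONFIG = standalone Oracle Restart; HA_SWONLY = software-only
oracle.install.option=HA_CONFIG
ORACLE_BASE=/u01/app/grid
oracle.install.asm.OSDBA=asmdba
oracle.install.asm.OSOPER=asmoper
oracle.install.asm.OSASM=asmadmin
With the response file staged in /u01/software, run the setup: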
./gridSetup.sh -silent -ignoreInternalDriverError \
  -responseFile /u01/software/ResponseFileGridSetup.rsp \
  INVENTORY_LOCATION=/u01/app/oraInventory \
  -applyRU /u01/software/36916690 \
  SELECTED_LANGUAGES=en \
  oracle.install.asm.SYSASMPassword=yoursecurepassword \
  oracle.install.asm.monitorPassword=yoursecurepassword
Command Breakdown:
- -silent – Run in silent mode (no GUI)
- -ignoreInternalDriverError – Bypass driver warnings
- -responseFile – Use response file for installation parameters
- -applyRU – Apply Release Update patch during installation
- INVENTORY_LOCATION – Central inventory directory
- oracle.install.asm.SYSASMPassword – SYSASM user password
- oracle.install.asm.monitorPassword – ASMSNMP user password
The output will show progress and eventually indicate success:
Preparing the home to patch...
Applying the patch /u01/software/36916690...
Successfully applied the patch.
The log can be found at: /tmp/GridSetupActions2025-05-01_12-42-53PM/installerPatchActions_2025-05-01_12-42-53PM.log
Launching Oracle Grid Infrastructure Setup Wizard...
[WARNING] [INS-32047] The location (/u01/app/oraInventory) specified for the central inventory is not empty.
ACTION: It is recommended to provide an empty location for the inventory.
[WARNING] [INS-13014] Target environment does not meet some optional requirements.
CAUSE: Some of the optional prerequisites are not met. See logs for details. gridSetupActions2025-05-01_12-42-53PM.log
ACTION: Identify the list of failed prerequisite checks from the log: gridSetupActions2025-05-01_12-42-53PM.log. Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually.
The response file for this session can be found at:
/u01/app/19.0.0/grid/install/response/grid_2025-05-01_12-42-53PM.rsp
You can find the log of this install session at:
/tmp/GridSetupActions2025-05-01_12-42-53PM/gridSetupActions2025-05-01_12-42-53PM.log
As a root user, execute the following script(s):
1. /u01/app/oraInventory/orainstRoot.sh
2. /u01/app/19.0.0/grid/root.sh
Execute /u01/app/oraInventory/orainstRoot.sh on the following nodes:
[sanjeeva-test01]
sh /u01/app/19.0.0/grid/root.sh
Successfully Setup Software with warning(s). Moved the install session logs to: /u01/app/oraInventory/logs/GridSetupActions2025-05-01_12-42-53PM
Note on Warnings:
- INS-32047 is informational – safe to ignore when the inventory location already belongs to an existing Oracle installation
- INS-13014 refers to optional prerequisites – review log if concerned
Running the Root Scripts
As instructed, run the root scripts:
# Login as root
sudo su -
# Run the inventory script
sh /u01/app/oraInventory/orainstRoot.sh
# Run the root.sh script
sh /u01/app/19.0.0/grid/root.sh
What root.sh Does:
- Sets file permissions for Grid Infrastructure
- Creates the /etc/oracle directory
- Sets up init scripts for automatic startup
- Configures Oracle Restart (Oracle High Availability Services)
- Creates cluster synchronization services (CSS)
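After the root scripts finish, a few quick checks confirm the stack is in place (a sketch; adjust the grid home path if yours differs):
# HAS should report online once root.sh completes
/u01/app/19.0.0/grid/bin/crsctl check has
# root.sh records the OLR location here
cat /etc/oracle/olr.loc
# The ohasd daemon and its systemd unit should be running
ps -ef | grep -i "[o]hasd"
systemctl status oracle-ohasd.service --no-pager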
Configure ASM Disks Using UDEV Rules
Prepare the physical disks for ASM:
# Login as root
sudo su -
# Partition the disks (example for /dev/sdd)
fdisk /dev/sdd
Follow the prompts to create a new primary partition using the entire disk. Repeat for other disks like /dev/sde.
fdisk Commands:
n # New partition
p # Primary
1 # Partition number
# Default first sector (press Enter)
# Default last sector (press Enter)
w # Write changes
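If you prefer a non-interactive approach (handy when preparing many disks), parted achieves the same result; a sketch, and note that it destroys any existing data on the target disk:
# Label the disk and create one partition spanning it (DESTRUCTIVE – check the device name first)
parted -s /dev/sdd mklabel msdos
parted -s /dev/sdd mkpart primary 1MiB 100%
partprobe /dev/sdd
blkid /dev/sdd1   # should now report a PARTUUID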
Next, create UDEV rules to map the disk partitions to ASM disk names:
# Get the PARTUUID for the partitions
blkid /dev/sdd1
blkid /dev/sde1
Example output:
/dev/sdd1: PARTUUID="f5f8d48f-01"
/dev/sde1: PARTUUID="239e6945-01"
Create UDEV rules file:
vi /etc/udev/rules.d/99-oracle-asmdevices.rules
Add rules like these, replacing the PARTUUIDs with your actual values:
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/sbin/blkid -s PARTUUID -o value /dev/%k", RESULT=="f5f8d48f-01", SYMLINK+="oracleasm/disks/REDO01", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/sbin/blkid -s PARTUUID -o value /dev/%k", RESULT=="239e6945-01", SYMLINK+="oracleasm/disks/REDO02", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/sbin/blkid -s PARTUUID -o value /dev/%k", RESULT=="d2f5cc92-01", SYMLINK+="oracleasm/disks/REDO03", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/sbin/blkid -s PARTUUID -o value /dev/%k", RESULT=="eea48c3e-01", SYMLINK+="oracleasm/disks/ARCH01", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/sbin/blkid -s PARTUUID -o value /dev/%k", RESULT=="fa329063-01", SYMLINK+="oracleasm/disks/FRA01", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/sbin/blkid -s PARTUUID -o value /dev/%k", RESULT=="4a17a820-01", SYMLINK+="oracleasm/disks/DATA01", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/sbin/blkid -s PARTUUID -o value /dev/%k", RESULT=="c14094dc-01", SYMLINK+="oracleasm/disks/DATA02", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/sbin/blkid -s PARTUUID -o value /dev/%k", RESULT=="7ef38522-01", SYMLINK+="oracleasm/disks/DATA03", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/sbin/blkid -s PARTUUID -o value /dev/%k", RESULT=="e2b8082d-01", SYMLINK+="oracleasm/disks/DATA04", OWNER="grid", GROUP="asmadmin", MODE="0660"
UDEV Rule Syntax Explained:
- KERNEL=="sd?1" – Match any sd device partition 1 (sda1, sdb1, etc.)
- SUBSYSTEM=="block" – Block device type
- PROGRAM=="/sbin/blkid..." – Get PARTUUID dynamically
- RESULT=="..." – Match specific PARTUUID
- SYMLINK+="..." – Create symbolic link
- OWNER="grid" – Set owner
- GROUP="asmadmin" – Set group
- MODE="0660" – Set permissions (rw-rw----)
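With nine rules to write, copy-paste mistakes are easy. The small helper below is one way to generate the lines from a device-to-name map (a sketch assuming the MBR-style PARTUUIDs shown above; review the output before appending it to the rules file):
# Map each partition to its intended ASM disk name, then emit one rule per entry
declare -A ASMDISKS=( [/dev/sdd1]=REDO01 [/dev/sde1]=REDO02 )
for dev in "${!ASMDISKS[@]}"; do
  uuid=$(blkid -s PARTUUID -o value "$dev")
  printf 'KERNEL=="sd?1", SUBSYSTEM=="block", PROGRAM=="/sbin/blkid -s PARTUUID -o value /dev/%%k", RESULT=="%s", SYMLINK+="oracleasm/disks/%s", OWNER="grid", GROUP="asmadmin", MODE="0660"\n' "$uuid" "${ASMDISKS[$dev]}"
done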
Reload the UDEV rules and trigger them:
udevadm control --reload-rules
udevadm trigger
Verify the disks are properly mapped:
ls -ltra /dev/oracleasm/disks/*
ls -ld /dev/sd*1
You should see output like:
lrwxrwxrwx. 1 root root 10 May 1 13:44 /dev/oracleasm/disks/REDO03 -> ../../sde1
lrwxrwxrwx. 1 root root 10 May 1 13:44 /dev/oracleasm/disks/REDO02 -> ../../sdd1
lrwxrwxrwx. 1 root root 10 May 1 13:44 /dev/oracleasm/disks/REDO01 -> ../../sdc1
De-configuring and Re-configuring Oracle Restart (roothas.sh)
If you encounter issues during configuration, you may need to run the roothas.sh script with the -deconfig flag first:
sh /u01/app/19.0.0/grid/crs/install/roothas.sh -deconfig -force
When to Use -deconfig:
- Failed previous ASM configuration attempt
- Need to reconfigure ASM from scratch
- Changing fundamental ASM settings
Then run the roothas.sh script without flags:
sh /u01/app/19.0.0/grid/crs/install/roothas.sh
The output should indicate successful configuration:
Using configuration parameter file: /u01/app/19.0.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
/u01/app/grid/crsdata/sanjeeva-test01/crsconfig/roothas_2025-05-01_01-03-25PM.log
2025/05/01 13:03:36 CLSRSC-363: User ignored prerequisites during installation
Redirecting to /bin/systemctl restart rsyslog.service
LOCAL ADD MODE
Creating OCR keys for user 'grid', privgrp 'oinstall'..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node sanjeeva-test01 successfully pinned.
2025/05/01 13:03:47 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.service'
sanjeeva-test01 2025/05/01 13:04:36 /u01/app/grid/crsdata/sanjeeva-test01/olr/backup_20250501_130436.olr 760403972
2025/05/01 13:04:37 CLSRSC-327: Successfully configured Oracle Restart for a standalone server
Key Success Indicators:
- “Operation successful” for OCR key creation
- “Successfully configured Oracle Restart”
- “Successfully pinned” the node
ASM Disk Configuration
Now we need to add ASM resources and configure disks:
# Add ASM and listener resources
srvctl add asm
srvctl add listener -l LISTENER
# Try to start ASM (may fail until disk groups are created)
srvctl start asm
Note: ASM may fail to start initially because disk groups haven’t been created yet. This is expected.
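You can confirm that the resources were registered with Oracle Restart regardless (run as the grid user):
srvctl config asm                    # shows what Oracle Restart has registered for ASM
srvctl status asm
srvctl config listener -l LISTENER
srvctl status listener -l LISTENER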
Initialize the ASM Instance
Create a parameter file for the ASM instance:
# Login as grid user
su - grid
# Create the parameter file
vi $ORACLE_HOME/dbs/init+ASM.ora
Add the following content:
+ASM.__oracle_base='/u01/app/grid'#ORACLE_BASE set from in memory value
#+ASM.asm_diskgroups='REDO01','REDO02','REDO03','DATA','FRA','ARCH' #Manual Mount
*.asm_diskstring='/dev/oracleasm/disks/*'
*.asm_power_limit=1
*.large_pool_size=12M
*.remote_login_passwordfile='EXCLUSIVE'
Parameter Explanations:
| Parameter | Value | Purpose |
|---|---|---|
| asm_diskstring | /dev/oracleasm/disks/* | Tells ASM where to find candidate disks |
| asm_power_limit | 1 | Controls rebalance speed (default 1; higher = faster, up to 1024 in 19c) |
| large_pool_size | 12M | Memory for parallel execution |
| remote_login_passwordfile | EXCLUSIVE | Allows remote SYSASM connections |
Note: The asm_diskgroups line is commented out initially. We’ll uncomment it after creating the disk groups.
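If the earlier srvctl start asm attempt left the instance down, bring it up manually with this parameter file before creating the disk groups. A sketch (an ORA-15110 warning about no mounted disk groups is expected at this stage):
# Run as the grid user; skip the STARTUP if the instance is already running
sqlplus / as sysasm <<'EOF'
STARTUP PFILE='/u01/app/19.0.0/grid/dbs/init+ASM.ora';
-- The disks mapped by the udev rules should appear with header_status CANDIDATE
SELECT path, header_status FROM v$asm_disk;
EXIT;
EOF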
Creating ASM Disk Groups
Now we can create the ASM disk groups:
# Login as grid user
su - grid
# Connect to ASM instance as SYSASM
sqlplus / as sysasm
Execute the following SQL commands to create the disk groups:
-- Create REDO01 disk group
CREATE DISKGROUP REDO01 EXTERNAL REDUNDANCY DISK '/dev/oracleasm/disks/REDO01';
ALTER DISKGROUP REDO01 SET ATTRIBUTE 'compatible.asm' = '19.0.0.0.0';
-- Create REDO02 disk group
CREATE DISKGROUP REDO02 EXTERNAL REDUNDANCY DISK '/dev/oracleasm/disks/REDO02';
ALTER DISKGROUP REDO02 SET ATTRIBUTE 'compatible.asm' = '19.0.0.0.0';
-- Create REDO03 disk group
CREATE DISKGROUP REDO03 EXTERNAL REDUNDANCY DISK '/dev/oracleasm/disks/REDO03';
ALTER DISKGROUP REDO03 SET ATTRIBUTE 'compatible.asm' = '19.0.0.0.0';
-- Create ARCH disk group
CREATE DISKGROUP ARCH EXTERNAL REDUNDANCY DISK '/dev/oracleasm/disks/ARCH01';
ALTER DISKGROUP ARCH SET ATTRIBUTE 'compatible.asm' = '19.0.0.0.0';
-- Create FRA disk group
CREATE DISKGROUP FRA EXTERNAL REDUNDANCY DISK '/dev/oracleasm/disks/FRA01';
ALTER DISKGROUP FRA SET ATTRIBUTE 'compatible.asm' = '19.0.0.0.0';
-- Create DATA disk group with normal redundancy
CREATE DISKGROUP DATA NORMAL REDUNDANCY
DISK
'/dev/oracleasm/disks/DATA01' NAME DATA01,
'/dev/oracleasm/disks/DATA02' NAME DATA02,
'/dev/oracleasm/disks/DATA03' NAME DATA03,
'/dev/oracleasm/disks/DATA04' NAME DATA04
ATTRIBUTE
'compatible.asm' = '19.0.0';
Redundancy Levels Explained:
| Redundancy | Mirrors | Usable Space | Use Case |
|---|---|---|---|
| EXTERNAL | 0 (none) | 100% | External RAID already provides protection |
| NORMAL | 2-way | 50% | Standard production (recommended) |
| HIGH | 3-way | 33% | Mission-critical, maximum protection |
Why Different Disk Groups?
- DATA – Database datafiles (most critical)
- FRA – Fast Recovery Area (backups, archive logs)
- ARCH – Archive logs (separate from FRA for performance)
- REDO01/02/03 – Separate redo log groups (I/O isolation)
After creating the disk groups, update the init+ASM.ora file to include the disk groups:
vi $ORACLE_HOME/dbs/init+ASM.ora
Uncomment the asm_diskgroups line and list every disk group you created:
+ASM.asm_diskgroups='REDO01','REDO02','REDO03','DATA','FRA','ARCH'
Why This Matters: This tells ASM to automatically mount these disk groups at startup.
Starting and Verifying ASM
Restart the ASM instance and verify that everything is working:
# Restart ASM manually using the updated parameter file
sqlplus / as sysasm
Then in SQL*Plus:
SHUTDOWN IMMEDIATE;
STARTUP PFILE='/u01/app/19.0.0/grid/dbs/init+ASM.ora';
EXIT;
Expected output:
ASM instance shutdown
ASM instance started
Total System Global Area 1140850688 bytes
Fixed Size 9136128 bytes
Variable Size 1107296256 bytes
ASM Cache 24418304 bytes
ASM diskgroups mounted
ASM diskgroups volume enabled
Now you can use crsctl to verify the resources:
crsctl stat res -t
You should see output showing all resources online:
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ARCH.dg
ONLINE ONLINE sanjeeva-test01 STABLE
ora.DATA.dg
ONLINE ONLINE sanjeeva-test01 STABLE
ora.FRA.dg
ONLINE ONLINE sanjeeva-test01 STABLE
ora.LISTENER.lsnr
ONLINE ONLINE sanjeeva-test01 STABLE
ora.REDO01.dg
ONLINE ONLINE sanjeeva-test01 STABLE
ora.REDO02.dg
ONLINE ONLINE sanjeeva-test01 STABLE
ora.REDO03.dg
ONLINE ONLINE sanjeeva-test01 STABLE
ora.asm
ONLINE ONLINE sanjeeva-test01 Started,STABLE
ora.ons
OFFLINE OFFLINE sanjeeva-test01 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
1 ONLINE ONLINE sanjeeva-test01 STABLE
ora.diskmon
1 OFFLINE OFFLINE STABLE
ora.evmd
1 ONLINE ONLINE sanjeeva-test01 STABLE
--------------------------------------------------------------------------------
Resource Status Meanings:
| Status | Meaning |
|---|---|
| ONLINE/ONLINE | Resource is up and healthy |
| OFFLINE/OFFLINE | Resource is intentionally stopped (normal for some) |
| ONLINE/INTERMEDIATE | Resource is starting/stopping |
| ONLINE/UNKNOWN | CRS can’t determine state (problem!) |
Verify Disk Group Space:
sqlplus / as sysasm
SELECT name, state, type, total_mb, free_mb,
ROUND((total_mb-free_mb)/total_mb*100,2) pct_used
FROM v$asm_diskgroup;
Expected output:
NAME STATE TYPE TOTAL_MB FREE_MB PCT_USED
-------- -------- --------- ---------- -------- --------
DATA MOUNTED NORMAL 204800 204750 0.02
FRA MOUNTED EXTERN 102400 102390 0.01
ARCH MOUNTED EXTERN 51200 51195 0.01
REDO01 MOUNTED EXTERN 10240 10235 0.05
REDO02 MOUNTED EXTERN 10240 10235 0.05
REDO03 MOUNTED EXTERN 10240 10235 0.05
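The same information is available from asmcmd, which is often quicker for day-to-day checks (run as the grid user):
asmcmd lsdg        # disk group state, redundancy, and space figures
asmcmd lsdsk -k    # per-disk details, including the group each disk belongs to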
Troubleshooting Common Issues
If you encounter issues during ASM installation, here are comprehensive solutions:
Issue 1: ASM Instance Fails to Start
Symptom:
ORA-15032: not all alterations performed
ORA-15017: diskgroup "DATA" cannot be mounted
ORA-15063: ASM discovered an insufficient number of disks
Diagnosis Steps:
Step 1 – Verify the permissions on the underlying devices (use -L so ls follows the symlinks):
ls -lL /dev/oracleasm/disks/*
Expected:
brw-rw---- 1 grid asmadmin 8, 49 May 1 13:44 /dev/oracleasm/disks/DATA01
brw-rw---- 1 grid asmadmin 8, 65 May 1 13:44 /dev/oracleasm/disks/DATA02
If permissions are wrong, fix them (and correct OWNER/GROUP/MODE in the udev rule, or the change will not survive a reboot):
chown grid:asmadmin /dev/oracleasm/disks/*
chmod 660 /dev/oracleasm/disks/*
Step 2 – Check ASM parameter file:
cat $ORACLE_HOME/dbs/init+ASM.ora
Verify:
- asm_diskstring matches your disk location
- Disk group names are correct
- No syntax errors
Step 3 – Review ASM alert log:
less $ORACLE_BASE/diag/asm/+asm/+ASM/trace/alert_+ASM.log
Look for:
- ORA- errors
- Disk discovery issues
- Permission denied messages
Step 4 – Manually discover disks:
sqlplus / as sysasm
SELECT path, header_status, mode_status, state
FROM v$asm_disk
WHERE header_status != 'MEMBER';
Header Status Meanings:
| Status | Meaning |
|---|---|
| CANDIDATE | Disk is available for ASM |
| MEMBER | Disk is part of a disk group |
| FORMER | Disk was previously used |
| PROVISIONED | Disk is allocated but not used |
Issue 2: UDEV Rules Not Working
Symptom:
ls: cannot access '/dev/oracleasm/disks/*': No such file or directory
Solution Step-by-Step:
Step 1 – Verify PARTUUID exists:
blkid /dev/sd*1
If no PARTUUID shown:
# Recreate partition with GPT
gdisk /dev/sdd
# Use: o (create new GPT), n (new partition), w (write)
Step 2 – Check UDEV rule syntax:
cat /etc/udev/rules.d/99-oracle-asmdevices.rules
Common syntax errors:
- Missing quotes around PARTUUID
- Wrong kernel pattern (should be sd?1, not sd*1)
- Incorrect path separators
Step 3 – Test UDEV rule manually:
# Test for specific device
udevadm test /sys/block/sdd/sdd1
# Should show: creating link '/dev/oracleasm/disks/...'
Step 4 – Reload and trigger UDEV:
udevadm control --reload-rules
udevadm trigger
udevadm settle
Step 5 – Verify symlinks created:
ls -la /dev/oracleasm/disks/
Issue 3: Oracle Restart (HAS) Issues
Symptom:
CRS-4639: Could not contact Oracle High Availability Services
Diagnosis:
Step 1 – Check HAS status:
crsctl check has
If not running:
crsctl start has
Step 2 – Check resource status:
crsctl stat res -t
If resources are OFFLINE:
# Start individual resource
srvctl start asm
# Or start all resources
crsctl start resource -all
Step 3 – Check Oracle Restart service:
systemctl status oracle-ohasd.service
If inactive:
systemctl enable oracle-ohasd.service
systemctl start oracle-ohasd.service
Step 4 – Review HAS logs:
less /u01/app/grid/crsdata/$(hostname -s)/crsconfig/roothas_*.log
Issue 4: “CRS-4535: Cannot communicate with Cluster Ready Services”
Symptom:
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Start failed, or completed with errors.
Solution:
# As root, stop and start HAS
crsctl stop has
crsctl start has
# Wait 2-3 minutes, then check
crsctl check has
If still failing:
# De-configure and reconfigure
cd /u01/app/19.0.0/grid/crs/install
sh roothas.sh -deconfig -force
sh roothas.sh
Issue 5: Disk Groups Won’t Mount After Reboot
Symptom: After server reboot, ASM starts but disk groups are dismounted.
Cause: asm_diskgroups parameter not set or incorrect.
Solution:
su - grid
sqlplus / as sysasm
-- Check current setting
SHOW PARAMETER asm_diskgroups
-- If empty or wrong, update init file
EXIT;
vi $ORACLE_HOME/dbs/init+ASM.ora
# Add or update:
+ASM.asm_diskgroups='DATA','FRA','ARCH','REDO01','REDO02','REDO03'
sqlplus / as sysasm
SHUTDOWN IMMEDIATE;
STARTUP PFILE='/u01/app/19.0.0/grid/dbs/init+ASM.ora';
-- Create SPFILE from PFILE
CREATE SPFILE FROM PFILE;
EXIT;
Issue 6: “ORA-27140: attach to post/wait facility failed”
Symptom:
ORA-27140: attach to post/wait facility failed
ORA-27300: OS system dependent operation:invalid_thread failed with status: 3
Cause: Insufficient shared memory or semaphores.
Solution:
Step 1 – Check kernel parameters:
sysctl -a | grep -i shm
sysctl -a | grep -i sem
Step 2 – Update if needed:
vi /etc/sysctl.conf
# Add or update:
kernel.shmmax = 4398046511104
kernel.shmall = 1073741824
kernel.sem = 250 32000 100 128
# Apply changes
sysctl -p
Step 3 – Restart ASM:
srvctl stop asm
srvctl start asm
Conclusion
You’ve now successfully installed Oracle ASM 19c with patch 25 in a standalone “Oracle Restart” grid infrastructure configuration. This setup provides robust storage management for your Oracle databases, with features such as:
✅ Automatic rebalancing – Redistribute data when disks are added or removed
✅ Mirroring and redundancy – Built-in data protection
✅ Automatic disk failure management – Self-healing storage
✅ Optimal performance – Stripe and mirror everywhere (SAME)
✅ Simplified administration – No file system overhead
With ASM properly configured, you can proceed to install and configure your Oracle database to use these ASM disk groups for optimal performance and reliability.
Next Steps
Security Hardening:
- Change default SYSASM and ASMSNMP passwords
- Restrict network access to ASM instance
- Enable auditing for ASM operations
Performance Optimization:
- Tune asm_power_limit based on workload (see the monitoring sketch after these lists)
- Monitor disk I/O balance across disk groups
- Implement ASM preferred mirror read (if using external storage)
Monitoring:
- Set up alerts for disk group free space
- Monitor ASM rebalance operations
- Track disk failures and repairs
Backup Strategy:
- Configure RMAN to use FRA disk group
- Set up archive log backup to ARCH disk group
- Implement retention policies
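As a starting point for the monitoring and tuning items above, a small script like the following can be run from cron (a sketch; adjust the 80% threshold and the rebalance power to suit your environment):
# Run as the grid user with the ASM environment set
sqlplus -s / as sysasm <<'EOF'
SET LINESIZE 120
-- Disk groups more than 80% used
SELECT name, total_mb, free_mb,
       ROUND((total_mb - free_mb) / total_mb * 100, 2) AS pct_used
FROM   v$asm_diskgroup
WHERE  (total_mb - free_mb) / total_mb > 0.80;
-- Any rebalance in flight and its estimated minutes remaining
SELECT group_number, operation, state, power, est_minutes
FROM   v$asm_operation;
EXIT;
EOF
# To speed up a specific rebalance, raise the power for that operation in SQL*Plus:
#   ALTER DISKGROUP DATA REBALANCE POWER 6;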
Related Guides
- Enable Archive Log Mode in Oracle 19c
- ORA-00257: Archiver Error Complete Fix
- Oracle Database Memory Monitoring Guide
- Response File for Oracle ASM Standalone Oracle 19c
This blog post was created as a comprehensive guide to Oracle ASM 19c installation. For official documentation, please refer to Oracle’s website. If you have any questions or need clarification on any steps, please leave a comment below.
