ORA-01555: Snapshot Too Old – Complete Fix Guide for Oracle 19c


The Problem: Long-Running Query Fails with ORA-01555

You’re running a critical report or data export, and after 30 minutes it fails with:

ORA-01555: snapshot too old: rollback segment number 23 with name "_SYSSMU23_2847562093$" too small

When this happens:

  • Long-running SELECT queries fail mid-execution
  • Data exports (expdp, SQL Developer) abort
  • ETL processes crash after hours of running
  • Backup validations fail

The query was working fine yesterday. The data hasn’t changed much. But today it fails with ORA-01555.

This guide shows you exactly how to fix it permanently.


What is ORA-01555 Snapshot Too Old?

ORA-01555 occurs when Oracle cannot maintain read consistency for your query because the undo data needed to reconstruct the original snapshot has been overwritten.

In simple terms:

  1. Your query starts at 10:00 AM
  2. It reads data as it existed at 10:00 AM (snapshot)
  3. Meanwhile, other users UPDATE/DELETE the same data
  4. Oracle keeps the “before image” in UNDO tablespace
  5. Your query (still running at 10:30 AM) needs that original data
  6. But Oracle already overwrote the UNDO blocks with newer transactions
  7. Result: ORA-01555 – Can’t guarantee read consistency
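The segment named in the error is a system-managed undo segment. A quick check that it maps back to your undo tablespace (segment name taken from the example error above):

SELECT segment_name, tablespace_name, status
FROM dba_rollback_segs
WHERE segment_name = '_SYSSMU23_2847562093$';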

Root Causes of ORA-01555

1. UNDO Tablespace Too Small

Check current UNDO size:

SELECT tablespace_name, 
       ROUND(SUM(bytes)/1024/1024/1024,2) size_gb,
       ROUND(SUM(maxbytes)/1024/1024/1024,2) max_gb
FROM dba_data_files
WHERE tablespace_name LIKE 'UNDO%'
GROUP BY tablespace_name;

Output:

TABLESPACE_NAME    SIZE_GB    MAX_GB
-----------------  ---------  --------
UNDOTBS1           10.00      32.00

If SIZE_GB is under 10 GB, the tablespace is likely too small for a busy production system

2. UNDO_RETENTION Too Low

Check UNDO_RETENTION parameter:

SELECT name, value 
FROM v$parameter 
WHERE name = 'undo_retention';

Output:

NAME              VALUE
----------------- --------
undo_retention    900

Translation: 900 seconds = 15 minutes

Problem: If your query runs longer than 15 minutes, you risk ORA-01555.
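To see whether anything is currently at risk, check how long each active statement has been running (a simple sketch using v$session; the 900-second threshold matches the default retention above):

SELECT sid, serial#, sql_id, last_call_et seconds_active
FROM v$session
WHERE status = 'ACTIVE'
AND type = 'USER'
AND last_call_et > 900
ORDER BY last_call_et DESC;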

3. Query Takes Too Long

Identify slow query:

SELECT sql_id, elapsed_time/1000000 elapsed_seconds,
       executions, buffer_gets, disk_reads,
       SUBSTR(sql_text,1,100) sql_text
FROM v$sqlarea
WHERE elapsed_time/1000000 > 900
ORDER BY elapsed_time DESC
FETCH FIRST 10 ROWS ONLY;

Output:

SQL_ID         ELAPSED_SECONDS  EXECUTIONS  BUFFER_GETS  DISK_READS  SQL_TEXT
-------------  ---------------  ----------  -----------  ----------  --------
8fzx9m7np5k3w  3245.67          1           12458925     987456      SELECT * FROM large_table WHERE...

3245 seconds = 54 minutes → High risk for ORA-01555

4. Heavy Concurrent DML Activity

Check transaction volume:

SELECT TO_CHAR(begin_time,'HH24:MI') time,
       undoblks undo_blocks_consumed,
       txncount transaction_count
FROM v$undostat
ORDER BY begin_time DESC
FETCH FIRST 20 ROWS ONLY;

Output:

TIME   UNDO_BLOCKS_CONSUMED  TRANSACTION_COUNT
-----  --------------------  -----------------
14:30  458925               1245
14:20  892341               2387
14:10  1234567              3456

High transaction count = UNDO gets reused quickly = ORA-01555 risk


Solution 1: Increase UNDO Tablespace Size

Step 1: Check UNDO Tablespace Usage

SELECT tablespace_name,
       ROUND(SUM(bytes)/1024/1024/1024,2) allocated_gb,
       ROUND(SUM(maxbytes)/1024/1024/1024,2) max_gb,
       ROUND((SUM(bytes)/SUM(maxbytes))*100,2) pct_of_max
FROM dba_data_files
WHERE tablespace_name LIKE 'UNDO%'
GROUP BY tablespace_name;

Output:

TABLESPACE_NAME  ALLOCATED_GB  MAX_GB  PCT_OF_MAX
---------------  ------------  ------  ----------
UNDOTBS1         9.85          32.00   30.78

If PCT_OF_MAX > 80%: The datafiles are close to their autoextend limit and you need more space
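Allocated space is not the same as space actually in use. A quick way to see how the allocated undo breaks down by extent status (ACTIVE, UNEXPIRED, EXPIRED):

SELECT status, ROUND(SUM(bytes)/1024/1024/1024,2) gb
FROM dba_undo_extents
GROUP BY status;

Only ACTIVE and UNEXPIRED extents are still needed; a large EXPIRED share means there is headroom.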

Step 2: Add Datafile to UNDO Tablespace

ALTER TABLESPACE undotbs1 
ADD DATAFILE '/u01/oradata/PRODDB/undotbs01_02.dbf' 
SIZE 10G 
AUTOEXTEND ON 
NEXT 512M 
MAXSIZE 32G;

Verify addition:

SELECT file_name, 
       ROUND(bytes/1024/1024/1024,2) size_gb,
       ROUND(maxbytes/1024/1024/1024,2) max_gb,
       autoextensible
FROM dba_data_files
WHERE tablespace_name = 'UNDOTBS1'
ORDER BY file_id;

Output:

FILE_NAME                              SIZE_GB  MAX_GB  AUTOEXTENSIBLE
-------------------------------------  -------  ------  --------------
/u01/oradata/PRODDB/undotbs01_01.dbf  10.00    32.00   YES
/u01/oradata/PRODDB/undotbs01_02.dbf  10.00    32.00   YES

Total UNDO space now: 20GB with max 64GB

Step 3: Extend Existing Datafile (Alternative)

If you can’t add new files:

ALTER DATABASE DATAFILE '/u01/oradata/PRODDB/undotbs01_01.dbf'
RESIZE 20G;

-- Or change maxsize:
ALTER DATABASE DATAFILE '/u01/oradata/PRODDB/undotbs01_01.dbf'
AUTOEXTEND ON MAXSIZE 50G;

Solution 2: Increase UNDO_RETENTION

Step 1: Calculate Required UNDO_RETENTION

Formula: UNDO_RETENTION = (Longest query runtime in seconds) * 1.5

Example:

  • Longest query: 60 minutes = 3600 seconds
  • Recommended UNDO_RETENTION: 3600 * 1.5 = 5400 seconds
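You can take the longest recent runtime straight from v$undostat instead of guessing; a minimal sketch that applies the same 1.5x margin:

SELECT ROUND(MAX(maxquerylen) * 1.5) suggested_undo_retention_sec
FROM v$undostat;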

Step 2: Set UNDO_RETENTION

ALTER SYSTEM SET undo_retention=5400 SCOPE=BOTH;

Verify change:

SELECT name, value 
FROM v$parameter 
WHERE name = 'undo_retention';

Output:

NAME             VALUE
---------------- -----
undo_retention   5400

Note: 5400 seconds = 90 minutes

Step 3: Monitor UNDO Usage After Change

Check if UNDO can support new retention:

SELECT TO_CHAR(begin_time,'YYYY-MM-DD HH24:MI') time,
       undoblks blocks_per_10min,
       maxquerylen max_query_seconds,
       tuned_undoretention recommended_retention
FROM v$undostat
ORDER BY begin_time DESC
FETCH FIRST 10 ROWS ONLY;

Output:

TIME              BLOCKS_PER_10MIN  MAX_QUERY_SECONDS  RECOMMENDED_RETENTION
----------------  ----------------  -----------------  ---------------------
2025-12-10 14:40  25896            3245               5400
2025-12-10 14:30  28741            2987               4800

If TUNED_UNDORETENTION consistently meets or exceeds the retention you set, the tablespace has enough space to honor it; if it keeps dropping below that value, add more UNDO space


Solution 3: Enable UNDO_RETENTION GUARANTEE

Warning: Only use if you have enough UNDO space.

What is UNDO_RETENTION GUARANTEE?

Without GUARANTEE (default):

  • Oracle tries to keep UNDO for the UNDO_RETENTION period
  • If UNDO fills up, Oracle can overwrite “unexpired” UNDO to make room for new transactions
  • DML is favored over query read consistency, so long-running queries may hit ORA-01555

With GUARANTEE:

  • Oracle NEVER overwrites UNDO before UNDO_RETENTION expires
  • Queries protected from ORA-01555
  • Risk: New transactions may fail with ORA-30036 (unable to extend segment in undo tablespace)

Enable UNDO_RETENTION GUARANTEE

ALTER TABLESPACE undotbs1 RETENTION GUARANTEE;

Verify:

SELECT tablespace_name, retention
FROM dba_tablespaces
WHERE tablespace_name LIKE 'UNDO%';

Output:

TABLESPACE_NAME  RETENTION
---------------  -----------
UNDOTBS1         GUARANTEE

Disable GUARANTEE (if needed)

ALTER TABLESPACE undotbs1 RETENTION NOGUARANTEE;

Solution 4: Optimize the Query

Often the real problem is an inefficient query that runs too long.

Step 1: Identify Problematic SQL

SELECT sql_id, 
       elapsed_time/1000000 elapsed_seconds,
       buffer_gets,
       disk_reads,
       executions,
       ROUND(buffer_gets/executions) avg_buffer_gets,
       SUBSTR(sql_text,1,80) sql_text
FROM v$sqlarea
WHERE elapsed_time/1000000 > 1800
ORDER BY elapsed_time DESC;

Output:

SQL_ID        ELAPSED_SEC  BUFFER_GETS  DISK_READS  SQL_TEXT
------------- -----------  -----------  ----------  --------
8fzx9m7np5k3w 3245         12458925     987456      SELECT * FROM transactions WHERE...

Step 2: Get Execution Plan

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('8fzx9m7np5k3w', NULL, 'ALLSTATS LAST'));

Look for:

  • FULL TABLE SCANS on large tables
  • Missing indexes (high Rows processed, no index access)
  • Cartesian joins (MERGE JOIN CARTESIAN)
  • Sorts on large datasets
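Note that ALLSTATS LAST only shows actual row counts if runtime statistics were collected. A sketch, assuming you can re-run the statement with the GATHER_PLAN_STATISTICS hint:

-- Re-run with runtime statistics, then compare actual vs. estimated rows
SELECT /*+ GATHER_PLAN_STATISTICS */ COUNT(*)
FROM transactions
WHERE transaction_date BETWEEN DATE '2024-01-01' AND DATE '2024-12-31';

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));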

Step 3: Add Missing Index

Example – Query filtering on transaction_date:

-- Before: Full table scan on a 50M row table
SELECT * FROM transactions 
WHERE transaction_date BETWEEN DATE '2024-01-01' AND DATE '2024-12-31';

-- Create index:
CREATE INDEX idx_trans_date 
ON transactions(transaction_date) 
TABLESPACE users 
PARALLEL 4;

-- After: Index range scan, much faster

Verify improvement:

SELECT sql_id, elapsed_time/1000000 elapsed_seconds
FROM v$sqlarea
WHERE sql_id = '8fzx9m7np5k3w';

Output:

SQL_ID        ELAPSED_SECONDS
------------- ---------------
8fzx9m7np5k3w 124.5           -- Down from 3245 seconds!

Step 4: Rewrite Query with Better Logic

Bad – Reads entire table:

SELECT * FROM large_table
WHERE status = 'PENDING'
AND created_date < SYSDATE - 30;

Better – Add limiting conditions:

SELECT * FROM large_table
WHERE status = 'PENDING'
AND created_date BETWEEN SYSDATE-60 AND SYSDATE-30
AND ROWNUM <= 10000;  -- Limit rows

Solution 5: Use Flashback Query (Time Travel)

If you need a consistent snapshot for a report, use Flashback Query instead of regular SELECT.

Flashback Query Syntax

-- Query data as it existed 1 hour ago
SELECT * 
FROM transactions 
AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '60' MINUTE)
WHERE status = 'COMPLETED';

Advantages:

  • Fixes the snapshot at a specific point in time, so every row comes from the same consistent view
  • Fails immediately (fail fast) if the required undo is already gone, instead of hours into the run

Limitations:

  • Still reads from UNDO, so it can only go back as far as retained undo allows
  • Does not remove the need for adequate UNDO_RETENTION – it only makes failures predictable
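To keep several report queries on the same snapshot, you can also capture an SCN once at the start and run each query AS OF that SCN (a sketch; use whatever SCN the first query returns):

-- Capture the current SCN once
SELECT DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER current_scn FROM dual;

-- Run every report query against that fixed SCN
SELECT COUNT(*)
FROM transactions AS OF SCN 12845673421
WHERE status = 'COMPLETED';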

Create a Snapshot Table with AS OF TIMESTAMP

-- Create report table with snapshot from 2 hours ago
CREATE TABLE monthly_report AS
SELECT * FROM transactions
AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '120' MINUTE)
WHERE transaction_month = 'DEC-2024';

Solution 6: Split Long Query into Batches

Instead of one 4-hour query, break it into smaller chunks.

Batch Processing Example

Original query (prone to ORA-01555):

-- Runs for 4 hours, fails with ORA-01555
SELECT * FROM transactions
WHERE transaction_date BETWEEN DATE '2024-01-01' AND DATE '2024-12-31';

Rewritten as daily batches:

-- Process one day at a time (runs ~10 minutes each)
DECLARE
  v_start_date DATE := TO_DATE('2024-01-01','YYYY-MM-DD');
  v_end_date DATE := TO_DATE('2024-12-31','YYYY-MM-DD');
  v_current_date DATE;
BEGIN
  v_current_date := v_start_date;
  
  WHILE v_current_date <= v_end_date LOOP
    -- Process one day (range predicate handles any time component in transaction_date)
    INSERT INTO report_table
    SELECT * FROM transactions
    WHERE transaction_date >= v_current_date
      AND transaction_date < v_current_date + 1;
    
    COMMIT;  -- Release UNDO after each day
    
    v_current_date := v_current_date + 1;
  END LOOP;
END;
/

Benefits:

  • Each batch runs short (< 15 minutes)
  • COMMIT releases UNDO frequently
  • No ORA-01555 risk
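If it fits your environment, DBMS_PARALLEL_EXECUTE implements the same chunk-and-commit pattern without a hand-rolled loop; a sketch (task name, chunk size, and parallel level are illustrative):

BEGIN
  DBMS_PARALLEL_EXECUTE.CREATE_TASK('copy_transactions');

  -- Split the table into ROWID chunks of roughly 100,000 rows
  DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_ROWID(
    task_name   => 'copy_transactions',
    table_owner => USER,
    table_name  => 'TRANSACTIONS',
    by_row      => TRUE,
    chunk_size  => 100000);

  -- Each chunk runs as its own transaction and commits independently
  DBMS_PARALLEL_EXECUTE.RUN_TASK(
    task_name      => 'copy_transactions',
    sql_stmt       => 'INSERT INTO report_table
                         SELECT * FROM transactions
                         WHERE rowid BETWEEN :start_id AND :end_id',
    language_flag  => DBMS_SQL.NATIVE,
    parallel_level => 4);

  DBMS_PARALLEL_EXECUTE.DROP_TASK('copy_transactions');
END;
/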

Solution 7: Speed Up Data Exports to Reduce UNDO Exposure

Exports still need a read-consistent snapshot, so they cannot avoid UNDO entirely – but the less data they read and the faster they finish, the less likely their undo is overwritten mid-run.

Data Pump with QUERY Parameter

Standard export (uses UNDO):

expdp user/password \
  directory=dpump_dir \
  dumpfile=export.dmp \
  tables=large_table

With date range (reduces data, faster):

expdp user/password \
  directory=dpump_dir \
  dumpfile=export.dmp \
  tables=large_table \
  query='large_table:"WHERE transaction_date >= TO_DATE('\''2024-01-01'\'','\''YYYY-MM-DD'\'')"'

SQL*Plus with ARRAYSIZE

Increase fetch array size to reduce round-trips:

SET ARRAYSIZE 5000
SET PAGESIZE 0
SET FEEDBACK OFF
SET HEADING OFF

SPOOL output.csv

SELECT /*+ PARALLEL(4) */ 
       column1 || ',' || column2 || ',' || column3
FROM large_table
WHERE transaction_date = DATE '2024-12-01';

SPOOL OFF

ARRAYSIZE 5000 = Fetch 5,000 rows per network round-trip, so the query finishes sooner and depends on old undo for a shorter time


Monitoring UNDO Usage

Real-time UNDO Monitoring

SELECT TO_CHAR(begin_time,'HH24:MI:SS') time,
       undoblks undo_blocks,
       txncount transactions,
       maxquerylen longest_query_sec,
       ROUND(undoblks/((end_time-begin_time)*24*60*60),2) blocks_per_sec
FROM v$undostat
ORDER BY begin_time DESC
FETCH FIRST 20 ROWS ONLY;

Output:

TIME      UNDO_BLOCKS  TRANSACTIONS  LONGEST_QUERY_SEC  BLOCKS_PER_SEC
--------  -----------  ------------  -----------------  --------------
14:45:30  125896       2456          1234               209.83
14:35:30  98745        1987          987                164.58

Check for UNDO Shortage

SELECT TO_CHAR(begin_time,'YYYY-MM-DD HH24:MI') time,
       unxpstealcnt unexpired_stolen,
       expstealcnt expired_stolen
FROM v$undostat
WHERE unxpstealcnt > 0
ORDER BY begin_time DESC;

Output:

TIME              UNEXPIRED_STOLEN  EXPIRED_STOLEN
----------------  ----------------  --------------
2025-12-10 14:30  456               12789
2025-12-10 14:20  892               15632

If UNEXPIRED_STOLEN > 0: Oracle is reusing UNDO before UNDO_RETENTION expires → ORA-01555 likely

Action: Increase the UNDO tablespace size so unexpired undo no longer has to be stolen


Best Practices to Prevent ORA-01555

1. Right-Size UNDO Tablespace

Rule of thumb:

  • Small database (<100GB): 5-10GB UNDO
  • Medium database (100GB-1TB): 10-30GB UNDO
  • Large database (>1TB): 30-100GB UNDO
  • Heavy OLTP: 50-200GB UNDO

Formula:

UNDO size (bytes) = UNDO_RETENTION (seconds) * peak undo blocks generated per second * DB_BLOCK_SIZE
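A rough sizing sketch built directly on v$undostat (the 5400-second retention is an assumption; each v$undostat row covers a 10-minute interval, hence the division by 600):

SELECT ROUND(MAX(undoblks)/600 *                          -- peak undo blocks per second
             5400 *                                       -- desired UNDO_RETENTION in seconds
             (SELECT TO_NUMBER(value) FROM v$parameter
              WHERE name = 'db_block_size')
             /1024/1024/1024, 2) undo_gb_needed
FROM v$undostat;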

2. Set Appropriate UNDO_RETENTION

-- For typical OLTP system (15-minute queries max):
ALTER SYSTEM SET undo_retention=1800 SCOPE=BOTH;

-- For mixed workload (1-hour queries):
ALTER SYSTEM SET undo_retention=5400 SCOPE=BOTH;

-- For data warehouse (4-hour queries):
ALTER SYSTEM SET undo_retention=21600 SCOPE=BOTH;

3. Monitor Long-Running Queries

Create alert for queries > 30 minutes:

CREATE OR REPLACE PROCEDURE check_long_queries
IS
  v_count NUMBER;
BEGIN
  SELECT COUNT(*)
  INTO v_count
  FROM v$session s, v$sqlarea a
  WHERE s.sql_id = a.sql_id
  AND s.status = 'ACTIVE'
  AND s.last_call_et > 1800;  -- 30 minutes
  
  IF v_count > 0 THEN
    -- Send alert email
    DBMS_OUTPUT.PUT_LINE('WARNING: ' || v_count || ' queries running over 30 minutes');
  END IF;
END;
/
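If you want this check to run automatically, a scheduling sketch using DBMS_SCHEDULER (job name and interval are illustrative):

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'CHECK_LONG_QUERIES_JOB',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'CHECK_LONG_QUERIES',
    repeat_interval => 'FREQ=MINUTELY;INTERVAL=10',
    enabled         => TRUE);
END;
/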

4. Use COMMIT in Long Transactions

Bad – One massive transaction:

BEGIN
  FOR rec IN (SELECT * FROM massive_table) LOOP
    UPDATE target_table SET ...;
  END LOOP;
  COMMIT;  -- After 4 hours
END;

Good – Batch commits:

DECLARE
  v_rows NUMBER := 0;
BEGIN
  FOR rec IN (SELECT * FROM massive_table) LOOP
    UPDATE target_table SET ...;
    v_rows := v_rows + 1;
    
    IF MOD(v_rows, 10000) = 0 THEN
      COMMIT;  -- Every 10,000 rows
    END IF;
  END LOOP;
  COMMIT;  -- Final commit
END;

5. Schedule Long Reports Off-Peak

Run data exports and long reports during:

  • Night hours (low transaction volume)
  • Weekends (less UNDO pressure)
  • Maintenance windows

Troubleshooting Persistent ORA-01555

Issue 1: ORA-01555 After Increasing UNDO

Symptom: Increased UNDO from 10GB to 50GB, still getting ORA-01555

Cause: Query still runs longer than UNDO_RETENTION

Solution:

-- Check actual query runtime vs. UNDO_RETENTION
SELECT sql_id, 
       elapsed_time/1000000/60 elapsed_minutes,
       v.value/60 undo_retention_minutes
FROM v$sqlarea s, v$parameter v
WHERE v.name = 'undo_retention'
AND elapsed_time/1000000 > v.value
ORDER BY elapsed_time DESC;

Fix: Increase UNDO_RETENTION to match query runtime

Issue 2: ORA-01555 During Data Pump Export

Symptom: expdp fails with ORA-01555 on specific large table

Solution:

# Add FLASHBACK_TIME parameter (uses specific SCN)
expdp user/password \
  directory=dpump_dir \
  dumpfile=export.dmp \
  flashback_time="TO_TIMESTAMP('2025-12-10 10:00:00', 'YYYY-MM-DD HH24:MI:SS')"

# Or use FLASHBACK_SCN (more precise)
expdp user/password \
  directory=dpump_dir \
  dumpfile=export.dmp \
  flashback_scn=12845673421
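The SCN to pass to flashback_scn can be captured just before starting the export (the value shown above is only an example):

SELECT current_scn FROM v$database;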

Issue 3: ORA-01555 on Materialized View Refresh

Symptom: MV refresh fails with ORA-01555

Cause: MV refresh is a long transaction reading base tables

Solution:

-- Option 1: Use ATOMIC_REFRESH=FALSE (truncate + insert instead of one long atomic delete/insert transaction)
BEGIN
  DBMS_MVIEW.REFRESH(
    list => 'MV_SALES_SUMMARY',
    method => 'C',
    atomic_refresh => FALSE
  );
END;
/

-- Option 2: Increase UNDO before refresh
ALTER SYSTEM SET undo_retention=7200 SCOPE=MEMORY;

-- Refresh MV
EXEC DBMS_MVIEW.REFRESH('MV_SALES_SUMMARY','C');

-- Restore original retention
ALTER SYSTEM SET undo_retention=1800 SCOPE=MEMORY;

ORA-01555 vs. Other Snapshot Errors

ORA-01555 vs. ORA-30036

ORA-01555: UNDO overwritten, query fails
ORA-30036: UNDO tablespace full, transaction fails

ORA-30036 fix:

-- Emergency: Add datafile immediately
ALTER TABLESPACE undotbs1 
ADD DATAFILE SIZE 10G AUTOEXTEND ON;

-- Kill blocking sessions (use carefully)
SELECT 'ALTER SYSTEM KILL SESSION ''' || s.sid || ',' || s.serial# || ''' IMMEDIATE;'
FROM v$session s, v$transaction t
WHERE s.taddr = t.addr
ORDER BY t.used_ublk DESC;

ORA-01555 vs. ORA-01552

ORA-01552: Cannot use system rollback segment for non-system tablespace

Cause: No undo tablespace is active for the instance (UNDO_TABLESPACE points to a missing or offline tablespace, or UNDO_MANAGEMENT is misconfigured), so Oracle falls back to the SYSTEM rollback segment

Fix:

-- Check which UNDO segments exist and their status
SELECT segment_name, tablespace_name, status
FROM dba_rollback_segs
ORDER BY segment_id;

-- Point the instance at a valid, online undo tablespace
ALTER SYSTEM SET undo_tablespace = 'UNDOTBS1' SCOPE=BOTH;

Conclusion

ORA-01555 is caused by insufficient UNDO retention for long-running queries. The solution depends on your specific workload:

For OLTP systems with short transactions:

  • 10-20GB UNDO tablespace
  • UNDO_RETENTION = 1800-3600 seconds

For mixed workloads with occasional long queries:

  • 30-50GB UNDO tablespace
  • UNDO_RETENTION = 5400-7200 seconds
  • Query optimization critical

For data warehouse with long reports:

  • 50-100GB+ UNDO tablespace
  • UNDO_RETENTION = 14400-21600 seconds
  • Use batch processing and Flashback Query

Key Takeaways:

  • Size UNDO tablespace for your longest queries
  • Set UNDO_RETENTION = (longest query time * 1.5)
  • Monitor UNDO usage with v$undostat
  • Optimize slow queries with proper indexing
  • Use batch processing for very long operations
  • Consider Flashback Query for consistent snapshots
