When Oracle originally created LogMiner, the intent was for the tool to be used for forensic analysis and manual, logical recovery scenarios. Administrators could investigate activity in the database within a given period of time: what changed, where, and who initiated it. LogMiner had known limits, such as unsupported datatypes and a lack of efficiency, and it was never designed for replication. Ultimately, LogMiner was best suited to insights and analysis on simple databases with a low rate of change, but over time users stretched it for other purposes.

LogMiner was the cost-free API from Oracle that some thought was the answer to capturing up-to-the-second changes in their Oracle database. Users reasoned that if they could query the database to see what happened yesterday at 2pm, why not use LogMiner to query more frequently (every two minutes, or even every two seconds) and see changes to the data source within seconds of them happening?
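To make that pattern concrete, here is a minimal sketch, assuming the python-oracledb driver and placeholder connection details, schema filter, and polling interval, of the kind of "poll LogMiner every few seconds" loop users gravitated toward: register the online redo logs, start a LogMiner session, read V$LOGMNR_CONTENTS, sleep, and repeat.

```python
# Illustrative sketch only: naive periodic LogMiner polling.
# Connection details, the 'APP' schema filter, and the interval are placeholders.
import time
import oracledb

POLL_SECONDS = 2  # users pushed this down from minutes to a few seconds
conn = oracledb.connect(user="miner", password="secret", dsn="dbhost/ORCLPDB1")
cur = conn.cursor()
last_scn = 0  # simplistic resume point between polls

while True:
    # Register the current online redo log members with LogMiner.
    cur.execute("SELECT member FROM v$logfile ORDER BY group#")
    for i, (member,) in enumerate(cur.fetchall()):
        opt = "DBMS_LOGMNR.NEW" if i == 0 else "DBMS_LOGMNR.ADDFILE"
        cur.execute(
            f"BEGIN DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => :f, OPTIONS => {opt}); END;",
            f=member,
        )

    # Start a LogMiner session over those logs, reading the dictionary from
    # the online catalog and returning only committed changes.
    cur.execute("""
        BEGIN
          DBMS_LOGMNR.START_LOGMNR(
            OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG
                     + DBMS_LOGMNR.COMMITTED_DATA_ONLY);
        END;""")

    # Ask LogMiner what changed since the last poll.
    cur.execute("""
        SELECT scn, operation, seg_owner, table_name, sql_redo
          FROM v$logmnr_contents
         WHERE seg_owner = 'APP' AND scn > :scn""", scn=last_scn)
    for scn, op, owner, table, sql_redo in cur.fetchall():
        last_scn = max(last_scn, scn)
        print(scn, op, f"{owner}.{table}", sql_redo)

    cur.execute("BEGIN DBMS_LOGMNR.END_LOGMNR; END;")
    time.sleep(POLL_SECONDS)
```

Every iteration makes the database itself re-mine and re-parse redo, which is exactly the workload LogMiner was never designed to sustain.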


The problem

LogMiner was not built for analyzing massive amounts of changes. LogMiner consumes up to one CPU at any given time and will not go beyond that, to avoid creating a big overhead on the database. Because of this CPU limitation, LogMiner cannot handle more than 10,000 changes per second, and that number shrinks as the size of changes increases. Additionally, you would need to build in some buffering capacity for the event of a network hiccup or lag; otherwise, your system would never fully recover from the disruption. LogMiner lag keeps growing continuously until a restream occurs, after which the lag starts growing again. In systems processing 50,000-60,000 changes per second, LogMiner will never keep up with the speed of data updates.
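As a back-of-the-envelope illustration of why that lag never recovers, the snippet below simply reuses the figures quoted above: when the change rate exceeds LogMiner's ceiling, the backlog grows every second, and a restream only resets it.

```python
# Back-of-the-envelope lag growth, reusing the figures quoted above.
incoming_rate = 50_000  # changes generated per second at peak
mining_rate = 10_000    # rough LogMiner ceiling; shrinks as changes get larger

backlog_growth_per_second = incoming_rate - mining_rate
print(backlog_growth_per_second)          # 40,000 changes fall behind every second

# After one hour at this rate, 144 million changes are still waiting to be mined.
print(backlog_growth_per_second * 3_600)  # 144,000,000
```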

To allow for simpler, continuous streaming of changes without having to stop and start LogMiner reads for every redo log file, Oracle developed the "Continuous Mining" feature in LogMiner. This allowed users to subscribe to a LogMiner stream of events and be notified when there were changes in the database. Continuous Mining didn't offer the full capabilities of true Change Data Capture, only access to semi-readable data from Oracle, but users were able to stream changes continuously. In essence, Continuous Mining was only a usability improvement on top of LogMiner.

Users might still have experienced some small latency even on a lightly loaded database, but by subscribing to the stream of events, changes were continuously pushed to them for review.
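For contrast with the polling sketch earlier, here is a minimal sketch of the Continuous Mining variant (now desupported, as discussed below), again assuming the python-oracledb driver and placeholder connection details, schema filter, and starting SCN. The per-file ADD_LOGFILE bookkeeping disappears, and a single query behaves like a subscription.

```python
# Illustrative sketch only: LogMiner with the (now desupported) CONTINUOUS_MINE option.
import oracledb

conn = oracledb.connect(user="miner", password="secret", dsn="dbhost/ORCLPDB1")
cur = conn.cursor()
last_scn = 0  # placeholder; in practice, resume from a persisted SCN

# No per-file ADD_LOGFILE calls: with CONTINUOUS_MINE, LogMiner locates the
# redo and archived logs itself from the starting SCN onward.
cur.execute("""
    BEGIN
      DBMS_LOGMNR.START_LOGMNR(
        STARTSCN => :scn,
        OPTIONS  => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG
                  + DBMS_LOGMNR.COMMITTED_DATA_ONLY
                  + DBMS_LOGMNR.CONTINUOUS_MINE);
    END;""", scn=last_scn)

# With no end SCN or end time, this query keeps returning rows as new redo is
# generated -- the "subscribe to a stream of events" behavior described above.
cur.execute("""
    SELECT scn, operation, seg_owner, table_name, sql_redo
      FROM v$logmnr_contents
     WHERE seg_owner = 'APP'""")
for scn, op, owner, table, sql_redo in cur:
    last_scn = max(last_scn, scn)
    print(scn, op, f"{owner}.{table}", sql_redo)
```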

In 2009, Oracle began offering a proprietary change data capture solution with expensive licensing but capacity that far surpassed LogMiner. Continuous Mining remained a free and functioning option for Oracle users until ten years later, when in January 2019 Oracle introduced version 19c, a long-term support release that is essentially the final release of the Oracle 12c Release 2 family. Oracle then announced the deprecation of Streams, Advanced Replication, Change Data Capture, and Continuous Mining. Continuous Mining subsequently moved from deprecated to desupported status, leaving users with the option of purchasing Oracle's costly, UI-less proprietary product or exploring other options for efficient, performant Change Data Capture.



With LogMiner stripped of Continuous Mining functionality, reading data from all redo logs continuously became much more complicated once again, on top of LogMiner's CPU limitations driving bottlenecks and headaches.

Companies working with Oracle databases found themselves in one of these three buckets:

  • #1) No CDC solution and a heavily loaded system. These companies know LogMiner alone cannot support their data load, especially as they look ahead to moving from batch to CDC/streaming.
  • #2) Using LogMiner today, but it is no longer keeping up because the load has grown. In peak business hours, LogMiner might lag the data by hours or even days and then catch up when things calm down, but it is clear that something is starting to break under the strain of growing data volume and velocity.
  • #3) Already using a binary parser from a licensed provider, but paying high prices and wanting to move to a different solution. Expensive, but it feels like the only option that can handle the load.

Meet the Oracle Binary Log Parser (OBLP) by Equalum

It became clear that companies needed a tool that could read the logs faster than LogMiner, without the cost associated with the big, expensive, proprietary tools, and without placing a heavy load on the source system.

Equalum came into view as a growing and nimble data ingestion start-up, developing the Oracle Binary Log Parser, or OBLP. Equalum's OBLP offers CDC data replication that is 10 times faster than LogMiner. The platform also supports LogMiner whether or not Continuous Mining is available, so LogMiner can remain a user's preferred tool when the load is low, with Equalum solving the Continuous Mining deprecation issue on their behalf.

Instead of all of the computing work happening on the production database servers and overloading the system, Equalum does roughly 20% of the work on the database servers and the rest remotely on the Equalum servers. With the Oracle Binary Log Parser, there is no more parsing of data inside your database, which is the heavy part that consumes CPU. Instead, the parser reads the redo log in binary form and interprets the bytes based on offset positions, essentially reverse engineering the binary output. This method offers speeds of up to 100,000 changes per second with potentially lower overhead on the database.
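As a purely generic illustration of what "parsing bytes based on offset positions" means, the sketch below reads a binary file and decodes fields at fixed offsets with Python's struct module. The record layout, field names, and file name are invented for the example; Oracle's actual redo log format is proprietary, and this is not Equalum's parser.

```python
# Generic offset-based binary parsing. The layout below is made up for the
# example and is NOT Oracle's redo block format. It only shows the technique:
# read raw bytes off disk, then interpret fixed positions without asking the
# database to parse anything.
import struct

RECORD_SIZE = 16  # hypothetical fixed-size record

def parse_records(path: str):
    with open(path, "rb") as f:
        while chunk := f.read(RECORD_SIZE):
            if len(chunk) < RECORD_SIZE:
                break
            # Hypothetical layout: 4-byte change counter, 4-byte object id,
            # 2-byte operation code, then 6 bytes of payload, little-endian.
            change_no, obj_id, op_code = struct.unpack_from("<IIH", chunk, 0)
            payload = chunk[10:16]
            yield change_no, obj_id, op_code, payload

for record in parse_records("redo_sample.bin"):  # placeholder file name
    print(record)
```

The heavy lifting happens wherever this code runs, not inside the database engine, which is why the bulk of the work can be moved off the production servers.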

With Equalum, the amount of Input/Output (I/O) needed to read the logs should be negligible, since Oracle databases are designed to handle far heavier reading and writing of data; it is the parsing and computation that is expensive. Systems can continue to operate smoothly and at peak performance for their intended purpose.

In addition to Change Data Capture, Equalum's no-code UI platform offers stream processing, ETL, and batch processing, deployed quickly and easily, with scalability that grows with data volumes.