Author: Stefan Koehler

[Oracle] DB Optimizer Part VII – Looking under the hood of Adaptive Query Optimization / Adaptive Plans (Oracle 12c)


We already talked about optimizer features like adaptive cursor sharing or cardinality feedback (which are disabled by default in a SAP environment), but all of these features have a common problem: they are designed to provide a better execution plan for subsequent SQL executions, which basically means that the first execution of the SQL statement may be very slow or never finish at all. These inefficient SQL execution plans are mostly based on wrong cardinality estimates (with the corresponding costs) and, in consequence, wrong join methods, etc. So wouldn’t it be great if Oracle already noticed those wrong estimates at the first execution and changed the execution plan during execution / “on-the-fly”?

Yes, it would be great and so we get to the topic of this blog post called “Adaptive Query Optimization / Adaptive Plans (Oracle 12c)”. This feature was introduced with Oracle 12c – so let’s take a look at the official documentation of Oracle 12c first.

Oracle Documentation

In Oracle Database, adaptive query optimization is a set of capabilities that enables the optimizer to make run-time adjustments to execution plans and discover additional information that can lead to better statistics. Adaptive optimization is helpful when existing statistics are not sufficient to generate an optimal plan.

4.4.1 Adaptive Plans 

An adaptive plan enables the optimizer to defer the final plan decision for a statement until execution time. The ability of the optimizer to adapt a plan, based on information learned during execution, can greatly improve query performance. Adaptive plans are useful because the optimizer occasionally picks a suboptimal default plan because of a cardinality misestimate. The ability to adapt the plan at run time based on actual execution statistics results in a more optimal final plan. After choosing the final plan, the optimizer uses it for subsequent executions, thus ensuring that the suboptimal plan is not reused.

How Adaptive Plans Work

An adaptive plan contains multiple predetermined subplans, and an optimizer statistics collector. A subplan is a portion of a plan that the optimizer can switch to as an alternative at run time. For example, a nested loops join could be switched to a hash join during execution. An optimizer statistics collector is a row source inserted into a plan at key points to collect run-time statistics. These statistics help the optimizer make a final decision between multiple subplans. During statement execution, the statistics collector gathers information about the execution, and buffers a portion of the rows received by the subplan. Based on the information observed by the statistics collector, the optimizer makes a final decision about which subplan to use. After the optimizer chooses a subplan, the statistics collector stops collecting statistics and buffering rows, and permits the rows to pass through instead. On subsequent executions of the child cursor, the optimizer disables buffering, and chooses the same final plan. The database uses adaptive plans when OPTIMIZER_FEATURES_ENABLE is 12.1.0.1 or later, and the OPTIMIZER_ADAPTIVE_REPORTING_ONLY initialization parameter is set to the default of false (see “Controlling Adaptive Optimization”).
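Whether these prerequisites are met on your system can be checked quickly from SQL*Plus. The parameter names are taken from the documentation quoted above; the output of course depends on your configuration:

SQL> show parameter optimizer_features_enable
SQL> show parameter optimizer_adaptive_reporting_only

If OPTIMIZER_ADAPTIVE_REPORTING_ONLY is set to true, the adaptive information is still gathered and reported, but the default plan is always executed.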


The Optimizer development team also published two blog posts about that topic in the past. For further information please check out these blogs as well:

The first part of this blog series is pretty important, because it describes the “inflection point”, which will be essential in the examples later on.

In addition there is an interview with Maria Colgan about the “Oracle Database 12c Optimizer changes” on YouTube, which includes some important little details as well.

“The optimizer is becoming what we call “Adaptive Query Optimization”. That allows us to delay the final plan decision until after parsing and make it during execution. We are doing that in 12.1 with the introduction of adaptive joins. The adaptive joins allow us to delay the join method decision until execution (until we see the actual number of rows that flow into that join).

In 12c you will see a new operation called “statistics collector”. The statistics collector is going to buffer the rows until we get a good sense of how many rows are coming out of the left hand side. It will not buffer all of them, but it will buffer up to a given threshold, and then we will make a decision based on that threshold whether we continue with the original choice of plan or whether we should switch the join method instead. Once we have made that final decision, the statistics collector becomes a pass-through operation and no longer buffers rows, but allows them to flow through to the join.

This adaptation occurs just on the first execution. So it is possible that the initial execution of a SQL statement that takes advantage of adaptive joins sees a small degradation of performance due to that buffering while we make our decision, but all subsequent executions should not see any problem, because the statistics collector is disabled (pass-through operation) and the final plan that we decided on during the initial execution will be used by all subsequent executions.”
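Whether a particular child cursor uses an adaptive plan, and whether the final plan decision has already been made, can be checked in V$SQL. The column IS_RESOLVED_ADAPTIVE_PLAN was introduced with 12c: ‘Y’ means the final plan has been chosen, ‘N’ means the decision is still pending, and NULL means the plan is not adaptive at all. A minimal sketch, assuming a filter on the SQL text of the demo statement:

SQL> select SQL_ID, CHILD_NUMBER, IS_RESOLVED_ADAPTIVE_PLAN
     from V$SQL
     where SQL_TEXT like '%ORDERTAB%';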

So far enough of explanations and basic information. Let’s check out what the feature looks like in real life and take a tiny look under its hood.

The demonstration

The following demo was run on an Oracle 12c database on OEL 6.4 (2.6.39-400.109.1.el6uek.x86_64). The data pattern and the statistics collection method were chosen to get a wrong cardinality estimate for the default execution plan, and only literals are used for simplicity.

The following statements are based on observations with different (test) application workloads on 12c, adapted to a simple example. The SQL statement itself makes no sense from an application point of view; it is only meant to illustrate the adaptive plan feature.



[Figure: data pattern of the test table ORDERTAB]

SQL> exec DBMS_STATS.GATHER_TABLE_STATS(OWNNAME => USER, TABNAME => 'ORDERTAB', -
                   METHOD_OPT => 'FOR ALL COLUMNS SIZE 1');

SQL> select /*+ gather_plan_statistics */ * from ORDERTAB a, ORDERTAB b
          where a.ORDERNR = b.ORDERNR and a.STORAGELOC = 600 and b.STORAGELOC = 100;


Oracle has created an adaptive plan for that particular SQL as you can see in the note section of the SQL execution plan output. Oracle also estimated the wrong cardinality (33.333 rows) based on the data pattern and collected statistics. The best way in this example would be to switch the tables for the hash join, but that should not be the topic here as the test case was specifically created that way.

Oracle 12c introduced a new format option called “ADAPTIVE” for the PL/SQL function DBMS_XPLAN.DISPLAY_CURSOR, which provides more information about the adaptive plans in the SQL execution plan output. Let’s check the official documentation first:


– Final plan. Displays the final plan, or the current plan if the execution has not completed. This section includes notes about runtime optimizations that affect the plan, such as switching from a Nested Loops join to a Hash join.

– Plan lineage. This section shows the plans that were run previously due to automatic reoptimization. It also shows the default plan, if the plan changed due to dynamic plans.

– Recommended plan. In reporting mode, the plan is chosen based on execution statistics displayed. Note that displaying the recommended plan for automatic reoptimization requires re-compiling the query with the optimizer adjustments collected in the child cursor. Displaying the recommended plan for a dynamic plan does not require this.

– Dynamic plans. This summarizes the portions of the plan that differ from the default plan chosen by the optimizer.
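A typical call for the demo SQL could look like the following (the SQL_ID is just a placeholder for the one of your cursor); the modifier “+ADAPTIVE” adds the adaptive sections to the usual output of the “ALLSTATS LAST” format:

SQL> select * from table(DBMS_XPLAN.DISPLAY_CURSOR('<SQL_ID>', 0, 'ALLSTATS LAST +ADAPTIVE'));

In the adaptive plan output the inactive subplan operations are marked with a leading “-” in the Id column, so the chosen and the discarded subplans can be distinguished at a glance.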

SQL> select /*+ gather_plan_statistics */ * from ORDERTAB a, ORDERTAB b
          where a.ORDERNR = b.ORDERNR and a.STORAGELOC = 600 and b.STORAGELOC = 100;


So we know that our SQL is using an adaptive plan (in this case the default plan) and which parts are active or inactive, but what we do not know yet is the point (= inflection point) at which the Oracle engine would have switched to a nested loop. So let’s check the cost based optimizer trace to see if we can find information about the particular inflection point.

SQL> alter system flush shared_pool;
SQL> alter session set events '10053 trace name context forever, level 1';
SQL> select /*+ gather_plan_statistics */ * from ORDERTAB a, ORDERTAB b
          where a.ORDERNR = b.ORDERNR and a.STORAGELOC = 600 and b.STORAGELOC = 100;
shell> vi /oracle/T12DB/oratrace/diag/rdbms/t12db/T12DB/trace/T12DB_ora_1928.trc 


The calculated inflection point is at a cardinality of 1.53 rows for the left hand join source (in our case table “ORDERTAB a” with predicate “a.STORAGELOC = <VALUE>”). That basically means that the (default) execution plan is adapted and a nested loop is performed if the statistics collector notices that not more than 1.53 rows are passed up from table “ORDERTAB a”.
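The relevant lines can be found in the 10053 trace file by searching for the keyword “inflection”. The exact wording may vary with the Oracle patch level, but it typically looks like this:

shell> grep -i "inflection" /oracle/T12DB/oratrace/diag/rdbms/t12db/T12DB/trace/T12DB_ora_1928.trc
DP: Found point of inflection for NLJ vs. HJ: card = 1.53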

So let’s try the same SQL with predicate “a.STORAGELOC = 900”, that returns only one row (for details check the data pattern from above).

SQL> select /*+ gather_plan_statistics */ * from ORDERTAB a, ORDERTAB b
          where a.ORDERNR = b.ORDERNR and a.STORAGELOC = 900 and b.STORAGELOC = 100;


The optimizer still calculates the wrong cardinality of 33.333 rows as before (based on special data pattern and collected statistics), but the runtime engine performs a nested loop instead of a hash join now.

So just one open question remains in this context: what happens when the data pattern changes and the predicate “a.STORAGELOC = 900” returns more than 1.53 rows (= inflection point)? Remember once again that the statistics collector is only enabled at the initial execution and works as a pass-through operator later on.

SQL> insert into ORDERTAB values(300002, 900 ,'TEST DESCRIPTION');
SQL> commit;
SQL> select /*+ gather_plan_statistics */ * from ORDERTAB a, ORDERTAB b
          where a.ORDERNR = b.ORDERNR and a.STORAGELOC = 900 and b.STORAGELOC = 100;


The final execution plan is still the same as before, even if two rows are returned from the left hand join source now (in our case table “ORDERTAB a”).

Finally, let’s flush the shared pool and re-parse the statement with the new data pattern.

SQL> alter system flush shared_pool;
SQL> select /*+ gather_plan_statistics */ * from ORDERTAB a, ORDERTAB b
          where a.ORDERNR = b.ORDERNR and a.STORAGELOC = 900 and b.STORAGELOC = 100;


Now we get our old (default) plan with the hash join again. This proves that the statistics collector kicks in at the initial execution of the SQL, and that changes to the data pattern influence the adaptive query optimization at the first execution only.


“Adaptive Query Optimization / Adaptive Plans” closes a gap in finding the optimal execution plan. The feature is currently limited to specific parts of the execution plan, such as join methods (it does not influence join orders, as cardinality feedback or ACS do, for example) or PX distribution methods, but I can imagine that this feature will be extended in the future, as we are only on the initial Oracle 12c release yet.

If you have any further questions, please feel free to ask or get in contact directly if you need assistance with troubleshooting Oracle database (performance) issues.

