With SQL Plan Management being relatively new, it is inevitable that many people run into problems using it.
Most of those problems are caused by our (relative) ignorance: SPM changes the way ORACLE runs SQL statements, and it simply takes time to get used to how it works. Yet some of the consequences of using SPM are truly bizarre and will surprise many people (including, probably, some ORACLE developers).
In this post I will describe three scenarios where we recently ran into problems using SPM, and you can judge for yourself …
In the previous post we learned what SPM building blocks or baselines are. In this post, we are going to see how we can get them to work to provide plan stability for our SQL statements.
In a nutshell, SQL Baselines will be used with any SQL statement when the following three conditions are met:
- SQL Plan Management must be active
- SPM Baseline for the SQL must exist (duh!)
- SPM Baseline must be ACCEPTED
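A quick way to verify all three conditions is a sketch like the one below (the SQL text filter is a placeholder you would replace with a fragment of your own statement):

```sql
-- Condition 1: SPM must be active (defaults to TRUE in 11g)
SHOW PARAMETER optimizer_use_sql_plan_baselines

-- Conditions 2 and 3: a baseline must exist for the SQL and be ACCEPTED
SELECT sql_handle, plan_name, enabled, accepted
FROM   dba_sql_plan_baselines
WHERE  sql_text LIKE '%your statement text here%';
```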
In the previous post we discussed the WHAT and WHY of ORACLE 11g SQL Plan Management; in this post we will begin discussing the HOW. In my personal experience, in order to understand (and appreciate) anything new that comes from ORACLE you really need to get down into the actual nuts and bolts and see how it works.
So, let’s get our hands dirty and go into the actual mechanics of SPM.
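As a first hands-on step, here is a sketch of two common ways to create baselines in 11g (the sql_id below is a made-up placeholder):

```sql
-- Option 1: automatic capture of plans for repeatable statements
ALTER SESSION SET optimizer_capture_sql_plan_baselines = TRUE;

-- ... run the statement at least twice so it counts as repeatable ...

ALTER SESSION SET optimizer_capture_sql_plan_baselines = FALSE;

-- Option 2: load plans for a statement already in the cursor cache
DECLARE
  n PLS_INTEGER;
BEGIN
  -- 'abcd1234efgh5' is a placeholder sql_id
  n := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(sql_id => 'abcd1234efgh5');
END;
/
```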
One of our developers called me recently and said that they had an interesting problem on their hands.
The essence of the problem was that a schema upgrade script misfired and did not create a very important index on one of the largest tables. As a result, critical queries that were supposed to hit that index were now doing full table scans and dragging on for minutes.
The weird thing was that the developer, realizing her mistake, connected to the schema and created the index manually. Yet even after that was done, the target queries were still doing full table scans, completely ignoring the index.
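Given the topic of this series, a natural first check (a sketch; the table name and sql_handle below are placeholders) is whether an accepted SPM baseline is pinning the old full-scan plan:

```sql
-- Does an accepted baseline exist for the slow statement?
SELECT sql_handle, plan_name, enabled, accepted, origin
FROM   dba_sql_plan_baselines
WHERE  sql_text LIKE '%BIG_TABLE%';   -- BIG_TABLE is a placeholder

-- Inspect the plan stored in the baseline
SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_SQL_PLAN_BASELINE(
         sql_handle => 'SQL_abc123'));  -- placeholder handle
```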
Not sure if it is news, but here we go …
I was recently looking at hard-parsed SQL statements overwhelming one of my systems and needed to find the literal SQLs that caused the problem.
The traditional way to look for literal SQLs in ORACLE (at least, as it was for me) is to search for statements that are identical in their first N characters, something like:
SELECT substr(sql_text, 1, 80), COUNT(1)
FROM v$sqlarea
GROUP BY substr(sql_text, 1, 80)
HAVING COUNT(1) > 10
ORDER BY 2
Those of you who read my previous post will remember that ORACLE's implicit type conversion approach
contradicts common sense; yet, you would probably agree, there are some excellent reasons for ORACLE to go with it.
This post will explore implicit type conversion further and, in a way, turn everything you read about in part 1 upside down.
If the motto of the first part of this post was: Always question your common sense, the motto of the second part will be: Do NOT accept anything as dogma (even if you read it on this blog 🙂 )
It is said that when you hear the words this is common sense or everybody knows that, you should take notice, because when enough people say it, the reality is likely to be exactly the opposite. Especially when you are dealing with ORACLE 🙂
Here is a short puzzle to prove the point.
Imagine that you have a table with an ID column that stores, say, 100000 unique numbers from 1 to 100000. The column has a unique constraint (and a unique underlying index to enforce it).
Assume also that the hardware is regular (we are not yet on Exadata), the table is large enough, all ORACLE metadata is correct, statistics for both the table and the index are collected and up to date, and there are no unusual optimizer directives, nasty bugs, etc.
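The setup above can be sketched as follows (the table and constraint names are mine):

```sql
-- A table with 100000 unique numbers from 1 to 100000
CREATE TABLE t (id NUMBER);

INSERT INTO t
SELECT level FROM dual CONNECT BY level <= 100000;

-- Unique constraint, enforced by a unique index
ALTER TABLE t ADD CONSTRAINT t_id_uk UNIQUE (id);

-- Up-to-date statistics
EXEC DBMS_STATS.GATHER_TABLE_STATS(user, 'T')
```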
Here is the question:
What would be the expected execution plan for the following query:
Be careful when using SQL*Plus to investigate performance problems, as it might show you incorrect results under certain conditions.
Here is one example:
In the following scenario, we are trying to find out whether a simple expression:
will produce different execution plans with NUMBER and CHAR binds.
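To make the test concrete (the predicate below is a hypothetical stand-in for the post's actual expression), the binds can be declared in SQL*Plus and the plan that was really executed pulled from the cursor cache, rather than trusting the client's explain output:

```sql
VARIABLE n NUMBER
VARIABLE c CHAR(10)
EXEC :n := 100
EXEC :c := '100'

-- Hypothetical expression standing in for the one under test
SELECT * FROM t WHERE id = :n;
SELECT * FROM t WHERE id = :c;

-- Show the plan actually used by the last executed cursor
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL));
```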