In the previous article, I introduced the concept of percentiles, and hopefully you can see why measuring percentiles matters when tracking SQL performance: it makes performance metrics a lot more precise and meaningful.
Unfortunately, ORACLE does not collect percentile data out of the box, so we need to figure out a way to do it ourselves.
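Once you do collect the raw timings yourself, Oracle's built-in percentile functions can do the rest. A minimal sketch, assuming a hypothetical `sql_timings` table that logs one row per execution of the SQL you care about (the table and column names are placeholders, not anything ORACLE provides):

```sql
-- Hypothetical logging table:
--   sql_timings(sql_id VARCHAR2, elapsed_sec NUMBER, sample_time TIMESTAMP)
SELECT
  PERCENTILE_CONT(0.5)  WITHIN GROUP (ORDER BY elapsed_sec) AS median_sec,
  PERCENTILE_CONT(0.95) WITHIN GROUP (ORDER BY elapsed_sec) AS p95_sec,
  PERCENTILE_CONT(0.99) WITHIN GROUP (ORDER BY elapsed_sec) AS p99_sec
FROM sql_timings
WHERE sql_id = '&tracked_sql_id'              -- placeholder SQL_ID
  AND sample_time > SYSTIMESTAMP - INTERVAL '1' DAY;
```

The hard part, as we'll see, is populating a table like this in the first place; the percentile math itself is one query.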
In the previous article, we saw that the “average” SQL performance metrics that ORACLE provides out of the box can be useful, but only in a limited set of circumstances: when the underlying data distribution is normal.
Let’s try to find better metrics.
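To see why averages mislead on skewed data, compare the mean against the median on a small made-up sample: nine executions that take 1 second and one outlier that takes 100 seconds.

```sql
-- Made-up response times: nine 1-second runs plus one 100-second outlier
WITH timings (elapsed_sec) AS (
  SELECT 1 FROM dual CONNECT BY LEVEL <= 9
  UNION ALL
  SELECT 100 FROM dual
)
SELECT
  AVG(elapsed_sec) AS avg_sec,    -- 10.9: makes every run look "slow"
  PERCENTILE_CONT(0.5)
    WITHIN GROUP (ORDER BY elapsed_sec) AS median_sec  -- 1: what most users saw
FROM timings;
```

The average reports roughly 10.9 seconds even though nine out of ten users waited exactly 1 second, which is precisely the kind of distortion percentiles avoid.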
Let’s say that you have a very important SQL in your system.
How do you know if it is performing well? Is it sometimes running slow, giving users a bad experience? If so, when? How many users are affected? And just how bad does the user experience get when it does get “bad”?
In this series of articles I’m going to talk about tracking and measuring performance of individual SQLs. We’ll look at the standard metrics that ORACLE provides to do that and introduce some new (and hopefully better) metrics that we can use.
If you use DBMS_XPLAN package to analyze execution plans for your SQL statements (and you really should these days), you might have noticed that at times the following line might be displayed along with your execution plan:
- SQL profile "SYS_SQLPROF_012ad8267d9c0000" used for this statement
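For reference, this line appears in the Note section that DBMS_XPLAN appends after the plan itself. For a cursor still in the shared pool, you would see it with something like the following (the SQL_ID here is a placeholder):

```sql
-- Display the plan, including the Note section, for a cached cursor
SELECT *
FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(
  sql_id          => 'abcd1234efgh5',   -- placeholder SQL_ID
  cursor_child_no => 0,
  format          => 'TYPICAL'));
```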
This seems to mean that:
- An external ‘helper’ object was used during evaluation of your SQL statement
- This ‘helper’ object changed (or, at least, influenced) its execution plan
While having some extra help is nice, one has to wonder: what exactly is this “profile” object? What does it do? And, in the bigger picture, why do we need “external” help evaluating SQL statements at all?
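Before digging into the answers, it is worth knowing that you can at least see which profile objects exist in your database. A quick look, assuming you have access to the DBA views:

```sql
-- List SQL profiles known to the database, newest first
SELECT name, category, status, created
FROM dba_sql_profiles
ORDER BY created DESC;
```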
With SQL Plan Management being relatively new, it is inevitable that many people run into problems using it.
Most of those problems are caused by our (relative) ignorance: SPM changes the way ORACLE runs SQL statements, and it simply takes time to get used to how it works. Yet some of the consequences of using SPM are truly bizarre and will surprise many people (including, probably, some ORACLE developers).
In this post I will describe three scenarios where we recently ran into problems using SPM, and you can judge for yourself …
In the previous post we learned what the SPM building blocks, baselines, are. In this post, we are going to see how to put them to work to provide plan stability for our SQL statements.
In a nutshell, SQL Baselines will be used with any SQL statement when the following 3 conditions are met:
- SQL Plan Management must be active
- SPM Baseline for the SQL must exist (duh!)
- SPM Baseline must be ACCEPTED
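Each of the three conditions can be checked directly. A sketch (the `sql_text` filter is a placeholder; run the first command from SQL*Plus):

```sql
-- 1. Is SQL Plan Management active for plan use?
SHOW PARAMETER optimizer_use_sql_plan_baselines

-- 2 and 3. Does a baseline exist for the SQL, and is it ACCEPTED?
SELECT sql_handle, plan_name, enabled, accepted
FROM dba_sql_plan_baselines
WHERE sql_text LIKE '%your_query_here%';   -- placeholder filter
```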
In the previous post we discussed the WHAT and WHY of ORACLE 11g SQL Plan Management; in this post we will begin discussing the HOW. In my personal experience, in order to understand (and appreciate) anything new that comes from ORACLE, you really need to go down into the actual nuts and bolts and see how it works.
So, let’s get our hands dirty and go into the actual mechanics of SPM.
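As a first hands-on step, one common way to create a baseline manually is to load an existing plan from the cursor cache (the SQL_ID below is a placeholder):

```sql
-- Capture the current plan of a cached cursor as a SQL plan baseline
DECLARE
  n PLS_INTEGER;
BEGIN
  n := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
         sql_id => 'abcd1234efgh5');        -- placeholder SQL_ID
  DBMS_OUTPUT.PUT_LINE('Baselines loaded: ' || n);
END;
/
```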
One of our developers called me recently and said that they had an interesting problem on their hands.
The essence of the problem was that a schema upgrade script had misfired and did not create a very important index on one of the largest tables. As a result, critical queries that were supposed to use that index were now doing full table scans and dragging on for minutes.
The weird thing was that the developer, realizing her mistake, connected to the schema and created the index manually. Yet even after that was done, the target queries still did full table scans, completely ignoring the index.