I am presenting on measuring performance by percentiles at RMOUG 2014 in Denver.
My session is on Friday Feb 7 at 2:45 pm at room 501.
Latest PPT: SQL Execution Outliers – final
The toolbox that I used: tracking_sql_performance.zip
See you in Denver.
In the previous article, I introduced the concept of percentiles, and you can hopefully see that measuring percentiles is important for tracking SQL performance: it makes performance metrics a lot more precise and meaningful.
Unfortunately, ORACLE does not collect percentile data out of the box, so we need to figure out a way to do it ourselves.
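As a minimal sketch of the "do it ourselves" idea: if we periodically sample individual execution times into a table of our own (the table and column names below are hypothetical), ORACLE's built-in PERCENTILE_CONT function can then compute the percentiles for us:

```sql
-- Hypothetical example: my_sql_samples(sql_id, elapsed_seconds) is a table
-- that WE populate, e.g. by periodically sampling v$sql or ASH data.
SELECT sql_id,
       PERCENTILE_CONT(0.5)  WITHIN GROUP (ORDER BY elapsed_seconds) AS median_secs,
       PERCENTILE_CONT(0.95) WITHIN GROUP (ORDER BY elapsed_seconds) AS p95_secs,
       PERCENTILE_CONT(0.99) WITHIN GROUP (ORDER BY elapsed_seconds) AS p99_secs
FROM   my_sql_samples
GROUP  BY sql_id;
```

The hard part, of course, is not the percentile math but collecting the raw per-execution samples in the first place — that is what the rest of this series is about.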
In the previous article, we’ve seen that the “average” SQL performance metrics that ORACLE provides out of the box can be useful, but only in a limited set of circumstances: when the underlying data distribution is normal.
Let’s try to find better metrics.
Let’s say that you have a very important SQL in your system.
How do you know if it is performing well? Is it sometimes running slow, resulting in users having a bad experience? If so, when? How many users? And just exactly how bad does the user experience get when it does get “bad”?
In this series of articles I’m going to talk about tracking and measuring performance of individual SQLs. We’ll look at the standard metrics that ORACLE provides to do that and introduce some new (and hopefully better) metrics that we can use.
I just returned from RMOUG 2013 training days in Denver, CO and I have to say that I’m impressed!
This was an incredibly well organized and well run event with lots of great speakers and great presentations. As a former board member of Tampa’s ORACLE users group, I know how hard it is to make an event like this happen, especially at this scale. Great job, @DbaKevlar and the rest!
Read consistency is one of the coolest features of ORACLE database.
In a nutshell, it means that your queries (at least, logically) do not pay attention to the outside world. I.e. your tables may be hit with thousands of changes per second, but your query will ignore all the hoopla and will always see only the data that existed when the query started.
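A quick two-session sketch makes this concrete (the table name is made up for illustration):

```sql
-- Session 1: start a long-running query at time T0
SELECT COUNT(*) FROM orders;   -- sees the data as it existed at T0

-- Session 2: meanwhile, change the data and commit
DELETE FROM orders;
COMMIT;

-- Session 1: the query that is still running keeps returning the ORIGINAL
-- count. ORACLE reconstructs the pre-change rows from undo, because the
-- query reads data as of its starting SCN, not as of "now".
```

In other words, the query's view of the data is frozen at its start time, no matter what the rest of the world is doing.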
Have you ever heard that the speed of a system is determined by its slowest component? I am made painfully aware of that every time I do data migrations.
I.e. it doesn’t matter if you have 64-core systems with 100+ GB of memory on either end if the majority of time is spent waiting for data to trickle across a slow 1 Gbit network link.
Watching data trickle for hours while the rest of the system is doing nothing is a pretty frustrating experience. But limitations breed creativity … so lately, I’ve been experimenting with several different copy techniques to see whether transfer speed can be improved, perhaps by using some of the idle capacity to speed things up.
Thanks everyone, who attended my presentation about database sharding at IOUG Collaborate. That was a lot of fun!
I’ve always wondered how many of my SQLs are NOT using baselines.
Of course, when you run DBMS_XPLAN.DISPLAY_CURSOR for a statement, it prints a nicely formatted note for you, something like:
- SQL plan baseline "SQL_PLAN_01yu884fpund494ecae5c" used for this statement
But can you find this out globally, for ALL SQLs in the shared pool?
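It turns out you can, without parsing DBMS_XPLAN output at all: starting with 11g, V$SQL exposes a SQL_PLAN_BASELINE column, which is populated when a cursor is built using a baseline and NULL otherwise. A simple sketch:

```sql
-- Count cursors in the shared pool by whether a SQL plan baseline was used.
-- SQL_PLAN_BASELINE is NULL when no baseline was applied to the cursor.
SELECT CASE
         WHEN sql_plan_baseline IS NULL THEN 'No baseline'
         ELSE 'Baseline used'
       END       AS status,
       COUNT(*)  AS cursors
FROM   v$sql
GROUP  BY CASE
            WHEN sql_plan_baseline IS NULL THEN 'No baseline'
            ELSE 'Baseline used'
          END;
```

From there it is easy to drill into the "No baseline" bucket and see which of your important SQLs are running unprotected.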
If you’ve been a DBA long enough, you’ve probably seen multiple cases where certain SQL queries just refuse to cooperate.
I.e. you KNOW that the query is supposed to use index IDX1, but ORACLE stubbornly decides to take IDX2. Rats!
The usual suspect here is bad statistics, and thus the usual advice is: start re-collecting them. But while this might help, success is far from guaranteed! Moreover, with “real production” data, statistics collection can take hours (days?), and you already have people screaming about slow performance … In other words, you need to fix the problem NOW, and waiting a few hours in the hope that stats collection maybe fixes it down the road is just not an option!
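One common way to get an immediate fix, sketched below, is to pin the good plan with a SQL plan baseline rather than wait for statistics. If the plan that uses IDX1 is still in the cursor cache (from before the flip, or reproduced with a hint in a test), DBMS_SPM can load it; the sql_id and plan_hash_value here are placeholders for your own:

```sql
-- Hypothetical sketch: load a known-good plan from the cursor cache as an
-- accepted SQL plan baseline, so the optimizer uses it right away.
DECLARE
  l_plans PLS_INTEGER;
BEGIN
  l_plans := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
               sql_id          => '0abc123def456',   -- placeholder sql_id
               plan_hash_value => 1234567890         -- placeholder: the IDX1 plan
             );
  DBMS_OUTPUT.PUT_LINE('Plans loaded: ' || l_plans);
END;
/
```

This buys you a stable plan now; you can still investigate (and re-collect) statistics at leisure afterwards.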