
How to track SQL Performance. Part 2: Percentiles!

In the previous article, we saw that the “average” SQL performance metrics that ORACLE provides out of the box can be useful, but only in the limited set of circumstances where the underlying data distribution is normal.

Let’s try to find better metrics.

Here are the requirements:

  • We still want a summary (single point) metric, so that we do not need to store and process large amounts of raw data during performance analysis
  • It has to be meaningful, i.e. represent what people actually care about
  • It has to be universal, i.e. not dependent on data shape or sample size

Thinking clearly about performance

Let’s step back a bit and talk about performance in general. It is a well-known maxim (thanks, Cary Millsap!) that

People feel variance, not the “mean”

That is, people will generally feel fine if things (such as query timings) remain constant, but will notice (and complain about) things that get far outside the norm.

In my experience with tuning SQL queries, this maxim usually turns out to be one-sided.

Imagine yourself tuning a random OLTP query and improving its latency by a factor of 10. Chances are, this change will not even be noticed by the users (yes, it’s a thankless job sometimes…). However, make the same query 10 times slower and I’ll bet that people will start “screaming” … (relatively speaking 🙂 )

So my slightly modified maxim is:

People feel (right side “BAD”) variance, not the “mean”

In very simple terms, when it comes to query performance, people only care about slow performing queries.

So, how do we track them?

How to track “slow” queries?

The first approach that comes to mind is to set a “slow threshold” and record all individual executions crossing it (or, at least, their count). This way we should have a very precise answer to how many executions are “bad”.
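
For illustration, here is a minimal sketch of the threshold approach, assuming a hypothetical exec_log capture table with one row per individual execution (the table name, its columns and the 500 ms threshold are made up for this example):

-- Hypothetical capture table: exec_log(sql_id, run_ts, elapsed_ms)
-- Count executions that crossed a (made up) 500 ms "slow threshold" in the last hour
SELECT COUNT(*)                                          AS total_execs,
       SUM(CASE WHEN elapsed_ms > 500 THEN 1 ELSE 0 END) AS bad_execs
  FROM exec_log
 WHERE sql_id = :sql_id
   AND run_ts >= SYSDATE - 1/24;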

However, there is a slight problem with this approach: what if we do not see anything?

Does the fact that we measured 0 “bad” executions tell us that things are all good?

For example, you can imagine a situation where queries are “almost bad”, or slowly creeping towards “almost bad”, but because of the strict threshold we are blind to their existence until they actually cross it (at which point it might be too late to react).

A more general solution for tracking “bad” queries is to think in “percentages” (or rather, “percentiles”). The idea is actually quite simple.

Let’s take the same measurements from our UPSERT example, but now order individual runs by “elapsed time”:

[Figure: upsert_ordered – individual UPSERT executions ordered by elapsed time]

With the newly ordered data, let’s take the point at “90” on the X axis and think about what it represents in terms of performance.

I can summarize it as:

Latency of the worst 10% of individual UPSERT executions is at least as bad as 248 milliseconds

Just for contrast, compare it with the implied performance definition when we are using “averages”:

(what we think is) typical UPSERT latency is 102 milliseconds

What if we selected “data point at 90” to represent query performance?

  • This would still be a single data point to deal with (“Summary metric” – check!)
  • It has a very well defined and obvious performance meaning (“Meaningful metric” – check!)
  • It does not really depend on data size or shape as “any shape can be reordered” (“Universality” – check!)

So, the bottom line is that this point (let’s finally give it its proper name: the “90th percentile”, or p90 for short) makes for a very good performance metric, a lot better than the “average”.
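
To make this concrete, here is a minimal sketch of how p90 could be computed in SQL, again using the hypothetical exec_log capture table from the earlier example. PERCENTILE_DISC picks an actual observed execution (literally the “point at 90” on the ordered data), whereas PERCENTILE_CONT would interpolate between neighbouring values:

-- p90 of individual elapsed times: "the worst 10% are at least this slow"
SELECT PERCENTILE_DISC(0.9) WITHIN GROUP (ORDER BY elapsed_ms) AS elapsed_p90_ms
  FROM exec_log
 WHERE sql_id = :sql_id
   AND run_ts >= SYSDATE - 1/24;   -- last hour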

In addition to that, percentiles are always defined – as long as the sample size is not 0, p90 will always have some value. It can be tracked over time and plotted on screen, and we (and, more importantly, our automated tools) will have a chance to react to potential problems in time.

The resulting performance plot will look very similar to the original “average” plot (except that it is more precise and meaningful, of course 🙂 )

[Figure: sql_performance – the resulting percentile-based performance plot]

Tracking percentiles

The typical use of percentiles is to track “bad” queries with ever-increasing precision. That is why it makes sense to capture and track several percentile metrics, not just one.

For example, you might want to track p50 (“50% of our queries are at least as bad as …”), p90 (“the worst 10%”) and p99 (“the worst 1%”). Occasionally, there may be a need to be more precise and track, say, p99.9 or p99.999 if the requirements are very strict.

[Figure: upsert_ordered_p509099 – ordered UPSERT executions with p50, p90 and p99 marked]
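
In SQL terms, a hedged sketch of what tracking several percentiles could look like over the same hypothetical exec_log table, bucketed by hour so the metrics can be plotted over time:

-- p50/p90/p99 of individual elapsed times, one row per hour
SELECT TRUNC(run_ts, 'HH24')                                    AS hour_ts,
       PERCENTILE_DISC(0.50) WITHIN GROUP (ORDER BY elapsed_ms) AS p50_ms,
       PERCENTILE_DISC(0.90) WITHIN GROUP (ORDER BY elapsed_ms) AS p90_ms,
       PERCENTILE_DISC(0.99) WITHIN GROUP (ORDER BY elapsed_ms) AS p99_ms
  FROM exec_log
 WHERE sql_id = :sql_id
 GROUP BY TRUNC(run_ts, 'HH24')
 ORDER BY hour_ts;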

R code to reproduce examples:

library(ggplot2)
library(scales)

# Load raw per-execution UPSERT timings (ETIME is in microseconds)
d <- read.table("http://intermediatesql.com/wp-content/uploads/2014/01/upsert_1hr.txt",
  header=TRUE)

# Classify each execution by what dominates its elapsed time: IO, Concurrency or CPU
d$CAT <- factor(ifelse(d$IOTIME > 0.5*d$ETIME, 'IO',
  ifelse((d$CTIME + d$ATIME) > 0.5*d$ETIME, 'Concurrency', 'CPU')))

# Order executions by elapsed time and assign each a percentile rank
d <- d[order(d$ETIME), ]
d$N <- 1:nrow(d)
d$PERCENTILE <- round(d$N/nrow(d)*100)

# X positions of the execution closest to the mean and of the p90 boundary
avg_x <- d[abs(d$ETIME-mean(d$ETIME)) == min(abs(d$ETIME-mean(d$ETIME))),]$N
p90_x <- min(d[d$PERCENTILE == 90,]$N)

# Ordered elapsed times
ggplot(d, aes(PERCENTILE, round(ETIME/1000), color=CAT)) + geom_point() +
  theme_minimal() + xlab("Percentile") +ylab("Elapsed time (ms)") +
  scale_y_continuous(labels=comma) +
  scale_x_continuous(breaks=seq(from=10, to=100, by=10)) +
  theme(legend.title=element_blank()) +
  geom_vline(xintercept=90, color="green", size=1) +
  annotate("rect", ymin=-Inf, ymax=Inf, xmin=90, xmax=Inf, fill="green", alpha=0.1) +
  geom_point(aes(x=90, y=d[d$N == p90_x, ]$ETIME/1000),
    size=4, fill="green", alpha=0.1) +
  annotate("text", x=90, y=600,
    label=paste("p90=", round(quantile(d$ETIME, c(0.9))/1000), " ms")) +
  geom_point(aes(x=avg_x/nrow(d)*100, y=mean(d$ETIME)/1000),
    size=4, fill="blue", alpha=0.1) +
  annotate("text", x=avg_x/nrow(d)*100, y=400,
    label=paste("Avg=", round(mean(d$ETIME/1000)), " ms"))

# p50, p90, p99
ggplot(d, aes(PERCENTILE, round(ETIME/1000), color=CAT)) + geom_point() +
  theme_minimal() + xlab("Percentile") +ylab("Elapsed time (ms)") +
  scale_y_continuous(labels=comma) +
  scale_x_continuous(breaks=seq(from=10, to=100, by=10)) +
  theme(legend.title=element_blank()) +
  geom_vline(xintercept=50, color="blue", size=1) +
  annotate("rect", ymin=-Inf, ymax=Inf, xmin=50, xmax=Inf,
    fill="blue", alpha=0.1) +
  annotate("text", x=50, y=500,
    label=paste("p50=", round(quantile(d$ETIME, c(0.5))/1000), " ms")) +
  geom_vline(xintercept=90, color="green", size=1) +
  annotate("rect", ymin=-Inf, ymax=Inf, xmin=90, xmax=Inf,
    fill="green", alpha=0.1) +
  annotate("text", x=90, y=1000,
    label=paste("p90=", round(quantile(d$ETIME, c(0.9))/1000), " ms")) +
  geom_vline(xintercept=99, color="red", size=1) +
  annotate("rect", ymin=-Inf, ymax=Inf, xmin=99, xmax=Inf,
    fill="red", alpha=0.1) +
  annotate("text", x=90, y=1500,
    label=paste("p99=", round(quantile(d$ETIME, c(0.99))/1000), " ms"))

Ok, where can I find these “percentiles”?

I hope you agree that percentiles are useful performance metrics and tracking them is a worthy thing to do.

But where can we find them? If you look around the ORACLE data dictionary, you’ll notice that V$SQL does not exactly have ELAPSED_TIME_P90 or ELAPSED_TIME_P99 columns, and a V$SQL_PERCENTILES view is nowhere to be found either.
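
Oracle does ship percentile aggregate functions (the sketches above rely on PERCENTILE_DISC); what is missing is the raw per-execution data to feed them. As a hedged illustration, one possible (and quite limited) data source is V$SQL_MONITOR, here loaded into the same hypothetical exec_log table used above:

-- NOTE: V$SQL_MONITOR only records *monitored* executions (long-running,
-- parallel, or hinted with /*+ MONITOR */) and requires the Tuning Pack
-- license, so it is a partial data source at best. ELAPSED_TIME is in
-- microseconds; deduplication of already captured runs is left out for brevity.
INSERT INTO exec_log (sql_id, run_ts, elapsed_ms)
SELECT m.sql_id, m.sql_exec_start, m.elapsed_time / 1000
  FROM v$sql_monitor m
 WHERE m.sql_id = :sql_id
   AND m.status LIKE 'DONE%';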

The (sad) truth is that, at the moment, tracking percentiles is a “do it yourself” exercise. Stick around as I’ll be talking about it in the next article.
