Those of you who read my previous post will remember that ORACLE's implicit type conversion approach contradicts common sense; yet, you would probably agree that there are some excellent reasons for ORACLE to go with it.
This post will explore implicit type conversion further and, in a way, turn everything you read about in part 1 upside down.
If the motto of the first part of this post was "Always question your common sense", the motto of the second part will be "Do NOT accept anything as dogma" (even if you read it on this blog).
It is said that when you hear the words "this is common sense" or "everybody knows that", you should take notice, because when enough people say it, the results are likely to be exactly the opposite. Especially when you are dealing with ORACLE.
Here is a short puzzle to prove the point.
Imagine that you have a table with an ID column that stores, say, 100000 unique numbers from 1 to 100000. The column has a unique constraint (and a unique underlying index to enforce it).
Assume also that the hardware is regular (we are not yet on Exadata), the table is large enough, all ORACLE metadata is correct, statistics for both the table and the indexes are collected and up to date, and there are no unusual optimizer directives, nasty bugs, etc.
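A setup like this could be reproduced with something along these lines (the table and constraint names here are my own, not from the original post, and this is just a sketch):

```sql
-- Hypothetical setup matching the description above
CREATE TABLE t_ids (id NUMBER NOT NULL);

-- Populate with 100000 unique numbers from 1 to 100000
INSERT INTO t_ids (id)
SELECT level FROM dual CONNECT BY level <= 100000;
COMMIT;

-- Unique constraint; ORACLE creates a unique index to enforce it
ALTER TABLE t_ids ADD CONSTRAINT t_ids_uq UNIQUE (id);

-- Gather up-to-date statistics for the table and its indexes
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'T_IDS', cascade => TRUE);
```

With this in place, the optimizer has everything it needs to cost the query that follows.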
Here is the question:
What would be the expected execution plan for the following query: