rmoff

August 15, 2011

New blog from Oracle – OBI Product Assurance

Filed under: bi, obiee — rmoff @ 10:01

Blogging from Oracle itself about OBIEE has always been a bit sparse, certainly in comparison to what there is for the core RDBMS.

It’s good to see a new blog emerge in the last couple of months from OBI Product Assurance, including some nice ‘n spicy detailed config/tuning info.

Find it here: http://blogs.oracle.com/pa/.

There are a couple more OBI blogs from Oracle, but both are fairly stale:

August 8, 2011

OBIEE 10.1.3.4.2 released

Filed under: obiee — rmoff @ 08:04

A new version of OBI 10g (remember that?) has just been released, as the Oracle twitter machine announced:

Along with presumably a bunch of bugfixes, the release notes list new functionality in catalog manager:

Download 10.1.3.4.2 from here

Did you hear that thunk? That was me falling off my chair in shock

Filed under: HP, Itanium, obiee — rmoff @ 08:00

OK, a bit tired on a Monday morning, and so a bit sarcastic.

I’ve not really fallen off my chair, but I am shocked. I honestly didn’t think it would happen.

Oracle have finally released OBI 11g for HP-UX Itanium:

Change notes for the OBI 11g certification doc

In other news, patchset 10.1.3.4.2 for OBI 10g was released today; I wonder if/when we’ll get an HP-UX Itanium version? It is conspicuous by its absence from the download page, even from “Coming Soon”:

Have you defined CLIENT_ID in OBIEE yet?

Filed under: obiee, oracle — rmoff @ 07:34

Have you defined CLIENT_ID in your OBIEE RPD yet?
You really ought to.

As well as helping track down the users behind troublesome queries, it also tags dump files with the OBIEE user of an offending query, should the worst occur:

And the culprit is ...
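One common approach (a sketch only; it assumes the logged-in user is available in the standard NQ_SESSION.USER variable, and the exact setup varies between OBIEE versions) is to set the client identifier in the connection pool’s “Execute before query” connection script, which then flows through to V$SESSION and on to any trace/dump files:

-- Connection pool > Connection Scripts > Execute before query (hypothetical example)
call dbms_session.set_identifier('VALUEOF(NQ_SESSION.USER)')

-- On the database side the OBIEE user is then visible per session:
select sid, serial#, client_identifier, program
from   v$session
where  client_identifier is not null;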

For details, see:

August 4, 2011

ODI 10g connectivity problem with OCI

Filed under: odi — rmoff @ 11:45

Trying to connect to a repository in ODI using OCI. Target database is Oracle 11.1.0.7.

ODI Repository connection configuration - using OCI, not thin jdbc

Throws this error:

com.sunopsis.sql.l: Oracle Data Integrator Timeout: connection with URL jdbc:oracle:oci8:@ODIPRD and user ODI_USER.
	at com.sunopsis.sql.SnpsConnection.a(SnpsConnection.java)
	at com.sunopsis.sql.SnpsConnection.t(SnpsConnection.java)
	at com.sunopsis.sql.SnpsConnection.connect(SnpsConnection.java)
	at com.sunopsis.tools.connection.DwgRepositoryConnectionsCreator.a(DwgRepositoryConnectionsCreator.java)
	at com.sunopsis.tools.connection.DwgRepositoryConnectionsCreator.a(DwgRepositoryConnectionsCreator.java)
	at com.sunopsis.graphical.l.oi.a(oi.java)
[...]

Normally this error would be caused by a misconfigured Oracle client, for example a missing or incorrect tnsnames.ora entry. I validated the configuration and got a successful response from tnsping.

It turns out that there are two versions of the /drivers/ojdbc5.jar file, and only one of them works. The difference between the files is this:

Bytes    Date modified  File
-------  -------------  ------------------
2030460  Mar 11 00:22   ojdbc5.notwork.jar
1879924  Jul 25  2007   ojdbc5.works.jar

Extracting the jar files and examining META-INF/MANIFEST.MF shows the difference:

Comparison of manifest files for conflicting versions of ojdbc5.jar

Solution

Use the correct version of ojdbc5.jar.

Looking at the Oracle JDBC driver downloads, there are different builds of ojdbc5.jar for different versions of the database.

The version that worked for me was for 11.1.0.6 (1,879,860 bytes). The version that doesn’t work for me is presumably one for 11.2. I’ve not tested with the 11.1.0.7 one.

Security issue on OBIEE 10.1.3.4.1, 11.1.1.3

Filed under: bug, obiee, security — rmoff @ 10:00

July’s Critical Patch Update from Oracle includes CVE-2011-2241, which affects OBIEE versions 10.1.3.4.1 and 11.1.1.3.
No details of the exploit other than it “allows remote attackers to affect availability via unknown vectors related to Analytics Server.”

It is categorised with a CVSS score of 5 (on a scale of 10), with no impact on Authentication, Confidentiality, or Integrity, and “Partial+” impact on Availability. So to a security-unqualified layman (me), it sounds like someone could remotely crash your NQSServer process, but not do any more damage than that.

Patches 11833743 and 11833750 are available for 10.1.3.4.1 and 11.1.1.3 respectively.

July 13, 2011

Undocumented nqcmd parameters

Filed under: documentation, hack, nqcmd, obiee — rmoff @ 12:49

I noticed on Nico’s wiki (which is amazing by the way, it has so much information in it) a bunch of additional parameters for nqcmd other than those which are displayed in the default helptext (nqcmd -h).

These are the additional ones:

-b<super batch file name>
-w<# wait seconds>
-c<# cancel interval seconds>
-n<# number of loops>
-r<# number of requests per shared session>
-t<# number of threads>
-T (a flag to turn on time statistics)
-SmartDiff (a flag to enable SmartDiff tags in output)
-P<the percent of statements to disable cache hit>
-impersonate <the impersonate username>
-runas <the runas username>

Most of the parameters don’t appear to work in a default invocation of nqcmd in 10g or 11g, throwing an “Argument error near:” error. These are the ones that fail:

-b<super batch file name>
-w<# wait seconds>
-c<# cancel interval seconds>
-n<# number of loops>
-r<# number of requests per shared session>
-t<# number of threads>
-P<the percent of statements to disable cache hit>
-SmartDiff (a flag to enable SmartDiff tags in output)

I wonder if there’s an Open Sesame-type flag that Oracle Support use to enable these parameters. Or maybe they don’t even exist.

That leaves the handful of additional parameters which do work (or at least don’t throw an error) in the default invocation of nqcmd:

-T (a flag to turn on time statistics)
-impersonate <the impersonate username>
-runas <the runas username>

Oracle Support directed me to the documentation (Table 14-1), but this covers the standard parameters, not these extra ones.

Oracle Support also pointed out that undocumented parameters are not supported except under their direct instruction.

The -T flag looks very useful for performance testing purposes, as it appends this information to the output from nqcmd:

Clock time: batch start: 15:44:32.000 Query from: 15:44:32.000 to: 15:44:59.000 Row count: 0
 total: 27 prepare:  1 execute: 26 fetch:  0
Cumulative time(seconds): Batch elapsed: 26 Query total: 27 prepare:  1, execute: 26, fetch:  0, query count:  1, cumulative rows:  0

I’m intrigued to know where Nico got his list from (he couldn’t remember when I asked him :-)). Has anyone else come across these and/or know what they do and how to invoke them? Stuff like SmartDiff sounds tantalisingly interesting.

June 28, 2011

Oracle 11g – How to force a sql_id to use a plan_hash_value using SQL Baselines

Filed under: etl, oracle, performance, plan management, sql plan baseline — rmoff @ 14:13

Here’s a scenario that’ll be depressingly familiar to most reading this: after ages of running fine, and with no changes to the code, a query suddenly starts running for orders of magnitude longer than it used to.

In this instance it was an ETL step which used to take c.1 hour, and was now at 5 hours and counting. Since it still hadn’t finished, and the gods had conspired to bring down Grid too (unrelated), I generated a SQL Monitor report to see what was happening:

select DBMS_SQLTUNE.REPORT_SQL_MONITOR(
         sql_id       => '939abmqmvcc4d',
         type         => 'HTML',
         report_level => 'ALL') as report
from   dual;

(h/t to Martin Berger for this)

It showed a horrendous explain plan:

A very naughty plan

Using Kerry Osborne’s script to look at the plan_hash_value over time from AWR, it was clear that the CBO had picked a new, bad, explain plan.
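Kerry’s script itself isn’t reproduced here, but a rough AWR query along the same lines (just a sketch, using the same sql_id as above) looks something like this:

select ss.snap_id,
       ss.end_interval_time,
       st.plan_hash_value,
       st.executions_delta,
       round(st.elapsed_time_delta/1e6/nullif(st.executions_delta,0),1) as secs_per_exec
from   dba_hist_sqlstat st
       join dba_hist_snapshot ss
         on  st.snap_id         = ss.snap_id
        and  st.dbid            = ss.dbid
        and  st.instance_number = ss.instance_number
where  st.sql_id = '939abmqmvcc4d'
order  by ss.end_interval_time;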

So we knew the sql_id, and we knew the plan_hash_value of the plan which we wanted the CBO to use. But how to do this?

Back to Kerry Osborne again, and his article about SQL Plan Baselines. He (and others) writes in detail about what SQL Plan Baselines are and how they work, but in essence they let you tell Oracle which plan to use (or optionally, prefer) for a given sql_id.

Since the desired plan_hash_value was no longer in the cursor cache, we had to get it back from AWR, loading it in via a SQL Tuning Set. Here’s the code, with in-line comments explaining the function of each block:

/* 
Set up a SQL Baseline using known-good plan, sourced from AWR snapshots
https://rnm1978.wordpress.com/

In this example, sql_id is 939abmqmvcc4d and the plan_hash_value of the good plan that we want to force is 1239572551
*/

-- Drop SQL Tuning Set (STS)
BEGIN
  DBMS_SQLTUNE.DROP_SQLSET(
    sqlset_name => 'MySTS01');
END;
/

-- Create SQL Tuning Set (STS)
BEGIN
  DBMS_SQLTUNE.CREATE_SQLSET(
    sqlset_name => 'MySTS01',
    description => 'SQL Tuning Set for loading plan into SQL Plan Baseline');
END;
/

-- Populate STS from AWR, using a time duration when the desired plan was used
--  List out snapshot times using :   SELECT SNAP_ID, BEGIN_INTERVAL_TIME, END_INTERVAL_TIME FROM dba_hist_snapshot ORDER BY END_INTERVAL_TIME DESC;
--  Specify the sql_id in the basic_filter (other predicates are available, see documentation)
DECLARE
  cur sys_refcursor;
BEGIN
  OPEN cur FOR
    SELECT VALUE(P)
    FROM TABLE(
       dbms_sqltune.select_workload_repository(begin_snap=>22673, end_snap=>22710,basic_filter=>'sql_id = ''939abmqmvcc4d''',attribute_list=>'ALL')
              ) p;
     DBMS_SQLTUNE.LOAD_SQLSET( sqlset_name=> 'MySTS01', populate_cursor=>cur);
  CLOSE cur;
END;
/

-- List out SQL Tuning Set contents to check we got what we wanted
SELECT 
  first_load_time          ,
  executions as execs              ,
  parsing_schema_name      ,
  elapsed_time  / 1000000 as elapsed_time_secs  ,
  cpu_time / 1000000 as cpu_time_secs           ,
  buffer_gets              ,
  disk_reads               ,
  direct_writes            ,
  rows_processed           ,
  fetches                  ,
  optimizer_cost           ,
  sql_plan                ,
  plan_hash_value          ,
  sql_id                   ,
  sql_text
   FROM TABLE(DBMS_SQLTUNE.SELECT_SQLSET(sqlset_name => 'MySTS01')
             );

-- List out the Baselines to see what's there
SELECT * FROM dba_sql_plan_baselines ;

-- Load desired plan from STS as SQL Plan Baseline
-- Filter explicitly for the plan_hash_value here if you want
DECLARE
my_plans pls_integer;
BEGIN
  my_plans := DBMS_SPM.LOAD_PLANS_FROM_SQLSET(
    sqlset_name => 'MySTS01', 
    basic_filter=>'plan_hash_value = ''1239572551'''
    );
END;
/

-- List out the Baselines
SELECT * FROM dba_sql_plan_baselines ;

Now when the query’s run, it will use the desired plan.
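To confirm that the baseline is actually being picked up, the SQL_PLAN_BASELINE column of V$SQL should be populated for the cursor on its next execution (a quick sanity check; the plan name will differ in your system):

select sql_id, child_number, plan_hash_value, sql_plan_baseline
from   v$sql
where  sql_id = '939abmqmvcc4d';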

Things to note:

  • In 10g and 11gR1 the default for SELECT_WORKLOAD_REPOSITORY is to return only BASIC information, which excludes the plan! So DBMS_SPM.LOAD_PLANS_FROM_SQLSET doesn’t load any plans.
    • <grumble>It doesn’t throw a warning either, which it sensibly could, since the STS has no plan and it can see that</grumble>
    • This changes to TYPICAL in 11gR2 (thanks Surachart!)
  • Parameter “optimizer_use_sql_plan_baselines” must be set to TRUE for a baseline to be used
  • Flush the cursor cache after loading the baseline to make sure it gets picked up on the next execution of the sql_id (a sketch of one way to do this is below)
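A sketch of those last two checks (standard 11g syntax, but verify in your own environment; DBMS_SHARED_POOL needs to be installed and you need the privileges to call it):

-- Baselines are only used if this parameter is TRUE (the default)
show parameter optimizer_use_sql_plan_baselines

-- Purge the existing cursor for the sql_id so the baseline is used on the next execution
DECLARE
  l_name VARCHAR2(64);
BEGIN
  SELECT address || ',' || hash_value
  INTO   l_name
  FROM   v$sqlarea
  WHERE  sql_id = '939abmqmvcc4d';
  DBMS_SHARED_POOL.PURGE(l_name, 'C');
END;
/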

References:

Thanks to John Hallas for his help with this problem.

June 15, 2011

Global statistics high/low values when using DBMS_STATS.COPY_TABLE_STATS

Filed under: copy_table_stats, dbms_stats, DWH, oracle, Statistics — rmoff @ 08:16

There is a well-documented problem relating to DBMS_STATS.COPY_TABLE_STATS between partitions where high/low values of the partitioning key column were just copied verbatim from the source partition. This particular problem has now been patched (see 8318020.8). For background, see Doug Burns’ blog and his excellent paper which covers the whole topic of statistics on partitioned tables.

This post, Maintaining statistics on large partitioned tables, on the Oracle Optimizer blog details what dbms_stats.copy_table_stats does with regard to the high/low values:

It adjusts the minimum and maximum values of the partitioning column as follows; it uses the high bound partitioning value as the maximum value of the first partitioning column (it is possible to have concatenated partition columns) and high bound partitioning value of the previous partition as the minimum value of the first partitioning column for range partitioned table

However, two problems as I see them remain:

  1. Table global stats don’t update high_value for partitioning key
  2. high_value of one partition overlaps with low_value of the next.
    • Partition high-bound values are defined as LESS THAN, not LESS THAN OR EQUAL TO – therefore the maximum possible value of the column is less than this, not equal to it.
    • The minimum value of the partitioning column is correct using this method (although be aware of 10233186 if you use a MAXVALUE in your range partitioning).

Here’s a script that demonstrates the two issues, written and commented based on execution on 11.1.0.7:

/* copy_stats_1.sql

Illustrate apparent problem with high_val on partition statistics when using partition to partition statistics copy
  * Table global stats do not update high_value for partitioning key
  * high_value of one partition overlaps with low_value of the next.

Requires display_raw function by Greg Rahn, see here: http://tinyurl.com/display-raw

https://rnm1978.wordpress.com/

*/

set echo off
set timing off
set feedback off
set linesize 156
set pagesize 57
col owner for a10
col table_name for a30
col column_name for a30
col partition_name for a20
col low_val for a10
col high_val for a10
col num_rows for 999,999,999,999
col "sum of num_rows" for 999,999,999,999
break on stats_update_time skip 1 duplicates

clear screen

prompt ===== This script uses the DISPLAY_RAW function =======
prompt
prompt Available here: http://structureddata.org/2007/10/16/how-to-display-high_valuelow_value-columns-from-user_tab_col_statistics/
prompt
prompt ========================================================
prompt
prompt
prompt
prompt =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
prompt 1. Set up a partitioned table with data and examine the statistics
prompt
prompt
set echo on
pause
-- Create fact table
drop table BASE_DATA;
CREATE table BASE_DATA ( day_key integer, store_key INTEGER,  item_key  INTEGER, fact_001 NUMBER(15,0), fact_002 NUMBER(15,0), fact_003 NUMBER(18,2))
 PARTITION BY RANGE (DAY_KEY)
  SUBPARTITION BY HASH (store_key)
  SUBPARTITION TEMPLATE ( SUBPARTITION "SP1" , SUBPARTITION "SP2" , SUBPARTITION "SP3" , SUBPARTITION "SP4")
 ( PARTITION "PART_20110401"  VALUES LESS THAN (20110402))
 PARALLEL;

pause
-- Create indexes
CREATE UNIQUE INDEX BASE_DATA_PK ON BASE_DATA ("DAY_KEY", "STORE_KEY", "ITEM_KEY") LOCAL parallel;
create bitmap index base_data_ix2 on base_data (store_key) local parallel;
create bitmap index base_data_ix3 on base_data (item_key) local parallel;

pause 

-- Populate fact table
exec DBMS_RANDOM.SEED('StraussCookPieterson');
insert into BASE_DATA values (20110401,101,2000, dbms_random.value(0,999) , dbms_random.value(0,999) , dbms_random.value(0,999) );
insert into BASE_DATA values (20110401,102,2000, dbms_random.value(0,999) , dbms_random.value(0,999) , dbms_random.value(0,999) );
commit;

pause 

-- Gather full stats on table
set feedback on
exec dbms_stats.gather_table_stats(     ownname=>USER, tabname=>'BASE_DATA', granularity=>'AUTO');
set feedback off

pause 

select * from base_data order by day_key;
pause
-- Examine statistics

set echo off
prompt
prompt DBA_PART_TABLES
select partitioning_type, subpartitioning_type, partition_count from dba_part_tables where table_name='BASE_DATA' and owner=USER;

prompt
prompt DBA_TAB_STATS_HISTORY
SELECT table_name, partition_name, stats_update_time
FROM   dba_tab_stats_history
WHERE  owner = USER
AND table_name = 'BASE_DATA'
ORDER  BY stats_update_time asc;
pause

prompt
prompt  DBA_TAB_STATISTICS (table level only):
prompt  **************************************
select table_name,num_rows,
to_char(LAST_ANALYZED,'YYYY-MM-DD-HH24:MI:SS') "LAST_ANALYZED"
from DBA_TAB_STATISTICS
where table_name='BASE_DATA' and owner=USER
and partition_name is null
;

pause 

compute sum of num_rows on report
prompt
prompt DBA_TAB_STATISTICS (Partition level):
prompt *************************************
select table_name,partition_name,num_rows,
to_char(LAST_ANALYZED,'YYYY-MM-DD-HH24:MI:SS') "LAST_ANALYZED"
from DBA_TAB_STATISTICS
where table_name='BASE_DATA' and owner=USER
and partition_name is not null
and subpartition_name is null
order by table_name,partition_name
;
clear computes

pause 

prompt DBA_PART_COL_STATISTICS:
prompt ************************
select a.partition_name,a.column_name,to_char(a.LAST_ANALYZED,'YYYY-MM-DD-HH24:MI:SS') "LAST_ANALYZED",
display_raw(a.low_value,b.data_type) as low_val,display_raw(a.high_value,b.data_type) as high_val
from DBA_PART_COL_STATISTICS a
inner join dba_tab_cols b on a.table_name=b.table_name and a.column_name=b.column_name and a.owner=b.owner
where a.table_name='BASE_DATA' and a.owner=USER and a.partition_name is not null
and a.column_name = 'DAY_KEY'
;

prompt
prompt Observe: Partition high/low values for DAY_KEY - currently 1st April
pause 

prompt
prompt DBA_TAB_COL_STATISTICS:
prompt ***********************
select a.column_name,to_char(a.LAST_ANALYZED,'YYYY-MM-DD-HH24:MI:SS') "LAST_ANALYZED",
display_raw(a.low_value,b.data_type) as low_val,display_raw(a.high_value,b.data_type) as high_val
from DBA_TAB_COL_STATISTICS a inner join dba_tab_cols b on a.table_name=b.table_name and a.column_name=b.column_name and a.owner=b.owner
where a.table_name='BASE_DATA' and a.owner=USER and a.column_name = 'DAY_KEY'
;
prompt
prompt Observe: Table high/low values for DAY_KEY - currently 1st April
pause 

prompt
prompt
prompt
prompt =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
prompt 2. Create new partition and use dbms_stats.copy_table_stats to set the stats for it. Leave data in the table unchanged.
prompt
prompt
pause

set feedback on
set echo on
alter table base_data add PARTITION "PART_20110402"  VALUES LESS THAN (20110403);
exec dbms_stats.copy_table_stats(ownname=>USER, tabname=>'BASE_DATA',SRCPARTNAME=>'PART_20110401',DSTPARTNAME=>'PART_20110402');
pause
set feedback off

select * from base_data order by day_key;
pause
-- Examine statistics

set echo off
prompt
prompt DBA_PART_TABLES
select partitioning_type, subpartitioning_type, partition_count from dba_part_tables where table_name='BASE_DATA' and owner=USER;

prompt
prompt DBA_TAB_STATS_HISTORY
SELECT table_name, partition_name, stats_update_time
FROM   dba_tab_stats_history
WHERE  owner = USER
AND table_name = 'BASE_DATA'
ORDER  BY stats_update_time asc;
pause

prompt  DBA_TAB_STATISTICS (table level only):
prompt  **************************************
select table_name,num_rows,
to_char(LAST_ANALYZED,'YYYY-MM-DD-HH24:MI:SS') "LAST_ANALYZED"
from DBA_TAB_STATISTICS
where table_name='BASE_DATA' and owner=USER
and partition_name is null
;

compute sum of num_rows on report
prompt
prompt DBA_TAB_STATISTICS (Partition level):
prompt *************************************
select table_name,partition_name,num_rows,
to_char(LAST_ANALYZED,'YYYY-MM-DD-HH24:MI:SS') "LAST_ANALYZED"
from DBA_TAB_STATISTICS
where table_name='BASE_DATA' and owner=USER
and partition_name is not null
and subpartition_name is null
order by table_name,partition_name
;
clear computes

prompt
prompt Side note: Oracle doesn't aggregate the partition num_rows statistic up to global when doing a copy stats, 
prompt            so whilst the sum of partition num_rows is four, the global num_rows is still two.
prompt            Of course, at this point, there are only actually two rows of data in the table.
prompt   
prompt (also, observe that LAST_ANALYZED for the new partition is that of the partition from where the stats were copied, and isn't
prompt  the same as STATS_UPDATE_TIME for the partition on DBA_TAB_STATS_HISTORY - which makes sense when you think about it)
pause

prompt
prompt DBA_TAB_PARTITIONS:
prompt ********************
select partition_name, high_value from dba_tab_partitions where table_name='BASE_DATA' and table_owner=USER;
prompt
prompt DBA_PART_COL_STATISTICS:
prompt ************************
select a.partition_name,a.column_name,to_char(a.LAST_ANALYZED,'YYYY-MM-DD-HH24:MI:SS') "LAST_ANALYZED",
display_raw(a.low_value,b.data_type) as low_val,display_raw(a.high_value,b.data_type) as high_val
from DBA_PART_COL_STATISTICS a
inner join dba_tab_cols b on a.table_name=b.table_name and a.column_name=b.column_name and a.owner=b.owner
where a.table_name='BASE_DATA' and a.owner=USER and a.partition_name is not null
and a.column_name = 'DAY_KEY'
;
prompt
prompt See the Partition high/low values for DAY_KEY in the new partition (PART_20110402) into which we copied the stats:
prompt --> low_value is correct
prompt --> high_value is out of range for possible data in that partition
prompt -----> high_value of the partition is < 20110403, ** not **  <= 20110403
prompt
pause
prompt
prompt DBA_TAB_COL_STATISTICS:
prompt ***********************
select a.column_name,to_char(a.LAST_ANALYZED,'YYYY-MM-DD-HH24:MI:SS') "LAST_ANALYZED",
display_raw(a.low_value,b.data_type) as low_val,display_raw(a.high_value,b.data_type) as high_val
from DBA_TAB_COL_STATISTICS a inner join dba_tab_cols b on a.table_name=b.table_name and a.column_name=b.column_name and a.owner=b.owner
where a.table_name='BASE_DATA' and a.owner=USER and a.column_name = 'DAY_KEY'
;
prompt
prompt See the Table high/low values for DAY_KEY - currently 1st April, even though the stats on the individual partitions have a (wrong) high_val of 3rd April.
pause 
prompt
prompt
prompt
prompt =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
prompt 3. Add another new partition and use dbms_stats.copy_table_stats to set the stats for it. Leave data in the table unchanged.
prompt
prompt
pause

set feedback on
set echo on
alter table base_data add PARTITION "PART_20110403"  VALUES LESS THAN (20110404);
exec dbms_stats.copy_table_stats(ownname=>USER, tabname=>'BASE_DATA',SRCPARTNAME=>'PART_20110401',DSTPARTNAME=>'PART_20110403');
pause
set feedback off

select * from base_data order by day_key;
pause
-- Examine statistics

set echo off
prompt
prompt DBA_PART_TABLES
select partitioning_type, subpartitioning_type, partition_count from dba_part_tables where table_name='BASE_DATA' and owner=USER;

prompt
prompt DBA_TAB_STATS_HISTORY
SELECT table_name, partition_name, stats_update_time
FROM   dba_tab_stats_history
WHERE  owner = USER
AND table_name = 'BASE_DATA'
ORDER  BY stats_update_time asc;
pause

prompt  DBA_TAB_STATISTICS (table level only):
prompt  **************************************
select table_name,num_rows,
to_char(LAST_ANALYZED,'YYYY-MM-DD-HH24:MI:SS') "LAST_ANALYZED"
from DBA_TAB_STATISTICS
where table_name='BASE_DATA' and owner=USER
and partition_name is null
;

compute sum of num_rows on report
prompt
prompt DBA_TAB_STATISTICS (Partition level):
prompt *************************************
select table_name,partition_name,num_rows,
to_char(LAST_ANALYZED,'YYYY-MM-DD-HH24:MI:SS') "LAST_ANALYZED"
from DBA_TAB_STATISTICS
where table_name='BASE_DATA' and owner=USER
and partition_name is not null
and subpartition_name is null
order by table_name,partition_name
;
clear computes

pause

prompt
prompt DBA_TAB_PARTITIONS:
prompt ********************
select partition_name, high_value from dba_tab_partitions where table_name='BASE_DATA' and table_owner=USER;
prompt
prompt DBA_PART_COL_STATISTICS:
prompt ************************
select a.partition_name,a.column_name,to_char(a.LAST_ANALYZED,'YYYY-MM-DD-HH24:MI:SS') "LAST_ANALYZED",
display_raw(a.low_value,b.data_type) as low_val,display_raw(a.high_value,b.data_type) as high_val
from DBA_PART_COL_STATISTICS a
inner join dba_tab_cols b on a.table_name=b.table_name and a.column_name=b.column_name and a.owner=b.owner
where a.table_name='BASE_DATA' and a.owner=USER and a.partition_name is not null
and a.column_name = 'DAY_KEY'
;
prompt
prompt You can see that the high_value for the new partition is again too high for the possible values the partition could contain
prompt
prompt But this time we can also see the overlapping high_value of previous column with low_value of the next.
prompt   PART_20110401 has real stats 
prompt   PART_20110402 has copied stats, with a (wrong) high_value of 20110403
prompt   PART_20110403 has copied stats, with a low_value of 20110403 - which is the same as the high_value of the previous partition

prompt
pause
prompt
prompt DBA_TAB_COL_STATISTICS:
prompt ***********************
select a.column_name,to_char(a.LAST_ANALYZED,'YYYY-MM-DD-HH24:MI:SS') "LAST_ANALYZED",
display_raw(a.low_value,b.data_type) as low_val,display_raw(a.high_value,b.data_type) as high_val
from DBA_TAB_COL_STATISTICS a inner join dba_tab_cols b on a.table_name=b.table_name and a.column_name=b.column_name and a.owner=b.owner
where a.table_name='BASE_DATA' and a.owner=USER and a.column_name = 'DAY_KEY'
;
prompt
prompt Table high/low values for DAY_KEY - still 1st April, even though the stats on the individual partitions have a (wrong) high_val of 4th April.
pause 

prompt
prompt
prompt
prompt =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
prompt 4. Add data to the table, gather real statistics, examine them.
prompt
prompt
pause

set echo on
-- Populate fact table
insert into BASE_DATA values (20110402,101,2000, dbms_random.value(0,999) , dbms_random.value(0,999) , dbms_random.value(0,999) );
insert into BASE_DATA values (20110403,101,2000, dbms_random.value(0,999) , dbms_random.value(0,999) , dbms_random.value(0,999) );
commit;

pause 

-- gather full stats
exec dbms_stats.gather_table_stats(     ownname=>USER, tabname=>'BASE_DATA', granularity=>'AUTO');
pause

select * from base_data order by day_key;
pause
-- Examine statistics

set echo off
prompt
prompt DBA_PART_TABLES
select partitioning_type, subpartitioning_type, partition_count from dba_part_tables where table_name='BASE_DATA' and owner=USER;

prompt
prompt DBA_TAB_STATS_HISTORY
SELECT table_name, partition_name, stats_update_time
FROM   dba_tab_stats_history
WHERE  owner = USER
AND table_name = 'BASE_DATA'
ORDER  BY stats_update_time asc;
pause

prompt  DBA_TAB_STATISTICS (table level only):
prompt  **************************************
select table_name,num_rows,
to_char(LAST_ANALYZED,'YYYY-MM-DD-HH24:MI:SS') "LAST_ANALYZED"
from DBA_TAB_STATISTICS
where table_name='BASE_DATA' and owner=USER
and partition_name is null
;
pause

compute sum of num_rows on report
prompt
prompt DBA_TAB_STATISTICS (Partition level):
prompt *************************************
select table_name,partition_name,num_rows,
to_char(LAST_ANALYZED,'YYYY-MM-DD-HH24:MI:SS') "LAST_ANALYZED"
from DBA_TAB_STATISTICS
where table_name='BASE_DATA' and owner=USER
and partition_name is not null
and subpartition_name is null
order by table_name,partition_name
;
clear computes
prompt
prompt Table num_rows is now accurate
pause 

prompt
prompt DBA_TAB_PARTITIONS:
prompt ********************
select partition_name, high_value from dba_tab_partitions where table_name='BASE_DATA' and table_owner=USER;
prompt
prompt DBA_PART_COL_STATISTICS:
prompt ************************
select a.partition_name,a.column_name,to_char(a.LAST_ANALYZED,'YYYY-MM-DD-HH24:MI:SS') "LAST_ANALYZED",
display_raw(a.low_value,b.data_type) as low_val,display_raw(a.high_value,b.data_type) as high_val
from DBA_PART_COL_STATISTICS a
inner join dba_tab_cols b on a.table_name=b.table_name and a.column_name=b.column_name and a.owner=b.owner
where a.table_name='BASE_DATA' and a.owner=USER and a.partition_name is not null
and a.column_name = 'DAY_KEY'
;

prompt
prompt Partition high/low values for DAY_KEY in each partition is correct
pause

prompt
prompt DBA_TAB_COL_STATISTICS:
prompt ***********************
select a.column_name,to_char(a.LAST_ANALYZED,'YYYY-MM-DD-HH24:MI:SS') "LAST_ANALYZED",
display_raw(a.low_value,b.data_type) as low_val,display_raw(a.high_value,b.data_type) as high_val
from DBA_TAB_COL_STATISTICS a inner join dba_tab_cols b on a.table_name=b.table_name and a.column_name=b.column_name and a.owner=b.owner
where a.table_name='BASE_DATA' and a.owner=USER and a.column_name = 'DAY_KEY'
;
prompt
prompt Table high/low values for DAY_KEY are now correct
pause 

/* #EOF */

From everything that I’ve read, representative stats are essential for Oracle to generate the most efficient explain plan and so deliver optimal performance. Out-of-range problems caused by inaccurate statistics are frequently referenced. However, I’m out of my depth here in judging how much of an issue it is that the global statistics for this partitioning column don’t get updated.

Copying stats has never been intended as a replacement for real stats; that much is clear and frequently stated. It should be part of a carefully designed stats-gathering approach, based on your application’s data and frequency of loading. Hopefully the above, along with the other articles about copy stats out there, will add to the understanding of the functionality and, importantly, its limitations. Copying stats just buys you time in a critical load schedule, postponing the point at which you do a proper gather. All copy stats does is make the statistics a bit more representative of the data; it’s not a proper sample of the data, so the quality of the stats will never be as good as a proper gather. The real gather should happen at whichever comes first of:

  • the point at which you have time in your batch schedule
    or,
  • the point at which the stats are too unrepresentative of your data for the Oracle optimizer to generate a sufficiently efficient explain plan for your queries to run in the time the users require.

Maria Colgan from Oracle has kindly reviewed my script and findings, and commented:

Your argument that copy stats sets the high_value wrongly (too high) is correct. We do over-estimate the high value by setting it to the partition definition. As you correctly point out, no value in the partition will have reached that high_value, because a range partition is always specified as less than. We did this so that we can ensure there will be no greater value than this in the partition; otherwise we would have to guess what the max value is.

Maria also pointed out, with regard to the overlapping high/low values:

However, this is not the expected behavior. The goal of copy_stats is to provide a temporary fix to the out of range problem by providing a representative set of statistics for a new partition. It is not supposed to be a replacement for statistics gathering.


Reading:

Watch out for these other bugs that I came across references to:

  • 10234419 Extend dbms_stats.copy_table_stats to all range partitioning key columns
  • Doc ID 1292269.1: ORA-01422 while running dbms_stats.copy_table_stats
    • “This issue would occur when there are more than one schema with same table name.”

Many thanks to Maria Colgan and Doug Burns for reviewing this post and providing useful feedback.

May 26, 2011

Data Warehousing and Statistics in Oracle 11g – Automatic Optimizer Statistics Collection

Filed under: dbms_stats, DWH, oracle, Statistics — rmoff @ 13:35

Chucking a stick in the spokes of your carefully-tested ETL/BI …

My opinion is that automated stats gathering for non-system objects should be disabled on Oracle Data Warehouses across all environments.

All it does is cover up poor design or implementation which has omitted to consider statistics management. Once you get into the realms of millions or billions of rows of data, the automated housekeeping may well not have time to gather stats on all of your tables in each run. And then it becomes a quasi-lottery as to when your tables will get processed. Or what if you’re working with intra-day loads (e.g. near real-time)? The housekeeping job only runs once a day by default.

Even if you have a suitable window and are happy that the job gathers all that it needs to all of the time, what if you want to run your batch at the same time as the defined task window? If you want to run your batch highly parallel (and why wouldn’t you?), will the stats gather suffer? Or will it affect your batch by running the stats gather highly parallel too?

Suppose you are relying on the auto stats job, and don’t want to run it at the same time as your batch, so you come up with a suitable schedule for them to run at different times. What happens when your DW grows and you need to add a new batch process, and so have to move the window again? How do you know that moving it won’t affect the previous batch’s stats?

If you’re building on an existing system and want to test the performance of your new batch, how are you going to simulate the behaviour of your auto stats job? Even if you trigger it manually, are you going to accurately simulate the statistics that it may or may not need to gather each night? How do you factor in the magical 10% staleness to trigger a stats gather? That is one serious test rig if you want to reproduce all of that.

If you have stats management in place, then turning the auto stats off (for non-system objects) won’t hurt. And if you don’t, then the auto stats job will cover this up in your environments all the way from Dev through to Prod. The first time someone will ask about stats management is when you’re scratching your head over a report or ETL stage “that used to work fine”. And then the horrible truth will dawn that you screwed up, and should have built it into your design from the beginning.

As we say around here, if you want a job done properly, do it tha’ sen. Or rather, as Greg Rahn more articulately says:

I tend to advise people that for a DW the stats gathering should be part of the data flow (ETL/ELT) process and to disable the default job
[…]
If you wish to collect your statistics manually, then you should change the value of AUTOSTATS_TARGET to ORACLE instead of AUTO (DBMS_STATS.SET_PARAM(‘AUTOSTATS_TARGET’,’ORACLE’)). This will keep the dictionary stats up to date and allow you to manually gather stats on your schemas

Julian Dyke says something supporting this view too in his presentation here:

In complex databases do not rely on Auto job
– Unpredictable collection behaviour / duration
– Unpredictable execution plan changes

If you can’t disable the autostats job for whatever reason (maybe another application on the same DB would require changes to accommodate it), then you can shield yourself from its nefarious influences by using LOCK_SCHEMA_STATS to lock the stats on your schema(s). When you manually maintain the stats yourself, you either unlock them first, or use the FORCE option of the stats procedures.
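As a sketch of the moving parts discussed above (11g syntax; DW_OWNER is a made-up schema name, and the calls should be checked against your own environment):

-- Disable the automatic optimizer stats collection task; non-system objects are then your responsibility
BEGIN
  DBMS_AUTO_TASK_ADMIN.DISABLE(
    client_name => 'auto optimizer stats collection',
    operation   => NULL,
    window_name => NULL);
END;
/

-- Or, if the auto job has to stay enabled, lock your schema's stats so it leaves them alone
exec DBMS_STATS.LOCK_SCHEMA_STATS(ownname => 'DW_OWNER');

-- Your own ETL stats step then overrides the lock with FORCE
exec DBMS_STATS.GATHER_TABLE_STATS(ownname => 'DW_OWNER', tabname => 'BASE_DATA', granularity => 'AUTO', force => TRUE);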

Stabilisers on a high-performance motorbike

It’s easy enough to understand why Oracle built the Automated Stats job, and why it’s enabled by default. In an effort to move towards the Self-Managing Database, it makes sense to automate whatever you can, so that the scope for poor performance is reduced. Abstracting things slightly, the optimizer is just part of the DB code, and the statistics’ reason for being is to support the optimizer, so why not keep it under the covers where possible?
The trouble with this is that it might be fine for the middle of the road. The bog standard, quick-win, fire it and run project doing nicely standard OLTP work. One fewer thing for the developer to worry about. It’s probably quite good for lots of things. But Oracle RDBMS is a big beast, and an expensive bit of kit. Do you really want to meander along in the slow lane all the time, content to be using a one-size-fits-all approach?
Kawasaki motorbike with stabilisers

If you’re serious about exploiting the power of Oracle for your data warehouse, then you need to understand what needs to be done to get it to perform. One of the big factors is accurate, representative statistics. And to get these you have to take the stabilisers off and learn how to do it properly yourself, because you’re the one that understands your data. Data loads are going to be different, data distribution is going to be different, reporting is going to be different. There’s a finite set of patterns that you’ll find in standard DW methodology, but it’s up to you to read about them (Greg Rahn, Doug Burns, et al) and understand how they apply to your system, and not rely on Oracle’s approximation of a stats method for an average system.

Why do I need to manage the stats myself? Doesn’t Oracle do it automagically when they’re stale?

Doesn’t Oracle gather stats automagically when they’re stale?
Yes, it does, BUT:

  • Only if the window allocated to it allows for time
  • not stale ≠ representative stats.
    Or to rearrange the equation: your stats can be unrepresentative of your data, and the stats not be ‘stale’.

So even whilst they’re not “stale”, that’s not to say the global statistics for your table are still representative. After one day, the statistics are already becoming unrepresentative of the data (think max value of date, transaction number, etc), but are still not “stale”.
Oracle will, by default, consider a table “stale” once 10% of its rows have changed. But most DWs are going to be loading many millions of rows a day, so the 10% (default) change for a table to be considered stale is going to be quite a high bar. A table loading 20 million rows per day will hit c.1 billion rows in total after less than two months. But of a billion rows, a hundred million (10%) need to change before the table’s statistics are “stale”. 20 into 100 goes 5, so your statistics would only become “stale” roughly every five days.
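To make this concrete, the staleness flag (and, in 11g, the threshold behind it) can be inspected and adjusted directly; a sketch, again with a made-up schema name:

-- Which tables does the dictionary currently flag as stale?
select table_name, partition_name, num_rows, stale_stats, last_analyzed
from   dba_tab_statistics
where  owner = 'DW_OWNER'
and    stale_stats = 'YES';

-- In 11g the 10% threshold is just a preference, and can be lowered per table
exec DBMS_STATS.SET_TABLE_PREFS(ownname => 'DW_OWNER', tabname => 'BASE_DATA', pname => 'STALE_PERCENT', pvalue => '5');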

There’s a good presentation from OpenWorld 2008 by Oracle’s Real World Performance Group, entitled Real-World Database Performance Techniques and Methods. In it they discuss statistics management in detail, including the following “Six Challenges to the Cost Based Optimizer”:

1. Data skew
2. Bind peeking
3. Column low/high values
4. Data correlation between columns
5. Cardinality Approximations
6. The debugging process

At least two of these (data skew, and column low/high values – out-of-range) can occur (which is bad, mm’kay?) with statistics which are STALE=FALSE.

The point is, if you’re serious about getting the best explain plan from the CBO, you can’t rely on STALE as a sole indicator of how representative your statistics are of your data.

Let’s remember why we even care about good statistics. Some people seem to think that it’s optional. That it’s the geek equivalent of spending every weekend lovingly polishing the exterior of one’s favourite car – nice to have and ideally should be done, but ultimately just for show and won’t make it go any faster.
The DB is there to support the users of whatever application it is. And users, being users, want their answers now. This gives us our starting point, and a logical flow of conclusions drawn from it:

  • Our requirement is for performance, and repeatable, consistent performance.
    • To get this we want Oracle to execute the query as efficiently as possible.
    • To do this, Oracle needs to understand the data that it’s being asked to query.
    • If it doesn’t understand the data, how can we expect to access it in the most efficient way?
    • This understanding is imparted to Oracle through statistics.
    • So statistics need to be representative of the data.

As soon as you are changing data (eg a DW batch load), you need to consider whether the stats are still going to give the CBO the best chance of getting the right plan. If they aren’t as representative of your data as they could be then you can’t expect the CBO to come up with the best plan.
If your data doesn’t change much and once a week works for you then great. But the point is you need to understand your data, so that you can plan your statistics strategy around it so that Oracle can understand it.

Reading & References

Thanks to Greg Rahn for reviewing my post and suggesting some changes.
