rmoff

June 14, 2010

Measuring real user response times for OBIEE

Filed under: obiee, performance, sawserver — rmoff @ 12:54

@alexgorbachev tweeted me recently after picking up my presentation on Performance Testing and OBIEE.

His question got me thinking, and as ever the answer “It Depends” is appropriate here 🙂

Why is the measurement being done?

Without knowing the context of the work Alex is doing, how to measure depends on whether the measurement needs to be of:

  1. The actual response times that the users are getting, or
  2. The response times that the system is currently capable of delivering

This may sound like splitting hairs or beard-scratching irrelevance, but it’s not. If the aim of the exercise is to be able to make a statement along the lines of:

On Monday morning between 09:00 and 10:00 we saw system response times of x seconds

then we can consider simulating a user and recording response times this way. After all, what difference does it make whether it’s Jim, Jemima or Jeremy using the system, or a simulated web client? They’re all sending an HTTP request to the same web server, hitting the same Presentation Services, BI Server, and database.
If on the other hand we want to say something like:

On Monday morning between 09:00 and 10:00 response times experienced by the end user were x seconds

then we need to audit and trace user activity through some means. We can’t use a simulated user session, because it would only ever be that – simulated. If a user says that the system performance is awful then you need to be able to quantify and diagnose that, and the best way is through their eyes. A simulated user is only ever a best guess at user activity, and even a replay of past behaviour may not match what the user is doing currently.

These considerations also feed into the point at which we take the measurements. There is no out-of-the-box tracking of response times at the end user, but there is out-of-the-box tracking of response times at the BI Server. If you are happy to settle for the latter then you save yourself a lot of work. If your requirement is to give an extremely accurate figure for response time at the end user then Usage Tracking data from the BI Server is insufficient, because it doesn’t account for time spent in Presentation Services. However, if you know anecdotally that your reports aren’t that complex and time spent in Presentation Services is generally minimal, then Usage Tracking may well be good enough. Consider which is better: spending an hour configuring Usage Tracking to get response times accurate to within a few seconds (assuming that Presentation Services time is either minimal or consistent, so can be factored in), or spending x days or weeks trying to hack together a way of measuring times at the end user. Is the extra accuracy definitely necessary?
See slides 11-13 of my presentation for more discussion around this, and around defining the scope of a test and where measurements are taken.
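
Coming back to the Usage Tracking option: as an illustration of how little work that route is once it’s configured, here’s a minimal sketch of pulling hourly response times from the standard S_NQ_ACCT Usage Tracking table. The connection details are placeholders, and you should check the column names against your own Usage Tracking schema.

# Placeholder connection details - substitute your own Usage Tracking schema
sqlplus -s usage_tracking/password@ORCL <<'EOF'
-- Average and maximum query response times per hour, straight from Usage Tracking
SELECT TO_CHAR(start_ts, 'YYYY-MM-DD HH24') AS hour,
       COUNT(*)                             AS queries,
       ROUND(AVG(total_time_sec), 1)        AS avg_sec,
       MAX(total_time_sec)                  AS max_sec
FROM   s_nq_acct
GROUP  BY TO_CHAR(start_ts, 'YYYY-MM-DD HH24')
ORDER  BY 1;
EOF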

So, these thoughts aside, what are the options for examining response times at the end-user point of OBIEE?

Actual response times as experienced by users

As discussed above, Usage Tracking data will get you the response times at the BI Server, but doesn’t include anything between the BI Server and the user (Presentation Services, app/web server, network, client rendering).
The options that I can think of for recording timings at the end user are:

  1. Presentation Services Session Monitor – This is a point-in-time record in Presentation Services of each request that is served. It logs the Logical SQL, request type and source, user, records returned, and response time. For a single dashboard there may be many entries. It’s entirely transient so far as I know, so is only useful for observing a session as it happens. It would be nice if there were a web services interface to this but it doesn’t look like there is. You can access it directly at http://[server]:[port]/analytics/saw.dll?Sessions
  2. Log mining – sawserver – The Presentation Services log file, sawserver.log, can be configured to record detail down to a very low level, certainly enough to track user requests and responses. However, unless you’re diagnosing a problem for a specific user, this method is probably unrealistic, because that level of logging on a production server would be unwise.
  3. Client side logging – some kind of hack to monitor and record the user’s experience. Something like FireBug or Fiddler2 in logging mode? Not very viable unless it’s a low number of users and you have access to their web browsers and machines.

Bear in mind that options 1 and 2 only give the response time as far as Presentation Services; they do not include network and rendering at the client. In some cases these times can be considerable (particularly if you have badly designed reports).

Response times of the system

If you’re just trying to measure response times of requests sent to Presentation Services there are several possibilities. As above it depends on the aim of your testing as to which approach you choose:

  1. Simulate user client activity – Use a web client testing tool (e.g. LoadRunner, OATS, Selenium) to record and replay user actions in Answers/Dashboards as if through a web browser, and capture the timings. NB just because LoadRunner is best known for load testing, there’s no reason it can’t be used to replay individual users and measure standard response times rather than behaviour under load. I think (although haven’t tried it) HP’s BAC can also replay LoadRunner VUser scripts and capture and monitor timings over time, alerting on deviations.
  2. Go URL – Documented in Chapter 11 of the Presentation Services Admin Guide (and Nico has a nice summary and set of examples here), this is a way of issuing direct requests to Presentation Services by building up the request in the URL. Using this method you could wrap a simple wget / curl script around it and build up a set of timings that way (see the sketch after this list).
    curl -o c:\scratch\tmp.html "http://[server]:[port]/analytics/saw.dll?Dashboard&PortalPath=%2Fshared%2FFinancials%2F_portal%2FPayables&Page=Overview&NQUser=User&NQPassword=Password"

    Bear in mind that Answers/Dashboards are asynchronous so the first server response may not equate to a fully-loaded dashboard (you may get “Searching … ” first, and then the chart/table is delivered & rendered). See some of the discussion on my earlier postings around Load Runner, particularly this one.

  3. Web services – documented here, this would be similar to Go URL, in that it’s a way of requesting content from Presentation Services that can be scripted and thus timed – but again it doesn’t necessarily reproduce the full user experience, so make sure you’re aware of what you are and are not actually testing.
  4. Can anyone suggest other options?
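
To flesh out the Go URL option (2) above, here’s a minimal sketch of capturing timings with curl’s built-in -w timer. The server, dashboard path, and credentials are all placeholders, and per the caveat above this records the time to the first HTTP response, not to a fully rendered dashboard.

# Placeholders throughout - substitute your own server, dashboard path, and credentials
URL='http://server:port/analytics/saw.dll?Dashboard&PortalPath=%2Fshared%2FFinancials%2F_portal%2FPayables&Page=Overview&NQUser=User&NQPassword=Password'
for i in 1 2 3 4 5
do
    # -s silences the progress meter, -o discards the page body,
    # -w prints a timestamp and curl's total elapsed time for the request
    curl -s -o /dev/null -w "$(date +%H:%M:%S),%{time_total}\n" "$URL"
done >> go_url_timings.csv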

June 11, 2010

Scripts to extract information from OBIEE NQQuery.log

Filed under: hack, nqcmd, obiee, unix — rmoff @ 09:02

Here are a couple of little unix scripts that I wrote whilst developing my OBIEE performance testing method.

They’re nothing particularly special, but may save you the couple of minutes it’d take to write them 🙂

Note that some of this data is available from Usage Tracking, and where it is I’d recommend getting it from there, since a database is generally easier to query reliably and repeatably than a transient log file.

Extracting Logical SQL from NQQuery.log

First is how to extract logical SQL statements from NQQuery.log. This is useful if you want to build up a set of files to replay as a test load against OBIEE (a combined one-liner follows these steps):

  1. Use grep to extract just the logical SQL statements
    grep SAW_SRC NQQuery.log > NQQuery.lsql_statements.txt
  2. If desired, eliminate duplicates from the file
    sort -u NQQuery.lsql_statements.txt > NQQuery.lsql_statements.deduped.txt
  3. Use split to break the lsql statements into separate files (use the deduped file here if you ran step 2):
    split -l 1 NQQuery.lsql_statements.txt replay.

    This creates a set of files with a replay. prefix and a two-letter suffix, e.g.

    • replay.aa
    • replay.ab
    • etc
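
If you prefer, the three steps collapse into a single pipeline (split reads from stdin when the filename is -):

# Extract, dedupe, and split in one go
grep SAW_SRC NQQuery.log | sort -u | split -l 1 - replay.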

Extracting query metrics from NQQuery.log

The next snippet parses the end of NQQuery.log and outputs execution details for the most recent query:

  • Number of database queries
  • How many rows were returned
  • How long it took
#!/bin/sh
# get_nq_stats.sh
# https://rnm1978.wordpress.com
#
# Outputs query details of the most recently executed query on BI Server
# Make sure the OBIEE_HOME environment variable is set, or update this script to hardcode its location
#
# Usage:
#     get_nq_stats.sh <testref>
#
# Examples:
#   Append to file:
#     get_nq_stats.sh testrep01 >> nq_stats.csv
#   Output to screen:
#     get_nq_stats.sh testrep01
#
# Quoting $OBIEE_HOME and $1 guards against spaces in the path or reference
tail -n12 "$OBIEE_HOME/server/Log/NQQuery.log" | awk -v ref="$1" 'BEGIN {
physical=""
rows=""
elapsed=""
} {
# Pick the figures out of the summary lines the BI Server writes at the end of each query
if ($8=="physical") {gsub(/,/,"",$10); physical=$10} # number of physical DB queries
if ($2=="Rows")     {rows=$6}                        # rows returned to client
if ($2=="Logical")  {gsub(/,/,"",$8); elapsed=$8}    # logical query elapsed seconds
}
END {
# print "DB Queries,Rows,Elapsed sec"
print ref "," physical "," rows "," elapsed
}
'

This is intended for an isolated, sequential test environment: run one BI query, then run this script against NQQuery.log; run the next query, run the script again; and so on. Each time you call the script you pass a reference (that of the BI query you’ve just run), and this is output along with the data from NQQuery.log.
If you call this script and pipe the output to append to a CSV file, you can build up a file that looks like this (a sketch of such a loop follows the sample output):

Report reference,DB Queries,Rows,Elapsed sec
test_report_001,1,2171,165
test_report_002,1,12,143
test_report_003,2,10,6
test_report_004,1,1890,5
test_report_005,1,615,7
test_report_006,4,893,70
test_report_007,4,1407,77
test_report_008,1,148,126
test_report_009,1,4,48
test_report_011,1,3,152
test_report_012,1,15,430
test_report_013,8,452,141
test_report_014,1,21015,390
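
Tying the two snippets together, here’s a minimal sketch of such a replay loop, using nqcmd to run each extracted logical SQL file against the BI Server. The DSN and credentials are placeholders for whatever your environment uses.

# Placeholders: substitute your own BI Server DSN and credentials
echo "Report reference,DB Queries,Rows,Elapsed sec" > nq_stats.csv
for f in replay.*
do
    # Run the logical SQL against the BI Server, discarding the resultset
    nqcmd -d AnalyticsWeb -u Administrator -p Password -s "$f" -o /dev/null
    # Capture the stats for the query we've just run, tagged with the filename
    get_nq_stats.sh "$f" >> nq_stats.csv
done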

OBIEE Replay

These snippets form part of a set of Unix and Oracle scripts that I’ve developed under the title OBIEE Replay. The idea of it is to build a harness through which Logical SQL statements can be run against the BI Server and various metrics collected, all in a repeatable manner.
As and when I get time, I plan to post these scripts up here, so watch this space… 🙂

June 3, 2010

OBIEE 11g launch date – 7th July 2010

Filed under: obiee, obiee 11g — rmoff @ 09:08

OBIEE 11g is going to be officially launched on 7th July (this year!) in London:
Launch Event: Introducing Oracle Business Intelligence Enterprise Edition 11g

h/t to Twitter and blogs (including an Oracle-branded one) this morning. Good news travels fast! 🙂

Looks like John Minkjan won the bet; did he know something we didn’t? 😉 (although is “Launch” the same as “GA”?)
