ASO Agg Views Question | May 11 2017 at 1:35 PM
If the MDX report run after query tracking is enabled returns no data (for example, Period was DEC and Year was _2017), does it still aggregate?
I was under the impression that you just choose one member from each dimension, and it then aggregates that dimension at the same generation as the members selected.
Or am I missing something?
Thanks in advance.
Re: ASO Agg Views Question | May 12 2017, 9:29 AM
There was a presentation at Kscope15 about how to "game" the aggregations; you might want to search for that. I think it's a bit more complicated than how you spelled it out, but it sounds like you're on the right track. Are you running aggregations based upon query results after the MDX report is run? Are any of the members in your MDX report dynamic?
Using this method | May 12 2017, 1:14 PM
I greatly appreciate the response.
We were doing it based on the MDX reports by turning on query tracking. It wasn't really working, I think because the particular reports were based on Dec and 2017. There was no data there, so the reports came back empty, and I wasn't sure that would still drive aggregation.
I just ran that tool on a 30-dimension database. I ran 360 combinations (agg views) through it. It took about 3 hours, which is OK, as the 6 MDX reports we run take 8 hours since the reports have to run each time. It built the 360 agg views. The .dat file went from 25,001,984 to 135,208,960. Yes, it's big, but we aren't backing the .dat file up each night, so no issues there.
I'm doing some retrievals now to test performance. A few questions I have:
1 - If there is a formula on a member, can it still be in an agg view? Or is it skipped, since it always has to be calculated on retrieval?
2 - There is a max of 2048 possible agg views. I wouldn't dare do that, but what if I end up at 200 or so? Are there any issues there? Or, as long as our RAM and CPU can handle it, don't worry?
3 - On that website there is some code running in the background that creates the .csc file. Is that code published anywhere, in the Oracle docs perhaps? I was thinking of re-creating it in an Excel sheet using VBA and making it more dynamic, but I have no idea how the .csc is created.
Re: Using this method | May 12 2017, 1:36 PM
Re the MDX formulae - you're quite right that the formulae are calculated at query time, but internally that query decomposes into one or more regular "stored queries" that *will* drive view selection with query tracking enabled.
Per my understanding | May 12 2017, 1:57 PM
I would like to share my view on your questions.
Ans 1: Dynamic hierarchies are excluded from agg views. That's why adding or deleting members in dynamic hierarchies does not break existing agg view definitions.
Ans 2: ASO relies more on RAM than on CPU power. I'm not sure how much RAM/disk space you have allocated for the Essbase server.
Ans 3: Creating a .csc file (see sample below) is advantageous, as you can build agg views via MaxL after each data refresh. The process of creating the .csc file is simple:
Step 1 - build your agg views.
Step 2 - run the command: query database appname.dbname list existing_views; (try it via EAS)
You should get output with columns like "view id", "outline id", etc.
- The first line indicates the total number of views (20 in my example).
- The second line is the outline ID (4142187941 in my example).
- All subsequent pairs of lines (3rd and 4th line onward) are aggregate views, where the first line in each pair is the "view id" and the second is the size of the aggregate view as a factor of the input dataset.
All of the above has to match the system output except the "size of aggregate view", which you can define yourself (for example, 0.1 for all).
Contents of the .csc file:
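(The attached sample didn't survive here. Based on the format just described, a hypothetical file for two views might look like the following; the view IDs and sizes are illustrative, with the 17/0.980087482822801 pair being the one discussed later in this thread.)

```
2
4142187941
17
0.980087482822801
23
0.1
```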
Once the file is created, run this command from MaxL to build the agg views: execute aggregate process on database app.db;
Note: the .csc file contents need to be updated every time the agg view definitions change.
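Putting the steps above together, the MaxL side of the process might look like the sketch below. This is a sketch only; the statement spellings follow the standard MaxL aggregate grammar, and the app/db name (Sample.Basic) and credentials are placeholders to verify against your environment.

```
/* Sketch only - verify against your MaxL version. */
login admin password on localhost;

/* 1. Turn on query tracking, then run your MDX reports. */
alter database Sample.Basic enable query_tracking;

/* 2. After the reports finish, select and build views
      based on the tracked query data. */
execute aggregate process on database Sample.Basic
    based on query_data;

/* 3. List the resulting views; this output is what the
      .csc file is built from. */
query database Sample.Basic list existing_views;

logout;
```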
Ahhh | May 14 2017, 8:43 AM
So in the example below, how are the 17 and the 0.980087482822801 calculated? Or is that some secret sauce?
Agree with TimG | May 15 2017, 1:02 PM
I agree with TimG: there is a way to reverse engineer it, but I haven't tried it. So I would say the "17 ..." is some secret sauce generated by Essbase when you have query tracking on and build agg views. The easiest way to get it is from EAS, like I mentioned.
Secret Sauce | May 24 2017, 12:29 AM
TimG and I have worked out most of the secret sauce, but we never finished it for cubes with multiple hierarchies enabled or attribute dimensions (which are in essence alternate hierarchies).
In short the 17 represents the aggregated view number out of ALL POSSIBLE aggregated views. There is a numbering system which encodes each of the possible views into an aggregated view number. This formula is based on the order of the dimensions in the outline and for each dimension: the hierarchy number (in the case of multiple hierarchies) and the generation/level at which the aggregation is to be performed.
NOTE: the generation/level is counted in a slightly modified way from what you might normally expect - see my chapter in Developing Essbase applications if you really want to know.
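To make the idea of "encoding each possible view into a number" concrete, here is a hypothetical sketch of a mixed-radix encoding. This is NOT the actual Essbase algorithm (which, as noted above, counts levels in a modified way and handles multiple hierarchies); it only illustrates how one integer can uniquely identify a choice of aggregation level per dimension. All function names and the example dimension counts are invented for illustration.

```python
# Hypothetical sketch: encode one aggregation level per dimension
# into a single view number using mixed-radix positional notation.
# Not the real Essbase encoding, just the general idea.

def encode_view_id(levels, num_levels):
    """levels[i]     = chosen aggregation level for dimension i (0 = input level)
       num_levels[i] = number of stored levels in dimension i"""
    view_id = 0
    for lvl, n in zip(levels, num_levels):
        view_id = view_id * n + lvl  # accumulate one mixed-radix digit
    return view_id

def decode_view_id(view_id, num_levels):
    """Inverse of encode_view_id: recover the per-dimension levels."""
    levels = []
    for n in reversed(num_levels):
        levels.append(view_id % n)
        view_id //= n
    return list(reversed(levels))

# Example: three dimensions with 4, 3, and 2 stored levels.
print(encode_view_id([2, 1, 1], [4, 3, 2]))  # -> 15
print(decode_view_id(15, [4, 3, 2]))         # -> [2, 1, 1]
```

With a scheme like this, every combination of per-dimension levels maps to exactly one integer, which matches the description of the view ID being driven by dimension order and the level chosen in each dimension.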
The 0.980087482822801 is a measure of the "cost" of performing the aggregation and can be thought of as roughly equal to the percentage of the level 0 data that would have to be read to compute the aggregated view. Note that the actual cost may be less due to the pre-existence of other aggregated views.
Also note that this "cost" is not equal to the cost seen on the aggregation screen when run using the wizard. That "Cost" is based on the cost as defined here plus a measure of how useful the view will be. Usefulness is (again) roughly equal to the percentage of all possible queries (i.e. views) that could use this view instead of the level zero view. I must admit that I have not fully worked out the calculation to recreate the costs shown in the wizard, but I have tested enough to know that my understanding is a rough analogue of that calculation.
In short with the above knowledge you should arrive at the same conclusion that TimG and I arrived at regarding the manual specification of aggregated views. It is not quite worth the trouble. Better to redesign your cube to minimize the need for aggregation.
Dan.pressman @ NTUPLE.NET
Re: ASO Agg Views Question | May 12 2017, 1:35 PM
I also spoke about this at Kscope in 2011 (and not much has changed since then). You can find the presentations Tim Faitsch and I mention at ODTUG.com; you would need to sign up for a (free) associate membership with your email address to get to them.
I've never known the answer to the question about whether query tracking is driven by queries that don't return data so I just went and tested it. The short answer is "yes". Query tracking still changes the views chosen even if the query used returned only #Missing values.
How can you create the .csc | May 15 2017, 9:15 AM
Can you create the .csc file in MaxL, or is the only way to manually perform a Design Aggregation via EAS? I'm trying to figure out if it's possible to automate this without running query tracking, just executing the .csc file. But I need a way to dynamically create the .csc file.
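As a thought experiment for that "dynamically create" step, here is a minimal sketch (in Python rather than the Excel/VBA approach mentioned, and with an invented function name) that assembles .csc contents from a list of view IDs, following the file layout described earlier in the thread: a view count, an outline ID, then view id/size pairs, with sizes optionally overridden to a flat value such as 0.1.

```python
# Hypothetical helper: turn view data from
#   query database app.db list existing_views;
# into .csc file contents. The layout (count, outline ID, then
# view id / size pairs) follows the format described in this thread;
# verify it against your own EAS/MaxL output.

def build_csc(view_rows, outline_id, default_size=0.1):
    """view_rows: list of (view_id, size) tuples; size may be None
       to fall back to default_size. Returns the file contents."""
    lines = [str(len(view_rows)), str(outline_id)]
    for view_id, size in view_rows:
        lines.append(str(view_id))
        # Size can be overridden (e.g. 0.1 for all), per the post above.
        lines.append(str(size if size is not None else default_size))
    return "\n".join(lines) + "\n"

# Example using the values quoted elsewhere in this thread.
views = [(17, 0.980087482822801), (23, None)]
print(build_csc(views, 4142187941))
```

The same logic would translate directly to VBA: loop over the rows of a worksheet holding the view IDs and write the lines out to a text file.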
Re: How can you create the .csc | May 15 2017, 12:45 PM
There *is* an algorithm that connects the number of stored levels in each dimension with those level selections you show and the value in the .csc file. It's not documented but a number of people have reverse-engineered it.
In most cases using query tracking (or accepting the default views up to an experimentally-determined size) is perfectly adequate for good query performance.
Creating your own .csc file is pretty advanced. Unless you *know* the 'standard' methods won't get you good query performance, I wouldn't go there.
Afraid of that | May 15 2017, 1:34 PM
That's what I was afraid of. I was hoping that since the process can be completed manually via EAS, it could be replicated via a script/MaxL combination. We want this fully automated without actually running the MDX reports, as some take a long time. Currently we have a list of troublesome reports/retrievals that we converted to MDX. We then script it up by turning on query tracking, running all the MDX reports, and aggregating the database based on the query tracking results. It works, it just adds time, as the reports have to finish.
Re: Afraid of that | May 15 2017, 1:37 PM
Ah, OK. I don't think I understood your question. All of the items in your list *can* definitely be fully automated through MaxL.
However, if you aren't changing the number of stored levels in your database (i.e. you aren't adding or deleting dimensions, or changing the number of levels in stored hierarchies) then you don't necessarily *need* to recreate the .csc file on a regular basis.
Once you have saved the .csc file you can keep re-using it.
Re: Afraid of that | May 25 2017, 12:52 PM
You may also want to look at slimming down your MDX reports. One tip for using MDX to "warm up" a cube with query tracking is to make the MDX as small as possible. Query tracking does not recognize members; it only recognizes levels. So if I have an MDX query that runs Jan through Dec, and all of my months are on the same level, I can make the query smaller by only running Jan. From a query tracking perspective I will get the same results.
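As an illustration, a hypothetical warm-up query against the familiar Sample.Basic demo outline: one month and one product at the target levels is enough to register the same level intersection as a query spanning every month and product.

```
-- Illustrative warm-up query (Sample.Basic member names).
-- One member per level drives the same view selection as
-- a query covering every member at those levels.
SELECT {[Jan]} ON COLUMNS,
       {[100-10]} ON ROWS
FROM   Sample.Basic
WHERE  ([Actual], [Sales])
```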
Try to isolate the specific level intersections you want to pre-aggregate and write your MDX scripts accordingly. Running multiple, small and precisely focused, MDX queries will be more effective than running large queries.
I would also try to eliminate members with formulas from my "warm up" queries, as well as any upper level aggregated members in dynamic hierarchies.
I think there is a better approach | May 24 2017, 12:09 AM
You are trying to speed up your retrievals by doing aggregations. Why not focus on speeding up your cube so that you do not need any (or need very few)?
You will be much happier with the results.