
Regular Expressions for MDX


Nearly two years ago, Darren Gosbell published a project to help parse calculated measures in an MDX script. The implementation was based on regular expressions.

Regular expressions can be useful to decompose an MDX query (with elements explicitly defined, i.e. not using MDX functions) and retrieve the objects involved (dimensions, levels, members, etc.). I have often used them in performance improvement processes or for MDX pre-validation purposes.

Parsing MDX Script

An MDX script is made of commands. Each command can span multiple lines, but is always terminated by a semicolon.

An MDX script can also contain single-line comments (starting with //) and multi-line comments (enclosed by /* and */).


[Table: regular expressions for a single-line comment, a multi-line comment, and an MDX Script command]

Script Sample1.vbs prints the commands from an MDX script:

It starts by removing comments from the retrieved script, and then isolates each MDX command.
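The same two steps can be sketched in Python (Sample1.vbs itself is VBScript; the patterns below are my own assumptions, not the post's originals):

```python
import re

def extract_commands(mdx_script: str) -> list[str]:
    # 1. Strip multi-line comments (/* ... */), non-greedy, across lines.
    script = re.sub(r"/\*.*?\*/", "", mdx_script, flags=re.DOTALL)
    # 2. Strip single-line comments (// to end of line).
    script = re.sub(r"//[^\n]*", "", script)
    # 3. Each remaining command is terminated by a semicolon.
    return [cmd.strip() for cmd in script.split(";") if cmd.strip()]

script = """
/* calculated measures */
CREATE MEMBER CURRENTCUBE.[Measures].[X] AS 1; // inline comment
CREATE MEMBER CURRENTCUBE.[Measures].[Y] AS 2;
"""
print(extract_commands(script))
```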


The MDX Script command regular expression won't parse correctly if the command contains an MDX object or a string literal with a semicolon in it.

This is the case for the following command in the Adventure Works cube MDX script:

If IsEmpty(([Measures].[Amount],[Scenario].[Scenario].[Budget]))

Then Format_String([Scenario].[Scenario].[Budget Variance %]) = "#;;;Not Budgeted"

End If;
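The failure is easy to reproduce with a naive semicolon split (a sketch; the semicolons inside the "#;;;Not Budgeted" literal get treated as command terminators):

```python
# The problematic Adventure Works command, as a single string:
command = ('If IsEmpty(([Measures].[Amount],[Scenario].[Scenario].[Budget])) '
           'Then Format_String([Scenario].[Scenario].[Budget Variance %]) = '
           '"#;;;Not Budgeted" End If')

# A naive split treats the semicolons inside the format string as
# command terminators, cutting one command into four fragments.
fragments = command.split(";")
print(len(fragments))  # 4
```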

Parsing MDX Command

We will focus here on retrieving MDX objects:

  • Cubes
  • Dimensions
  • Hierarchies
  • Levels
  • Members
  • Measures


[Table: regular expression for an MDX object]

The purpose is to check that an object exists before running an MDX command. At some stage, we might need to classify an MDX object: is it a dimension, a level or a member? To do so, we will query metadata.

Script Sample2.vbs extends Sample1.vbs by parsing each MDX Script command and guessing the type of each MDX object.


These regular expressions will only work for "well-formed" MDX, where objects are enclosed in square brackets.
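For bracket-enclosed objects, the extraction itself can be sketched with a simple pattern (an assumption of mine, good enough for dotted chains of bracketed names):

```python
import re

# Matches dotted chains of bracket-enclosed names, e.g.
# [Measures].[Amount] or [Scenario].[Scenario].[Budget].
OBJECT_RE = re.compile(r"\[[^\]]+\](?:\.\[[^\]]+\])*")

command = ("SELECT [Measures].[Amount] ON 0 FROM [Adventure Works] "
           "WHERE ([Scenario].[Scenario].[Budget])")
objects = OBJECT_RE.findall(command)
print(objects)
```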


If you are interested in regular expressions, here are some useful links:



[Table: regular expressions for an MDX set and an MDX tuple]

While regular expressions will never replace a true parser, they can still be very handy in any situation where MDX command extraction is required.

Posted in MDX | 100 Comments

UPDATE CUBE in subcubes


Excerpt from BOL: "a subcube is a subset of a cube representing a filtered view of the underlying data".

Subcubes can be used for different purposes:

  • Simplify user navigation by displaying only needed cube items.
  • Improve query performance by limiting query evaluation space.

Implementing security

Subcubes can be interesting for implementing application-level security, similar to dimension security (I talked about this in a recent article dedicated to SSAS 2008 personalization extensions).

For example, one user can have different positions in a company:

  • As a head of the IT department, he will need to see IT related data for all the branches of the enterprise.
  • As a head of a branch, he will need to have only access to his branch’s data.

This requirement could be achieved with SSAS security by defining two roles (the role being specified at connection time with the Roles connection property), but that would require SSAS administration operations each time a filter needs to be modified, and could lead to a huge number of roles in the SSAS database.

Note: of course, application-level security will never replace server-side security. We are rather in a situation where cube filters need to be changed dynamically, as in a Budget and Planning application for instance.

Now that we have decided to use subcubes for handling application-level security, what about using UPDATE CUBE in such a context?

Case study

For simplicity, we will use a very simple write-enabled cube called Organization, containing just one dimension. This dimension (also called Organization) represents the company structure:


In our case, our user has access to all divisions in USA Operations, but only wants to see the eastern divisions (Northeast and Southeast).

Creating subcube

The following MDX command will be used:

CREATE SUBCUBE Organization AS SELECT {[Organization].[Northeast Division],[Organization].[Southeast Division]} ON 0 FROM Organization

Now if we query the dimension members, we will only see our two divisions (and their ascendants):



Let's issue the following command to spread a value over our two divisions:

UPDATE CUBE Organization SET ([Organization].[USA Operations])=1000

We expect our value to be evenly distributed over our two divisions (i.e. 500 on each). This is not what we get:


Where has the rest of the value gone?

If we temporarily drop our subcube, and query again, here is what we see:


What happened is that UPDATE CUBE distributed the amount 1000 over all of [USA Operations]'s descendants, including members outside the subcube (highlighted in blue).
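A sketch of the allocation arithmetic (the five-division count is an assumption for illustration; the actual Adventure Works structure may differ):

```python
total = 1000
# Assumed division list, for illustration only:
all_descendants = ["Northeast", "Northwest", "Central", "Southeast", "Southwest"]
in_subcube = {"Northeast", "Southeast"}

# UPDATE CUBE allocates equally over ALL descendants of the updated cell,
# including the ones filtered out of the subcube.
share = total / len(all_descendants)
visible_total = share * len(in_subcube)
print(share, visible_total)  # 200.0 per division, only 400.0 visible
```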

What have we learned?

UPDATE CUBE can't be used in a subcube when it's issued on a cell that has descendants filtered out of the subcube.


Given what we've previously learnt, we will now work on individual divisions.

Let's issue the following commands to update the Northeast division (at this stage, we are back in a subcube context):


UPDATE CUBE Organization SET ([Organization].[Northeast Division])=500

The command works fine:


At some point, we will want to validate (i.e. persist) our updates. The following command will do so:


The command seems to work properly, as a success message is returned. For a write-enabled cube, the COMMIT TRAN statement has the effect of inserting records into a writeback table. Here we should get one record, as our UPDATE CUBE command was applied on a leaf cell.

If we check the content of the writeback table, we see that no records were inserted: COMMIT TRAN has been ignored!

We notice the same effect with a ROLLBACK TRAN command: updated values are not discarded!

Does this mean we will never be able to persist our updates?

The solution

If we recall the previous scenario, here is what we did:

  1. Create subcube
  2. Start transaction
  3. Update cube
  4. Commit transaction

Now, let’s slightly change our scenario:

  1. Start transaction
  2. Create subcube
  3. Update cube
  4. Drop subcube
  5. Commit transaction
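The working scenario can be summarized as an ordered command sequence; a minimal Python sketch (the list items are the MDX statements from this post; sending them through an ADOMD session is left out):

```python
# The ordering that works: the transaction wraps the subcube, so the
# subcube is dropped before COMMIT TRAN is issued.
commands = [
    "BEGIN TRAN",
    "CREATE SUBCUBE Organization AS SELECT "
    "{[Organization].[Northeast Division],[Organization].[Southeast Division]} "
    "ON 0 FROM Organization",
    "UPDATE CUBE Organization SET ([Organization].[Northeast Division])=500",
    "DROP SUBCUBE Organization",
    "COMMIT TRAN",
]

# The invariant to preserve: DROP SUBCUBE must come before COMMIT TRAN.
assert commands.index("DROP SUBCUBE Organization") < commands.index("COMMIT TRAN")
```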

This time it works, and records are inserted into the writeback table.

What have we learned?

UPDATE CUBE commands can be sent in the context of the subcube, but the transaction must be handled outside of the subcube.

This means that each time we need to validate or cancel our updates, we will have to temporarily drop our subcube.

Note the behavior is the same with SSAS 2005 and SSAS 2008.


While there are some limitations to UPDATE CUBE in a subcube context (updates only on cells without filtered descendants, transaction handled outside the subcube), it is still worth using when business logic such as application-level security needs to be implemented.

Posted in Analysis Services | 9 Comments

Simulating Parent-Child dimension with data on leaf members only


Parent-child dimensions

When SQL Server introduced Parent-Child dimensions with Analysis Services 2000, it came with different options regarding the data associated with members, also called Data Members.

This option could be set via Members With Data property.

Excerpt from SSAS 2000 BOL: "The Members With Data property indicates whether nonleaf members of a parent-child dimension are allowed to have associated fact table data. By default, nonleaf members are not allowed to have associated fact table data, so the property is initially set to Leaf members only. The related Data Member Caption Template property controls the names of data members when the Members With Data property is set to Nonleaf data visible. For more information about these properties, see Properties Pane (Cube Editor Data View) and Properties Pane (Dimension Editor Data View)".

There were three possible values for Members With Data property: Leaf members only, Nonleaf data visible and Nonleaf data hidden.

Analysis Services 2005 changes

With the coming of SSAS 2005, the Leaf members only value for Parent-Child dimensions is no longer implemented.

This shouldn't be a real issue, as Nonleaf data hidden can be used as a workaround to give the illusion that only leaf members have data.

This workaround is satisfying in most situations, except for the following case.

When Leaf members only is required

Let’s consider Organization Parent-Child dimension in SSAS 2005 Adventure Works DW sample database.


Let's assume we have data only for leaf members (i.e. non-leaf members aggregate leaf ones), even if this is not the exact business meaning in the sample.

If we execute the following UPDATE CUBE command at [European Operations] member level:

UPDATE CUBE [Organization] SET ([Organization].[AdventureWorks Cycle].[European Operations])=60

We expect an equal distribution of 60 over [France] and [Germany], 30 each. Instead, we get 20 on [France] and 20 on [Germany] (but still an aggregated value of 60).


Where are the missing 20?

Remember our dimension is set with the Nonleaf data hidden option (as a replacement for the vanished Leaf members only option). This means member [European Operations] has its own data, so the aggregated [European Operations] in reality has three children (and not two, as shown in the query result). That's where the missing 20 has gone: to the [European Operations] Data Member (20+20+20=60).
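The arithmetic behind the missing 20, as a sketch (equal allocation over children, with the hidden Data Member counted as an extra child):

```python
total = 60

# With "Nonleaf data hidden", the parent's own Data Member counts as a
# hidden third child, so the equal allocation runs over three members.
children_with_data_member = ["France", "Germany", "European Operations (data member)"]
share = total / len(children_with_data_member)
print(share)  # 20.0 on each, including the hidden data member

# With a standard dimension (no data member), only the two real children share:
share_standard = total / 2
print(share_standard)  # 30.0
```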

This is a serious issue in a Budget and Planning context.

One could say the problem shouldn’t occur if the cube is pre-filled with fact data for all Leaf members. An UPDATE CUBE command combined with the USE_WEIGHTED_ALLOCATION option would then distribute data as expected.

The point here is that we don't always have fact data. If a company creates a new division (a new member in the dimension), it will start its budget from scratch, i.e. with no fact data.

So what can we do?


Obviously, we must avoid Parent-Child dimensions and go back to standard ones.

The HideMemberIf property of hierarchy levels can be used to give the illusion of an unbalanced structure (as with Parent-Child).

Data source

The difficult part is making the dimension data source fit the new dimension type.

Our Organization Parent-Child dimension data source had a structure based on a key field (OrganizationKey) and a parent field (ParentOrganizationKey).



With a standard dimension, we need a table structure with as many fields as levels.

This structure change can be handled quite easily if we have control on the data source.

This is not always the case:

  • Data source can already exist when this change is required.
  • Parent-Child table structure must be kept for compatibility and usability reasons.

On the fly transformation

There are different techniques to transform a Parent-Child structure to a hierarchical one.

Many stored procedures have been published on the internet by T-SQL gurus. One of these stored procedures could be executed by a SQL Server Integration Services (SSIS) package before dimension processing.

For some reason, we may need/want to keep all the logic at the SSAS level, without relying on external packages. In this case, our standard dimension data source can be linked to a named query in the Data Source View. Our named query will be defined using a recursive query, which has been possible since SQL Server 2005 with Common Table Expressions.

Using such a recursive query on the Parent-Child table structure results in the following structure and dataset for our Organization dimension data source:



If we analyze the transformed table structure in detail, we notice it is organized in groups of three fields, one group per level:

  • k_Level<N>: OrganizationKey for level <N>, the member key.
  • n_Level<N>: OrganizationName for level <N>, the member name.
  • PercentageOfOwnership_Level<N>: for level <N>, a member property.

For leaf members at a smaller depth, values are duplicated down the descendant levels.
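The flattening the recursive query performs can be sketched in Python (field names follow the k_Level&lt;N&gt;/n_Level&lt;N&gt; convention; the sample rows and max_depth are illustrative assumptions, not the actual Adventure Works data):

```python
def flatten(rows, max_depth):
    # rows: (key, parent_key, name) tuples; returns one record per leaf
    # member, with leaf values duplicated on deeper levels (HideMemberIf
    # will later hide those duplicated members).
    by_key = {k: (p, n) for k, p, n in rows}
    parents = {p for p, _ in by_key.values() if p is not None}
    leaves = [k for k in by_key if k not in parents]

    def path(k):
        p, n = by_key[k]
        return (path(p) if p is not None else []) + [(k, n)]

    out = []
    for leaf in leaves:
        chain = path(leaf)
        chain += [chain[-1]] * (max_depth - len(chain))  # duplicate leaf downwards
        rec = {}
        for lvl, (k, n) in enumerate(chain, 1):
            rec[f"k_Level{lvl}"] = k
            rec[f"n_Level{lvl}"] = n
        out.append(rec)
    return out

rows = [
    (1, None, "AdventureWorks Cycle"),
    (2, 1, "European Operations"),
    (3, 2, "France"),
    (4, 2, "Germany"),
    (5, 1, "NA Operations"),
]
records = flatten(rows, max_depth=3)
for r in records:
    print(r)
```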

Dimension design

Dimension design is straightforward. Each group of fields will be used to create an attribute. Then, the attributes will be used to build a user hierarchy.

Finally, the HideMemberIf property will be set for each hierarchy level with the value OnlyChildWithParentName. This makes those duplicated descendant levels (see above) invisible to the client.


Using it

If we browse the new dimension, it will look the same as the original Parent-Child dimension.

If we execute the same UPDATE CUBE command at [European Operations] member level:

UPDATE CUBE [Organization] SET ([Organization].[AdventureWorks Cycle].[European Operations])=60

We get the expected result, with the value 60 equally distributed over [France] and [Germany]:



The lack of a Leaf members only value for the Members With Data property in SSAS 2005 can be worked around with unbalanced standard dimensions.


  • Query performance: it is recommended to avoid Parent-Child dimensions anyway, as their intermediate values are not aggregated.
  • Process performance: depending on the dimension data source size and depth, the recursive query execution can be time consuming when the dimension needs to be processed.
  • Display: SSAS doesn't properly estimate member children count for unbalanced standard dimensions. This can be seen when browsing the dimension. On the screenshot below, highlighted leaf members that are not at the deepest level have an associated [+] symbol, indicating they can be drilled down (which is not the case).


Posted in Analysis Services | 7 Comments

Analysis Services 2008 Personalization Extensions


SQL Server Analysis Services 2008 introduces a new feature called Personalization Extensions.

As said in Books Online, "Analysis Services personalization extensions enable developers to create new Analysis Services objects and functionality and to provide these objects and functionality dynamically in the context of the user session".

What’s behind this functionality?

Running the sample

The first step to have a better understanding is obviously running the sample provided by Microsoft to illustrate this feature.

The sample can be downloaded here (SqlServerSamplesAS.msi).

Once downloaded and installed, the project dedicated to Personalization Extensions is located in folder \Programmability\Extensibility\PersonalizationExtensions.

It contains a C# project and a ReadMe.htm file, which contains a very precise step-by-step procedure to set up the sample.

Step-by-step guide

While there are a few mistakes in the ReadMe.htm file regarding the context verification for each user (which items are visible or not), I have to say the guide is very precise and straightforward: we immediately understand the overall implementation of Personalization Extensions.


  • I deployed AdventureWorks for SQL Server 2005 to my SQL Server 2008 instance, as I didn't manage to restore the SQL Server 2008 samples (SQL Server didn't recognize FILESTREAM activation).
  • The Metadata ISV_1 folders were supposed to expand to sub-folders such as Common, Finance, and so on. This didn't work in my case; instead, I got ISV_1/Common, ISV_1/Finance, etc. Fortunately, this didn't prevent the sample from working correctly.


Debugging the Personalization Extensions (PE) sample really helps in understanding its mechanism.

Everything works via event registration. There are two levels of event registration:

  • Server: when PE is "mounted", SSAS calls it, and exposes a context object (class Microsoft.AnalysisServices.AdomdServer.Context) containing a reference to the server (Microsoft.AnalysisServices.AdomdServer.Server). The server object exposes two public events: SessionOpened and SessionClosing.
  • Session: when a user connects, SessionOpened event is called (SessionClosing on disconnect). The context object contains a reference to the current connection (class Microsoft.AnalysisServices.AdomdServer.AdomdConnection) which exposes two public events: CubeOpened and CubeClosing.

Subscribing to the four events allows us to trigger MDX commands such as ALTER CUBE, CREATE SET, CREATE KPI, etc.

Usage scenarios

Personalization Extensions will give us the ability to extend a cube, depending on the user context, without updating the cube itself.

A question that comes immediately to my mind is: can it be used for security?

Can it be used for security?

SSAS authentication relies strongly on Windows accounts. If we want to browse the same cube with two security strategies, we need two different Windows accounts.

This can be annoying, for example in a Budget and Planning application, where a same user can have different positions in the company:

  • As a head of the IT department, he will need to see IT related data for all the branches of the enterprise.
  • As a head of a branch, he will need to have only access to his branch’s data.

Those requirements could of course be combined into one security strategy, but the user may want to see only the data related to the position he currently "incarnates" when connecting to the cube.

As we’ve seen before, we can trigger MDX commands from a PE at connection time:

  • CREATE SUBCUBE MDX statement can be used to alter cube space, and hide unwanted dimension member. This would make a good candidate for dimension security substitution.
  • CREATE CELL CALCULATION statements can be used to change/hide cell data (even if it’s not its primary purpose). This would make a good candidate for cell Read security substitution, but not for Read/Write security, which is used in a Budget and Planning scenarios.

Feature request

Back to the OLAP Services 7 and Analysis Services 2000 times, cell security was evaluated on client side.

While messing with cell security, I discovered it was applied at connection time by Pivot Table Services, via GRANT CELL statements.

The syntax looked like: GRANT CELL READ|WRITE ON CURRENTCUBE FOR ‘<MDX formula returning 0 or 1>’.

Of course, those commands were reserved to PTS, in the sense that it wasn't possible to execute them once connected.

It would be nice if we could execute such statements in the context of a PE.


Even if Personalization Extensions are not an obligatory step in OLAP design, they can be very handy in situations where user oriented customization is necessary, without the need to update cube structure.

Posted in Analysis Services 2008 | 3 Comments

SSAS 2008 MOLAP Writeback (2)


In a recent post, I started investigating around a new Analysis Services 2008 feature: writeback partition with MOLAP storage.

Unfortunately, I got stopped by a bug in SQL Server 2008 CTP 5 which disallowed me from committing my updates.

SQL Server 2008 CTP 6 is now available. I can now continue from where I stopped last time.

Committing updates

After our UPDATE CUBE MDX command from SSMS, let’s run a COMMIT TRAN statement to validate our transaction.

This step works perfectly with CTP 6. We can see records inserted by SSAS into the writeback table.

What’s behind the scene?


When we fully process a MOLAP writeback partition, we notice an extra item (compared to processing a ROLAP one) in the Process Progress dialog box. This item shows the SQL query to the writeback table used to feed the MOLAP writeback partition.



If we activate SQL Server Profiler during COMMIT TRAN operation, we can see the following actions.

With ROLAP writeback:


With MOLAP writeback:


Extra WriteData events are issued, showing SSAS 2008 feeding the MOLAP partition with writeback table data. This incremental feeding seems to be done by processing a temporary partition (named here WriteTable_Budget_INCR_UPDATE_TEMP_epn1o_), then merging it with the writeback partition.

A very interesting point: if we profile the relational database during the commit operation, we see the INSERT BULK T-SQL statement into the writeback table, but no SQL query for processing the temporary incremental writeback partition. It seems that SSAS takes the in-memory session updates and "direct data loads" them into this partition, as SQL Server Integration Services (SSIS) does with Partition Processing Data Flow items.

Performance gain

What improvement can we expect from a MOLAP writeback partition?

Test 1 (query)

To gauge the performance gain, we can do a simple test: in both cases (ROLAP and MOLAP), query the same cell and see what's being done in SQL Server Profiler.

Test sequence:

  1. Clear cube cache.
  2. Query cell.
  3. Query cell again (to check cache effect).

First query: data is read from the measure group partition. With the ROLAP writeback partition, an extra ExecuteSQL event is triggered that reads the writeback table data for the corresponding cell. Data is cached.

The second query reads data from the cache.

The need to issue a SQL statement at query time makes MDX query processing longer. This extra step will dramatically increase query execution time in situations where reading a cube cell relies on many other cells (in the case of calculations, for example).

Test 2 (update)

The second test intends to observe cube cache consistency after committing updates.

Test sequence:

  1. Clear cache.
  2. Query cell twice.
  3. Update cell
  4. Commit transaction.
  5. Query cell.

In ROLAP mode, the UPDATE CUBE statement triggers two ExecuteSQL events to read writeback table data. The commit step doesn't show a big difference between the two modes (see the Profiling section earlier in the article).

In both cases, the commit step clears the cube cache. As a consequence, the next cell read won't be able to rely on the cube cache.

It would be nice if SSAS could handle two caches: one for the fact data and one for the writeback data. This would reduce the cost of committing updates.

Which method is faster?

I wrote a VB Script (see previous article on performance) that executes steps of update test (test 2).

I have stressed SSAS with this simple benchmark (running it 10 times), for both ROLAP and MOLAP writeback modes.

The results are shown in the table below (times are in seconds):

[Table: benchmark times for ROLAP vs MOLAP writeback, 10 runs]
Querying and updating are faster with MOLAP writeback (mainly because no ExecuteSQL event is required to query the writeback partition).

Committing is much slower with MOLAP writeback (probably because of the time needed to process/merge temporary partition). This shouldn’t be a major issue, as committing is usually done after a series of queries/updates, like saving a document.


MOLAP Writeback is a promising feature for a read/write OLAP context.

In a budgeting application, where users (sometimes called contributors) submit updates (i.e. commit changes) in cycles, query performance decreases as SSAS permanently issues SQL statements to query the writeback table. This won't be the case anymore with MOLAP writeback. We can expect a huge performance improvement here.

Posted in Analysis Services 2008 | 7 Comments

SSAS Performance Seeker Bag Pack


One of the things I have to deal with in an Analysis Services project is performance improvement: faster query, using minimum resources, with maximum concurrency.

Performance problems can sometimes be tricky to identify. Over the years I've put together a set of tools and techniques that have helped me sort out such issues (mainly, but not exclusively, around MDX performance).


  • SQL Server Profiler: "Microsoft SQL Server Profiler is a tool that captures Microsoft SQL Server 2005 events from a server. The events are saved in a trace file that can later be analyzed or used to replay a specific series of steps when trying to diagnose a problem". An essential tool to understand what’s done by Analysis Services when it executes a MDX query.
  • Perfmon: SSAS exposes a complete set of performance counters. These counters are explained in SQL Server 2005 Analysis Services Performance Guide.
  • MDX Script Performance Analyser: a handy tool for identifying time-consuming statements in a complex MDX Script. The source code hasn't evolved since April 2007, but this alpha version is fully operational.
  • MDX Studio: the dream tool by Mosha Pasumansky for tracking performance issues in MDX queries and MDX Scripts.
  • BIDS Helper: a collection of useful tools that integrate with SQL Server Business Intelligence Development Studio.
  • SQL Server 2005 Best Practices Analyzer: "The SQL Server 2005 Best Practices Analyzer (BPA) gathers data from Microsoft Windows and SQL Server configuration settings. BPA uses a predefined list of SQL Server 2005 recommendations and best practices to determine if there are potential issues in the database environment". Most of the best practices recommendations are natively included in SSAS 2008.
  • VB Script: quick (and dirty) ADO MD programming can sometimes be used to simulate query workload or cache warming.


  1. When query performance problems occur, I start by identifying long-running queries (which can be hidden among many others). To do so, I use SQL Server Profiler to collect all running queries. SQL Server Profiler has an option to export MDX queries from a trace (see here and here).
  2. Once I have collected all the queries, I run them through a simple VB Script I developed, which executes the MDX queries stored in a file and outputs the query number and execution time in CSV format.
  3. Then I copy the CSV output into Excel. Excel allows me to quickly sort and highlight long-running queries.
  4. Once I have spotted the critical queries, I analyze them one by one with MDX Studio.
  5. Sometimes, a query relies on a complex MDX Script. In such circumstances, I run the query through MDX Script Performance Analyser to locate the significant MDX Script parts.
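Step 2 can be sketched in Python (the original is a VB Script over ADO MD; execute_mdx below is a hypothetical stand-in for real query execution):

```python
import csv
import io
import time

def benchmark(queries, execute_mdx):
    # Run each MDX query, record its number and execution time, emit CSV.
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["query", "seconds"])
    for number, mdx in enumerate(queries, start=1):
        start = time.perf_counter()
        execute_mdx(mdx)
        writer.writerow([number, f"{time.perf_counter() - start:.3f}"])
    return out.getvalue()

# Usage with a stand-in executor (a real one would send the query via ADOMD/XMLA):
report = benchmark(["SELECT ... FROM [Cube1]", "SELECT ... FROM [Cube2]"],
                   execute_mdx=lambda mdx: None)
print(report)
```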

Useful links and resources







This list is not exhaustive, so feel free to complete it by dropping a comment.

Update (18/02/2008)


  • U2U Analysis Services Best Practices Analyser: "Analysis Services Best Practise Analyser (SqlAsBpa for short) is a tool which checks your live Microsoft Sql Server Analysis Services 2005 against some important best practises, and reports items which violate these best practises".


Posted in Performance | 4 Comments