KB442307: How to troubleshoot and improve Cube Publication times in MicroStrategy Secure Enterprise 10.x and later




This document provides a list of settings that administrators can tune to help improve Intelligent Cube publication times in MicroStrategy Secure Enterprise 10.x and later.

The first step in analyzing Intelligent Cube publication performance is to open the post-execution SQL view of the initially published Intelligent Cube and understand the breakdown. Typically, a long duration is seen in the Data Transfer time, which indicates the time it takes to fetch data from the warehouse. Data retrieval depends on network bandwidth, but in most production enterprise environments this is not an issue, as the network capacity is high enough to handle large volumes of data effectively.
In some instances, the slowdown lies in the Analytical Engine processing or Other Processing time, i.e., the post-fetch operations that the Intelligence Server performs to create the Intelligent Cube data.

Query Engine Execution Start Time:        26-Nov-18 3:02:48 PM
.....

Query Generation Time:        0:00:01.07
Total Elapsed Time in Query Engine:        0:00:21.10
    Sum of Query Execution Time:        0:00:16.80
    Sum of Data Fetching and Processing Time:        0:07:10.07
        Sum of Data Transfer from Datasource(s) Time:        0:00:00.06
    Sum of Analytical Processing Time:        0:00:00.28
    Sum of Other Processing Time:        0:00:04.22

....
....

Number of Rows Returned:        1111224
Number of Columns Returned:        83


Note that this document does not provide a recommended value for every setting; rather, it lists all of the settings that can help improve Cube publication times. Administrators are advised to test the effect of each setting change separately and determine which combination of settings works best for their environment.
Partition the Cube and Enable Fetching Data in Parallel from the Warehouse
The partitioning attribute should be the attribute with the highest cardinality, and the number of partitions should equal half of the available processors on the Strategy Intelligence Server machine. For additional details about partitioning and best practices, refer to the following Strategy Knowledge Base document:
KB221859: PRIME partitioning guidelines for MicroStrategy 10 Secure Enterprise Platform
Partitioning the Intelligent Cube and checking the option to fetch data from the warehouse in parallel, as shown below, allows data to be retrieved from the warehouse in parallel, resulting in faster data retrieval; in other words, if the Data Transfer time seems high, this setting should be reviewed. If a cube is configured with 4 partitions, the Intelligence Server will attempt to fetch data in parallel using 4 threads. For this setting to work as expected, configure the VLDB setting Query Optimizations > Maximum Parallel Queries Per Report for the cube to the same value as the number of partitions. For instance, if the cube is configured with 8 partitions, the value of the VLDB setting should be 8.

[Screenshot: Intelligent Cube partitioning options with parallel data fetch enabled]
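As a minimal sizing sketch of the guidance above (an illustration only, not a Strategy API; the processor count would be taken from the Intelligence Server machine):

# Sketch: derive a suggested partition count from the rule above
# (half the available logical processors) and keep the VLDB setting
# "Maximum Parallel Queries Per Report" in sync with it.
import os

processors = os.cpu_count() or 1
partitions = max(1, processors // 2)

print("Suggested number of partitions:", partitions)
print("Suggested Maximum Parallel Queries Per Report:", partitions)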

Adjust the VLDB setting Query Optimization > Data Population for Intelligent Cubes
The default normalization option, 'Normalize Intelligent Cube Data in Intelligence Server', is not guaranteed to be the best-performing setting for all types of data and schema designs. Change this setting to a different option to see whether Intelligent Cube publication performance improves. If applicable to the specific cube design and schema, modifying this setting can improve both the data retrieval speed and the post-retrieval operations. For guidance on the best setting for each environment, refer to the following Strategy Community article:
KB32010: What are the Data Population VLDB properties in MicroStrategy Engine?
Adjust the MaxCrossProcessTableSize
The MaxCrossProcessTableSize setting controls the amount of data transferred between the Strategy Intelligence server and the database process layer.
This setting is applied at the Intelligence server level. The default value is 300 MB, with a maximum value of 4096 MB (4 GB).
Administrators can adjust this setting to potentially increase the speed of data transfer by, for instance, doubling the initial value and observing the results.
However, Strategy does not advise that you increase this setting excessively, as it may lead to performance deterioration. 
For more information about this setting, see: KB17628: How to change the maximum database table chunking size used for data transfer between the database multiprocess component (M8MulPrc) and MicroStrategy Intelligence Server
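As a minimal sketch of the doubling approach described above (illustrative only; the actual value is changed in the Intelligence Server configuration, not through code):

# Illustrative sketch of the tuning sequence described above.
# Each candidate value should be tested by republishing the cube
# and comparing publication times before moving to the next one.
DEFAULT_MB = 300   # default MaxCrossProcessTableSize
CEILING_MB = 4096  # documented maximum (4 GB)

value, candidates = DEFAULT_MB, []
while value < CEILING_MB:
    value = min(value * 2, CEILING_MB)
    candidates.append(value)

print(candidates)  # [600, 1200, 2400, 4096]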
Increase the Rowset size
The ROWSET_SIZE setting determines the number of rows in a rowset. The Intelligence Server uses the MaxCrossProcessTableSize setting and the database column size to calculate how many rowsets should be fetched in one round; it fetches at least one ROWSET_SIZE of rows in a single round.
This setting is applied at the DBMS level. The default value is 160 rows.
Administrators can adjust this setting for big data sources that contain columns with large precision. Adjusting this setting ensures that at least a ROWSET_SIZE of rows will be fetched in a single round. However, optimizing performance is more effectively achieved by adjusting the ODBCMaxLongVarSize setting.
Strategy does not recommend that you increase this setting to high values (such as 10000) as it can lead to substantial child process memory allocation issues.
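To make the interaction concrete, the following is a minimal sketch under a simplified model (an assumption for illustration; it is not the engine's exact algorithm):

# Simplified model (assumption): rows fetched per round are bounded by
# MaxCrossProcessTableSize / estimated row size, rounded down to a
# multiple of ROWSET_SIZE, but never less than one ROWSET_SIZE.
def rows_per_round(max_table_mb, row_bytes, rowset_size=160):
    budget = max_table_mb * 1024 * 1024   # transfer budget in bytes
    rows = budget // row_bytes            # rows that fit in the budget
    return max(rowset_size, rows - rows % rowset_size)

print(rows_per_round(300, row_bytes=2_000))      # narrow rows: many per round
print(rows_per_round(300, row_bytes=5_000_000))  # very wide rows: floor of 160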
Adjust the ROWSET_SIZE by updating the ODBCConfig.ini file. This file is located in the installation directory on Linux installations and in the Common Files/Strategy folder on Windows installations.
The ODBCConfig.ini file contains sections for different types of databases. For example, the configuration for a SQL Server database includes sections for ODBC 2.0 and ODBC 3.x; Strategy versions 9.4.x and above use ODBC 3.x.
Example of the SQL Server section in ODBCConfig.ini:

<DB_SECTION ID="300">
    <DATABASE>SQL SERVER</DATABASE>
    <ODBC20>
      <ROWSET_SIZE>160</ROWSET_SIZE>
    </ODBC20>
    <ODBC35>
      <ROWSET_SIZE>160</ROWSET_SIZE>
      <PARAM_QUERY_ROWSET_SIZE>32768</PARAM_QUERY_ROWSET_SIZE>
      <SCROLLABLE_CURSOR>DEFAULT</SCROLLABLE_CURSOR>
    </ODBC35>
</DB_SECTION>

Database-specific settings
The following settings can help improve the speed of data retrieval from the database for the specific databases mentioned below:
For Teradata users, using the TPT option on the Teradata side may be faster than using the option to fetch data in parallel from the warehouse. Refer to the following Community article for information about TPT:
KB266840: FAQ on using Teradata Parallel Transporter API (TPTAPI) with MicroStrategy Secure Enterprise Platform 10.x
ODBCMaxLongVarSize
ODBCMaxLongVarSize is the maximum precision of data and can dramatically impact cube publishing performance. It can be defined in the registry.
This setting can be applied at both the DBMS level (specified in Database.PDS) and the Intelligence Server level (registry).
The default value of ODBCMaxLongVarSize is 32000 (for Impala, it is 1024).
The data size of a single row is calculated based on the precision. The larger the precision, the larger the data size for one row, and thus fewer rows are fetched in one round due to the traffic control between the Intelligence Server and the database multiprocess component (M8MulPrc).
For big data sources such as Impala, the driver always returns a large value (such as 65535) for string columns. To limit the memory cost while keeping typical use cases efficient, 1024 is used as the default maximum precision for Impala sources. When handling cases where very long string columns are needed, users can specify the value of ODBCMaxLongVarSize in the DATABASE.PDS.
For other data sources with large-precision columns, users can reduce the ODBCMaxLongVarSize value to improve data transfer efficiency.
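As a worked example (assumed numbers: the 83-column result set from the statistics above, the default 300 MB MaxCrossProcessTableSize, and a worst-case row size of columns multiplied by precision):

# Rough arithmetic only; actual row sizes depend on the column data types.
max_table_bytes = 300 * 1024 * 1024   # MaxCrossProcessTableSize default
columns = 83                          # column count from the sample statistics above

for precision in (32000, 4000):       # default vs. reduced ODBCMaxLongVarSize
    row_bytes = columns * precision   # worst-case estimated row size
    print(precision, "->", max_table_bytes // row_bytes, "rows per round (approx.)")

With the default precision of 32000, roughly 118 rows fit in one round; reducing it to 4000 raises that to roughly 947 rows.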
Update ODBCMaxLongVarSize at the DBMS level
Snowflake and Google BigQuery already have this VLDB setting added to Database.PDS with a value of 4000. For other data sources, users can manually add the VLDB setting Maximum Var Size for the specific DBMS in Database.PDS and then upgrade the DBMS or MD to apply the setting.
    <PROPERTYSET NAME="VLDB Data Type">
        <PROPERTY NAME="Maximum Var Size" VALUE="4000" />
    </PROPERTYSET>

Update ODBCMaxLongVarSize at the Intelligence Server level
Update the registry and restart the Intelligence Server to apply the setting.

[Screenshot: ODBCMaxLongVarSize registry setting on the Intelligence Server]

Increase the Fetch size for ODBC or JDBC data retrieval
Most ODBC/JDBC drivers across various data sources have properties such as ArraySize and FetchSize that determine the number of rows fetched in a single network round trip. Administrators can adjust these parameters to optimize performance.
ODBC
The default fetch size for ODBC connections varies depending on the ODBC driver used and the driver-specific properties, which differ across databases.
For instance, in the Oracle ODBC driver, the ArraySize property specifies the number of bytes the driver can fetch in a single network round trip. Larger values increase throughput by reducing the frequency of data fetches across the network, while smaller values improve response time by minimizing delay. The default ArraySize in the Oracle ODBC driver is 60000.
This setting is applied at the DB role level. Administrators can optimize performance by adding the parameter ArraySize=80000000 to the Oracle ODBC connection string, as shown below.

[Screenshot: Oracle ODBC connection string with the ArraySize parameter]
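For illustration, the parameter can be appended to the DB role's ODBC connection string; the DSN, user, and password below are placeholders:

DSN=ORACLE_DSN;UID=dbuser;PWD=******;ArraySize=80000000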

Similar optimizations can be made for other drivers with properties such as Fetch for PostgreSQL and MaxRespSize for Teradata.
JDBC
The default fetch size for JDBC connections varies depending on the JDBC driver. For example, the Oracle JDBC driver default rowset size is 100. The option to change the rowset size for JDBC connections is available only in Strategy 11.0 and higher.
To enable this setting, add the FETCHSIZE=xxx parameter to the connection string of a DB role, as shown below. This parameter serves as a hint to the JDBC driver and must be supported by the specific driver in use; some drivers may ignore it if not implemented.
Setting the fetch size to an excessively high value (in the range of tens of thousands) can lead to performance issues due to the JVM's garbage collection. The optimal value depends on the hardware and JVM settings and should be determined through testing.

[Screenshot: JDBC connection string with the FETCHSIZE parameter]
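For illustration, a JDBC connection string carrying the fetch-size hint might look like the following (the JAR folder, driver class, URL, and the FETCHSIZE value of 1000 are all placeholders to adapt to the environment; the exact key layout depends on the configured DB role):

JDBC;MSTR_JDBC_JAR_FOLDER=/opt/jdbc;DRIVER=oracle.jdbc.OracleDriver;URL={jdbc:oracle:thin:@dbhost:1521/orcl};FETCHSIZE=1000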

To improve data-fetching performance for JDBC connections, administrators can also increase the JNI Bridge Heap Space. For detailed instructions on how to adjust this setting, refer to the following article: KB439951: How to set the maximum Java Heap Size value for JDBC source connectivity in MicroStrategy Developer and Web 2021.
Published: November 27, 2018
Last Updated: March 31, 2025