Strategy Community Knowledge Base
Sample REST API 11.1 with Python: push data to Multi-Table cubes


Robert Prochowicz

Manager, Sales Engineering • MicroStrategy


MicroStrategy lets organizations push external data in JSON format into in-memory cubes on the MicroStrategy Intelligence Server, where the data can be readily consumed, processed, and analyzed. The Push API enables third-party vendors to generate MicroStrategy datasets, unlocking new end-user workflows and opportunities for partner integrations. This sample contains Python code that pushes data to multi-table cubes, a feature introduced in MicroStrategy 11.1.

A new feature of the Strategy 11.1 (2019) REST API is the ability to push data to multi-table cubes. A new set of APIs constitutes the process:

  • Create the definition of a multi-table dataset
  • Create a multi-table dataset upload session
  • Upload data to a multi-table dataset
  • Publish a multi-table dataset
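Before diving into the full script, the four steps above can be sketched as endpoint paths. This is a small illustrative helper (the IDs passed in are placeholders, not real object IDs):

```python
# Hypothetical helper: maps each step of the multi-table push workflow to its
# HTTP method and endpoint path, relative to the REST API base URL.
def push_endpoints(dataset_id, upload_session_id):
    base = "datasets/" + dataset_id
    return {
        "create_model":   ("POST", "datasets/models"),
        "create_session": ("POST", base + "/uploadSessions"),
        "upload_data":    ("PUT",  base + "/uploadSessions/" + upload_session_id),
        "publish":        ("POST", base + "/uploadSessions/" + upload_session_id + "/publish"),
    }

eps = push_endpoints("ABC123", "S1")
# e.g. eps["publish"] -> ("POST", "datasets/ABC123/uploadSessions/S1/publish")
```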

Note - This example covers the APIs needed to push data to multi-table Intelligent Cubes. A separate example covers the older process of pushing data to single-table cubes.
Note - After selecting option (2), the program displays the Cube ID parameter. Save it back into the script's Parameters section if you want to reuse the same script for multiple uploads to the same cube.
Note - By default, the Dataset API creates and publishes the cube in the 'My Reports' folder of the user who invoked the API call. Optionally, you can provide a destination folder ID.
Note - 'requests' is a Python library that you need to install in your Python environment (pip install requests).
Note - It is recommended to use the 'pandas' Python library to format data chunks for pushing to a cube.
Note - I've added color formatting in the code below (http://hilite.me/). Code copied from here might not work due to indentation issues; in that case, use the attached file with the sample code.
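As the pandas note suggests, a DataFrame can be split into row chunks, each serialized and base64-encoded as a separate upload body. A minimal sketch (the chunk size, table name, and sample data are illustrative, not from the cube definition below):

```python
import base64
import pandas as pd

# Sketch: split a DataFrame into row chunks and build the bodies expected by
# PUT /datasets/{datasetId}/uploadSessions/{uploadSessionId}.
def make_chunks(df, table_name, chunk_size=2):
    bodies = []
    for i in range(0, len(df), chunk_size):
        chunk_json = df.iloc[i:i + chunk_size].to_json(orient='values', date_format='iso')
        encoded = base64.b64encode(chunk_json.encode('utf-8')).decode('ascii')
        # 'index' numbers the chunks of a table within one upload session
        bodies.append({"tableName": table_name, "index": i // chunk_size + 1, "data": encoded})
    return bodies

df = pd.DataFrame([[1, 'Books'], [2, 'Music'], [3, 'Movies']],
                  columns=['category_id', 'category_desc'])
bodies = make_chunks(df, "LU_Category")
# 3 rows with chunk_size=2 -> two bodies, with index 1 and 2
```

Each body in the list would then be PUT to the upload-session endpoint in turn, the same way push_data() below sends a single chunk.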

# Strategy REST API - Multi-Table Dataset Creation and Update
# POST /datasets/models (Create the definition of a multi-table dataset)
# POST /datasets/{datasetId}/uploadSessions (Create a multi-table dataset upload session)
# PUT /datasets/{datasetId}/uploadSessions/{uploadSessionId} (Upload data to a multi-table dataset)
# POST /datasets/{datasetId}/uploadSessions/{uploadSessionId}/publish (Publish a multi-table dataset)
# Tested with MSTR 11.1
# Robert Prochowicz / 2019-01-07

import requests
import base64
import pandas as pd
#import json

### Parameters ###
api_login = 'XXXX'
api_password = 'XXXX'
project_id = 'XXXXXXXXXXXXXXXXXXXXXXXXXXXX'
base_url = 'http://YOURSERVER_ADDRESS/StrategyLibrary/api/'
mtdi_id = 'XXXXXXXXXXXXXXXXXXXXXXXXXXXX'  # Cube ID; remember to copy & paste this parameter from step 2

mtdi_structure_json = '{"name":"Categories Cube","description":"Cube with categories data","folderId":"D3C7D461F69C4610AA6BAA5EF51F4125","tables":[{"data": "e30=","name":"LU_Category","columnHeaders":[{"name":"category_desc","dataType":"STRING"},{"name":"category_id","dataType":"DOUBLE"}]},{"data": "e30=","name":"Fact_Category","columnHeaders":[{"name":"category_id","dataType":"DOUBLE"},{"name":"day","dataType":"DATETIME"},{"name":"revenue","dataType":"DOUBLE"}]}],"metrics":[{"name":"Revenue","expressions":[{"tableName":"Fact_Category","columnName":"revenue"}]}],"attributes":[{"name":"Category","attributeForms":[{"category":"ID","expressions":[{"tableName":"LU_Category","columnName":"category_id"},{"tableName":"Fact_Category","columnName":"category_id"}]},{"category":"DESC","expressions":[{"tableName":"LU_Category","columnName":"category_desc"}]}]},{"name":"Day","attributeForms":[{"category":"ID","expressions":[{"tableName":"Fact_Category","columnName":"day"}]}]}]}'

mtdi_upload_session_body = ('{"tables": ['
                            '{"name": "LU_Category","updatePolicy": "REPLACE","orientation": "ROW"},'
                            '{"name": "Fact_Category","updatePolicy": "REPLACE","orientation": "ROW"}'
                            ']}')

sample1_data_json = '[[1,"Books"],[2,"Music"]]'
sample1_encoded = base64.b64encode(bytes(sample1_data_json, 'utf-8')).decode('ascii')

sample2 = [[1,'2019-01-07',207.12],[1,'2019-01-08',17.4],[2,'2019-01-07',17.98]]
df = pd.DataFrame(sample2, columns=['category_id','day','revenue'])
sample2_data_json = df.to_json(orient='values', date_format='iso')
sample2_encoded = base64.b64encode(bytes(sample2_data_json, 'utf-8')).decode('ascii')

json_data1 = {"tableName": "LU_Category", "index": 1, "data": sample1_encoded}
json_data2 = {"tableName": "Fact_Category", "index": 1, "data": sample2_encoded}

# Note: above, you could also pass a string instead of a dictionary:
# json_data2 = '{"tableName": "Fact_Category","index": 1,"data": sample2_encoded}'
# or even upload both samples at once:
# '{"tableName": "LU_Category","index": 1,"data": sample1_encoded},{"tableName": "Fact_Category","index": 1,"data": sample2_encoded}'
# Just remember to change the push_data() function: <json=json_data> to <data=json_data>

### FUNCTIONS ###
def login(base_url, api_login, api_password):
    print("Getting token...")
    data_get = {'username': api_login,
                'password': api_password,
                'loginMode': 1}
    r = requests.post(base_url + 'auth/login', data=data_get)
    if r.ok:
        authToken = r.headers['X-MSTR-AuthToken']
        cookies = dict(r.cookies)
        print("\nToken: " + authToken)
        return authToken, cookies
    else:
        print("HTTP %i - %s, Message %s" % (r.status_code, r.reason, r.text))

def set_headers(authToken, project_id):
    headers = {'X-MSTR-AuthToken': authToken,
               'Content-Type': 'application/json',  # IMPORTANT!
               'Accept': 'application/json',
               'X-MSTR-ProjectID': project_id}
    return headers

def create_cube_mtdi(base_url, authToken, cookies, project_id, mtdi_structure_json):
    headers = set_headers(authToken, project_id)
    print("Creating new cube...")
    r = requests.post(base_url + "datasets/models", headers=headers, data=mtdi_structure_json, cookies=cookies)
    if r.ok:
        print("\nCube CREATED successfully")
        print("Cube ID:     " + r.json()['id'])
        print("Cube Name:   " + r.json()['name'])
        print("HTTP Status Code: " + str(r.status_code) + "    ||    Error: " + str(r.raise_for_status()))
        print("Remember to copy and note down Cube ID (dataset ID). Enter this value in the 'Parameters' section")
        return r.json()['id']
    else:
        print("HTTP %i - %s, Message %s" % (r.status_code, r.reason, r.text))

def upload_session(base_url, authToken, cookies, project_id, mtdi_id):
    upload_session_url = base_url + "datasets/" + mtdi_id + "/uploadSessions"
    headers = set_headers(authToken, project_id)
    r = requests.post(upload_session_url, headers=headers, data=mtdi_upload_session_body, cookies=cookies)
    print("Creating new upload session...")
    if r.ok:
        UploadSession = r.json()['uploadSessionId']
        print("\nUpload Session ID:     " + UploadSession)
        print("HTTP Status Code: " + str(r.status_code) + "    ||    Error: " + str(r.raise_for_status()))
        return UploadSession
    else:
        print("HTTP %i - %s, Message %s" % (r.status_code, r.reason, r.text))

def push_data(authToken, cookies, project_id, mtdi_id, uploadSession, json_data):
    # encoding moved to 'Parameters'
    # dataBody = base64.b64encode(bytes(dataBody, 'utf-8')).decode('ascii')
    push_url = base_url + "datasets/" + mtdi_id + "/uploadSessions/" + uploadSession
    headers = set_headers(authToken, project_id)
    r = requests.put(push_url, headers=headers, json=json_data, cookies=cookies)
    print("Pushing data...")
    if r.ok:
        print("\nData PUSHED successfully...")
        print("HTTP Status Code: " + str(r.status_code) + "    ||    Error: " + str(r.raise_for_status()))
    else:
        print("HTTP %i - %s, Message %s" % (r.status_code, r.reason, r.text))

def publish_cube(authToken, cookies, project_id, mtdi_id, UploadSession):
    publish_url = base_url + "datasets/" + mtdi_id + "/uploadSessions/" + UploadSession + "/publish"
    headers = set_headers(authToken, project_id)
    r = requests.post(publish_url, headers=headers, cookies=cookies)
    print("Publishing cube...")
    if r.ok:
        print("\nCube PUBLISHED successfully...")
        print("HTTP Status Code: " + str(r.status_code) + "    ||    Error: " + str(r.raise_for_status()))
    else:
        print("HTTP %i - %s, Message %s" % (r.status_code, r.reason, r.text))

def main(mtdi_id):
    authToken, cookies = login(base_url, api_login, api_password)
    choice = None
    while choice != "0":
        print("""
        ---MENU---

        0 - Exit
        1 - Generate and Print Token
        2 - Create a MTDI Cube
        3 - Create Upload Session
        4 - Upload data
        5 - Publish MTDI Cube
        """)
        choice = input("Your choice: ")
        if choice == "0":
            print("Good bye!")
        elif choice == "1":
            authToken, cookies = login(base_url, api_login, api_password)
        elif choice == "2":
            mtdi_id = create_cube_mtdi(base_url, authToken, cookies, project_id, mtdi_structure_json)
        elif choice == "3":
            UploadSession = upload_session(base_url, authToken, cookies, project_id, mtdi_id)
        elif choice == "4":
            push_data(authToken, cookies, project_id, mtdi_id, UploadSession, json_data1)
            push_data(authToken, cookies, project_id, mtdi_id, UploadSession, json_data2)
        elif choice == "5":
            publish_cube(authToken, cookies, project_id, mtdi_id, UploadSession)
        else:
            print(" ### Wrong option ### ")

### Main program
main(mtdi_id)

####### JSON Examples for Swagger site:

# POST /datasets/models (Create the definition of a multi-table dataset)
# {"name":"Categories Cube Swagger","description":"cube with categories data created from Swagger site","folderId":"B4536012461F82D80F147E9440B8DC27","tables":[{"data": "e30=","name":"LU_Category","columnHeaders":[{"name":"category_desc","dataType":"STRING"},{"name":"category_id","dataType":"DOUBLE"}]},{"data": "e30=","name":"Fact_Category","columnHeaders":[{"name":"category_id","dataType":"DOUBLE"},{"name":"day","dataType":"DATETIME"},{"name":"revenue","dataType":"DOUBLE"}]}],"metrics":[{"name":"Revenue","expressions":[{"tableName":"Fact_Category","columnName":"revenue"}]}],"attributes":[{"name":"Category","attributeForms":[{"category":"ID","expressions":[{"tableName":"LU_Category","columnName":"category_id"},{"tableName":"Fact_Category","columnName":"category_id"}]},{"category":"DESC","expressions":[{"tableName":"LU_Category","columnName":"category_desc"}]}]},{"name":"Day","attributeForms":[{"category":"ID","expressions":[{"tableName":"Fact_Category","columnName":"day"}]}]}]}

# POST /datasets/{datasetId}/uploadSessions (Create a multi-table dataset upload session)
# {"tables": [{"name": "LU_Category","updatePolicy": "ADD","orientation": "ROW"},{"name": "Fact_Category","updatePolicy": "ADD","orientation": "ROW"}]}

# PUT /datasets/{datasetId}/uploadSessions/{uploadSessionId} (Upload data to a multi-table dataset)
# sample1_data_json => [[1,"Books"],[2,"Music"]] - this must be encoded
# sample2_data_json => [[1,'2019-01-07',17],[1,'2019-01-08',17.4],[2,'2019-01-07',17.98]] - this must be encoded
# {"tableName": "LU_Category", "index": 1, "data": "W1sxLCJCb29rcyJdLFsyLCJNdXNpYyJdXQ=="},{"tableName": "Fact_Category","index": 1,"data": "W1sxLCIyMDE5LTAxLTA3IiwxNy4wXSxbMSwiMjAxOS0wMS0wOCIsMTcuNF0sWzIsIjIwMTktMDEtMDciLDE3Ljk4XV0="}

### Update Policy
'''
Add - Adds new data if it does not exist in the current dataset table. Does not replace data if it already exists in the current dataset table.
Update - Replaces data if it does exist in the current dataset table. Does not add new data if it does not exist in the current dataset table.
Upsert - Adds new data if it does not exist in the current dataset table. Replaces data if it does exist in the current dataset table.
Replace - Replaces all the existing data in the dataset table with new imported external data.
'''
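The update policy is set per table when the upload session is created. For example, an incremental load might replace the lookup table while upserting into the fact table; a minimal sketch of such a session body, using the policy names listed above (whether a given policy fits depends on your load pattern):

```python
import json

# Sketch: upload-session body mixing policies - REPLACE for the lookup table,
# UPSERT for incremental loads of the fact table.
session_body = {
    "tables": [
        {"name": "LU_Category",   "updatePolicy": "REPLACE", "orientation": "ROW"},
        {"name": "Fact_Category", "updatePolicy": "UPSERT",  "orientation": "ROW"},
    ]
}
payload = json.dumps(session_body)  # send as the body of POST .../uploadSessions
```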



Details: Example

Published:

January 7, 2019

Last Updated:

October 19, 2022