Monday, October 24, 2016

SAP HANA: On which base columns do my information views depend?

For one of Just-BI's customers, we're currently working to productize a custom-built proof-of-concept SAP/HANA application.

This particular application is quite typical with regard to how we use SAP/HANA features and techniques: it is a web-application for desktop and tablet devices, which we serve through SAP/HANA's XS engine. For database communication, we mostly use OData (using XS OData Services), as well as the odd xsjs request (in this case, to offer MS Excel export using ODXL).

The OData services that our application uses are mostly backed by SAP/HANA Calculation views. These, in turn, are built on top of a mixed bag of objects:

  • Some of these are custom base tables that belong to just our application;
  • Some are base tables that collect output from an advanced analytics recommendations algorithm that runs in an external R server;
  • Some are information views (analytic views, attribute views and calculation views) that form a virtual datamart (of sorts) on top of base tables replicated from various SAP ERP source systems to our SAP/HANA database.

One of the prerequisites to productize the current solution is a re-design of the backend. Redesign is required because the new target system will be fed from even more ERP source systems than our proof-of-concept environment, and the new backend will need to align the data from all these different ERP implementations. In addition, the R algorithm will be optimized as well: in the proof-of-concept environment, the advanced analytics algorithm passes through a number of fields for convenience that will need to be acquired from elsewhere in the production environment.

To facilitate the redesign we need to have accurate insight into which base columns are ultimately used to implement our application's data services. As it turns out, this is not so easily obtainable using standard tools. So, we developed something ourselves. We think this may be useful for others as well, which is why we'd like to share it with you through this blog.

Information View Dependencies

The standard toolset offers some support to obtain dependencies for information views (analytic views, attribute views and calculation views): HANA Studio offers features for inspecting dependencies while designing and modifying a view, and the database exposes dependencies through the OBJECT_DEPENDENCIES system view, which can be queried directly.

As it turns out, these standard tools do not give us the detailed information that we need. The HANA studio features are mainly useful when designing and modifying information views, but do not let us obtain an overview of all dependencies, and not in a way that we can easily use outside of HANA Studio. The usefulness of querying the OBJECT_DEPENDENCIES system view is limited by the fact that it only reports objects - that is, base tables or information views - but not the columns contained therein.
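
For example, running a query like the following against OBJECT_DEPENDENCIES (the package and view names are just placeholders) returns one row per base table or information view the view depends on, but nothing at the column level:

SELECT  BASE_SCHEMA_NAME
,       BASE_OBJECT_NAME
,       BASE_OBJECT_TYPE
FROM    OBJECT_DEPENDENCIES
WHERE   DEPENDENT_SCHEMA_NAME = '_SYS_BIC'
AND     DEPENDENT_OBJECT_NAME = 'my.package.name/CA_MY_CALCULATION_VIEW'
AND     DEPENDENT_OBJECT_TYPE = 'VIEW'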

It looks like we're not the only ones struggling with this issue.

Getting the information view's definition as XML from _SYS_REPO.ACTIVE_OBJECT

To get the kind of information we need, we're just going to have to crack open the definition of the information view and look what's inside. As it turns out, HANA stores this as XML in the CDATA column of the _SYS_REPO.ACTIVE_OBJECT system table, and we can query it by package name, object name and object suffix (which is basically the extension of the file containing the definition that is stored in the repository):

SELECT  CDATA
FROM    _SYS_REPO.ACTIVE_OBJECT
WHERE   PACKAGE_ID    = 'my.package.name'
AND     OBJECT_NAME   = 'CA_MY_CALCULATION_VIEW'
AND     OBJECT_SUFFIX = 'calculationview'

With some effort, _SYS_REPO.ACTIVE_OBJECT can be joined to OBJECT_DEPENDENCIES to discover the objects on which the information view depends:

SELECT      od.BASE_SCHEMA_NAME
,           od.BASE_OBJECT_NAME
,           od.BASE_OBJECT_TYPE
FROM        _SYS_REPO.ACTIVE_OBJECT ao
INNER JOIN  OBJECT_DEPENDENCIES     od
ON          '_SYS_BIC'                          = od.DEPENDENT_SCHEMA_NAME
AND         ao.PACKAGE_ID||'/'||ao.OBJECT_NAME  = od.DEPENDENT_OBJECT_NAME
AND         'VIEW'                              = od.DEPENDENT_OBJECT_TYPE
WHERE       ao.PACKAGE_ID                       = 'my.package.name'
AND         ao.OBJECT_NAME                      = 'CA_MY_CALCULATION_VIEW'
AND         ao.OBJECT_SUFFIX                    = 'calculationview'

(Note: OBJECT_DEPENDENCIES reports all dependencies, not just direct dependencies)

Or we can query the other way around, and find the corresponding model for a dependency we found in OBJECT_DEPENDENCIES:

SELECT      ao.PACKAGE_ID
,           ao.OBJECT_NAME
,           ao.OBJECT_SUFFIX
,           ao.CDATA
FROM        OBJECT_DEPENDENCIES     od
INNER JOIN  _SYS_REPO.ACTIVE_OBJECT ao
ON          SUBSTR_BEFORE(od.BASE_OBJECT_NAME, '/') = ao.PACKAGE_ID
AND         SUBSTR_AFTER(od.BASE_OBJECT_NAME, '/')  = ao.OBJECT_NAME
AND         ao.OBJECT_SUFFIX IN (
              'analyticview'
            , 'attributeview'
            , 'calculationview'
            )
WHERE       od.DEPENDENT_SCHEMA_NAME = '_SYS_BIC'
AND         od.DEPENDENT_OBJECT_NAME = 'my.package.name/CA_MY_CALCULATION_VIEW'
AND         od.DEPENDENT_OBJECT_TYPE = 'VIEW'

NOTE: It turns out that querying OBJECT_DEPENDENCIES fails at reporting dependencies between analytic views and the attribute views they use. To capture those dependencies, you need to query _SYS_REPO.ACTIVE_OBJECTCROSSREF.
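
A query along the following lines should return the attribute views used by a given analytic view. Treat it as a sketch: the FROM_*/TO_* column names of _SYS_REPO.ACTIVE_OBJECTCROSSREF are an assumption on my part, so verify them on your system before relying on this:

SELECT  TO_PACKAGE_ID
,       TO_OBJECT_NAME
,       TO_OBJECT_SUFFIX
FROM    _SYS_REPO.ACTIVE_OBJECTCROSSREF        -- column names assumed, please verify
WHERE   FROM_PACKAGE_ID    = 'my.package.name'
AND     FROM_OBJECT_NAME   = 'AN_MY_ANALYTIC_VIEW'
AND     FROM_OBJECT_SUFFIX = 'analyticview'
AND     TO_OBJECT_SUFFIX   = 'attributeview'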

Parsing the information view's XML definition with stored procedure p_parse_xml

Once we have obtained the XML that defines the information view, we still need to pull it apart to figure out how it is tied to our base table columns. To do that, we first apply a general XML parser that turns the XML text into a (temporary) table or table variable, such that each row represents a distinct, atomic element inside the XML document. For this purpose I developed a HANA stored procedure called p_parse_xml. Here is its signature:

create PROCEDURE p_parse_xml (
  -- XML string to parse
  p_xml nclob
  -- Parse tree is returned as a table variable
, out p_dom table (
    -- unique id of the node
    node_id           int
    -- id of the parent node
  , parent_node_id    int
    -- dom node type constant: 1=element, 2=attribute, 3=text, 4=cdata,
    --                         5=entityref, 6=entity, 7=processing instruction, 
    --                         8=comment, 9=document, 10=document type, 
    --                         11=document fragment, 12=notation
  , node_type         tinyint
    -- dom node name: tagname for element, attribute name for attribute, 
    --                target for processing instruction,
    --                document type name for document type,
    --                "#text" for text and cdata, "#comment" for comment, 
    --                "#document" for document, "#document-fragment" for document fragment.
  , node_name         nvarchar(64)  
    -- dom node value: text for text, comment, and cdata nodes, data for processing instruction node, null otherwise.
  , node_value        nclob
    -- raw token from the parser
  , token_text        nclob         
    -- character position of token
  , pos               int           
    -- length of token.
  , len               int           
  )
  -- flag whether to strip text nodes that only contain whitespace from the parse tree
, p_strip_empty_text  tinyint default 1
) 

Note that you can download the source code for the entire procedure from github. The p_parse_xml procedure depends on p_decode_xml_entities, so if you want to run it yourself, be sure to install that first.

To see how you can use this, consider the following, simple example:

call p_parse_xml(
'<parent-element attribute1="value1">
  <child-element attribute2="value2" attribute3="value3">
    text-content1
  </child-element>
  <child-element att="bla">
    text-content2
  </child-element>
</parent-element>', ?);

This gives us the following result:

+---------+----------------+-----------+----------------+-----------------------------+-------------------------------------------------------------+-----+-----+
| NODE_ID | PARENT_NODE_ID | NODE_TYPE | NODE_NAME      | NODE_VALUE                  | TOKEN_TEXT                                                  | POS | LEN |
+---------+----------------+-----------+----------------+-----------------------------+-------------------------------------------------------------+-----+-----+
| 0       | ?              | 9         | #document      | ?                           | ?                                                           | 1   | 221 |
| 1       | 0              | 1         | parent-element | ?                           | <parent-element attribute1=\"value1\">                      | 1   | 36  |
| 2       | 1              | 2         | attribute1     | value1                      |  attribute1=\"value1\"                                      | 2   | 20  |
| 3       | 1              | 1         | child-element  | ?                           | <child-element attribute2=\"value2\" attribute3=\"value3\"> | 41  | 55  |
| 4       | 3              | 2         | attribute2     | value2                      |  attribute2=\"value2\"                                      | 42  | 20  |
| 5       | 3              | 2         | attribute3     | value3                      |  attribute3=\"value3\"                                      | 62  | 20  |
| 6       | 3              | 3         | #text          | text-content1               | text-content1                                               | 96  | 23  | 
| 7       | 1              | 1         | child-element  | ?                           | <child-element att=\"bla\">                                 | 139 | 25  |
| 8       | 7              | 2         | att            | bla                         |  att=\"bla\"                                                | 140 | 10  |
| 9       | 7              | 3         | #text          | text-content2               | text-content2                                               | 164 | 23  |
+---------+----------------+-----------+----------------+-----------------------------+-------------------------------------------------------------+-----+-----+

The result is a tabular representation of the XML parse tree. Each row essentially represents a DOM Node, and the column values represent the node's properties:

  • The NODE_TYPE column tells us what kind of node we're dealing with. Values in this column conform to the w3c standard document object model (DOM) enumeration of node type values. The most important ones are 1 for element nodes ("tags"); 2 for attributes, and 3 for text. The entire parse tree is contained in a document node, which has node type 9.
  • The NODE_ID is the unique identifier of the node while PARENT_NODE_ID points to whatever node is considered the parent node of the current node. The parent node is basically the container of the node. As you can see, the element with NODE_ID=3 has the element node with NODE_ID=1 as parent. These correspond to the first <child-element> and <parent-element> elements in the document. Attribute nodes are also marked as children of the element to which they belong. The DOM standard does not consider attributes children of their respective element node, but p_parse_xml does, mainly to keep the result table as simple as possible.
  • The NODE_NAME column is a further characterization of what kind of node we're dealing with. For most node types, the node name is a constant value which is essentially a friendly name for the node type. For example, document nodes (NODE_TYPE=9) always have #document as NODE_NAME, and text nodes (NODE_TYPE=3) always have #text as NODE_NAME. For element nodes and attribute nodes (NODE_TYPE is 1 and 2 respectively), the NODE_NAME is not constant. Rather, their node name conveys information about the meaning of the node and its contents. In other words, element and attribute names are metadata.
  • The NODE_VALUE column contains actual data. For element and document nodes, it is always NULL. For attributes, the NODE_VALUE column contains the attribute value, and for text nodes, it is the text content.
  • The POS column lists the position where the current item was found in the document; the LEN column keeps track of the length of the item as it appears in the document. Typically you won't need these columns, except maybe for debugging purposes. The TOKEN_TEXT column is also here mostly for debugging purposes.
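
The queries in the following sections assume that the XML definition of an information view has already been parsed, and that the resulting parse tree is available in a table variable called tab_dom. A minimal sketch of how to set that up is shown below; it assumes a HANA revision that supports anonymous blocks (on older revisions, you can wrap the same statements in a throw-away stored procedure):

do begin
  declare v_xml nclob;

  -- get the XML definition of the information view from the repository
  select  CDATA into v_xml
  from    _SYS_REPO.ACTIVE_OBJECT
  where   PACKAGE_ID    = 'my.package.name'
  and     OBJECT_NAME   = 'CA_MY_CALCULATION_VIEW'
  and     OBJECT_SUFFIX = 'calculationview';

  -- parse it; the parse tree ends up in the tab_dom table variable
  call p_parse_xml(:v_xml, tab_dom, 1);

  -- the queries from the following sections go here, for example:
  select * from :tab_dom;
end;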

Extracting Base Columns from Analytic and Attribute views

If you examine the XML definition of Analytic and/or Attribute views, you'll notice that base table columns are referenced by <keyMapping> and <measureMapping>-elements like this:

<keyMapping schemaName="...database schema..." columnObjectName="...table name..." columnName="...column name..."/>

So, assuming we already parsed the model of an analytic or attribute view using p_parse_xml and captured its result in a table variable called tab_dom, we can run a query like this to obtain all <keyMapping> and <measureMapping>-elements:

select     mapping.*
from       :tab_dom mapping
where      mapping.node_type = 1            -- get us all elements
and        mapping.node_name in ('keyMapping' -- with tagnames 'keyMapping' or 'measureMapping'
                                   ,'measureMapping')

While this gives us the elements themselves, the data we're actually interested in is buried in the attributes of the <keyMapping> and <measureMapping>-elements. You might recall that in the p_parse_xml result, attribute nodes (NODE_TYPE=2) appear as child nodes of their respective element. So, we can extract all attributes of all <keyMapping> and <measureMapping>-elements with a self-join like this:

select      mapping_attributes.*
from        :tab_dom            mapping
inner join  :tab_dom            mapping_attributes
on          mapping.node_id   = mapping_attributes.parent_node_id -- find all nodes that have the keymapping element node as parent 
and         2                 = mapping_attributes.node_type      -- but only if their node type indicates they are attribute nodes
where       mapping.node_type = 1                                    
and         mapping.node_name in ('keyMapping', 'measureMapping')                         

Since we are interested in not just any attribute node, but attribute nodes having specific names like schemaName, columnObjectName and columnName, we should put a further restriction on the NODE_NAME of these attribute nodes. Also note that this query will potentially give us multiple rows per <keyMapping> or <measureMapping>-element (in fact, just as many as there are attributes). Since we'd like to have just one row for each <keyMapping> or <measureMapping>-element having the values of its schemaName, columnObjectName and columnName attributes in separate columns, we should rewrite this query so that each attribute gets its own self-join.

Thus, the final query becomes:

select     mapping_schemaName.node_value       as schema_name
,          mapping_columnObjectName.node_value as table_name
,          mapping_columnName.node_value       as column_name
from       :tab_dom               mapping
inner join :tab_dom               mapping_schemaName                      -- get the attribute called 'schemaName'
on         mapping.node_id      = mapping_schemaName.parent_node_id
and        2                    = mapping_schemaName.node_type 
and        'schemaName'         = mapping_schemaName.node_name 
inner join :tab_dom               mapping_columnObjectName                -- get the attribute called 'columnObjectName'
on         mapping.node_id      = mapping_columnObjectName.parent_node_id 
and        2                    = mapping_columnObjectName.node_type 
and        'columnObjectName'   = mapping_columnObjectName.node_name 
inner join :tab_dom               mapping_columnName                      -- get the attribute called 'columnName'
on         mapping.node_id      = mapping_columnName.parent_node_id       
and        2                    = mapping_columnName.node_type 
and        'columnName'         = mapping_columnName.node_name 
where      mapping.node_type = 1
and        mapping.node_name in ('keyMapping', 'measureMapping')                         

Extracting base columns from Calculation views

Getting the base columns used in calculation views is a bit more work. However, the good news is that in terms of the queries we need to write, it does not get much more complicated than what we witnessed for analytic and attribute views in the previous section. Querying the XML parse tree almost always boils down to finding elements and finding their attributes, and then doing something with their values.

The reason why it is more work to write queries against the model underlying calculation views is that the XML documents that define calculation views use an extra level of mapping between the objects that represent the source of the columns and the way these columns are used inside the view. The following snippet might illustrate this:

<Calculation:scenario ...>
  ...
  <dataSources>
    ...
    <DataSource id="...some id used to refer to this datasource..." type="DATA_BASE_TABLE">
      ...
      <columnObject schemaName="...db schema name..." columnObjectName="...table name..."/>
      ...
    </DataSource>
    ...
  </dataSources>
  ...
  <calculationViews>
    ...
    <calculationView>
      ...
      <input node="#...id of a DataSource element...">
        ...
        <mapping source="...name of a column used as input..." ... />
        ...
      </input>
      ...
    </calculationView>
    ...
  </calculationViews>
  ...
</Calculation:scenario>

The method for finding the base columns can be summarized as follows:

  1. Get all <DataSource>-elements having a type-attribute with the value "DATA_BASE_TABLE". These elements represent all base tables used by this view. Other types of objects used by this view will have another value for the type-attribute.

    To obtain the schema and table name of the base table, find the <columnObject> child element of the <DataSource>-element. Its schemaName and columnObjectName-attributes contain the database schema and table name of the base table, respectively.

    The <DataSource>-elements have an id-attribute, and its value is used as a unique identifier to refer to the data source.

  2. Find all instances where the base table datasources are used.

    A calculation view is essentially a graph of data transformation steps, each of which takes one or more streams of data as input and turns them into a stream of output data. In the XML document that defines the calculation view, these transformation steps are represented by <calculationView>-elements. These <calculationView>-elements contain one or more <input>-child elements, each of which represents a data stream that is used as input for the transformation step.

    The <input>-elements have a node-attribute. The value of the node-attribute is the value of the id-attribute of whatever element it refers to, prefixed by a hash-sign (#).

    Note that this is a general technique to reference elements within the same XML document. So, in order to find where a <DataSource>-element is used, it is enough to find all elements in the same XML document whose node-attribute references the value of that <DataSource>-element's id-attribute.

  3. Once we have the elements that refer to our <DataSource>-element, we can find out which columns from the data source are used by looking for <mapping>-child elements.

    The <mapping>-elements have a source-attribute, which holds the column-name.

With these steps in mind, the SQL query we need to do on the calculation view parse tree becomes:

select     distinct
           ds_co_schemaName.node_value         schema_name
,          ds_co_columnObjectName.node_value   table_name
,          ds_usage_mapping_source.node_value  column_name
--
-- ds: DataSource elements (Note the WHERE clause)
-- 
from       :tab_dom                            ds
--
-- ds_type: demand that the value of the type-attribute of the DataSource elements equal 'DATA_BASE_TABLE'
--          this ensures we're only looking at base tables.
-- 
inner join :tab_dom                            ds_type
on         ds.node_id                        = ds_type.parent_node_id
and        2                                 = ds_type.node_type
and        'type'                            = ds_type.node_name
and        'DATA_BASE_TABLE'                 = cast(ds_type.node_value as varchar(128))
--
-- ds_co: get the columnObject childelement of the DataSource element.
--        Also, get the schemaName and columnObjectName attributes of that columnObject-element. 
--
inner join :tab_dom                            ds_co
on         ds.node_id                        = ds_co.parent_node_id
and        1                                 = ds_co.node_type
and        'columnObject'                    = ds_co.node_name
inner join :tab_dom                            ds_co_schemaName
on         ds_co.node_id                     = ds_co_schemaName.parent_node_id
and        2                                 = ds_co_schemaName.node_type
and        'schemaName'                      = ds_co_schemaName.node_name
inner join :tab_dom                            ds_co_columnObjectName
on         ds_co.node_id                     = ds_co_columnObjectName.parent_node_id
and        2                                 = ds_co_columnObjectName.node_type
and        'columnObjectName'                = ds_co_columnObjectName.node_name
--
-- ds_id: get the id-attribute of the DataSource element.
--
inner join :tab_dom                            ds_id
on         ds.node_id                        = ds_id.parent_node_id
and        2                                 = ds_id.node_type
and        'id'                              = ds_id.node_name
--
-- ds_usage: find any attributes that refer to the id of the DataSource 
--
inner join :tab_dom                            ds_usage
on         'node'                            = ds_usage.node_name
and        2                                 = ds_usage.node_type
and        '#'||ds_id.node_value             = cast(ds_usage.node_value as nvarchar(128))
--
-- ds_usage_mapping: find any mapping child elements of the node that references the DataSource
--
inner join :tab_dom                            ds_usage_mapping
on         'mapping'                         = ds_usage_mapping.node_name
and        1                                 = ds_usage_mapping.node_type
and        ds_usage.node_id                  = ds_usage_mapping.parent_node_id
--
-- ds_usage_mapping_source: get the source of the mapping elements. These are our base column names.
--
inner join :tab_dom                            ds_usage_mapping_source
on         'source'                          = ds_usage_mapping_source.node_name
and        2                                 = ds_usage_mapping_source.node_type
and        ds_usage_mapping.node_id          = ds_usage_mapping_source.parent_node_id
where      ds.node_type                      = 1
and        ds.node_name                      = 'DataSource'

Putting it all together

To recapitulate, we discussed
  1. How to do general queries for dependencies using OBJECT_DEPENDENCIES, but that you need to query _SYS_REPO.ACTIVE_OBJECTCROSSREF to find out which Attribute views are used by Analytic views.
  2. How to find the model XML code underlying our information views from the _SYS_REPO.ACTIVE_OBJECT table.
  3. How to parse XML, and how to query the parse tree for elements and attributes
  4. How the XML documents for information views are structured, and how to find base columns used in their models

With all these bits and pieces of information, we can finally create a procedure that fulfills the original requirement to obtain the base columns used by our information views. This is available as the p_get_view_basecols stored procedure. Here is its signature:

create PROCEDURE p_get_view_basecols (
  -- package name pattern. Used to match packages containing analytic, attribute or calculation views. Can contain LIKE wildcards.
  p_package_id    nvarchar(255)
  -- object name pattern. Used to match name of analytic, attribute or calculation views. Can contain LIKE wildcards.
, p_object_name   nvarchar(255) default '%'
  -- object suffix pattern. Can be used to specify the type of view. Can contain LIKE wildcards.
, p_object_suffix nvarchar(255) default '%'
  -- flag to indicate whether to recursively analyze analytic, attribute or calculation views on which the view to be analyzed depends. 
  -- 0 means only look at the given view, 1 means also look at underlying views.
, p_recursive     tinyint default 1
  -- result table: base columns on which the specified view(s) depends.
, out p_cols table (
    -- schema name of the referenced base column
    schema_name nvarchar(128)
    -- table name of the referenced base column
  , table_name  nvarchar(128)
    -- column name of the referenced base column
  , column_name nvarchar(128)
    -- list of view names that depend on the base column
  , views       nclob
  )
)

Obtaining the list of base columns on which our application depends is now as simple as calling the procedure, like so:

call p_get_view_basecols(
  -- look in our application package (and its subpackages)
  'our.application.package.%' 
  -- consider all information views
, '%'
  -- consider all types of information views
, '%'
  -- consider also information views upon which our information views depend
, 1
  -- put the results into our output table
, ?
);
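
If you want to process the result further in SQL - for example, to keep only the columns that live in a particular schema - you can capture the output in a table variable. The snippet below is just a sketch: it again assumes support for anonymous blocks, and the schema name is a placeholder:

do begin
  call p_get_view_basecols('our.application.package.%', '%', '%', 1, view_cols);

  -- view_cols now holds the base columns; filter, sort or join them as needed
  select   *
  from     :view_cols
  where    schema_name = 'MY_ERP_SCHEMA'   -- placeholder schema name
  order by table_name, column_name;
end;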

Finally

I hope you enjoyed this post! Feel free to leave a comment to share your insights or to give feedback.

Please note that all source code for this topic is freely available as open source software in our just-bi/hades github repository. You are free to use, modify and distribute it, as long as you respect the copyright notice.

We welcome contributions! You can contribute in many ways:

  • Simply use the procedures. Give us feedback. You can do so by leaving a comment on this blog.
  • Spread the word: tell your colleagues, and maybe tweet or write a blog post about it. Please use hashtag #justbihades
  • Share your requirements. Create an issue to ask for more features so we can improve our software.
  • Fork it! Send us pull requests. We welcome your contribution and we will fully attribute you!

Thursday, September 22, 2016

SAP UI5: Internationalization for each view - Addendum for Nested Views

After writing my previous post on SAP UI5: Per-view Internationalization, I found out that the solution does not work completely as intended when using nested views.

If you're using nested views, each view would still have its own set of unique texts that are entirely specific to just that view, and for those cases, the solution as described still works. But there might also be a number of texts that are shared by the outer view and one or more of the inner views. It would make sense to be able to define those texts in the i18n model at the level of the outer view, and have the i18n models of the nested views pick up and enhance the i18n model of the outer view.

Problem: onInit() is not the right place to initialize the i18n model

The problem with the original solution is that the onInit() method of the nested views gets called before that of the outer view. It makes sense - the whole can be initialized only after its parts have been initialized. But this does mean that the onInit() method is not the right place to initialize the i18n model.

Please consider these lines from the _initI18n() method that I proposed to initialize the i18n model:

      //Use the bundledata to create or enhance the i18n model
      var i18nModel = this.getModel(i18n);
      if (i18nModel) {
        i18nModel.enhance(bundleData);
      }
      else {
        i18nModel = new ResourceModel(bundleData);
      }

      //set this i18n model.
      this.setModel(i18nModel, i18n);

Suppose this code runs as part of a nested view's onInit(). The call to getModel() will try to acquire the i18n model that is already set, or else the i18n model of the owner component. That's how the getModel() method in the base controller works (please see my previous blog post to review that code).

Now, at this point, no i18n model has been set for the view, and so the owner component's i18n model will be picked up. The i18n model of the outer view will however never be found, since the onInit() of the controller of the outer view has not been called yet (and therefore, its _initI18n() has not been called either).

Solution: Use onBeforeRendering() rather than onInit()

It turns out that this can be solved by calling the _initI18n() method in the onBeforeRendering() method rather than in the onInit() method. While nested views are initialized before the outer view, it's the other way around for the rendering process. This makes sense: rendering the outer view requires rendering the views nested inside it. So the onBeforeRendering() method of the outer view will be called before the onBeforeRendering() method of its nested views. (It's the other way around for onAfterRendering(): the outer view is done rendering only after its nested views are rendered.)

Ensure i18n initialization occurs only once

There is one extra consideration when moving the i18n initialization from onInit() to onBeforeRendering(): views may go through multiple rendering cycles, whereas onInit() will only run once. If there are repeated rendering cycles, we do not want to reinitialize the i18n model, so we add a flag that ensures the i18n model is initialized only once:

    ...
    onInit: function(){
      this._initI18n();
    },
    onBeforeRendering: function(){
      this._initI18n();
    },
    _i18nInitialized: false,
    _initI18n: function(){
      if (this._i18nInitialized === true) {
        return;
      }
      var i18n = "i18n";

      //create bundle descriptor for this controller's i18n resource data
      var metadata = this.getMetadata();
      var nameParts = metadata.getName().split(".");
      nameParts.pop();
      nameParts.push(i18n);
      nameParts.push(i18n);
      var bundleData = {bundleName: nameParts.join(".")};

      //Use the bundledata to create or enhance the i18n model
      var i18nModel = this.getModel(i18n);
      if (i18nModel) {
        i18nModel.enhance(bundleData);
      }
      else {
        i18nModel = new ResourceModel(bundleData);
      }

      //set this i18n model.
      this.setModel(i18nModel, i18n);
      this._i18nInitialized = true;
    },
    ...

Overriding onBeforeRendering() in extensions of the base controller

And of course, when extending the base controller, you'll need to remember to call the onBeforeRendering() method of the parent class when overriding the onBeforeRendering() method:

sap.ui.define([
  "just/bi/apps/components/basecontroller/BaseController"
], function(Controller){
  "use strict";
  var controller = Controller.extend("just.bi.apps.components.mainpanel.MainPanel", {
    onBeforeRendering: function(){
      Controller.prototype.onBeforeRendering.call(this);
      ...
    }
  });
  return controller;
});

Finally

I hope you enjoyed this addendum. Feel free to share your insights if you think there is a better way to handle i18n.

Sunday, September 18, 2016

SAP UI5: Per-view Internationalization

NOTE: There is an addendum to this blog post that suggests a number of improvements. You can check out the addendum here: SAP UI5: Internationalization for each view - Addendum for Nested Views.

Quite recently, I dove into SAP UI5 development. To educate myself, I followed a workshop and I used the Walkthrough.

During my explorations, I ran into a particular issue which I didn't see very readily addressed. I also found a solution for this particular issue, and even though I still have a ton to learn, I think it is worth sharing. So, here goes:

The Feature: Translatable texts and the i18n model

One of the SAP UI5 features highlighted in the Walkthrough is treatment of translatable texts. In the walkthrough this is realized by setting up a resource model, the i18n model.

The i18n model is sourced from i18n .properties files, which are essentially lists of key/value pairs, one per line, separated by an equals sign:

# Each line is a key=value pair.
greetingAction=Say Hello
greeting=Hello {0}!

To actually set up an i18n model using these texts, you can explicitly instantiate a sap.ui.model.resource.ResourceModel:

sap.ui.define([
    "sap/ui/core/mvc/Controller",
    "sap/ui/model/resource/ResourceModel"
], function(Controller, ResourceModel){
    "use strict";
    return Controller.extend("my.app.App", {
        onInit: function(){
            var i18nModel = new ResourceModel({
                bundleName: "just.bi.apps.JustBiApp.i18n.i18n"
            });
            this.getView().setModel(i18nModel, "i18n");
        }
    });
});

Or, you can have your application instantiate the model by listing it in the models property of the sap.ui5 entry in the manifest.json application descriptor file:

"models": {
 "i18n": {
   "type": "sap.ui.model.resource.ResourceModel",
    "settings": {
      "bundleName": "just.bi.apps.JustBiApp.i18n.i18n"
    }
  }
}

In many cases, the text is required for the static labels of ui elements like input fields, menus and so on. Inside a view, static texts may be retrieved from the i18n model through special data binding syntax, like so:

<!-- Button text will read "Say Hello" -->
<Button text="{i18n>greetingAction}"/>

Texts may also be retrieved programmatically inside controller code by calling the .getText() method on the resource bundle object. The resource bundle object may be obtained from the i18n resource model with the getResourceBundle() getter method:

var bundle = this.getModel("i18n").getResourceBundle();
var text = bundle.getText("greeting", ["World"]);      // text has value "Hello World!"

Now, the cool thing is that you can write a separate i18n .properties file for each locale that you want to support. The framework detects which locale is required by the client and uses that to find the i18n files that best match the client's locale.

The file name is used to identify to which language and/or locale the texts inside the file apply. For example, you'd put the German texts in an i18n_de.properties file, and the English texts in an i18n_en.properties file, and if you want to distinguish between British and American English, you'd create both an i18n_en_GB.properties and an i18n_en_US.properties file.

(I haven't found out exactly which standard the SAP UI5 i18n .properties files conform to, but from what I've seen so far I think it's safe to assume that you can use the two-letter lowercase ISO 639-1 code for the language and the two-letter uppercase ISO 3166-1 code for the country.)
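
For example, a German counterpart of the texts shown earlier would go into an i18n_de.properties file next to the default i18n.properties file (the translations here are just for illustration):

# i18n_de.properties
greetingAction=Sag Hallo
greeting=Hallo {0}!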


The Problem: One i18n Model for entire application

Now, the walkthrough demonstrates the feature by adding one i18n model for the entire application so that it becomes available in any of the views that make up the application. I appreciate that the walkthrough is not the right place to cover all kinds of more advanced scenarios, so I can understand why it settles for just one application-wide i18n model.

However, I can't help feeling this is not an ideal approach. The main reason is that it seems at odds with the fact that many texts are specific to just one particular view. This challenges both the development workflow and the reusability of our application components:


  • Each time you create or modify a particular view, you also have to edit the global i18n .properties files. To keep things manageable, you will probably invent some kind of view-specific prefix for the keys pertaining to that view, and you'll probably end up creating a view-specific block in that i18n file. At some point, you'll end up with a lot of lines per i18n file, which is not very maintainable.
  • Suppose you want to reuse a particular view in another application. Contrary to the practice used in the Walkthrough, I like to keep a view and its associated controller together, in a folder separate from any other view and controller. This way I can easily copy, move or remove the things that belong together. Except that the texts, which also belong to that view/controller, are in the global i18n .properties file, and need to be managed separately.

The Solution: Keep per-view i18n files near view and controller code

The solution I found is to create an i18n subfolder beneath the folder that contains my Controller and View. Since I already keep each associated view and controller together, and separate from the other views and controllers, this approach makes sense: it's just one step further in keeping code and resources that depend directly on each other physically together.


So, this is what my file and folder structure looks like:

(Image: FolderStructure.png - the project's file and folder structure)

So, the webapp folder is the root of the sap ui5 project. The components folder is where I keep subfolders for each functional unit (i.e. View+Controller+resources) of the application. In the picture, you see two such subfolders, basecontroller (more about that below) and mainpanel.

The mainpanel folder is the one that contains an actual component of my application - a MainPanel View, and its controller (in MainPanel.view.xml and MainPanel.controller.js respectively). Here we also find the i18n folder specific to this view, and inside are the i18n .properties files (one for each locale we need to support).

In order to load and apply the view-specific i18n .properties files, I'm using a generic extension of sap.ui.core.mvc.Controller which loads the "local", view-specific i18n resource bundle. This extension is called BaseController and lives in the basecontroller folder. Here's the code:

sap.ui.define([
  "sap/ui/core/mvc/Controller",
  "sap/ui/model/resource/ResourceModel"
], function(Controller, ResourceModel){
  "use strict";
  var controller = Controller.extend("just.bi.apps.components.basecontroller.BaseController", {
    onInit: function(){
      this._initI18n();
    },
    _initI18n: function(){
      var i18n = "i18n";

      //create bundle descriptor for this controller's i18n resource data
      var metadata = this.getMetadata();
      var nameParts = metadata.getName().split(".");
      nameParts.pop();
      nameParts.push(i18n);
      nameParts.push(i18n);
      var bundleData = {bundleName: nameParts.join(".")};

      //Use the bundledata to create or enhance the i18n model
      var i18nModel = this.getModel(i18n);
      if (i18nModel) {
        i18nModel.enhance(bundleData);
      }
      else {
        i18nModel = new ResourceModel(bundleData);
      }

      //set this i18n model.
      this.setModel(i18nModel, i18n);
    },
    getModel: function(modelname){
      var view = this.getView();
      var model = view.getModel.apply(view, arguments);
      if (!model) {
        var ownerComponent = this.getOwnerComponent();
        if (ownerComponent) {
          model = ownerComponent.getModel(modelname);
        }
      }
      return model;
    },
    setModel: function(model, modelName){
      var view = this.getView();
      view.setModel.apply(view, arguments);
    },
  });
  return controller;
});

Note how the BaseController initializes the i18n model by calling the _initI18n() method. In this method we extract the class name of the this object from its metadata (using the .getName() getter on the metadata obtained using the .getMetadata() getter), and we pop off the unqualified class name to obtain its namespace. We then add the string "i18n" twice - once for the folder, and once for the files inside it. We use this to create the bundle name which we use to instantiate the actual ResourceModel.

Before setting that model to the controller's view, we check if there is already an i18n model set using getModel(). This is a utility method that gets the model from this controller's associated view, or of the component that "owns" the view and this controller.

If an i18n model is already available, we enhance that by calling the .enhance() method on it, rather than replacing it. This way, any texts defined at a higher level are still available in this controller and view. This gives us a functional i18n model, which we then set using setModel(), which simply calls setModel() on the view associated with this controller.

To actually use this BaseController, we extend it when creating a "real" controller:

sap.ui.define([
  "just/bi/apps/components/basecontroller/BaseController"
], function(Controller){
  "use strict";
  var controller = Controller.extend("just.bi.apps.components.mainpanel.MainPanel", {
    onInit: function(){
      Controller.prototype.onInit.call(this);
      ...
    }
  });
  return controller;
});

Note that if that real controller has its own onInit() method, we need to first call the onInit() method of BaseController, or rather, of whatever class we're extending. Fortunately, since we are extending it we already have a reference to it (in the snippet above, it's the Controller parameter injected into our definition), so we call its onInit() method via the prototype while using the controller that is currently being defined (this) as scope.

Finally

I hope you enjoyed this article! I hope it will be of use to you. If you have an alternative solution - quite possibly, a better one - then please feel free to leave a comment and point out your solution. I'm eager to hear and learn so don't hesitate to share your opinion and point of view.

ADDENDUM: Nested views

It turns out the approach described in this post does not work well when using nested views. Fortunately, a simple improvement of the ideas in this post solves that problem. The approach is described in my next blog post, SAP UI5: Internationalization for each view - Addendum for Nested Views.

Wednesday, May 04, 2016

ODXL - A generic Data Export Layer for SAP/HANA based on OData

I'm very pleased to be able to announce the immediate availability of the Open Data Export Layer (ODXL) for SAP/HANA!

Executive summary

ODXL is a framework that provides generic data export capabilities for the SAP/HANA platform. ODXL is implemented as an xsjs web service that understands OData web requests, and delivers a response by means of a pluggable data output handler. Developers can use ODXL as a back-end component, or even as a global instance-wide service to provide clean, performant and extensible data export capabilities for their SAP/HANA applications.

Currently, ODXL provides output handlers for comma-separated values (csv) as well as Microsoft Excel output. However, ODXL is designed so that developers can write their own response handlers and extend ODXL to export data to other output formats according to their requirements.

ODXL is provided by Just BI to the SAP/HANA developer community as open source software under the terms of the Apache 2.0 License. This means you are free to use, modify and distribute ODXL. For the exact terms and conditions, please refer to the license text.

The source code is available on github. Developers are encouraged to check out the source code and to contribute to the project. You can contribute in many ways: we value any feedback, suggestions for new features, filing bug reports, or code enhancements.

If you require professional support for ODXL, please contact Just-BI for details.

What exactly is ODXL?

ODXL started as an in-house project at the Just-BI department of custom development. It was born from the observation that the SAP/HANA web applications that we develop for our customers often require some form of data export, typically to Microsoft Excel. Rather than creating this type of functionality again for each project, we decided to invest some time and effort to design and develop this solution in such a way that it can easily be deployed as a reusable component. And preferably, in a way that feels natural to SAP/HANA xs platform application developers.

What we came up with is an xsjs web service that understands requests that look and feel like standard OData GET requests, but which returns the data in some custom output format. ODXL was designed to be easily extensible so that developers can build their own modules that create and deliver the data in whatever output format suits their requirements.

This is illustrated in the high-level overview below:



For customers of Just-BI, there is an immediate requirement to get Microsoft Excel output. So, we went ahead and implemented output handlers for .xlsx and .csv formats, and we included those in the project. This means that ODXL supports data export to the .xlsx and .csv formats right out of the box.

However, support for any particular output format is entirely optional and can be controlled by configuration and/or extension:
  • Developers can develop their own output handlers to supply data export to whatever output format they like.
  • SAP/HANA Admins and/or application developers can choose to install only those output handlers they require, and configure how Content-Type headers and OData $format values map to output handlers.

So ODXL is OData? Doesn't SAP/HANA support OData already?

The SAP/HANA platform provides data access via the OData standard. This facility is very convenient for object-level read- and write access to database data for typical modern web applications. In this scenario, the web application would typically use asynchronous XML Http requests, and data would be exchanged in either Atom (a XML dialect) or JSON format.

ODXL's primary goal is to provide web applications with a way to export datasets in the form of documents. Data export tasks typically deal with data sets that are quite a bit larger than the ones accessed from within a web application. In addition, a data export document might very well comprise multiple parts - in other words, it may contain multiple datasets. The typical example is exporting multiple lists of different items from a web application to a workbook containing multiple spreadsheets with data. In fact, the concrete use case from which ODXL originated was the requirement to export multiple datasets to Microsoft Excel .xlsx workbooks.

So, ODXL is not OData. Rather, ODXL is complementary to SAP/HANA OData services. That said, the design of ODXL does borrow elements from standard OData.

OData Features, Extensions and omissions

ODXL GET requests follow the syntax and features of OData standard GET requests. Here's a simple example to illustrate the ODXL GET request:
GET "RBOUMAN"/"PRODUCTS"?$select=PRODUCTCODE, PRODUCTNAME& $filter=PRODUCTVENDOR eq 'Classic Metal Creations' and QUANTITYINSTOCK gt 1&$orderby=BUYPRICE desc&$skip=0&$top=5
This request is built up like so:
  • "RBOUMAN"/"PRODUCTS": get data from the "PRODUCTS" table in the database schema called "RBOUMAN".
  • $select=PRODUCTCODE, PRODUCTNAME: Only get values for the columns PRODUCTCODE and PRODUCTNAME.
  • $filter=PRODUCTVENDOR eq 'Classic Metal Creations' and QUANTITYINSTOCK gt 1: Only get products from the vendor 'Classic Metal Creations' that have more than one item in stock.
  • $orderby=BUYPRICE desc: Order the data from highest price to lowest.
  • $skip=0&$top=5: Only get the first five results.
For more detailed information about invoking the ODXL service, check out the section about the sample application. The sample application offers a very easy way to use ODXL for any table, view, or calculation view you can access and allows you to familiarize yourself in detail with the URL format.

In addition, ODXL supports the OData $batch POST request to support export of multiple datasets into a single response document.

The reasons to follow OData in these respects are quite simple:
  • OData is simple and powerful. It is easy to use, and it gets the job done. There is no need to reinvent the wheel here.
  • ODXL's target audience, that is to say, SAP/HANA application developers, is already familiar with OData. They can integrate ODXL into their applications with minimal effort, and maybe even reuse the code they use to build their OData queries to target ODXL.
ODXL does not follow the OData standard with respect to the format of the response. This is a feature: OData only specifies Atom (an XML dialect) and JSON output, whereas ODXL can supply any output format. ODXL can support any output format because it allows developers to plug-in their own modules called output handlers that create and deliver the output.

Currently ODXL provides two output handlers: one for comma-separated values (.csv), and one for Microsoft Excel (.xlsx). If that is all you need, you're set. And if you need some special output format, you can use the code of these output handlers to see how it is done and then write your own output handler.

ODXL does respect the OData standard with regard to how the client can specify what type of response they would like to receive. Clients can specify the MIME-type of the desired output format in a standard HTTP Accept: request header:
  • Accept: text/csv specifies that the response should be returned in comma separated values format.
  • Accept: application/vnd.openxmlformats-officedocument.spreadsheetml.sheet specifies that the response should be returned in open office xml workbook format (Excel .xlsx format).
Alternatively, they can specify a $format=<format> query option, where <format> identifies the output format:
  • $format=csv for csv format
  • $format=xlsx for .xlsx format
Note that a format specified by the $format query option will override any format specified in an Accept:-header, as per OData specification.
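
For example, adding a $format option to the GET request shown earlier delivers the same dataset as an Excel workbook, regardless of any Accept: header the client may have sent:

GET "RBOUMAN"/"PRODUCTS"?$select=PRODUCTCODE, PRODUCTNAME&$filter=PRODUCTVENDOR eq 'Classic Metal Creations' and QUANTITYINSTOCK gt 1&$orderby=BUYPRICE desc&$skip=0&$top=5&$format=xlsx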

ODXL admins can configure which MIME-types will be supported by a particular ODXL service instance, and how these map to pluggable output handlers. In addition, they can configure how values passed for the $format query option map to MIME-types. ODXL comes with a standard configuration with mappings for the predefined output handlers for .csv and .xlsx output.

On the request side of things, most of OData's features are implemented by ODXL:
  • The $select query option to specify which fields are to be returned
  • The $filter query option allows complex conditions restricting the returned data. OData standard functions are implemented too.
  • The $skip and $top query options to export only a portion of the data
  • The $orderby query option to specify how the data should be sorted
ODXL currently does not offer support for a number of other OData features. These may be implemented in the future; for now, we feel the effort to implement them and adequately map their semantics to ODXL may not be worth the trouble. However, an implementation can surely be provided should there be sufficient interest from the community.

Installation

Using ODXL presumes you already have a SAP/HANA installation with a properly working xs engine. You also need HANA Studio, or Eclipse with the SAP HANA Tools plugin installed. The steps are a little bit different, depending on whether you just want to use ODXL, or whether you want to actively develop the ODXL project.

Here are the steps if you just want to use ODXL, and have no need to actively develop the project:
  1. In HANA Studio/Eclipse, create a new HANA xs project. Alternatively, find an existing HANA xs project.
  2. Find the ODXL repository on github, and download the project as a zipped folder. (Select a particular branch if you desire so; typically you'll want to get the master branch)
  3. Extract the project from the zip. This will yield a folder. Copy its contents, and place them into your xs project directory (or one of its sub directories)
  4. Activate the new content.
After taking these steps, you should now have a working ODXL service, as well as a sample application. The service itself is in the service subdirectory, and you'll find the sample application inside the app subdirectory.

The service and the application are both self-contained xs applications, and should be completely independent in terms of resources. The service does not require the application to be present, but obviously, the application does rely on being able to call upon the service.

If you only need the service, for example, because you want to call it directly from your own application, then you don't need the sample application. You can safely copy only the contents of the service directory and put those right inside your project directory (or one of its subdirectories) in that case. But even then, you might still want to hang on to the sample application, because you can use that to generate the web service calls that you might want to do from within your application.

If you want to hack on ODXL then you might want to fork or clone the ODXL github repository. If you do this inside a SAP/HANA xs project, or if you create a project pointing to that location, you can then deploy it to SAP/HANA, and send us pull requests in case you want to contribute your changes back to the project.

Getting started with the sample application

To get up and running quickly, we included a sample web application in the ODXL project. The purpose of this sample application is to provide an easy way to evaluate and test ODXL.

The sample application lets you browse the available database schemas and queryable objects: tables and views, including calculation views (or at least, their SQL queryable runtime representation). After making the selection, it will build up a form showing the available columns. You can then use the form to select or deselect columns, apply filter conditions, and/or specify any sorting order. If the selected object is a calculation view that defines input parameters, then a form will be shown where you can enter values for those too.

In the meantime, as you're entering options into the form, a textarea will show the URL that should be used to invoke the ODXL service. If you like, you can manually tweak this URL as well. Finally, you can use one of the download links to immediately download the result corresponding to the current URL in either .csv or .xlsx format.

Alternatively, you can hit a button to add the URL to a batch request. When you're done adding items to the batch, you can hit the download workbook button to download as single .xlsx workbook, containing one worksheet for each dataset in the batch.

What versions of SAP/HANA are supported?

We initially built and tested ODXL on SPS9. The initial implementation used the $.hdb database interface, as well as the $.util.Zip builtin.

We then built abstraction layers for both database access and zip support to allow automatic fallback to the $.db database interface, and to use a pure javascript implementation of the zip algorithm based on Stuart Knightley's JSZip library. We tested this on SPS8, and everything seems to work fine there.

We have not actively tested earlier SAP/HANA versions, but as far as we know, ODXL should work on any earlier version. If you find that it doesn't, then please let us know - we will gladly look into the issue and see if we can provide a solution.

Why Open Source? What's the Business Model? What's the catch?

For Just BI, Open Source software is not a business model, but a development model. While some companies build a successful business model around selling custom code, this is currently not Just-BI's primary goal. Rather, Just-BI is a consulting company that focuses mainly on Business Intelligence solutions around the SAP ecosystem. Our areas of expertise include Business Objects, SAP BW, SAP HANA, as well as custom BI (web) applications. Helping customers by providing solutions for their business problems is Just-BI's primary concern - not selling code.

However, we do acknowledge that sometimes, custom code plays an essential role in building a business solution for our customers. In these cases, we will gladly help our customers to design, build and deploy such solutions. But even in these cases we will try to look for standard component toolkits, like SAP UI5, or frameworks like Angular as a basis for our work.

The urge to standardize on familiar, well known toolkits and libraries hardly needs justification. In the end, customers don't have the end goal of acquiring and owning too many custom coded solutions, because today's hot new custom solution is tomorrow's legacy. The more a customer relies on custom code, the harder it will become to maintain and to move forward.

Sometimes, a particular building block that we need for applications may not be publicly available already. If such a building block is sufficiently generic (i.e., not bound to any particular customer) then we have every reason to want that to become a standard. For a generic and reusable component like ODXL, we believe that an open source model is the right way to do that.

We think that an open source development model will help maintain and advance ODXL. By releasing and developing in the open, we have potentially more eyes to scrutinize our code, find bugs, suggest features, and so on. In addition, we hope our customers will feel more confident embracing an open source solution, since they need not be locked into our company alone for support and ongoing development.

How to Contribute

If you want to, there are many different ways to contribute to ODXL.
  1. If you want to suggest a new feature, or report a defect, then please use the github issue tracker.
  2. If you want to contribute code for a bugfix or a new feature, then please send a pull request. If you are considering contributing code, we urge you to first create an issue to open up discussion with fellow ODXL developers on how to best scratch your itch.
  3. If you are using ODXL and you like it, then consider spreading the word - tell your co-workers about it, write a blog post, a tweet, or a Facebook post.
Thank you in advance for your contributions!

Finally

I hope you enjoyed this post, and that ODXL will be useful to you. If so, I look forward to your feedback on how it works for you and how we might improve it. Thanks for your time!

Sunday, March 20, 2016

Installing the Open Source Xavier XML/A client on the Jedox Premium OLAP Suite

Jedox is a software vendor that specializes in OLAP services and solutions. The company has been around quite a while and is probably best known for their PALO MOLAP engine and the matching add-in for Microsoft Excel.

Jedox's flagship product, Jedox Premium, comprises the Palo MOLAP engine, APIs, a REST server, an ETL server, and client tools. It also comes with an MDX interpreter and an XML for Analysis server. An interesting tidbit is that the MDX layer is not considered native: Jedox's own clients use a lower-level API, or address the engine via the REST service.

In this blog post I will explain how to install and configure Xavier, the Open Source, browser-based ad-hoc query and analysis tool, so you can use it with Jedox. A video of the process is embedded below:


Here's a written list of instructions to get up and running with Xavier and Jedox:
  1. Download Jedox Premium. Run the downloaded installer to actually install the product. By default, it will be installed in C:\Program Files (x86)\Jedox\Jedox Suite. In the remainder of this post, I will refer to this directory as "the Jedox Suite directory".
  2. Download xavier.zip. Unpack the zip. A xavier directory will be extracted.
  3. Stop the JedoxSuiteHttpdService. If you don't know about Windows services, then look here.
  4. Copy the xavier directory that you extracted from xavier.zip into the Jedox Suite\httpd\app\docroot directory.
  5. Open the Jedox Suite/httpd/conf/httpd.conf file in a text editor. You should probably make a backup copy of the httpd.conf file before editing it so you can always revert your changes.
  6. Add a line to load the HTTP proxy module. To do that, search the httpd.conf file for a bunch of lines that start with LoadModule. Look for a line that reads:

    LoadModule proxy_http_module modules/mod_proxy_http.so

    In my installation, the line is already present, like this:
    <IfDefine JDX_DEV>
    LoadModule log_config_module modules/mod_log_config.so
    LoadModule proxy_http_module modules/mod_proxy_http.so
    LoadModule setenvif_module modules/mod_setenvif.so
    </IfDefine>
        
    Now, what you'll want to do is cut this line out of the <IfDefine JDX_DEV> block and put it outside that block, for example right before it, like this:
    LoadModule proxy_http_module modules/mod_proxy_http.so
    <IfDefine JDX_DEV>
    LoadModule log_config_module modules/mod_log_config.so
    LoadModule setenvif_module modules/mod_setenvif.so
    </IfDefine>
        
  7. Add a proxy configuration so that web applications deployed on the Apache HTTP server can access the Jedox XML/A service as if it lives in the same domain as the web application. To do that, add a Location directive at the end of the httpd.conf file, like this:
    <Location /xavier/Xmla>
      ProxyPass http://localhost:4242/xmla/
      ProxyPassReverse http://localhost:4242/xmla/
      SetEnv proxy-chain-auth
    </Location>
        
    This allows a web application on the Apache HTTP server to access the XML/A service via the URL /xavier/Xmla. By default, the Jedox XML/A service lives at http://localhost:4242/xmla. You can verify this by cross-checking the configuration in Jedox Suite\odbo\config.ini: the values for MDXAddress and MDXPort should match the server and port in the URLs configured for ProxyPass and ProxyPassReverse.
  8. Save the changes to your httpd.conf file, and start the JedoxSuiteHttpdService. If the service starts, you should be good to go. If it doesn't, check the Jedox Suite/log/apache_error.log file and see if you can find some information there that can help you troubleshoot your problem.
If all went well, you should now be able to navigate to http://localhost/xavier/resources/html/index.html and see the Xavier welcome screen. Note that this assumes the Jedox HTTP server is running on its default port (80). If you chose another port for the HTTP server when installing Jedox, the URL for Xavier has to be amended accordingly. For example, I chose port 8181, and hence my URL is http://localhost:8181/xavier/resources/html/index.html instead.

If you're in doubt about which port you chose for your Jedox HTTP server, you can look it up in the Jedox Suite/httpd/conf/httpd.conf file. Look for a line that starts with Define JDX_PORT_HTTP; the port is specified right after that, enclosed in double quotes.
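
For example, if you chose port 8181 when installing Jedox, the relevant line would look something like this:

    Define JDX_PORT_HTTP "8181"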

Saturday, March 19, 2016

Installing the Open Source Xavier XML/A client on icCube OLAP suite

icCube is a relatively young Swiss company that specializes in OLAP software products. Their flagship product is the icCube Suite, which provides an OLAP server, an XML for Analysis service, and a bunch of browser-based client tools for modeling cubes and dimensions, and for querying, reporting, and visualizing OLAP data.

When you download the icCube trial edition, you get a Java web server with an already deployed instance of the icCube Java web application. This setup makes it super easy to install browser-based third-party XML/A clients for icCube.

In this blog post I want to provide you with the instructions to deploy the ad-hoc OLAP query tool called Xavier directly on your icCube server. This allows you to do ad-hoc, drag 'n drop style reporting on any data exposed by the icCube server.

If you like, you can watch the video below to see how it's done. Detailed, step-by-step instructions appear in writing right below the video.



  • Go to the icCube download site. Find the appropriate version for your operating system, and click it to download the installer.
  • After downloading the icCube installer, run it. This will install icCube for you.
  • Download xavier.zip.
  • Unpack xavier.zip. Move the extracted xavier directory to the web directory below the icCube home directory.
  • Start icCube.
If all goes well, you should now be able to navigate to Xavier. If you installed icCube locally and kept the default ports, this is the link: http://localhost:8282/xavier/resources/html/index.html

I hope this was useful for you!

Need a Mondrian .WAR? Check out XMondrian.

To whom it may concern, this is a quick note to bring the xmondrian project to your attention.

Introduction: Open Source OLAP, Mondrian, Pentaho, and JasperSoft

Mondrian is the open source OLAP engine. Mondrian provides:
  • a multi-dimensional view of a relational database (ROLAP)
  • an MDX query engine
  • clever, advanced caching layers to speed up OLAP query performance (making it a ROLAP/MOLAP hybrid, i.e., HOLAP)
  • standards-compliant OLAP data access through the XML for Analysis (XML/A) and olap4j APIs
Mondrian was designed and invented by Julian Hyde, who acted as technical and architectural lead of the Mondrian project for many years.

Mondrian was adopted by Pentaho, and is included in the Pentaho BI Stack as Pentaho Analysis Services. Mondrian is also the OLAP engine that ships with the Tibco/JasperSoft Reporting server, and with Meteorite BI's Saiku product.

Running Mondrian Standalone

While Pentaho, Jaspersoft, and Meteorite all do a good job of integrating Mondrian inside their respective BI servers, some people would like to run just Mondrian directly in their own Java servers. The Mondrian project used to make that quite easy, since it shipped a .WAR (web archive) file containing Mondrian itself, documentation, sample cubes, and the JPivot Mondrian client.

Unfortunately, the Mondrian project stopped supporting the .WAR and the sample content. This happened a while ago already, but there are still people who are only finding out about it now. This might have to do with the fact that the Mondrian documentation has not been very well maintained and still refers to the .WAR as if it were part of the Mondrian project.

Introducing XMondrian

I felt the need for a Mondrian .WAR myself. The main reason is that I created a couple of OLAP client tools, and I want to provide potential users with a quick and easy way to check them out. So, I decided to pack them all in a .WAR, together with Mondrian, the Foodmart sample cube, and an embedded dataset.

The result is called xmondrian which you can find on github.

Getting started with XMondrian

Getting started with XMondrian is easy:
  • Download the .WAR file
  • Deploy it to your Java server. In theory, the process depends on which web server you are running, but I tried Apache Tomcat, Jetty, and Tiny Java Web Server, and for all of these you can simply copy the .WAR to the webapps directory
  • Find the XMondrian homepage by navigating your browser to the xmondrian webapp. For example, if you installed Tomcat or Jetty locally, using the default port of 8080, then http://localhost:8080/xmondrian will bring you there.

What's inside XMondrian

Once you're on the XMondrian homepage, you can find more information about what's inside, but I'll summarize below:
  • Mondrian 3.12
  • A web.xml to instantiate and hook up the MondrianXmlaServlet. After installation of xmondrian, your webserver can receive XML/A requests via /xmondrian/xmla
  • HSQLDB embedded database engine
  • Sample datasets and schemas: both the Foodmart and Steelwheels datasets are included as embedded HSQLDB databases in a .jar file. There are predefined Mondrian schema files for each dataset as well, which specify how these databases are mapped to cubes, measures, dimensions, etc. Finally, there are datasource files that tell Mondrian to connect to the sample databases and to use the respective schema files
  • xmla4js - a JavaScript XML/A client library. You can use this in browser-based web applications to communicate with Mondrian via the XML/A protocol (see the sketch right after this list). Xmla4js ships with code samples as well as API documentation
  • Client Applications
    • XMLash - XML/A Shell: an interactive MDX command-line interface for inspecting Mondrian schema objects, and for creating and running MDX queries (see a demonstration)
    • Xavier - XML/A Visualizer: an interactive OLAP ad-hoc reporting and charting tool with a graphical user interface
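
As a quick illustration of the xmla4js library mentioned in the list above, here is a minimal sketch of how a page served from the xmondrian webapp could list the cubes that Mondrian exposes through the /xmondrian/xmla endpoint. This is only a sketch: the DataSourceInfo and Catalog values are assumptions based on the Foodmart sample, so check the bundled xmla4js samples and API documentation for the details.

    // Minimal sketch (not part of xmondrian itself): list cube names using xmla4js.
    // Assumes the xmla4js script has been loaded via a <script> tag.
    var xmla = new Xmla({
        url: "/xmondrian/xmla", // the XML/A endpoint provided by the MondrianXmlaServlet
        async: false            // synchronous calls keep the example short
    });
    var cubes = xmla.discoverMDCubes({
        properties: {
            DataSourceInfo: "FoodMart", // assumed datasource name
            Catalog: "FoodMart"         // assumed catalog name
        }
    });
    while (cubes.hasMoreRows()) {
        console.log(cubes.fieldVal("CUBE_NAME"));
        cubes.nextRow();
    }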

The XML/A Shell Application


The XML/A Visualizer Application


Finally

I hope this post was useful to you. Please let me know how you get along with the xmondrian .WAR. I'm open to suggestions and I would love to collaborate to make xmondrian better. Please use the github issue tracker to provide your feedback. Thanks for your time and interest.
