MicroStrategy Architecture

September 13, 2010

MicroStrategy has the following 3 types of architecture:

  • 2-Tier Architecture

In the 2-tier architecture, MicroStrategy Desktop itself queries the data warehouse and the metadata directly, without the intermediate tier of the Intelligence Server.

  • 3-Tier Architecture

The 3-tier architecture places an Intelligence Server between MicroStrategy Desktop and the data warehouse and metadata.

  • 4-Tier Architecture

The 4-tier architecture is the same as the 3-tier architecture, except that it adds MicroStrategy Web as an additional component.
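To make the difference between the tiers concrete, here is a minimal Python sketch. It is not MicroStrategy's real API – all class names, function names, and data are hypothetical – it only illustrates who talks to the metadata and the warehouse in each tier.

```python
def read_metadata(report_name):
    # Stand-in for looking up a report definition in the metadata repository.
    return {"table": "FACT_SALES", "measure": "REVENUE", "group_by": "QUARTER_ID"}

def query_warehouse(sql):
    # Stand-in for running the generated SQL against the data warehouse.
    return [("2010-Q1", 120.0), ("2010-Q2", 140.0)]

class TwoTierDesktop:
    """2-tier: Desktop itself reads the metadata and queries the warehouse."""
    def run_report(self, report_name):
        meta = read_metadata(report_name)
        sql = (f"SELECT {meta['group_by']}, SUM({meta['measure']}) "
               f"FROM {meta['table']} GROUP BY {meta['group_by']}")
        return query_warehouse(sql)

class IntelligenceServer:
    """3-tier: the Intelligence Server sits between Desktop and the databases."""
    def execute(self, report_name):
        return TwoTierDesktop().run_report(report_name)  # same work, done on the server

class ThreeTierDesktop:
    """3-tier: Desktop only sends the request to the Intelligence Server."""
    def __init__(self, server):
        self.server = server
    def run_report(self, report_name):
        return self.server.execute(report_name)

# 4-tier simply puts MicroStrategy Web in front of the Intelligence Server:
# browser -> Web server -> Intelligence Server -> metadata / warehouse.
print(ThreeTierDesktop(IntelligenceServer()).run_report("Quarterly Sales"))
```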

MicroStrategy Intelligence Server

September 12, 2010

Before we get into the bits and pieces of the MicroStrategy architecture, we need to know a little bit about the Intelligence Server. MicroStrategy Intelligence Server™ is an analytical server optimized for enterprise querying and reporting as well as OLAP analysis. It processes report requests from all users of the MicroStrategy Business Intelligence platform through Windows, web, and wireless interfaces. These reports range from simple performance indicators, such as quarterly sales by product, to sophisticated hypothesis testing using a chi-square test. The results are then returned to the users, who can further interact with the data and run more reports. The key features and benefits of the Intelligence Server are described below.

Features:

Dynamic SQL Generation: MicroStrategy Intelligence Server stores information about the database tables in metadata. MicroStrategy Intelligence Server uses this metadata to generate optimized SQL for the database. Because the metadata is schema independent, these reports, queries and analyses are generated from your current physical schema without any modifications.
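As a rough illustration of the idea – this is my own sketch, not how the Intelligence Server is actually implemented, and every table and column name here is hypothetical – a metadata-driven SQL generator might look like this. The report asks for business objects, and the metadata maps them onto whatever the current physical schema looks like:

```python
METADATA = {
    # Hypothetical mapping of logical objects to physical tables and columns.
    "Revenue": {"table": "FACT_SALES", "column": "REVENUE_AMT", "agg": "SUM"},
    "Quarter": {"table": "LU_QUARTER", "column": "QUARTER_DESC", "key": "QUARTER_ID"},
    "Product": {"table": "LU_PRODUCT", "column": "PRODUCT_NAME", "key": "PRODUCT_ID"},
}

def generate_sql(metric, attribute):
    m, a = METADATA[metric], METADATA[attribute]
    return (
        f"SELECT a.{a['column']}, {m['agg']}(f.{m['column']}) "
        f"FROM {m['table']} f JOIN {a['table']} a ON f.{a['key']} = a.{a['key']} "
        f"GROUP BY a.{a['column']}"
    )

# If the physical schema changes, only METADATA is updated; the report request
# ("Revenue by Quarter") stays exactly the same.
print(generate_sql("Revenue", "Quarter"))
```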

Advanced Caching: MicroStrategy Intelligence Server caches all user requests. Not only are reports cached, but the individual report pages requested by users are also cached. As a result, no redundant processing occurs on the MicroStrategy Intelligence Server or on the database.
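A very simplified sketch of the caching idea follows – the real server's cache is far more sophisticated, but the principle is the same: an identical request is answered from memory instead of re-running work on the server or the database.

```python
report_cache = {}

def run_report(report_key, execute_sql):
    if report_key in report_cache:          # cache hit: no database work at all
        return report_cache[report_key]
    result = execute_sql()                  # cache miss: run once, then remember it
    report_cache[report_key] = result
    return result

rows = run_report(("Quarterly Sales", 2010), lambda: [("Q1", 120.0), ("Q2", 140.0)])
rows_again = run_report(("Quarterly Sales", 2010), lambda: [("never", "executed")])
print(rows_again)   # [('Q1', 120.0), ('Q2', 140.0)] -- served from the cache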

Built-in Software-level Clustering and Failover: MicroStrategy Intelligence Server lets you cluster many different individual servers together without any additional software or hardware components. Built-in failover support ensures that if a server experiences a hardware failure, the remaining MicroStrategy Intelligence Servers will pick up failed jobs.
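Here is a toy sketch of the failover idea only – it is not MicroStrategy's clustering protocol, and the node and job names are made up. When one node fails, its queued jobs are simply redistributed among the surviving nodes:

```python
def reassign_jobs(cluster, failed_node):
    """cluster: dict mapping node name -> list of job ids."""
    orphaned = cluster.pop(failed_node, [])
    survivors = list(cluster)
    for i, job in enumerate(orphaned):
        cluster[survivors[i % len(survivors)]].append(job)  # round-robin the orphaned jobs
    return cluster

cluster = {"iserver_a": [101, 102], "iserver_b": [103], "iserver_c": [104]}
print(reassign_jobs(cluster, "iserver_a"))   # jobs 101 and 102 move to b and c
```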

Integrated Aggregations, OLAP, Financial and Statistical Analysis: MicroStrategy Intelligence Server provides simple analysis such as basic performance indicators, as well as more sophisticated analyses such as market basket, churn, retention and deciling analyses. Other analyses include hypothesis testing, regressions, extrapolations and bond calculations.
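As a small, self-contained example of one of the analyses named above, here is a deciling calculation in Python: rank customers by revenue and split them into ten equal buckets. The revenue figures are made up, and this simple version assumes the values are distinct.

```python
def decile(values):
    """Return {value: decile}, where decile 1 holds the top 10% of values."""
    ranked = sorted(values, reverse=True)
    n = len(ranked)
    return {v: (ranked.index(v) * 10) // n + 1 for v in values}

revenue = [1200, 300, 950, 4000, 75, 640, 2100, 180, 520, 860]
print(decile(revenue))   # e.g. 4000 falls in decile 1, 75 in decile 10
```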

Business Intelligence Architecture

September 8, 2010

A business intelligence architecture using MicroStrategy is shown in the following diagram:

The Architecture has the following components:

  • Source System (OLTP):

Source systems are typically databases or mainframes that store transaction processing data. As such, they are Online Transaction Processing (OLTP) systems. Transaction processing involves simply recording transactions such as sales, inventory movements, withdrawals, deposits, and so forth.

  • Data Warehouse (OLAP):

A well-designed and robust data warehouse lies at the heart of the business intelligence system and enables its users to leverage the competitive advantage that business intelligence provides. A data warehouse is an example of an Online Analytical Processing (OLAP) system.

Analytical processing involves manipulating transactional records to calculate sales trends, growth patterns, percent-to-total contributions, profit analysis, and so on (a small illustration of the OLTP vs. OLAP contrast appears at the end of this post).

  • ETL Processes:

The extraction, transformation, and loading (ETL) process moves the data from the source systems to the data warehouse. We discussed this in detail in a previous post.

  • Metadata:

The metadata database contains information that facilitates the retrieval of data from the data warehouse when using MicroStrategy applications. It stores MicroStrategy object definitions and information about the data warehouse in a proprietary format, and maps MicroStrategy objects to the data warehouse structures and content.

  • MicroStrategy Application:

The MicroStrategy applications allow you to interact with the business intelligence system. They allow you to organize data logically and hierarchically so that you can quickly and easily create, calculate, and analyze complex data relationships. They also provide the ability to look at the data from different perspectives.

A variety of grid and graph formats are available for superior report presentation. You can even build documents, which enable you to combine multiple reports with text and graphics.
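As promised in the component list above, here is a small, made-up illustration of the OLTP vs. OLAP contrast: the source system merely records individual transactions, while analytical processing manipulates many of those records at once (here, a percent-to-total contribution by product). All names and figures are hypothetical.

```python
from collections import defaultdict

# OLTP side: individual transactions are recorded one at a time.
transactions = [
    {"txn_id": 1, "product": "Widget", "amount": 40.0},
    {"txn_id": 2, "product": "Gadget", "amount": 60.0},
    {"txn_id": 3, "product": "Widget", "amount": 100.0},
]

# OLAP side: aggregate the history to compute percent-to-total contribution.
revenue = defaultdict(float)
for t in transactions:
    revenue[t["product"]] += t["amount"]

total = sum(revenue.values())
pct_to_total = {p: round(100.0 * r / total, 1) for p, r in revenue.items()}
print(pct_to_total)   # {'Widget': 70.0, 'Gadget': 30.0}
```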

An Intro to MicroStrategy

September 5, 2010

What exactly is MicroStrategy, and how is it related to data warehousing? I hope this post will explain it.

As we discussed previously, we need ETL tools (e.g. Informatica) to build a data warehouse. The ETL (Extract-Transform-Load) process extracts the data from OLTP and other data sources, transforms the data in the staging area according to the business need, and finally loads the data into the data warehouse. Now, one common question would be: how do you distinguish between a database and a data warehouse? A compact answer can be this – “A data warehouse is also a database. When a database stores historical data (data from the same system, taken at different points in time), it becomes a data warehouse.”

So we have historical data in the data warehouse. Now, what is the use of this data? It can be used for business analysis, and for that it needs to be represented in different formats according to the business need. Suppose a business owner wants to see the trend of his last 10 years' revenue, represented as a bar graph. You can fetch the last 10 years of data from the database using SQL, but can you represent the same data graphically? This is where reporting tools come in, and MicroStrategy is a powerful leader among them.
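Just to show what the "do it by hand" alternative looks like, here is a rough Python sketch: fetch ten years of revenue with SQL and chart it yourself. The table, columns, and figures are hypothetical – a reporting tool like MicroStrategy packages both steps for the end user.

```python
import sqlite3
import matplotlib.pyplot as plt

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fact_revenue (year INTEGER, revenue REAL)")
conn.executemany("INSERT INTO fact_revenue VALUES (?, ?)",
                 [(2000 + i, 100 + 15 * i) for i in range(10)])   # made-up data

rows = conn.execute(
    "SELECT year, SUM(revenue) FROM fact_revenue GROUP BY year ORDER BY year"
).fetchall()

years, revenue = zip(*rows)
plt.bar(years, revenue)                      # the bar graph the owner asked for
plt.xlabel("Year")
plt.ylabel("Revenue")
plt.title("Revenue trend, last 10 years")
plt.show()
```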

The purpose of reporting tools is to fetch data from the data warehouse and to represent it according to business requirements. MicroStrategy has a huge number of powerful features to support this; hopefully we will get to know them in the upcoming posts. The following snapshot is of the MicroStrategy Desktop window.

Fig1: Snapshot of MicroStrategy Desktop

Fig2: Different Types of Projects in MicroStrategy

As we can see from the above snapshot, there are two types of projects in MicroStrategy – 3-tier and 2-tier projects. We will discuss this in detail in the next post, and I will also try to give some information about the architecture of MicroStrategy.

Automation – An overview

August 25, 2010

Any software product, no matter how thoroughly it is exercised by today's testing technologies, has bugs. Some bugs are detected and removed at the time of coding. Others are found and fixed during formal testing, as software modules are integrated into a system. However, all software manufacturers know that bugs remain in the software and that some of them will have to be fixed later. So, after every release we have to do regression testing to check the functionality of the software, i.e. we have to execute the same set of simple test cases for each and every release. This is where test automation comes into play.

We can automate those regression test cases by writing scripts. Not every test case can be automated with the technology we have today, but automation is still very useful: it reduces time and cost compared to manual testing, and it lets a tester devote more attention to testing complex functionalities.
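A tiny sketch of the idea in Python: once a manual regression check is written as a script, the same checks can be re-run automatically after every release. The function under test and its expected behaviour are entirely hypothetical.

```python
import unittest

def apply_discount(price, percent):
    # Stand-in for a piece of application functionality covered by regression tests.
    return round(price * (1 - percent / 100.0), 2)

class RegressionSuite(unittest.TestCase):
    def test_standard_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

if __name__ == "__main__":
    unittest.main()   # re-run the same suite on every release instead of by hand
```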


Understanding ETL

November 29, 2009

The Extract-Transform-Load (ETL) system is the foundation of the data warehouse. A properly designed ETL system extracts the data from source systems, performs transformations and cleansing, and delivers the data in a presentation-ready format, after which the data is loaded into the warehouse. The following figure is a schematic description of the ETL process.

Fig: Schematic of the ETL process

The four steps of the ETL process are explained below:

Extracting: In this phase, data from different types of source systems is fetched into the staging area. The source systems can be mainframes, production databases, or any other OLTP sources. The source formats can also differ: data may be stored in relational tables as well as in flat files (e.g. plain text files). The first job of ETL is to fetch data from these different sources.
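A hedged sketch of the extract step in Python: pull rows from a relational source and from a flat file into one common staging structure. The source, table, and column names are all hypothetical.

```python
import csv
import sqlite3

def extract_from_table(conn, table):
    conn.row_factory = sqlite3.Row
    return [dict(row) for row in conn.execute(f"SELECT * FROM {table}")]

def extract_from_flat_file(path):
    with open(path, newline="") as f:
        return list(csv.DictReader(f))        # e.g. a comma-delimited export

# Demo with an in-memory relational source standing in for a production database.
source = sqlite3.connect(":memory:")
source.execute("CREATE TABLE orders (order_id INTEGER, amount REAL)")
source.execute("INSERT INTO orders VALUES (1, 25.0)")

staging = {"orders": extract_from_table(source, "orders")}
print(staging)    # {'orders': [{'order_id': 1, 'amount': 25.0}]}
```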

Cleansing: In most cases, the level of data quality in an OLTP source system is different from what is required in a data warehouse. To achieve the required data quality, cleansing has to be applied. Data cleansing consists of many discrete steps, including checking for valid values, ensuring consistency across values, removing duplicates, and checking whether complex business rules and procedures have been applied.
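A small sketch of the cleansing checks just mentioned – a valid-value check, a simple business rule, and duplicate removal. The rules and field names are made up for illustration.

```python
VALID_COUNTRIES = {"US", "UK", "IN"}

def cleanse(rows):
    seen, clean = set(), []
    for r in rows:
        if r["country"] not in VALID_COUNTRIES:   # valid-value check
            continue
        if r["amount"] < 0:                       # simple business rule
            continue
        key = (r["customer_id"], r["order_id"])
        if key in seen:                           # duplicate removal
            continue
        seen.add(key)
        clean.append(r)
    return clean

rows = [
    {"customer_id": 1, "order_id": 10, "country": "US", "amount": 50.0},
    {"customer_id": 1, "order_id": 10, "country": "US", "amount": 50.0},  # duplicate
    {"customer_id": 2, "order_id": 11, "country": "XX", "amount": 20.0},  # bad country
]
print(cleanse(rows))   # only the first row survives
```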

Conforming: Data conformation is required whenever two or more data sources are merged in the data warehouse. Separate data sources cannot be queried together unless some or all of the textual labels in these sources have been made identical and unless similar numerical measures have been rationalized.
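A minimal example of conforming two sources: textual labels are mapped onto one shared set of values, and a measure reported in different units is rationalized. The label map and the unit convention are hypothetical.

```python
LABEL_MAP = {"U.S.A.": "US", "United States": "US", "Untd Kingdom": "UK"}

def conform(row, amount_in_cents=False):
    row["country"] = LABEL_MAP.get(row["country"], row["country"])
    if amount_in_cents:
        row["amount"] = row["amount"] / 100.0   # bring both sources to the same unit
    return row

row_a = conform({"country": "U.S.A.", "amount": 150.0})
row_b = conform({"country": "United States", "amount": 15000}, amount_in_cents=True)
print(row_a, row_b)   # both rows can now be queried together
```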

Delivering: The main goal of this step is to make the data ready for querying. It includes physically structuring the data into a set of simple, symmetric schemas known as dimensional models, or star schemas (we will discuss star schemas and dimensional models later). These schemas are a necessary basis for building an OLAP system.
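A hedged sketch of the delivery step: restructure the cleansed data into a simple star schema – one fact table plus dimension tables – so that analytical queries become straightforward joins. All table and column names are made up.

```python
import sqlite3

dw = sqlite3.connect(":memory:")
dw.executescript("""
    CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, product_name TEXT);
    CREATE TABLE dim_date    (date_key    INTEGER PRIMARY KEY, full_date    TEXT);
    CREATE TABLE fact_sales  (product_key INTEGER, date_key INTEGER, amount REAL);
    INSERT INTO dim_product VALUES (1, 'Widget');
    INSERT INTO dim_date    VALUES (20091129, '2009-11-29');
    INSERT INTO fact_sales  VALUES (1, 20091129, 99.0);
""")

# Presentation-ready: an analytical query is just the fact table joined to its dimensions.
print(dw.execute("""
    SELECT p.product_name, d.full_date, SUM(f.amount)
    FROM   fact_sales f
    JOIN   dim_product p ON p.product_key = f.product_key
    JOIN   dim_date    d ON d.date_key    = f.date_key
    GROUP BY p.product_name, d.full_date
""").fetchall())    # [('Widget', '2009-11-29', 99.0)]
```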


Data Warehousing… What is it all about?

November 18, 2009

We are all familiar with the term “database”. What is a database? We can say that a database is a store of meaningful data specific to one's business or personal need. A database can store any type of data. But if a database is already storing the data, then what is the need for a data warehouse? So first we have to know the basic difference between a database and a data warehouse.

A database stores only the most current data. In more technical terms, it is known as an OLTP source; OLTP stands for Online Transaction Processing. But whenever we use Oracle/DB2/Teradata etc. to store historical data, it becomes a data warehouse. By the term historical data, we mean “snapshots” of data, that is, data from the same process taken at different instants of time. Technically, this is known as an OLAP (Online Analytical Processing) system. A simple example can be given in this context. Let's consider a bank. The bank's operational database stores only the latest transactional data. But the bank generates this kind of data every day, so it needs to be archived. They store it on a daily basis, i.e. they take snapshots of the data, and in doing so they build a data warehouse.
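Here is a tiny Python sketch of the bank example – the account names and balances are made up. The OLTP side holds only current balances, while the warehouse keeps dated snapshots of the same data:

```python
from datetime import date

oltp_accounts = {"ACC-1": 500.0, "ACC-2": 1200.0}   # current state only

warehouse_snapshots = []                             # historical, dated copies

def take_daily_snapshot(accounts, snapshot_date):
    for account, balance in accounts.items():
        warehouse_snapshots.append(
            {"snapshot_date": snapshot_date, "account": account, "balance": balance}
        )

take_daily_snapshot(oltp_accounts, date(2009, 11, 17))
oltp_accounts["ACC-1"] = 350.0                       # a transaction overwrites the OLTP value
take_daily_snapshot(oltp_accounts, date(2009, 11, 18))

# Only the warehouse can answer "how did ACC-1's balance change over time?"
print([r for r in warehouse_snapshots if r["account"] == "ACC-1"])
```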

So, am I able to draw the line clearly between a database and a data warehouse? Maybe I am (thinking positively 🙂 ). Now another question comes into the picture: why does one have to maintain a data warehouse at all? After all, maintaining a data warehouse is a costly job, and day by day the size of the warehouse will grow. The answer is business. One has to maintain the historical data to analyse the trend of the business and make decisions based on that data. These decisions can be a big factor in the present and future performance of the business.

The job of a data warehouse specialist has many phases. He/she has to extract the data from different OLTP source systems, apply modifications or cleansing according to the requirements, and then load this data into the data warehouse. In another phase, he/she has to fetch the data from the data warehouse, analyse it, and develop reports. The end users can see those reports and make business decisions based on them.

Ok… let's end it here. I will discuss this in more depth from the next post onwards. Take care, see you soon friends.

Categories: Data warehousing