Introduction to R


R is not short for “Rishu”, as I made it out to be when I first heard of this data mining tool. Initially, I assumed R to be yet another tool like Pentaho. But my assumptions fell apart when I clicked on http://www.r-project.org/, which states its definition up front:

“R is a language and environment for statistical computing and graphics. It is a GNU project which is similar to the S language and environment”

So here it is: R is a language. To me, R is more of a data mining tool. If you have ever worked with MATLAB, the format and syntax will look familiar. What makes R special is that it makes complex mathematical queries and computations simple and easy, and creating graphs and plots has never been easier. For example, let’s take the image below (a screenshot of code I wrote):

[Screenshot: R console output of the code described below]

The code is pretty simple. I assigned certain values (in vector format) to two separate variables, “a” and “b”. The value of variable “b” is the square of variable “a”. As you can see, the mathematical computations are done with simple commands: I calculated the MEAN and VARIANCE of the variable b using two simple commands, mean(b) and var(b). The variable “c_lm” holds the linear regression model of b on a.
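In case the screenshot doesn’t render, here is a small R sketch along the same lines; the values of “a” below are placeholders, not the exact numbers from my screenshot:

    a <- c(1, 2, 3, 4, 5)   # placeholder values; the vector in the screenshot was different
    b <- a^2                # b is the square of a

    mean(b)                 # mean of b
    var(b)                  # variance of b

    c_lm <- lm(b ~ a)       # linear regression model of b on a
    summary(c_lm)           # coefficients, residuals, R-squared, etc.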

Well, there is loads more. People have gone ahead and created things like “Google Trends”. Though Google has its own GUI built over R, nothing is stopping us from creating one either.

Sources: http://www.r-project.org/; Google Trends


Big Data – Hadoop HDFS and MapReduce

September 27, 2012

The big data buzz is increasing day by day. So here is a more detailed look at the two core pieces of Hadoop: HDFS and MapReduce.

HDFS, or the Hadoop Distributed File System, is designed to store large amounts of data across many servers in a cluster. The definition of “large” needs no explanation (especially when we are talking Big Data). Data in a Hadoop cluster is broken down into small blocks (64 MB by default) and distributed across the cluster.

Blocks are placed in the cluster using a rack-aware block placement policy. The rack-aware policy basically determines which nodes (and racks) each block’s replicas are placed on, based on the replication factor, which is 3x by default.

The basic architecture of an HDFS cluster consists of two major types of node:

1. Name Node:

This is almost like the Master Node in the Greenplum database, the “master” in the master-slave concept. The name node manages the file system namespace. It maintains the file system tree and the metadata for all the files and directories in the tree. This information is stored persistently on the local disk in the form of two files: the namespace image and the edit log.

Now the question arises: what if the single name node crashes (as we have only one primary name node)? Since the primary name node is a Single Point of Failure (SPOF), Hadoop provides a secondary name node, or backup name node, which periodically copies the FsImage and EditLog from the name node so that this metadata is preserved.

2. Data Node:

These are the workhorses of HDFS. They store and retrieve blocks when they are told to (by the name node), and they report back to the name node periodically with lists of the blocks they are storing. The data nodes are where the majority of the data resides.

 

MapReduce is the second major part of the Hadoop architecture. MapReduce is the programming logic, or the brain, as I like to say. The MapReduce model was created by Google and is based on parallel processing; Hadoop’s implementation of it is written in Java.

The MapReduce programming model works in two parts: the mapping part (done by the Mapper) and the reduction part (done by the Reducer).

The Mapper works on the blocks of data available on the data nodes and tries to get its share of the job done. You can think of a Mapper as an individual worker (in the master-slave concept), working on the portion of data needed for the client’s job.

The major task that remains is to aggregate the results produced by each Mapper. This work is done by the Reducer. The Reducer iterates over all the intermediate results for a key and sends back a single output value.

MapReduce programming goes through various intermediate stages. Let’s have a look at the following diagram:

From the diagram above we can see that the user gives something as input; in this case the input is a question and its subsequent answer. These files are stored in the data nodes of HDFS. The MapReduce program looks at the given data and breaks it down into an intermediate stage consisting of key/value pairs, which split the file data into many key-value records. [If you studied compiler design during your college days, the key-value stage may remind you of lexical analysis, semantic analysis, and so on.] After this stage, the sorting, or shuffling, of the data takes place. It may be hard to see from the diagram, but if you look at the second part of the picture you will understand why the sorting phase is required: the data is spread across various servers or nodes, and MapReduce makes sure that the shuffling and sorting happen using the key. Then comes the reducer phase, which accepts the data coming from the sorting/shuffling phase and combines it into a smaller set of values. This data is sent back to the user/client.
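To make the key/value, shuffle/sort, and reduce stages concrete, here is a toy word-count sketch in R running on a single machine (no Hadoop involved; the input lines are made up, and real MapReduce jobs are normally written in Java):

    # Input "files": two made-up lines of text
    lines <- c("big data is big", "hadoop handles big data")

    # Map: emit a (word, 1) key/value pair for every word in every line
    words <- unlist(strsplit(lines, " "))
    pairs <- setNames(rep(1, length(words)), words)

    # Shuffle/sort: group all values belonging to the same key (the word)
    grouped <- split(unname(pairs), names(pairs))

    # Reduce: combine each key's values into a single output value
    counts <- sapply(grouped, sum)
    counts   # big: 3, data: 2, hadoop: 1, handles: 1, is: 1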

The entire process above is controlled by a JobTracker, which coordinates the job run and makes sure everything goes fine. The TaskTrackers run the individual tasks that the job has been split into.

So this is a brief description of HDFS and MapReduce. I didn’t go very deep into the core functionality of MapReduce, as that requires a full-scale knowledge of the Java programming language. I hope I have been able to give a short but reasonably detailed explanation of Hadoop. Thanks and take care.

Big Data: Parallelism and Hadoop Basics


 

Let me start this blog by putting two scenarios in front of you:

Scenario I: You are given a bucket full of mixed fruit. There are 3 different kinds of fruit, say apples, mangoes, and bananas. Now how would you count the total number of apples, mangoes, and bananas in the bucket?

The simplest answer would be to count the fruits one by one and, in the end, arrive at the required result.

Scenario II: Now suppose that instead of a bucket of fruit, you are given a truck full of mixed fruit. How would you count the total number of each fruit this time?

The most feasible approach would be to divide the work (instead of counting the entire truck one by one). We would hand out one basket of mixed fruit each to different people [WORKERS/SLAVES]. Each person counts their own basket (without any communication with the others), and in the end we [the MASTER] sum the results of each basket to get the total. Using this approach we save time and effort [if you agree].

Well, if you are still wondering why I started off with this scenario, I have to say that HADOOP is built on this simple, basic principle. The scenario above describes what is called, in technical terminology, parallel processing or distributed programming. There is a master-worker concept in a parallel processing system: the master divides the work, the workers do the allotted work, and the work done by each worker is sent back to the master.
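As a rough sketch of the master-worker idea (plain R on one machine, not Hadoop itself), here is how the fruit counting could look using R’s parallel package; the baskets are made up, and mc.cores greater than 1 assumes a Unix-like system:

    library(parallel)

    fruit_types <- c("apple", "mango", "banana")

    # Three made-up "baskets" of mixed fruit, one per worker
    baskets <- list(
      c("apple", "mango", "banana", "apple"),
      c("banana", "banana", "mango"),
      c("apple", "mango", "apple", "banana")
    )

    # Workers: each child process counts its own basket independently
    partial <- mclapply(baskets,
                        function(b) table(factor(b, levels = fruit_types)),
                        mc.cores = 2)   # use mc.cores = 1 on Windows

    # Master: sum the partial counts from all workers into the final result
    totals <- Reduce(`+`, partial)
    totals   # apple: 4, mango: 3, banana: 4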

The situation with BIG DATA is similar. There is plenty of data available (just like the truck of fruit) that no single machine can handle alone, and most importantly there is the 3-V factor of BIG DATA [volume, variety, and velocity]. To handle such a situation, Apache came up with HADOOP: a high-performance distributed storage and processing system that can store any kind of data from any source at very large scale and can do very sophisticated analysis of BIG DATA.

Hadoop architecture is mainly based on the following two components:

1. HDFS [Hadoop Distributed File System]:

It is more or less the storage area of Hadoop. Whenever data arrives at the cluster*, the HDFS software breaks it into pieces and distributes them to the participating servers in the cluster.

2. MapReduce:

As the data is stored as fragments across various servers, MapReduce uses its programming logic to run the required job against the data on those servers and later return the results to the master server. The computation happens locally and in parallel across all servers in the cluster [the master-worker concept].

The picture above shows the Hadoop ecosystem, which will be explained in detail in my later blogs. I hope the parallel, distributed concept is clear; it will be useful in understanding the architecture of Hadoop.

[A bit of History on Hadoop: Hadoop was created by Doug Cutting, who named it after his son’s elephant toy. Hadoop was derived from Google’s MapReduce and Google File System (GFS) papers. Hadoop is a top-level Apache project being built and used by a global community of contributors, written in the Java programming language. Yahoo! has been the largest contributor to the project, and uses Hadoop extensively across its businesses.]

FAQ:

*Cluster: A computer cluster consists of a set of loosely connected computers that work together so that in many respects they can be viewed as a single system. The components of a cluster are usually connected to each other through fast local area networks, with each node (a computer used as a server) running its own instance of an operating system. Computer clusters emerged as a result of the convergence of a number of computing trends, including the availability of low-cost microprocessors, high-speed networks, and software for high-performance distributed computing.

[Source: Wikipedia [Hadoop History] and Google]

 

 

 

 

Big Data: An Introduction


Hey guys, I am back to blogging after a pretty long gap. Since my last blog I have been going through data warehousing. In the midst of learning data warehousing techniques, I came to know about a bigger issue that is troubling IT companies. It’s called BIG DATA. So I thought I would share my knowledge of this advanced business analytics topic with you guys.

If you are thinking BIG DATA deals with “data which is big in nature”, then I have to say you are perfectly correct. But if your mental picture is limited to database tables with 1,000 to 100K rows, then I fear BIG DATA is something bigger and messier than that. A formal definition of BIG DATA would go as follows:

Big data is a term applied to data sets, both structured and unstructured, whose volume exceeds the capacity of commonly used software tools to capture, manage, and process with the usual database and software techniques within an acceptable time.

Today, companies face a serious issue. They have access to lots and lots of data, and they have no idea what to do with it. An IBM survey shows that over half of business leaders today realize they don’t have access to the insights they need to do their jobs. This data is normally generated from log files, IM chats, Facebook chats, emails, sensors, etc. It is raw in nature and is not something you will find in database-table (row-column) format; it accumulates from the day-to-day activity of each and every associate. Companies are trying to tap into this data store to derive business intelligence and strategies. BIG DATA is not about relational databases but about data that has no fixed relations at all.

BIG DATA can basically be classified into three different categories based on data characteristics:

1. VOLUME:

There is a huge amount of data being stored in the world. In the year 2000, there were around 800,000 petabytes (1 PB = 10^15 bytes) of data stored in the world, and the volume is growing rapidly. Companies have no idea what to do with this data or how to process it. Twitter alone generates more than 7 petabytes of data every day, and Facebook generates around 10 PB. This volume is growing exponentially. Some enterprises generate terabytes of data every hour of every day of the year. It won’t be wrong to say that we are drowning deep in an ocean of data. By 2020, the total is expected to reach 35 zettabytes (1 ZB = 10^21 bytes).

2. VARIETY:

With a huge volume of data comes another problem: variety. With the rapid growth in technology usage, data is no longer limited to relational databases; it now includes raw unstructured and semi-structured data coming mainly from web pages, log files, emails, chats, etc. Traditional systems struggle to store this information and perform the analytics required to gain intelligence, because most of what is generated doesn’t lend itself to traditional database technologies.

3. VELOCITY:

Velocity is the characteristic of BIG DATA that deals with how fast data is being produced, stored, and used for analytics. In BIG DATA terminology we are looking at the volume and variety aspects as well, so the rate of arrival of data, combined with its volume and variety, is something a traditional database technology can hardly handle. According to the surveys, around 2.9 million emails are sent every second, 20 hours of video are uploaded to YouTube every minute, and around 50 million tweets are posted per day on Twitter. So I think you can imagine the velocity at which data comes at you.

There is also another characteristic of BIG DATA, which is VALUE. The value aspect of big data is what all companies are really after: unless you are able to derive some business intelligence and value from the data, there is no use for it. In simple terms, value deals with whether the raw, unstructured data at hand can be turned into meaningful statistics that are useful for taking proper business decisions.

Companies are trying to extract all the information possible, derive better intelligence out of it, and gain a better understanding of their customers, the marketplace, and the business. A few technical solutions, like HADOOP (which I will explain in my next blog), NoSQL, distributed key-value store (DKVS) databases, etc., are combating the BIG DATA problem.

For now, all I can conclude is that the right use of BIG DATA will allow analysts to spot trends and surface niche insights that help create value and innovation much faster than conventional methods. It will also help in better meeting consumer demand and facilitating growth.

Cloud Computing: Architecture


Hey guys!!! I hope everyone is clear on the overview of cloud computing, which I discussed in my previous blog. Our discussion of cloud computing will not be complete until and unless we discuss the architecture and the technical side of this system. So, without wasting much time on idle chatter (“bakwasss”), let’s begin our discussion of the architecture of cloud computing.

Cloud architecture, the systems architecture of the software systems involved in the delivery of cloud computing, typically involves multiple cloud components communicating with each other over a loose coupling mechanism such as a messaging queue. When talking about a cloud computing system, it’s helpful to divide it into two sections:

1. The Front End (the client side):
The front end includes the client’s computer (or computer network) and the application required to access the cloud computing system. Not all cloud computing systems have the same user interface. Services like Web-based e-mail programs leverage existing Web browsers like Internet Explorer or Firefox. Other systems have unique applications that provide network access to clients.

[Image: Cloud Computing Architecture]

2. The Back End (the “cloud” itself):
On the back end of the system are the various computers, servers and data storage systems that create the “cloud” of computing services. In theory, a cloud computing system could include practically any computer program you can imagine, from data processing to video games. Usually, each application will have its own dedicated server.

[N.B: Cloud engineering is the application of engineering disciplines to cloud computing. It brings a systematic approach to the high level concerns of commercialisation, standardisation, and governance in conceiving, developing, operating and maintaining cloud computing systems. It is a multidisciplinary method encompassing contributions from diverse areas such as systems, software, web, performance, information, security, platform, risk, and quality engineering.]

If a cloud computing company has a lot of clients, there’s likely to be high demand for a lot of storage space. Some companies require hundreds of digital storage devices. A cloud computing system needs at least twice the number of storage devices it would otherwise require to keep all its clients’ information stored, because these devices, like all computers, occasionally break down. A cloud computing system must make copies of all its clients’ information and store them on other devices. The copies enable the central server to access backup machines to retrieve data that would otherwise be unreachable. Making copies of data as a backup is called redundancy.

The architecture of the cloud is evolving rapidly. Hopefully, in the near future of computing, we will be able to say “we build our home in the cloud”. There are also many issues, such as privacy and data maintenance, but there are loads of advantages too. We will discuss them in later blogs. Stay tuned for more!!!

Cloud Computing: Overview


I guess everyone is now well aware of cloud computing. It’s been in the news everywhere, in all the IT sectors of the world. It’s in huge demand these days and is also said to be changing the entire computer industry. So, the question still stands: what is cloud computing?? [Only for those who don’t know about it.] Let me state the basic overview of cloud computing!!!

The term “cloud” is used as a metaphor for the Internet, based on the cloud drawing used in the past to represent the telephone network, and later to depict the Internet in computer network diagrams as an abstraction of the underlying infrastructure it represents. Cloud computing is a natural evolution of the widespread adoption of virtualisation, service-oriented architecture, autonomic computing, and utility computing. Details are abstracted from end-users, who no longer have need for expertise in, or control over, the technology infrastructure “in the cloud” that supports them.

Let’s say you’re an executive at a large corporation. Your particular responsibilities include making sure that all of your employees have the right hardware and software they need to do their jobs. Buying computers for everyone isn’t enough — you also have to purchase software or software licenses to give employees the tools they require. Whenever you have a new hire, you have to buy more software or make sure your current software license allows another user. It’s so stressful that you find it difficult to go to sleep on your huge pile of money every night. And this is where the concept of cloud computing comes into play.
Now all you need to do is load one application instead of installing a suite of software on each computer. That application would allow workers to log into a Web-based service which hosts all the programs the user would need for his or her job. Remote machines owned by another company would run everything from e-mail to word processing to complex data analysis programs. This is what CLOUD COMPUTING is all about.

In a cloud computing system, there’s a significant workload shift. Local computers no longer have to do all the heavy lifting when it comes to running applications. The network of computers that make up the cloud handles them instead. Hardware and software demands on the user’s side decrease. The only thing the user’s computer needs to be able to run is the cloud computing system’s interface software, which can be as simple as a Web browser, and the cloud’s network takes care of the rest.

If you guys are still wondering, just take another simple example: the Gmail accounts provided by Google. Instead of running an e-mail program on your computer, you log in to a Web e-mail account remotely. The software and storage for your account don’t exist on your computer — they’re on the service’s computer cloud.

Cloud computing is all the rage. “It’s become the phrase du jour,” says Gartner senior analyst Ben Pring.

And with that, my small overview of cloud computing is over. I guess I will soon be able to provide you with more details about this new phenomenon.


Automation testing (contd.)

September 16, 2010

Before going further, let’s understand the different levels of testing:
Unit testing
—————–
This type of testing tests individual application objects or methods in an isolated environment. It verifies the smallest unit of the application to ensure the correct structure and the defined operations. Unit testing is the most efficient and effective means to detect defects or bugs. The testing tools are capable of creating unit test scripts.
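Unit test scripts look different in every tool; as one small sketch, here is what a unit test could look like in R using the testthat package (the square_all function and its expected values are made up for illustration):

    library(testthat)

    # A tiny function under test (made up for this example)
    square_all <- function(x) x^2

    test_that("square_all squares every element of its input", {
      expect_equal(square_all(c(1, 2, 3)), c(1, 4, 9))   # typical input
      expect_equal(square_all(numeric(0)), numeric(0))   # edge case: empty vector
    })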
Integration testing
——————-
This testing is to evaluate proper functioning of the integrated modules (objects, methods) that make up a subsystem. The focus of integration testing is on cross-functional tests rather than on unit tests within one module. Available testing tools usually provide gateways to create stubs and mock objects for this test.
System testing
——————
System testing should be executed as soon as an integrated set of modules has been assembled to form the application. System testing verifies the product by testing the application in the integrated system environment.
Regression testing
—————–
Regression testing ensures that code modification, bug correction, and any postproduction activities have not introduced any additional bugs into the previously tested code. This test often reuses the test scripts created for unit and integration testing. Software testing tools offer harnesses to manage these test scripts and schedule the regression testing.
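Continuing the hedged testthat sketch from the unit testing step, reusing those scripts as a regression suite can be as simple as keeping them in one directory and rerunning them after every change (the tests/ path below is just an assumed layout):

    library(testthat)

    # Re-run every test script saved under tests/ after each code change;
    # any test that used to pass and now fails signals a regression.
    test_dir("tests/")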

Usability testing
—————–
Usability testing ensures that the presentation, data flow, and general ergonomics of the application meet the requirements of the intended users. This testing phase is critical to attract and keep customers. Usually, manual testing methods are inevitable for this purpose.
Stress testing
————–
Stress testing makes sure that the features of the software and hardware continue to function correctly under a predesigned set and volume of test scenarios. The purpose of stress testing is to ensure that the system can hold and operate efficiently under different load conditions. Thus, the possible hardware platforms, operating systems, and other applications used by the customers should be considered for this testing phase.
Performance testing
—————–
Performance testing measures the response times of the systems to complete a task and the efficiency of the algorithms under varied conditions. Therefore, performance testing also takes into consideration the possible hardware platforms, operating systems, and other applications used by the customers.