The performance of MapReduce across geographically distributed environments depends heavily on network utilization, bandwidth, and latency. Different MapReduce configurations suit different data distribution models, so selecting the appropriate model requires understanding the workload characteristics of the MapReduce job you are attempting to complete.
Cardosa et al. describe three workload data aggregation schemes for MapReduce jobs (Cardosa et al., 2011). The first, “High Aggregation”, occurs when the output of the MapReduce process is orders of magnitude smaller than the input. These are jobs in which input data is categorized and counted, so that large numbers of matches are reduced to simple category counts. Examples include MapReduce grep, where word or HREF counts are computed across large amounts of distributed data: the input is a large collection of files, and the output is a much smaller list of counts. The second scheme, “Net Zero Aggregation”, occurs when the output of a MapReduce process is approximately equal in size to the input. Sort is a good example: the output of a sort job is typically the same size as its input. The final scheme, “Ballooning Data”, occurs when the MapReduce job produces more records and data than it was given, for example a job that converts a compact format such as GIF to a larger format such as JPEG. The amount of data produced is an important factor to consider when architecting a MapReduce solution. In their study, Cardosa et al. found that highly aggregated workloads perform well in a geographically distributed environment, while for net zero aggregation or ballooning data it is preferable to centralize the data before applying MapReduce.
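To make the high-aggregation case concrete, here is a minimal sketch of a word-count job written in plain Python to mimic the map and reduce phases. The function names and toy data are illustrative, not part of Hadoop's API; a real Hadoop job would express the same logic as Mapper and Reducer classes.

```python
# A "High Aggregation" job: a word count whose output (one count per
# distinct word) is orders of magnitude smaller than its input.
from collections import defaultdict

def map_phase(document):
    """Emit (word, 1) for every word in the input document."""
    for word in document.split():
        yield (word.lower(), 1)

def reduce_phase(pairs):
    """Sum the counts emitted for each distinct word."""
    counts = defaultdict(int)
    for word, count in pairs:
        counts[word] += count
    return dict(counts)

docs = ["the quick brown fox", "the lazy dog", "the fox"]
pairs = [pair for doc in docs for pair in map_phase(doc)]
print(reduce_phase(pairs))  # e.g. {'the': 3, 'fox': 2, ...}
```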
MapReduce is gaining attention from the scientific community in the area of natural language processing (Balkir et al., 2011). Natural language processing models often involve optimization algorithms run across large amounts of data, and a long-standing constraint in the field has been high-speed access to large numbers of frequently changing parameter values. In information retrieval, the number of times an index term occurs in a document is called its term frequency. Automatically discovering recurrent phrases from text with quick turnaround is key to these applications, since a key phrase can help identify the intent of the user. Balkir et al. used MapReduce, implemented with Hadoop, to develop a model for chunking sentences into smaller phrases in order to identify recurrent phrases. Their approach achieved a roughly six-fold speedup in identifying and matching these phrases against large, distributed data banks. The use of MapReduce will continue to drive advances in natural language processing, speech recognition, and other forms of artificial intelligence.
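As a simplified illustration of the idea (not Balkir et al.'s actual chunking model), the sketch below emits adjacent word pairs in the map phase and keeps only the pairs that recur in the reduce phase:

```python
# Recurrent-phrase discovery via MapReduce, reduced to its simplest form:
# the mapper emits adjacent word pairs (bigrams) and the reducer counts
# how often each recurs across the corpus.
from collections import Counter

def map_bigrams(sentence):
    """Emit (bigram, 1) for each adjacent word pair in a sentence."""
    words = sentence.lower().split()
    for left, right in zip(words, words[1:]):
        yield ((left, right), 1)

def reduce_counts(pairs, min_count=2):
    """Keep only bigrams that recur at least min_count times."""
    counts = Counter()
    for bigram, n in pairs:
        counts[bigram] += n
    return {b: c for b, c in counts.items() if c >= min_count}

sentences = ["book a table", "book a flight", "reserve a table"]
pairs = [p for s in sentences for p in map_bigrams(s)]
print(reduce_counts(pairs))  # {('book', 'a'): 2, ('a', 'table'): 2}
```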
Another application of MapReduce involves analytics of software repositories. Shang et al. proposed a study on the use of the model in mining software repositories (Shang et al., 2010). The field of Mining Software Repositories involves analyzing source code, deployment logs, and bug repositories to find statistical correlations that can be used to identify and address issues in the code, such as potential security flaws. To illustrate their approach, they created a MapReduce program to count the number of source code lines, which had to evaluate each line of a program to determine whether it was an actual instruction or a comment. On extremely large repositories, this approach showed a 20-fold performance improvement over traditional approaches. They conclude that automated software engineering tools play an important role in the analysis of software repositories.
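A rough sketch of such a line-counting job is shown below, assuming simplified Python-style `#` comment rules rather than the full parsing a production tool would need:

```python
# The mapper classifies each line of a source file as "code" or "comment",
# and the reducer totals each category across the repository.
from collections import Counter

def map_lines(source_text):
    """Classify each line and emit (category, 1)."""
    for line in source_text.splitlines():
        stripped = line.strip()
        if not stripped:
            continue  # skip blank lines
        category = "comment" if stripped.startswith("#") else "code"
        yield (category, 1)

def reduce_totals(pairs):
    """Sum the per-line classifications into repository-wide totals."""
    totals = Counter()
    for category, n in pairs:
        totals[category] += n
    return dict(totals)

source = "# compute the sum\ntotal = 0\nfor x in data:\n    total += x\n"
print(reduce_totals(map_lines(source)))  # {'comment': 1, 'code': 3}
```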
One of the most well-known graph computation problems is computing personalized page ranks (Haveliwala, 2002). Personalized page rank algorithms are used by search providers such as Microsoft, Yahoo, and Google to provide the most attractive search results for a given user based on their observed preferences and tendencies. They are also used in link prediction and recommendation engines. The two approaches for computing personalized page ranks are linear algebraic techniques and Monte Carlo simulations. MapReduce suits this scenario because it handles lower-level concerns such as job distribution, data storage and flow, fault tolerance, and computational abstraction. In this research, Haveliwala showed how MapReduce combined with advanced statistical modeling techniques could give end search users a personalized experience, inferring their intent in order to return relevant results. For example, a user who enters the phrase “Recommended Restaurants” will want results relevant to their general locale.
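As an illustration of the linear-algebraic approach, the following toy sketch expresses one PageRank power-iteration round as map and reduce steps. The three-page graph is hypothetical, and the damping factor of 0.85 is the conventional choice:

```python
# One PageRank iteration in MapReduce style: the map step spreads each
# page's rank across its outgoing links; the reduce step sums incoming
# contributions and applies the damping factor.
def map_contributions(graph, ranks):
    """Each page divides its rank evenly among the pages it links to."""
    for page, outlinks in graph.items():
        for target in outlinks:
            yield (target, ranks[page] / len(outlinks))

def reduce_ranks(contributions, pages, damping=0.85):
    """Sum incoming contributions and apply the damping factor."""
    ranks = {page: (1 - damping) / len(pages) for page in pages}
    for page, share in contributions:
        ranks[page] += damping * share
    return ranks

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = {page: 1 / len(graph) for page in graph}
for _ in range(10):  # repeated MapReduce rounds until ranks converge
    ranks = reduce_ranks(map_contributions(graph, ranks), graph)
print(ranks)
```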
As the cost of data farms and utility compute environments decreases, the amount of data that organizations collect and analyze for insights will continue to rise exponentially. The simplified MapReduce programming model provides an easy approach to processing such large amounts of data. MapReduce is a programming pattern that takes advantage of large utility compute farms with many distributed nodes, and it greatly simplifies the programming paradigm for developers processing large amounts of distributed data. While many proprietary MapReduce implementations exist, Apache Hadoop provides an open-source implementation that supports MapReduce, a distributed file system, and a variety of other functions.
We’ve shown how MapReduce functions, its pros, cons, and performance considerations, and some popular implementations, GFS and Hadoop. We’ve also shown how organizations are using MapReduce for problems such as personalized page rank, software quality assurance, and text mining. As more data is collected, MapReduce will continue to provide a simple method of analyzing these large datasets.