The effectiveness of different programming languages in the Apache Hadoop framework for processing large data collections with the MapReduce model is investigated. The focus is on analysing the execution speed of programs in a Hadoop cluster. Several distributed-computing projects from the Hadoop ecosystem are compared. The described experiments confirmed the advantage of using Apache Spark. It is established that the speed advantage of MapReduce programs written in Java or another JVM language over the alternatives is substantial.
The effectiveness of different programming languages in the Apache Hadoop framework for processing large data collections based on the MapReduce model is discussed. Apache Hadoop is used in many industrial projects around the world, for example at Facebook and Yahoo!. It makes it possible to process various tasks efficiently and reliably on a cluster in order to handle huge amounts of data. The MapReduce model lets developers ignore the complexities of cluster management and start developing a program immediately. This work investigates the influence of the programming language on the speed of a program in the Apache Hadoop framework. The subject of comparison is the execution of programs in Java, Scala and Python that solve the same simple problem: determining the length of each word in an input collection of text documents. All three programs, regardless of language, are written in the same style so that the comparison results are objective. For the experiments, the Cloudera Quickstart VM virtual machine image was chosen; it is convenient because Hadoop, HDFS, and other services are already installed in it. A cluster of three nodes was also created for the study. CDH was chosen as the distribution of Apache Hadoop and related projects, and the desired configuration was set on each node. Each program was run on inputs of different sizes: 8 MB, 34 MB, 61 MB, 106 MB and 203 MB. During the experiments, the best results were shown by the program written for Apache Spark. In addition, it was found that an MR program for Apache Hadoop is better written in Java or another JVM language than in Python; the advantage in speed is obvious. The experiments also show that the gain in processing speed grows with larger input collections, so it is not necessary to use Hadoop when working with small data.
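As a minimal sketch of the benchmark task, the map and reduce logic for counting words of each length can be expressed in plain Java. The class and method names below are illustrative, and Java streams stand in for the Hadoop API so the two phases are visible without a cluster; the paper's actual Java, Scala and Python implementations are not reproduced here.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Illustrative sketch of the word-length job described in the text.
// "Map" phase: split each line into words, conceptually emitting a
// (length, 1) pair per word. "Reduce" phase: group pairs by length
// and sum the counts. Here both phases are modeled with Java streams.
public class WordLengthCount {

    public static Map<Integer, Long> countLengths(List<String> lines) {
        return lines.stream()
                // map: tokenize each line on whitespace
                .flatMap(line -> Arrays.stream(line.split("\\s+")))
                .filter(word -> !word.isEmpty())
                // reduce: group by word length and count occurrences
                .collect(Collectors.groupingBy(String::length,
                                               Collectors.counting()));
    }

    public static void main(String[] args) {
        List<String> input = List.of("big data processing", "map reduce");
        System.out.println(countLengths(input));
    }
}
```

On a Hadoop cluster the same tokenization would live in a `Mapper` and the summation in a `Reducer`; the shuffle stage performs the grouping by key that `Collectors.groupingBy` performs here in memory.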