Large scale data processing in Hadoop MapReduce scenario
Details
Cloud Computing has had a huge impact on the IT industry, making computing resources far easier to obtain. Briefly speaking, Cloud Computing is a resource pool containing a massive number of interconnected computers. Against this background, and in order to make full use of such networks, Google introduced the MapReduce model, an implementation of Parallel Computing aimed at processing large amounts of data. Given certain computing resources and the MapReduce model, this book examines how to estimate the time consumption of a large computation task. Based on classical Parallel Computing theories, it proposes two models for estimating time consumption and draws conclusions about which types of computation task can be estimated. The experiments in this book are easy to implement, which makes them suitable references for Cloud Computing enthusiasts.
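The description refers to Google's MapReduce model as implemented in Hadoop. As an illustrative sketch only (not the estimation models proposed in the book), the classic word-count job below shows the map and reduce phases the blurb describes; class names such as WordCount, TokenizerMapper and SumReducer are arbitrary, and the example assumes the standard Hadoop org.apache.hadoop.mapreduce API.

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Map phase: emit (word, 1) for every token in each input line.
    public static class TokenizerMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            for (String token : value.toString().split("\\s+")) {
                if (!token.isEmpty()) {
                    word.set(token);
                    context.write(word, ONE);
                }
            }
        }
    }

    // Reduce phase: sum the counts emitted by the mappers for each word.
    public static class SumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(SumReducer.class); // local pre-aggregation on each mapper
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

The job would be submitted with `hadoop jar wordcount.jar WordCount <input> <output>`: the mappers run in parallel over input splits, and the reducers aggregate the per-word counts, which is the kind of data-parallel workload whose runtime the book's estimation models address.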
About the Author
Li Jian is a master's student at Agder University (UiA) and currently works as a technology consultant at NEVER.NO AS, Norway. His main research area is Cloud Computing, and he is now applying the Amazon Cloud to the broadcasting industry.
Further Information
- General information
- GTIN 09783659155161
- Number of pages 68
- Genre General & Reference
- Edition Aufl.
- Publisher LAP LAMBERT Academic Publishing
- Weight 119 g
- Subtitle Time estimation and computation models
- Dimensions H 220 mm x W 150 mm x D 5 mm
- Year 2012
- EAN 9783659155161
- Format Paperback
- ISBN 3659155160
- Publication date 11.07.2012
- Title Large scale data processing in Hadoop MapReduce scenario
- Author Li Jian
- Language English