Hadooping it Up!
What is Hadoop and what is its role in Big Data
Issue: 14.5 (September/October 2016)
Author: Craig Boyd
Author Bio: Craig Boyd is currently a data architect and senior consultant for a growing business intelligence consultancy. In his 20 years of IT experience, he has held roles ranging from PC Technician, iSeries System Administrator, and iSeries Programmer to Sr. Technical Lead, Data Modeler, Data Architect, Oracle DBA, and BI Consultant. He lives in the great state of Texas with his wife and two kids.
Article Description: No description available
Article Length (in bytes): 11,629
Starting Page Number:
Article Number: 14510
Related Web Link(s):
Excerpt of article text...
In the last column we talked about big data and some of the terms surrounding it. In this column we are going to specifically talk about Hadoop, some of the related projects, and where it fits in the big data world.
What is Hadoop?
Hadoop is a free Java-based programming framework that supports the processing of large data sets in a distributed computing environment. It is part of the Apache project sponsored by the Apache Software Foundation. [From WhatIs.com]
Basically, Hadoop is a means by which massive amounts of data are stored and accessed. It is important to realize that Hadoop is a layer of software used to store massive amounts of data across any number of commodity (inexpensive) machines. As a result, it has a lower cost per gigabyte than other data storage facilities. When we talk about Hadoop storing massive amounts of data, we are typically talking about petabytes of data being stored in a single cluster.
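To make the "spread across commodity machines" idea concrete, here is a minimal illustrative sketch (plain Python, not actual Hadoop code) of how HDFS, Hadoop's storage layer, conceptually splits a file into fixed-size blocks and replicates each block across several nodes. The 128 MB block size and replication factor of 3 are Hadoop's defaults; the round-robin placement and node names are simplifying assumptions (real HDFS uses a rack-aware placement policy).

```python
# Illustrative sketch only: how HDFS conceptually splits a file into
# fixed-size blocks and replicates each block across cluster nodes.
BLOCK_SIZE = 128 * 1024 * 1024   # HDFS default block size (128 MB)
REPLICATION = 3                  # HDFS default replication factor

def place_blocks(file_size_bytes, nodes):
    """Split a file into blocks and assign each block to REPLICATION nodes."""
    num_blocks = -(-file_size_bytes // BLOCK_SIZE)  # ceiling division
    placement = []
    for b in range(num_blocks):
        # Simple round-robin placement; real HDFS is rack-aware.
        replicas = [nodes[(b + r) % len(nodes)] for r in range(REPLICATION)]
        placement.append(replicas)
    return placement

# A 1 GB file across a hypothetical 4-node cluster: 8 blocks, each held on 3 nodes.
cluster = ["node1", "node2", "node3", "node4"]
layout = place_blocks(1024 * 1024 * 1024, cluster)
print(len(layout))   # 8 blocks
print(layout[0])     # ['node1', 'node2', 'node3']
```

Because every block lives on three different machines, the cluster can lose a node (or a disk) without losing data, and reads can be served from whichever replica is closest. This is what lets Hadoop use cheap hardware safely.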
...End of Excerpt. Please purchase the magazine to read the full article.