Information for build hadoop-0.20-0.20.2+737-16
| ID | 694 |
| --- | --- |
| Package Name | hadoop-0.20 |
| Version | 0.20.2+737 |
| Release | 16 |
| Epoch | |
| Summary | Hadoop is a software platform for processing vast amounts of data |
| Built by | Jeffrey Michael Dost |
| State | complete |
| Volume | DEFAULT |
| Started | Fri, 21 Oct 2011 01:24:02 CDT |
| Completed | Fri, 21 Oct 2011 01:38:17 CDT |
| Task | build (el5-osg, hadoop-0.20-0.20.2+737-16.src.rpm) |
| Tags | No tags |
| RPMs | |
| Logs | |

Description

Hadoop is a software platform that lets one easily write and run applications that process vast amounts of data. Here's what makes Hadoop especially useful:

* Scalable: Hadoop can reliably store and process petabytes.
* Economical: It distributes the data and processing across clusters of commonly available computers. These clusters can number in the thousands of nodes.
* Efficient: By distributing the data, Hadoop can process it in parallel on the nodes where the data is located, which makes it extremely fast.
* Reliable: Hadoop automatically maintains multiple copies of data and automatically redeploys computing tasks after failures.

Hadoop implements MapReduce, using the Hadoop Distributed File System (HDFS). MapReduce divides applications into many small blocks of work. HDFS creates multiple replicas of data blocks for reliability, placing them on compute nodes around the cluster. MapReduce can then process the data where it is located.
Changelog

* Thu Oct 20 2011 Jeff Dost <jdost@ucsd.edu> 0.20.2+737-16
- Ensure 0.19 backups are created only if previous installation was 0.19.

* Thu Oct 06 2011 Jeff Dost <jdost@ucsd.edu> 0.20.2+737-15
- Patch hadoop-fuse-dfs to follow symlinks when finding libjvm.so.

* Tue Aug 23 2011 Jeff Dost <jdost@ucsd.edu> 0.20.2+737-14
- Release bump because of missing noarch rpms on previous build.

* Sun Jun 05 2011 Jeff Dost <jdost@ucsd.edu> 0.20.2+737-13
- Add hadoop-config.sh to alternatives to prevent hadoop-daemon from failing.
- Add triggerpostun section to fix alternatives links that break when removing hadoop 0.19.

* Fri Apr 01 2011 Brian Bockelman <bbockelm@cse.unl.edu> 0.20.2+737-12
- Fix mkdir.
- Fix race issues in connect.

* Sat Mar 19 2011 Brian Bockelman <bbockelm@cse.unl.edu> 0.20.2+737-10
- Slightly better FS stability in testing.

* Tue Mar 15 2011 Brian Bockelman <bbockelm@cse.unl.edu> 0.20.2+737-9
- Found another case where we improperly passed around a FS object.

* Mon Mar 14 2011 Brian Bockelman <bbockelm@cse.unl.edu> 0.20.2+737-8
- Update FUSE to keep a per-user FS object cache. Should alleviate slowness when operating on large directories.

* Tue Mar 01 2011 Brian Bockelman <bbockelm@cse.unl.edu> 0.20.2+737-6
- Require -fuse to be aligned with the exact release of the parent package.

* Sat Feb 26 2011 Brian Bockelman <bbockelm@cse.unl.edu> 0.20.2+737-5
- FUSE-DFS adds $HADOOP_CONF to the $CLASSPATH.

* Tue Feb 22 2011 Brian Bockelman <bbockelm@cse.unl.edu> 0.20.2+737-4
- Fix memory leaks in fuse-dfs.

* Thu Dec 23 2010 Brian Bockelman <bbockelm@cse.unl.edu> 0.20.2+737-2
- Moved away the directories causing alternatives to choke.
- Fixed ownership in the conf.empty directory.
- Removed the Cloudera init scripts.
- Add an Obsoletes line in order to allow for a clean upgrade from OSG RPMs.
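The package description summarizes the MapReduce model that Hadoop implements: a map step emits key/value pairs, the framework groups values by key, and a reduce step aggregates each group. A minimal, self-contained Python sketch of that flow is below; it is only an illustration of the model, not Hadoop's actual API, and all function names here are invented for the example:

```python
from collections import defaultdict

def map_phase(records, mapper):
    """Apply the mapper to every input record, emitting (key, value) pairs."""
    for record in records:
        yield from mapper(record)

def shuffle(pairs):
    """Group values by key, as the framework does between map and reduce."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups, reducer):
    """Apply the reducer to each key and its grouped values."""
    return {key: reducer(key, values) for key, values in groups.items()}

# Word count, the canonical MapReduce example:
def wc_mapper(line):
    for word in line.split():
        yield word, 1

def wc_reducer(word, counts):
    return sum(counts)

lines = ["hadoop stores data", "hadoop processes data"]
counts = reduce_phase(shuffle(map_phase(lines, wc_mapper)), wc_reducer)
# counts == {"hadoop": 2, "stores": 1, "data": 2, "processes": 1}
```

In real Hadoop, the map and reduce tasks run as Java classes distributed across the cluster, and the "shuffle" step moves intermediate pairs over the network; the toy version above only shows the data flow on a single machine.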