
 Forum DhammaCitta. Indonesian Buddhist Discussion Forum

Author Topic: How to Scale Hadoop for MapReduce Applications (by SUN)  (Read 1119 times)


Offline hatRed

How to Scale Hadoop for MapReduce Applications (by SUN)
« on: 22 July 2009, 02:28:07 PM »
Watch this on-demand webinar about Apache Hadoop (https://dct.sun.com/dct/forms/reg_us_2005_941_0.jsp), a distributed computing platform that can use thousands of networked nodes to process vast amounts of data. In this webinar, you will learn how Sun's UltraSPARC T2 Plus processor, built on chip multithreading (CMT) technology, can process up to 256 tasks in parallel within a single node.

We will also share how we evaluated CPU and I/O throughput, memory size, and task counts to extract maximal parallelism from a single node.
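In Hadoop of this era, the per-node task counts discussed above were set in the cluster's mapred-site.xml. A minimal sketch, assuming a Hadoop 0.20-style configuration; the slot counts below are illustrative placeholders, not Sun's tested settings:

```xml
<?xml version="1.0"?>
<configuration>
  <!-- Maximum map tasks a single TaskTracker runs concurrently
       (illustrative value; tune to the node's hardware threads). -->
  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>32</value>
  </property>
  <!-- Maximum concurrent reduce tasks per TaskTracker. -->
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>8</value>
  </property>
</configuration>
```

On a heavily threaded CMT machine, raising these slot counts is the usual lever for exposing more of the node's hardware parallelism to Hadoop, balanced against memory and I/O limits.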

You will learn about:

    * Scale
      How to use Hadoop to store and process petabytes of data
    * Performance
      How to maximize parallelism per node, plus results of tests that varied the number of nodes and integrated Flash memory drives
    * Virtualization
      How we created multiple virtual nodes using Solaris Containers
    * Reliability
      How Hadoop automatically maintains multiple copies of data and redeploys tasks based on failures
    * Deployment Options
      How Hadoop can be run in the "cloud" on Amazon EC2/S3 services and in compute farms and high-performance computing (HPC) environments
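For readers new to the topics above, the MapReduce model behind Hadoop can be sketched in a few lines. This is a plain-Python illustration of the map / shuffle / reduce phases using the classic word count, not Hadoop's actual Java API; all function names here are hypothetical:

```python
from collections import defaultdict

def map_phase(documents):
    # Map: emit a (word, 1) pair for every word in every input record.
    for doc in documents:
        for word in doc.split():
            yield (word.lower(), 1)

def shuffle(pairs):
    # Shuffle: group all emitted values by key, as the framework
    # does between the map and reduce phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: combine each key's values; here, sum the word counts.
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["Hadoop scales out", "Hadoop stores petabytes"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts)
```

Hadoop runs many map and reduce tasks like these in parallel across nodes (and, on CMT hardware, across hardware threads within a node), which is where the scale and performance points above come from.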

If you have any questions or feedback, please send a message to newheights [at] sun.com.

Thank you,
Sun Microsystems, Inc.

P.S. Check out over 125 system configs available for free trial. Get our free catalog. ( http://communications1.sun.com/r/c/r?2.1.3J1.2U2.14QzVc.CLK%2aSS..N.GIqy.2Pte.aT1zZW5kbWFpbDJpcnZhbkB5YWhvby5jby5pZCZtbz0xBXMCIYf0 )
« Last Edit: 22 July 2009, 04:48:32 PM by Kemenyan »
i'm just a mammal with troubled soul



 
