Keynote Speech 1: How New Technologies and Big Data Are Expanding the Impact of Supercomputing on Science and Society
John (“Jay”) R. Boisseau, Director, Texas Advanced Computing Center (TACC), The University of Texas at Austin, USA
The transformational impact of supercomputing in science and engineering has been remarkable, especially in simulation-based science. As supercomputing systems have reached tera-scale and now peta-scale, these orders-of-magnitude increases in computational capability have enabled higher-resolution simulations, broader exploration of wide parameter spaces, and even multi-scale, multi-physics simulations, all leading to more realism and more understanding. However, many important problems require yet more orders of magnitude of capability, and are driving new technologies such as GPUs, MICs, FPGAs, and advances in storage and networking. The supercomputing community is again rich in component technologies being used and explored, with more than a dozen processor technologies alone now being used or developed for supercomputing systems.
Concurrently, the explosion in digital data creation—from computing systems and networks, but also from new generations of digital detectors in all domains—is driving a corresponding explosion in data-driven science. Bioinformatics, environmental science, health care, and the social sciences are relatively new communities using massive computing systems, with new usage models and requirements influencing the design, configuration, and operation of large-scale systems.
In this talk, we will examine the emerging technology trends that are changing the face of supercomputing, and discuss the expanding role of supercomputing systems in a world dealing with ‘big data’ as well as ‘big simulations.’ We will thus explore how supercomputing, far from becoming less relevant, is becoming even more so because of ‘big data,’ and how its very definition, as well as its scope and impact, will grow in the years ahead.
John (“Jay”) R. Boisseau is the director of the Texas Advanced Computing Center (TACC) at The University of Texas at Austin. Boisseau came to UT to create TACC in 2001, and under his leadership TACC has grown into one of the leading advanced computing centers in the world. TACC develops, deploys, and operates powerful high performance computing, scientific visualization, massive data storage, and grid computing technologies for scientific research. Boisseau provides the vision and strategy that guide the overall resources & services, research & development, and education & outreach programs of the center. He has led several successful proposals and projects to provide world-class HPC systems and other cyberinfrastructure for the U.S. high-end computational science community, including the National Science Foundation (NSF) funded project that provided Ranger as a “path to peta-scale” HPC system in 2008, and the new NSF award to deploy Stampede, a 10-petaflop system using Intel’s forthcoming MIC processor technology, in early 2013. He was the lead for TACC’s role in the NSF-funded National Partnership for Advanced Computational Infrastructure (NPACI) from 2001 to 2005, and for UT’s role in the NSF-funded TeraGrid from 2005 to 2011. He is now a co-principal investigator on the new NSF-funded eXtreme Science & Engineering Discovery Environment (XSEDE) national cyberinfrastructure project, which succeeds the TeraGrid, and also on the NSF-funded eXtreme Digital (XD) Technology Insertion Service project.
Boisseau began his training at the University of Virginia, where he received a bachelor’s degree in astronomy and physics in 1986 while working in various scientific computing positions. He continued his education at The University of Texas at Austin, where he received his master’s degree in astronomy in 1990, then took a position at the Arctic Region Supercomputing Center in 1994 while conducting computational research on Type Ia supernova explosion mechanisms, which he completed in 1996. He then moved to the San Diego Supercomputer Center, where he eventually founded and became the Associate Director of the Scientific Computing Department, initiating and leading several major activities of the center in HPC and grid computing.
Keynote Speech 2: The Exascale Challenge
Shekhar Borkar, Director, Extreme-scale Research, Intel Architecture Group, USA
Compute performance has increased by orders of magnitude over the last few decades, made possible by continued technology scaling: increasing frequency, providing the integration capacity to realize novel architectures, and reducing energy to keep power dissipation within limits. The technology treadmill will continue, and one would expect to reach Exa-scale performance this decade; however, the same physics that helped in the past will now pose barriers, and business as usual will not be an option. Energy and power will pose a major challenge: an Exa-scale machine would consume in excess of a gigawatt! Memory and communication bandwidth with conventional technology would be prohibitive. Orders-of-magnitude increases in parallelism, let alone the explosion of parallelism created by energy-saving techniques, would increase unreliability. And the programming system will face the even more severe challenge of harnessing performance from this concurrency. This talk will discuss potential solutions in all disciplines, such as circuit design, test, architecture, system design, programming systems, and resiliency, to pave the road towards Exa-scale performance.
Shekhar Borkar received an M.Sc. in Physics from the University of Bombay in 1979 and an MSEE from the University of Notre Dame in 1981, then joined Intel Corp., where he worked on the 8051 family of micro-controllers and on Intel’s supercomputers. Shekhar is an Intel Fellow, an IEEE Fellow, and Director of Extreme-scale Research in the Intel Architecture Group.
Keynote Speech 3: Data Access, Management and Storage: The Road Ahead
Peter Braam, Parallel Scientific and Xyratex, USA
Future systems in HPC and cloud computing will likely lead to billion-way parallelism, placing unprecedented demands on data handling. In this talk we will quickly review the existing paradigms for managing data and point out some of the challenges associated with scaling them up. We will argue that a deeper integration between applications and data—which may be created and shared by dozens of scientists around the world—extending all the way through the storage stack from CPU caches to disk, can eliminate many of these obstacles. The emerging architecture is quite different from current practice, but appears to have a very manageable API. Key examples involve data placement and data access patterns, failure handling, peer-to-peer decision making on collective operations, and information life-cycle management with guided mechanisms. Fully adopting this architecture would lead to changes all the way from the compiler level down to storage behaviors that haven’t been challenged since the emergence of Unix.
Dr. Peter Braam taught mathematics and computer science at Oxford and CMU until around 2000 and then became a software entrepreneur. He has developed several storage systems in start-ups that were later acquired; in the best known of these he developed the Lustre File System, which presently powers 10 of the top 10 systems and more than half of the top 500. When not running start-ups he has held executive and architect positions in larger companies such as Red Hat, Sun, and Xyratex. He is currently running a start-up called Parallel Scientific, focused on parallel programming, and is a Fellow for Storage Software at Xyratex, a large storage OEM supplier. Peter maintains strong connections with academia, occasionally supervises graduate students, and serves on advisory groups to the European Community for High Performance Computing.