Today's FPGAs have become 'All Programmable SoC Platforms' that integrate in a single device multi-core CPUs, programmable DSP functions, programmable I/O and programmable logic, all immersed in a rich and configurable interconnect network. These programmable platform FPGAs allow for the implementation of heterogeneous multi-core architectures that combine traditional CPUs with application-specific processing cores and dedicated data transfer and storage functions. This is enabled by tools that guide designers in partitioning and mapping high-level specifications onto a combination of software running on embedded processors and hardware implemented in programmable logic.
FPGAs are well placed to continue to benefit from Moore's law. Advances in process scaling will be augmented with new circuit and architectural improvements, along with innovations in system-in-package technology, to solve I/O challenges and integrate heterogeneous technologies. These innovations will allow designers to build higher-performance and lower-power systems that optimally exploit the programmable FPGA architecture.
As FPGA platforms continue to deliver more performance at lower cost and lower power, they are becoming the heart of embedded applications such as complex packet processing for networks with line rates of 400+ Gbps; high performance digital signal processing in novel wireless baseband and radio functions; and future video and image processing systems.
Ivo Bolsens is senior vice president and chief technology officer (CTO) at Xilinx, with responsibility for advanced technology development, the Xilinx Research Laboratories (XRL) and the Xilinx University Program (XUP).
Bolsens came to Xilinx in June 2001 from the Belgium-based IMEC, where he was vice president of embedded systems and software. His responsibilities included the design of digital signal processing applications and wireless communication terminals. He also headed the research on design technology for high-level synthesis of DSP hardware, HW/SW co-design and system-on-chip design.
Bolsens holds a PhD in applied science and an MSEE from the Catholic University of Leuven in Belgium.
The GPU evolved from its humble beginnings as a VGA accelerator to become a massively parallel general processor for heterogeneous computing systems. Driven by an insatiable need for more realism in computer graphics, the GPU became a massively parallel processor, executing extensive programs on every pixel more than 60 times a second. Once a certain level of programmability emerged, the potential for general-purpose computing beyond graphics became obvious.
Programming thousand-core GPUs running millions of parallel threads presents some unique opportunities to change the way developers write parallel programs. In this talk, I will present some of the original motivations behind the design of CUDA, the predominant general-purpose parallel programming model used for GPUs today. I will also outline why directive-based programming, where compilers auto-parallelize code using hints from the developer, is experiencing a resurgence in popularity, driven by the performance advantage GPUs bring to solutions like OpenACC.
Just as there are many different ways to program CPUs, the same is true of GPU parallel programming. In this talk, drawing on the activity happening now in the parallel programming community, I will describe how we will be programming hybrid HPC supercomputers in the years to come. I will also share how GPU computing is growing beyond HPC and changing fields from movie making and manufacturing to automotive and mobile devices.
Ian Buck is NVIDIA's General Manager for GPU Computing Software, responsible for all engineering, third-party enablement, and developer marketing activities for GPU computing at NVIDIA. Ian joined NVIDIA in 2004 and created CUDA, which remains the established leading platform for accelerated parallel computing. Before joining NVIDIA, Ian was the development lead on Brook, the forerunner to general-purpose computing on GPUs. He holds a Ph.D. in Computer Science from Stanford University and a B.S.E. from Princeton University.
Looking for a Higgs boson? Peeking inside a nano-scale chemical reactor? Making clean energy from microbial soup? Turning photons into electrons? These and other challenges in science and engineering exemplify modern scientific discovery --- the traditional barriers between disciplines have been demolished, and the three pillars of science (experiment, theory, simulation) have been augmented with the ability to extract knowledge from vast data sets, and then fused into one integrated approach.
Examining these accomplishments, we ask how we can sustain and even accelerate progress given shrinking government funding, rapidly evolving and increasingly complex high-performance computers, and the increasingly multidisciplinary nature of science. Front and center are maintaining our ability to realize ever more complex theoretical models in robust, predictive software capable of using the largest computers available, and enabling new disciplines and researchers to benefit from the power of modern computation at any scale.
Since October 2012, Robert J. Harrison has been a professor in the applied mathematics department of Stony Brook University, where he also directs the new Institute for Advanced Computational Science. He is jointly appointed with Brookhaven National Laboratory, where he directs the Center for Computational Science. Previously, he was director of the Joint Institute for Computational Sciences (JICS) at the University of Tennessee, Knoxville (UTK) and Oak Ridge National Laboratory (ORNL), with an appointment in the department of chemistry at UTK. JICS is home to the National Institute for Computational Sciences (NICS), one of the National Science Foundation supercomputer centers. He has many publications in peer-reviewed journals in the areas of theoretical and computational chemistry and high-performance computing. His undergraduate (1981) and post-graduate (1984) degrees were obtained at Cambridge University, England. Subsequently, he worked as a postdoctoral research fellow at the Quantum Theory Project, University of Florida, and the Daresbury Laboratory, England, before joining the staff of the theoretical chemistry group at Argonne National Laboratory in 1988. In 1992, he moved to the Environmental Molecular Sciences Laboratory of Pacific Northwest National Laboratory, conducting research in theoretical chemistry and leading the development of NWChem, a computational chemistry code for massively parallel computers. In August 2002, he started the joint faculty appointment with UT/ORNL, and became director of JICS in 2011. In addition to his Department of Energy (DOE) Scientific Discovery through Advanced Computing (SciDAC) research into efficient and accurate calculations on large systems, he has been pursuing applications in molecular electronics and chemistry at the nanoscale, and, with support from the National Science Foundation (NSF), has been working towards making general numerical computation on high-performance computers much more accessible and scientifically productive.
In 1999, the NWChem team received an R&D Magazine R&D 100 award; in 2002, he received the IEEE Computer Society Sidney Fernbach award; and in 2011 he received another R&D Magazine R&D 100 award, for the development of MADNESS. His research interests include theoretical and computational chemistry; electron correlation; electron transport; relativistic chemistry; response theory; simulation in many dimensions; high-performance computing; and high-productivity computing.