
Programming massively parallel processors : a hands-on approach / David B. Kirk, Wen-mei W. Hwu.

By: Kirk, David B.
Contributor(s): Hwu, Wen-mei W.
Publisher: Cambridge, MA : Morgan Kaufmann, 2017
Edition: Third edition
Description: xxii, 550 pages : illustrations, charts ; 24 cm
Content type:
  • text
Media type:
  • unmediated
Carrier type:
  • volume
ISBN:
  • 9780128119860
  • 0128119861
Subject(s):
DDC classification:
  • 004/.35 23 K59
Contents:
Data parallel computing -- Scalable parallel execution -- Memory and data locality -- Performance considerations -- Numerical considerations -- Parallel patterns : convolution -- Parallel patterns : prefix sum -- Parallel patterns : parallel histogram computation -- Parallel patterns : sparse matrix computation -- Parallel patterns : merge sort -- Parallel patterns : graph search -- CUDA dynamic parallelism -- Application case study : non-Cartesian magnetic resonance imaging -- Application case study : molecular visualization and analysis -- Application case study : machine learning -- Parallel programming and computational thinking -- Programming a heterogeneous computing cluster -- Parallel programming with OpenACC -- More on CUDA and graphics processing unit computing -- Conclusion and outlook -- Appendices: An introduction to OpenCL ; THRUST : a productivity-oriented library for CUDA ; CUDA Fortran ; An introduction to C++ AMP.
Summary: "Programming Massively Parallel Processors: A Hands-on Approach shows both students and professionals the basic concepts of parallel programming and GPU architecture, exploring, in detail, various techniques for constructing parallel programs. Case studies demonstrate the development process, detailing computational thinking and ending with effective and efficient parallel programs. Topics of performance, floating-point format, parallel patterns, and dynamic parallelism are covered in depth. For this new edition, the authors have updated their coverage of CUDA, including coverage of newer libraries such as cuDNN, moved content that has become less important to appendices, added two new chapters on parallel patterns, and updated case studies to reflect current industry practices"--Publisher's website.
Item type: Book (كتاب)
Holdings:
Item type: Book (كتاب)
Current library: Central Library (المكتبة المركزية)
Call number: 004.35 K59
Status: Available
Notes: Books hall (قاعة الكتب)
Barcode: 45094

Previous edition: 2013.

Includes bibliographical references and index.
