
ReRAM-based machine learning / Hao Yu, Leibin Ni, and Sai Manoj Pudukotai Dinakarrao

By:
  • Yu, Hao [author]
Series: IET professional application of computing series
Publisher: London, United Kingdom : Institution of Engineering and Technology, ©2021
Description: xiii, 243 pages : illustrations (some color) ; 24 cm
Content type:
  • text
Media type:
  • unmediated
Carrier type:
  • volume
ISBN:
  • 9781839530814
  • 1839530812
Related works:
  • Ni, Leibin [author]
  • Dinakarrao, Sai Manoj Pudukotai [author]
Subject(s):
DDC classification:
  • 23 621.39 R428
Summary: The transition towards exascale computing has resulted in major transformations in computing paradigms. The need to analyze and respond to such large volumes of data has led to the adoption of machine learning (ML) and deep learning (DL) methods in a wide range of applications. One of the major challenges is fetching data from memory and writing it back without running into the memory-wall bottleneck. To address this concern, in-memory computing (IMC) and supporting frameworks have been introduced. In-memory computing methods offer ultra-low power and high-density embedded storage. Resistive random-access memory (ReRAM) technology appears to be the most promising IMC solution because of its minimal leakage power, reduced power consumption and small hardware footprint, as well as its compatibility with the CMOS technology widely used in industry. In this book, the authors introduce ReRAM techniques for performing distributed computing using IMC accelerators, present ReRAM-based IMC architectures that can perform the computations of ML and data-intensive applications, and describe strategies for mapping ML designs onto hardware accelerators. The book serves as a bridge between researchers in the computing domain (algorithm designers for ML and DL) and computing hardware designers.
Item type: Book

Includes bibliographical references and index.

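The summary centres on one architectural idea: a ReRAM crossbar stores weights as cell conductances and performs matrix-vector multiplication directly where the data resides, avoiding memory-wall traffic. The snippet below is a minimal, idealized sketch of that principle; the conductance range, weight mapping and function names are illustrative assumptions, not taken from the book.

# Illustrative sketch (not from the book): an idealized ReRAM crossbar used as an
# in-memory matrix-vector multiply engine. Each cell contributes a current by
# Ohm's law, and column currents sum by Kirchhoff's current law, so reading the
# columns yields the dot products of the input vector with the stored weights.
import numpy as np

G_MIN, G_MAX = 1e-6, 1e-4   # assumed conductance range of a ReRAM cell, in siemens

def program_crossbar(weights: np.ndarray) -> np.ndarray:
    """Map a non-negative weight matrix onto cell conductances (linear scaling)."""
    w = np.clip(weights, 0.0, None)
    scale = w.max() if w.max() > 0 else 1.0
    return G_MIN + (w / scale) * (G_MAX - G_MIN)

def crossbar_mvm(conductances: np.ndarray, input_voltages: np.ndarray) -> np.ndarray:
    """Column output currents: I_j = sum_i V_i * G_ij (the multiply happens 'in memory')."""
    return input_voltages @ conductances

# Example: a 3x2 weight matrix applied to a 3-element input vector.
W = np.array([[0.2, 0.8],
              [0.5, 0.1],
              [0.9, 0.4]])
G = program_crossbar(W)
V = np.array([0.3, 0.7, 0.5])   # row input voltages
I = crossbar_mvm(G, V)          # column currents, proportional to W transposed times V
print(I)

In a real device the analog column currents would be digitized by ADCs, and the weight-to-conductance mapping would have to account for device non-idealities; the sketch only shows the ideal dot-product behaviour that IMC accelerators exploit.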