
Advait Madhavan
Articles
-
May 14, 2024 | nist.gov | William Borders | Advait Madhavan | Matthew Daniels | Vasileia Georgiou | Martin Lueker-Boden | Tiffany Santos | Patrick Braganca
The increasing scale of neural networks needed to support more complex applications has led to an increasing requirement for area- and energy-efficient hardware. One route to meeting the budget for these applications is to circumvent the von Neumann bottleneck by performing computation in or near memory.
-
May 14, 2024 | link.aps.org | William Borders | Advait Madhavan | Matthew Daniels | Vasileia Georgiou
The increasing scale of neural networks needed to support more complex applications has led to an increasing requirement for area- and energy-efficient hardware. One route to meeting the budget for these applications is to circumvent the von Neumann bottleneck by performing computation in or near memory. However, an inevitability of transferring neural networks onto hardware is that nonidealities, such as device-to-device variations or poor device yield, impact performance.
-
Mar 15, 2023 | nist.gov | Advait Madhavan
The human brain is an amazingly energy-efficient device. In computing terms, it can perform the equivalent of an exaflop — a billion-billion (1 followed by 18 zeros) mathematical operations per second — with just 20 watts of power. In comparison, one of the most powerful supercomputers in the world, the Oak Ridge Frontier, has recently demonstrated exaflop computing. But it needs a million times more power — 20 megawatts — to pull off this feat.
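The million-fold gap quoted above follows directly from the figures in the teaser; a quick sketch of the arithmetic (the exaflop rate and the two power figures are the article's numbers, not independent measurements):

```python
# Back-of-envelope check of the efficiency gap described above.
# Figures from the article: ~1e18 operations per second for both the
# brain and the Oak Ridge Frontier, at ~20 W and ~20 MW respectively.

ops_per_second = 1e18          # one exaflop

brain_power_w = 20.0           # watts
frontier_power_w = 20e6       # 20 megawatts

# Operations completed per joule of energy consumed
brain_ops_per_joule = ops_per_second / brain_power_w        # 5e16
frontier_ops_per_joule = ops_per_second / frontier_power_w  # 5e10

ratio = brain_ops_per_joule / frontier_ops_per_joule
print(f"Efficiency ratio: {ratio:.0e}")  # -> 1e+06, i.e. a million times
```

Dividing ops per second by watts (joules per second) gives ops per joule, so the power ratio and the efficiency ratio are the same million-fold factor.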