“Energy efficiency is now a critical design constraint for most computing systems,” says Professor R. Iris Bahar of Brown CS, “and as applications become more and more memory- and compute-intensive, energy efficiency and reliability become harder to manage.”
In a recent keynote address (“Energy-efficient and Sustainable Computing Across the Hardware/Software Stack”) at the International Green and Sustainable Computing Conference, Iris presented techniques across the hardware/software stack for energy-efficient and reliable computing, and discussed how these techniques may be used to achieve more sustainable computing in the future.
“In the 1990s,” Iris explains, “computer architects and chip designers invested huge amounts of resources to improve processor performance by supporting aggressive speculation with superscalar out-of-order execution. Eventually, power issues caught up with designers, and we had to rethink our designs to make them more power-aware.”
With machine learning (and learning through deep neural networks) exploding in popularity, Iris’ research has been exploring the costs of deep learning and energy-efficient neural network design. “Deep neural networks have been shown to be very accurate for many applications, but in the end there are costs of deep learning that haven’t been fully accounted for (in terms of training, data set requirements, energy usage, and more),” she states, “and we may need to think twice about whether deep learning is the best use of our resources. Otherwise, we may hit a wall again, just as we did in the 1990s with processor design.”
For more information, click the link that follows to contact Brown CS Communications Outreach Specialist Jesse Polhemus.