Revolutionary Number Formats for Machine Learning
There's some new math taking root in computing. Many of the recent gains in machine learning have come not from Moore's-Law scaling but from the use of lower-precision arithmetic in neural networks. Such a strategy is fundamental to the success of NVIDIA's acclaimed Hopper GPU. While floating-point formats like FP8 are advancing the field today, researchers are exploring numerous possibilities beyond floating point. In machine learning, most of the action happens with numbers near zero, where floating point is adequate but could be better. A new number format called the posit, invented by John L. Gustafson, is gaining adherents who say it gives a much more accurate portrayal of what's going on near zero. Additionally, researchers at NVIDIA have shown how per-vector scaling and optimal clipping can achieve results with 4-bit representations that previously required 8 bits or more.
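To make the posit idea concrete, here is a minimal sketch of decoding an 8-bit posit to a real value. The choice of `n=8` bits with `es=1` exponent bit is an assumption for illustration (posit configurations vary); the decode follows the general posit recipe of sign, regime run, exponent, and fraction fields.

```python
def posit_to_float(bits: int, n: int = 8, es: int = 1) -> float:
    """Decode an n-bit posit with es exponent bits to a Python float.
    Illustrative sketch; n=8, es=1 is one assumed configuration."""
    mask = (1 << n) - 1
    bits &= mask
    if bits == 0:
        return 0.0
    if bits == 1 << (n - 1):
        return float("nan")                    # NaR ("not a real")
    sign = -1.0 if bits >> (n - 1) else 1.0
    if sign < 0:
        bits = (-bits) & mask                  # negatives: two's complement
    body = bits & ((1 << (n - 1)) - 1)         # the n-1 bits after the sign
    # Regime: a run of identical bits terminated by the opposite bit.
    first = (body >> (n - 2)) & 1
    run = 0
    while run < n - 1 and ((body >> (n - 2 - run)) & 1) == first:
        run += 1
    k = run - 1 if first else -run             # regime contribution to the scale
    rem = max(n - 1 - run - 1, 0)              # bits left after regime + terminator
    r = body & ((1 << rem) - 1)
    # Exponent: up to es bits; missing low bits are treated as zero.
    e = (r >> (rem - es)) if rem >= es else (r << (es - rem))
    f_len = max(rem - es, 0)                   # whatever remains is the fraction
    frac = r & ((1 << f_len) - 1)
    return sign * 2.0 ** (k * 2 ** es + e) * (1 + frac / 2.0 ** f_len)
```

The tapered layout is what gives posits their accuracy near zero: values close to 1 get the most fraction bits, while long regime runs stretch the dynamic range (here down to 2^-12 and up to 2^12) without a separate subnormal mechanism.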
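The clipping idea can also be sketched briefly. The snippet below is not NVIDIA's method, just a toy symmetric 4-bit quantizer showing the underlying trade-off: a clipping threshold below the data's maximum saturates rare outliers but shrinks the step size for the many values near zero, which can lower overall error. The threshold 2.5 is an arbitrary assumption here; "optimal clipping" proper would choose it to minimize the expected error.

```python
import numpy as np

def quantize_int4(x: np.ndarray, clip: float) -> np.ndarray:
    """Quantize to symmetric 4-bit levels (-7..7) and dequantize back.
    `clip` caps the representable range; values beyond it saturate."""
    levels = 7                              # int4 symmetric integer range
    scale = clip / levels                   # per-vector scale factor
    q = np.clip(np.round(x / scale), -levels, levels)
    return q * scale                        # reconstructed values

rng = np.random.default_rng(0)
x = rng.standard_normal(10_000)             # most mass near zero
mse_full = np.mean((x - quantize_int4(x, np.abs(x).max())) ** 2)
mse_clip = np.mean((x - quantize_int4(x, 2.5)) ** 2)
```

For this Gaussian sample, clipping at 2.5 rather than at the observed maximum gives a smaller mean squared error, because the finer steps near zero outweigh the saturation error on the few tail values.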