Revolutionary Number Formats for Machine Learning

Event Time

Originally Aired - Wednesday, February 1, 3:00 PM - 3:45 PM Pacific Time (US & Canada)


Event Location

Location: Ballroom B


Event Information

Title: Revolutionary Number Formats for Machine Learning

Description:

There's some new math taking root in computing. Many of the recent gains in machine learning have come not from Moore's Law scaling but from the use of lower-precision arithmetic in neural networks. That strategy is fundamental to the success of NVIDIA's acclaimed, industry-leading Hopper GPU. While floating-point formats like FP8 are advancing the field today, researchers are exploring numerous possibilities beyond floating point. In machine learning, most of the action happens with numbers near zero, where floating point works but leaves accuracy on the table. A number format called posits, invented by John L. Gustafson, is gaining adherents who say it gives a much more accurate portrayal of what's going on near zero. Additionally, researchers at NVIDIA have shown how vector scaling and optimal clipping can achieve results with 4-bit representations that previously required 8 bits or more.
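
To make the posit idea concrete, here is a minimal Python sketch of a posit decoder, using an 8-bit width with one exponent bit (es = 1); both parameters are illustrative assumptions for this sketch, not values taken from the session. A posit packs a sign bit, a variable-length "regime" run, optional exponent bits, and whatever fraction bits remain, so precision tapers with magnitude:

    import math

    def decode_posit(p, nbits=8, es=1):
        # Decode an nbits-wide posit with es exponent bits into a float.
        # nbits and es are illustrative choices for this sketch.
        mask = (1 << nbits) - 1
        p &= mask
        if p == 0:
            return 0.0
        if p == 1 << (nbits - 1):
            return math.nan  # NaR: posits' single "Not a Real" pattern
        sign = -1.0 if (p >> (nbits - 1)) & 1 else 1.0
        if sign < 0.0:
            p = (-p) & mask  # negative posits decode via two's complement
        body = format(p & (mask >> 1), f"0{nbits - 1}b")
        # Regime: run of identical bits after the sign bit; its length sets k.
        run = len(body) - len(body.lstrip(body[0]))
        k = run - 1 if body[0] == "1" else -run
        tail = body[run + 1:]            # bits after the regime terminator
        exp_bits, frac_bits = tail[:es], tail[es:]
        # Truncated exponent bits are implicitly zero, so shift up.
        e = int(exp_bits, 2) << (es - len(exp_bits)) if exp_bits else 0
        f = int(frac_bits, 2) / (1 << len(frac_bits)) if frac_bits else 0.0
        useed = 1 << (1 << es)           # useed = 2^(2^es)
        return sign * useed ** k * 2.0 ** e * (1.0 + f)

    # Walk a few positive bit patterns from minpos to maxpos.
    for p in (0b00000001, 0b00100000, 0b01000000, 0b01001000,
              0b01100000, 0b01111111):
        print(f"{p:08b} -> {decode_posit(p)}")

Printing neighboring bit patterns shows the tapering: long regime runs near the extremes leave no fraction bits at all, while values of moderate magnitude keep the most, which is the behavior posit advocates point to for neural-network workloads.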

Type: Educational Session

Pass Type: 2-Day Pass, All Access Pass, Expo Pass

Theme: Data Centers

