Hacker News
madengr on April 5, 2016 | on: The Nvidia DGX-1 Deep Learning Supercomputer in a ...
They were touting 20 TFLOPS, but that's only for FP16, which isn't useful for many engineering computations that use GPUs. I can already hit 2 TFLOPS FP32 with two K20s. It's a nice improvement over what I have now, but nothing astronomical.
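A quick sketch of why FP16 falls short for many engineering workloads: half precision carries only 10 mantissa bits (roughly 3 decimal digits) and overflows above 65504. This uses NumPy's `float16`/`float32` types purely as an illustration; it is not tied to the DGX-1 or K20 hardware in the comment.

```python
import numpy as np

# float16: 10 mantissa bits, max finite value 65504.
# Near 10000 the spacing between representable float16 values is 8,
# so a unit increment is silently lost.
inc32 = np.float32(10000.0) + np.float32(1.0)   # representable in FP32
inc16 = np.float16(10000.0) + np.float16(1.0)   # rounds back to 10000

print(inc32)  # 10001.0
print(inc16)  # 10000.0 -- the +1 vanished in FP16

# Values common in engineering (stresses, pressures, frequencies)
# can exceed the FP16 range entirely and become infinity.
print(np.float16(70000.0))  # inf
```

The same simulation kernel run in FP16 can therefore stall (updates rounded away) or blow up (overflow to `inf`), which is why FP16 throughput numbers are mostly relevant to workloads like deep-learning inference that tolerate low precision.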