This is fundamental stuff. Gordon made me realize this at my NVIDIA internship.
Accelerated computing is the use of specialized hardware to dramatically speed up work, often with parallel processing that bundles frequently occurring tasks.
I think that a lot of people don’t realize the importance of accelerated computing. People realize it in the ML world, but robotics is a little behind.
- Because we think what we have is enough
What people in the AV industry already realized, people in robotics need to realize too. You ABSOLUTELY need accelerated computing if you want to achieve important work.
- This is Pioneer thinking, seeing something that other people don’t see
Everything depends on higher and higher throughput.
GPUs: Thanks to their parallel processing architecture, GPUs can process large amounts of data simultaneously.
- I know this
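A rough way to feel this without a GPU: the NumPy sketch below (my own toy example, not anything from Gordon) contrasts an element-at-a-time Python loop with a single vectorized bulk operation, which is the shape of computation a GPU spreads across thousands of cores. The timings are illustrative, not a real benchmark.

```python
import time

import numpy as np

n = 1_000_000
a = np.random.rand(n)
b = np.random.rand(n)

# Serial style: one element at a time, one operation per step.
t0 = time.perf_counter()
out_loop = [a[i] + b[i] for i in range(n)]
t_loop = time.perf_counter() - t0

# Bulk style: the same work expressed as one vectorized operation,
# the kind of "do this to everything at once" a GPU parallelizes.
t0 = time.perf_counter()
out_vec = a + b
t_vec = time.perf_counter() - t0

print(f"loop: {t_loop:.3f}s, vectorized: {t_vec:.5f}s")
```

Even on a CPU the vectorized form wins by a wide margin; on a GPU the gap only grows, because every element really is processed simultaneously.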
Gordon gave this really good analogy, so I understand this at a high level, but I need to convince myself of it at a low level.
Imagine traffic, and sending a bunch of people down a road. If there are always a bunch of stop signs (branches), then GPUs aren't good, because they handle branching poorly: threads that diverge end up waiting on each other.
- They are very good for doing lots of simple operations in parallel
CPUs are built to handle all that control flow.
So it all depends on how the problem is formulated. Do you want to frame every problem such that it works on the GPU?
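A small sketch of what that reframing can look like (again a toy NumPy example of my own, with `np.where` standing in for GPU-style predication): the branchy version makes a decision at every element, a stop sign everywhere, while the branch-free version computes both sides for every element and then selects. That looks wasteful, but it is exactly the lockstep formulation SIMD/GPU hardware wants.

```python
import numpy as np

x = np.random.randn(100_000)

# Branchy, CPU-style: a decision ("stop sign") at every element.
def branchy(xs):
    out = []
    for v in xs:
        if v > 0:
            out.append(v * 2.0)
        else:
            out.append(-v)
    return np.array(out)

# Branch-free, GPU-style: compute both outcomes for every element,
# then select. All lanes do the same work in lockstep.
def predicated(xs):
    return np.where(xs > 0, xs * 2.0, -xs)
```

Same answer, different formulation; the second one is the problem "framed such that it works on the GPU."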
Then there are TPUs, Google's ASICs built specifically for tensor operations.
And then there are FPGAs: "By customizing their architecture to your exact needs, they're able to implement your application more efficiently than GPUs and CPUs, which are general-purpose, fixed architecture devices."