If you’re choosing between Quadro and GeForce, definitely pick GeForce. If you’re choosing between Tesla and GeForce, pick GeForce, unless you have a lot of money and could really use the extra RAM.
Quadro GPUs aren’t aimed at scientific computation; Tesla GPUs are. Quadro cards are designed for accelerating CAD, so they won’t give you any advantage for training neural nets. They can probably be used for that purpose just fine, but it’s a waste of money.
Tesla cards are for scientific computation, but they tend to be pretty expensive. The good news is that many of the features offered by Tesla cards over GeForce cards are not necessary to train neural networks.
For example, Tesla cards usually have ECC memory, which is nice to have but not a requirement. They also have much better support for double precision computation, but single precision is plenty for neural network training, and in single precision they perform about the same as GeForce cards.
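As a minimal sketch of why the double-precision hardware rarely matters (using PyTorch here as an assumption; any major framework behaves similarly), tensors and model parameters default to single precision, so ordinary training never touches the double-precision units:

```python
import torch

# Frameworks default to single precision (float32) for training,
# so Tesla's double-precision advantage mostly sits idle.
device = "cuda" if torch.cuda.is_available() else "cpu"

model = torch.nn.Linear(128, 10).to(device)
x = torch.randn(32, 128, device=device)

print(model.weight.dtype)  # torch.float32
print(model(x).dtype)      # torch.float32
```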
One genuinely useful feature of Tesla cards is that they tend to have a lot more RAM than comparable GeForce cards. More RAM is always welcome if you’re planning to train bigger models (or use RAM-intensive computations like FFT-based convolutions).
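If RAM is the deciding factor, it’s easy to check how much a given card offers before committing to a model size. A small sketch, again assuming PyTorch and at least one CUDA device:

```python
import torch

# Query total memory on each visible CUDA device; bigger models and
# memory-hungry operations like FFT-based convolutions need more headroom.
if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        print(f"GPU {i} ({props.name}): {props.total_memory / 1024**3:.1f} GiB")
else:
    print("No CUDA device found")
```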
See the list of CUDA-enabled GPUs on the NVIDIA website.
References:
- Quadro vs GeForce GPUs for neural networks
- Choosing between GeForce or Quadro GPUs to do machine learning via TensorFlow
- Deep Learning Frameworks (NVIDIA)
- NVIDIA GPUs – The Engine of Deep Learning
- Why are GPUs well-suited to deep learning?
- Using a GPU