TensorFlow with CUDA support via Docker on Gentoo
Getting TensorFlow configured properly with CUDA on Gentoo is a bit of a mess because of versioning issues (and, in my case, a mystifying compile-time issue with sqlite3). I decided to punt on compiling everything from scratch and instead try the Docker route recommended on the TensorFlow site.
Unfortunately, the NVIDIA Container Toolkit and its dependencies do not have official Gentoo packages. After a bit of hunting I found vowstar's overlay for nvidia-container-toolkit, so with that in hand:
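Something along these lines should do it (the overlay URL and the package atom are my best guesses; check the overlay itself for the current names):

```shell
# Enable vowstar's overlay (URL assumed) via eselect-repository, sync, and
# install the toolkit; the package category is also an assumption.
eselect repository add vowstar git https://github.com/vowstar/vowstar-overlay.git
emaint sync -r vowstar
emerge -av app-emulation/nvidia-container-toolkit

# Restart docker so it picks up the nvidia runtime.
rc-service docker restart
```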
A quick sanity check to make sure that your GPU is detected by docker:
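A minimal check along the lines of what the NVIDIA Container Toolkit docs suggest (the CUDA image tag is just one example; any CUDA base image should work):

```shell
# Run nvidia-smi inside a CUDA container; if your GPU shows up in the
# output, docker can reach it through the nvidia runtime.
docker run --rm --gpus all nvidia/cuda:11.0-base nvidia-smi
```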
And now to launch a jupyter notebook with GPU support:
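This is essentially the command from the TensorFlow Docker docs:

```shell
# Launch the GPU-enabled jupyter image and expose the notebook port.
docker run --gpus all -it --rm -p 8888:8888 tensorflow/tensorflow:latest-gpu-jupyter
```

Jupyter prints a URL with a login token on startup; open it at localhost:8888.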
Then, to check that TensorFlow is actually able to use the GPU, add a new cell to your notebook:
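The device listing below is the kind of output `device_lib` produces, so a cell like this should do:

```python
# List every device TensorFlow can see; the GPU should appear
# alongside the CPU (and XLA) devices.
from tensorflow.python.client import device_lib

device_lib.list_local_devices()
```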
That should give you output that looks something like this:
name: "/device:CPU:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 17815895832028484076
name: "/device:XLA_CPU:0"
device_type: "XLA_CPU"
memory_limit: 17179869184
locality {
}
incarnation: 14145977603517205240
physical_device_desc: "device: XLA_CPU device"
name: "/device:XLA_GPU:0"
device_type: "XLA_GPU"
memory_limit: 17179869184
locality {
}
incarnation: 17195696372739247336
physical_device_desc: "device: XLA_GPU device"
name: "/device:GPU:0"
device_type: "GPU"
memory_limit: 4544004096
locality {
bus_id: 1
links {
}
}
incarnation: 15500834465452357573
physical_device_desc: "device: 0, name: GeForce GTX 980 Ti, pci bus id: 0000:01:00.0, compute capability: 5.2"