If the error is Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR, chances are the GPU is not configured properly.
If the error is could not create cudnn handle: CUDNN_STATUS_ALLOC_FAILED, chances are the GPU is running out of memory.
Set Memory Growth
This code sets set_memory_growth to True for every visible GPU, so TensorFlow allocates GPU memory as needed instead of reserving all of it up front.

import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    try:
        # Currently, memory growth needs to be the same across GPUs
        for gpu in gpus:
            tf.config.experimental.set_memory_growth(gpu, True)
        logical_gpus = tf.config.experimental.list_logical_devices('GPU')
        print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
    except RuntimeError as e:
        # Memory growth must be set before GPUs have been initialized
        print(e)
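As an alternative to calling set_memory_growth in code, TensorFlow also honors the TF_FORCE_GPU_ALLOW_GROWTH environment variable. A minimal sketch; the variable must be set before TensorFlow is imported, otherwise it has no effect:

```python
import os

# Must be set before TensorFlow is imported, or it is ignored
os.environ['TF_FORCE_GPU_ALLOW_GROWTH'] = 'true'

# import tensorflow as tf  # import TensorFlow only after setting the variable
print(os.environ['TF_FORCE_GPU_ALLOW_GROWTH'])
```

This is convenient when you cannot modify the training script itself, e.g. export TF_FORCE_GPU_ALLOW_GROWTH=true in the shell before launching it.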
Limit GPU Memory
This code limits the 1st GPU's memory usage to 3072 MB. The index into gpus and the memory_limit value can be changed as required.

import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    try:
        # Restrict TensorFlow to allocate at most 3072 MB on the first GPU
        tf.config.experimental.set_virtual_device_configuration(
            gpus[0],
            [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=3072)])
    except RuntimeError as e:
        # Virtual devices must be set before GPUs have been initialized
        print(e)
References
The official documentation can be found at: