RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward. #228
I have gone through the solutions already available on the forum. I'm using the torchxrayvision and torchcam libraries: I need the DenseNet121 pretrained weights and torchcam to generate Grad-CAM maps for the model. Please help me with this; the solutions available on the PyTorch forum don't seem to apply here. The minimal reproducible example I can give is below, where rescaled_output is an image tensor of shape (1, 244, 244).
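Since the full snippet isn't shown here, the mechanism behind the error can be illustrated in plain PyTorch (a minimal sketch with no torchcam/torchxrayvision; the tensor and names are illustrative, not the original code): calling `.backward()` a second time on the same graph raises exactly this RuntimeError, because autograd frees the saved intermediate tensors after the first pass.

```python
import torch

# A tiny stand-in for a model output: any tensor with an autograd graph.
x = torch.randn(4, requires_grad=True)
scores = (x ** 2).sum()

scores.backward()  # first backward: autograd frees the saved tensors

try:
    scores.backward()  # second backward on the same (now freed) graph
except RuntimeError as err:
    print("backward through the graph a second time" in str(err))  # True
```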
Hi @karnikkanojia 👋 Thanks for reporting this! This looks like a cell-execution ordering problem in a notebook. To help further, I'd need a minimal reproducible snippet; the one you provided is incomplete (missing imports, the `cam` definition, and the model definition). With my limited knowledge of the setup, I think what's happening is that your `enumerate` is going through the outputs of a single forward pass. You're using a gradient-based CAM method, so each call in your list comprehension performs a backward pass through the same graph. I'd suggest an explicit loop that zeroes the gradients and recomputes the forward pass before each CAM computation:

```python
cam_outputs = []
for idx in range(len(model.pathologies)):
    model.zero_grad()
    # Recompute the forward pass so each backward call has a fresh graph
    preds = model(rescaled_output.unsqueeze(0))
    cam_outputs.append(cam(class_idx=idx, scores=preds))
```

But to confirm this, please share a complete minimal reproducible snippet 🙏 Cheers!
Thanks! So the error can be avoided, since the library exposes the relevant low-level PyTorch options:
This piece of code doesn't crash on my end 👍
It might be a bit slow, as it performs a backward pass for each of the pathologies (18, apparently). One option that would use more RAM…
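For reference, the low-level PyTorch option alluded to above, `retain_graph`, can be sketched without torchcam (a minimal illustration of the mechanism, not the library's actual code path): passing `retain_graph=True` keeps the saved intermediate tensors alive, so the same graph can be backpropagated once per class without the RuntimeError.

```python
import torch

x = torch.randn(4, requires_grad=True)
scores = (x ** 2).sum()

# retain_graph=True keeps the saved tensors alive, so the same graph
# can be backpropagated repeatedly (e.g. one pass per pathology).
for _ in range(3):
    scores.backward(retain_graph=True)

# d(scores)/dx = 2x, and gradients accumulate across the 3 passes.
print(torch.allclose(x.grad, 6 * x))  # True
```

The trade-off is memory: retaining the graph keeps intermediate activations alive until the last backward pass completes.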