I want to apply a Gaussian kernel θ(z_i, z_j) = exp(−γ‖z_i − z_j‖²) to embeddings in PyTorch. The function receives adj (a 2485×2485 torch tensor) and emb (a 2485×64 torch.nn.parameter.Parameter). The goal is to compute a binary cross-entropy loss between adj and emb using this kernel.
import torch
import torch.nn.functional as F

def compute_loss_ber_exp1(adj, emb, b=0.1):
    # Init (b is currently unused)
    N, d = emb.shape
    gamma = 0.001
    # Indices of the upper-triangular part, excluding the diagonal;
    # this row-major pair ordering matches the ordering of F.pdist's output
    ind = torch.triu_indices(N, N, offset=1)
    labels = adj[ind[0], ind[1]]
    # Compute f(z_i, z_j) = exp(-gamma * ||z_i - z_j||^2)
    dist = F.pdist(emb, p=2)
    logits = torch.exp(-gamma * dist ** 2)
    # Compute loss
    loss = F.binary_cross_entropy_with_logits(logits, labels, reduction='mean')
    return loss
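(A side note we noticed while debugging, separate from the crash: exp(-gamma * dist ** 2) already lies in (0, 1], so it is a probability rather than a logit, and binary_cross_entropy_with_logits applies a second sigmoid on top of it. A minimal sketch of what we believe the intended loss is, assuming the kernel value should be used as the edge probability directly:

    # Sketch: exp(-gamma * d^2) is already in (0, 1], so use plain BCE
    # instead of the *_with_logits variant, which would add an extra sigmoid.
    probs = torch.exp(-gamma * dist ** 2)
    loss = F.binary_cross_entropy(probs, labels, reduction='mean')

This changes the objective, not the CUDA error below.)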
The function is used in the following training loop:
max_epochs = 1000
display_step = 250
compute_loss = compute_loss_ber_exp1
for epoch in range(max_epochs):
    opt.zero_grad()
    loss = compute_loss(adj, emb, b)
    loss.backward()
    opt.step()
    # Training loss is printed every display_step epochs
    if epoch == 0 or (epoch + 1) % display_step == 0:
        print(f'Epoch {epoch+1:4d}, loss = {loss.item():.5f}')
which raises the following error:
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-10-1f961abb1fac> in <module>
      6     opt.zero_grad()
      7     loss = compute_loss(adj, emb, b)
----> 8     loss.backward()
      9     opt.step()
     10     # Training loss is printed every display_step epochs

.../site-packages/torch/tensor.py in backward(self, gradient, retain_graph, create_graph)
    164                 products. Defaults to ``False``.
    165         """
--> 166         torch.autograd.backward(self, gradient, retain_graph, create_graph)
    167
    168     def register_hook(self, hook):

.../site-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
     97     Variable._execution_engine.run_backward(
     98         tensors, grad_tensors, retain_graph, create_graph,
---> 99         allow_unreachable=True)  # allow_unreachable flag
    100
    101

RuntimeError: CUDA error: invalid configuration argument
Does anyone know how to solve this? We have spent quite a bit of time on this error and tried several alternative ways to compute the distance-based similarity, but they all turned out to be very slow. Thanks!
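For context, one direction we are experimenting with (unverified): the failure seems to originate in F.pdist's backward pass on CUDA, which on some PyTorch versions hits kernel launch-configuration limits for large inputs. The sketch below computes the same squared distances from a Gram matrix instead, so autograd never goes through pdist. The name compute_loss_ber_exp1_v2 is hypothetical, and it also feeds the kernel value to plain binary_cross_entropy, per the note above:

    import torch
    import torch.nn.functional as F

    def compute_loss_ber_exp1_v2(adj, emb, b=0.1):
        # Hypothetical workaround: ||z_i - z_j||^2 = ||z_i||^2 + ||z_j||^2 - 2<z_i, z_j>,
        # built from a single matmul so backward avoids F.pdist's CUDA kernel.
        N, d = emb.shape
        gamma = 0.001
        ind = torch.triu_indices(N, N, offset=1).to(emb.device)
        labels = adj[ind[0], ind[1]]
        sq_norms = (emb ** 2).sum(dim=1)                           # shape (N,)
        sq_dist = sq_norms.unsqueeze(1) + sq_norms.unsqueeze(0) - 2.0 * emb @ emb.t()
        sq_dist = sq_dist[ind[0], ind[1]].clamp(min=0.0)           # each pair once; clamp rounding noise
        probs = torch.exp(-gamma * sq_dist)                        # kernel values in (0, 1]
        return F.binary_cross_entropy(probs, labels, reduction='mean')

The N×N intermediate is roughly 25 MB of float32 for N = 2485, and everything is a dense matmul plus indexing, so it should be both memory-safe and fast on GPU.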