Updating a linear regression (gradient descent) with Python (NumPy)

When I update theta (the weights) for the linear regression, the cost seems to increase on every update instead of decreasing.
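
The update rule I am trying to implement is, as far as I understand it, the standard batch gradient descent step for linear regression, with every component of theta updated simultaneously from a temporary copy:

    h_\theta(x) = \theta^{T} x, \qquad
    \theta_j := \theta_j - \frac{\alpha}{m} \sum_{i=1}^{m} \bigl( h_\theta(x^{(i)}) - y^{(i)} \bigr)\, x_j^{(i)}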

Using NumPy, I try to update theta on every iteration by writing the new values into a pre-allocated array (temp) and then assigning that array back to theta at the end of each iteration:

import numpy as np

def gradient_descent(X, Y, alpha, iteration):
    # Wrapper name assumed. X: feature matrix (one row per example, bias column first),
    # Y: target values, alpha: learning rate, iteration: number of gradient steps.
    theta = np.array([[0], [0]])   # initial weights
    temp = np.array([[0], [0]])    # holds the simultaneously updated weights
    m = len(Y)

    def hypothesis(theta, X, row):
        '''Calculates the hypothesis by multiplying the transpose of theta with the features of X in the given row'''
        output = np.transpose(theta).dot(np.transpose(X[row]))
        return int(output)

    def cost_function():
        '''Calculates the cost function to plot (this is to check if the cost function is converging)'''
        total = 0
        for i in range(m):
            total = pow((hypothesis(theta, X, i) - Y[i]), 2) + total
        return total / (2 * m)

    def cost_function_derivative(thetapos):
        '''Calculates the derivative of the cost function to determine which direction to go'''
        cost = 0
        for a in range(m):
            cost += (hypothesis(theta, X, a) - int(Y[a])) * int(X[a, thetapos])
        return cost

    for i in range(iteration):
        alpher = alpha * (1 / m)
        for j in range(len(theta)):
            temp[j, 0] = theta[j, 0] - float(alpher) * float(cost_function_derivative(j))
        print(cost_function())
        theta = temp
    return hypothesis(theta, X, 5), theta

I expected it to output 13 with theta equal to [1, 2], but alas my poor code gives me 0 and [0, 0].
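
For reference, a minimal made-up dataset of the kind I am testing with (this is just an assumption for illustration: a bias column of ones plus x = 1..6 and y = 1 + 2x, so the fitted theta should be close to [1, 2] and the hypothesis at row 5 should be 13; the learning rate and iteration count are example values) would be set up like this:

import numpy as np

# Hypothetical test data: first column is the bias term, second is the feature;
# Y follows y = 1 + 2x exactly.
X = np.array([[1, 1],
              [1, 2],
              [1, 3],
              [1, 4],
              [1, 5],
              [1, 6]])
Y = np.array([[3], [5], [7], [9], [11], [13]])

# Run a few hundred iterations with a small learning rate and inspect
# the printed cost values and the returned hypothesis/theta.
print(gradient_descent(X, Y, alpha=0.01, iteration=500))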
