Is there a general method or "good practice" to make requests.get run faster?

The code for my test project keeps brute-forcing requests, but it slows down after about 40-50 attempts. I'm not sure whether the server is throttling the requests, the code is inefficient, or both. I tried several different domains and it was slow there as well. I'm looking for good practices, and ways to improve and learn, for other similar projects.
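
One thing I suspect, but have not verified, is that dead or slow subdomains simply block each request until the connection attempt gives up. A minimal sketch of bounding that with requests' timeout parameter (the 3-second value is an arbitrary guess, not something I've tuned):

import requests

# Sketch only: cap how long a single lookup may block.
# timeout applies to both the connect and the read phase; 3 seconds is arbitrary.
def probe(url):
    try:
        return requests.get("https://" + url, timeout=3)
    except requests.exceptions.RequestException:
        return None

The full script I am actually running is below.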

import requests

# The function brute-forces subdomains from the wordlist file subdomain1k.txt.
# Tried sessions and setting a headers parameter, which seems to have helped others using requests.

c = 0
def req(url):
    global c
    session = requests.Session()
    for _ in range(10):
        try:
            return session.get("https://" + url, headers={'Connection': 'close'})
        except requests.exceptions.ConnectionError:
            # Count failed lookups; if all 10 attempts fail, the function
            # falls through and implicitly returns None.
            c += 1
            print("--------------")
            print(c)


# I wasn't sure whether requests or reading each line was the slow part, so I set a read buffer, which did nothing for the speed.
# subdomain1k.txt contains 1000 subdomain prefixes (www, mail, ftp, ...) which get prepended to the target site, e.g. https://mail.google.com
# I was also wondering: if subdomain1k.txt contained a million entries, what would the solution be then? (A concurrency sketch of that idea follows the script.)

target_url = "google.com"

with open('subdomain1k.txt','r',buffering=2<<16) as file:
    for line in file:
        word = line.strip()
        test_url = word + "." + target_url
        response = req(test_url)

        # Falsy both when req() returned None and when the status code is 4xx/5xx
        if response:
            print(test_url)
            # with open('out.txt','a') as write_text:
            #   write_text.write(test_url+"\n") 
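
On the million-entry question in the comments above: the direction I keep seeing suggested, but have not benchmarked myself, is running the lookups concurrently rather than one at a time, since almost all of the wall-clock time is spent waiting on the network. A rough sketch with the standard-library ThreadPoolExecutor; the worker count and timeout are arbitrary placeholders, not tuned values:

import concurrent.futures
import requests

target_url = "google.com"     # same target as in the script above
MAX_WORKERS = 20              # arbitrary; would need tuning against rate limits

def check(word):
    # One independent lookup per wordlist entry; returns the URL on a
    # successful (status < 400) response, mirroring the `if response:` check above.
    url = "https://" + word + "." + target_url
    try:
        r = requests.get(url, timeout=3)    # 3-second timeout is a guess
        return url if r.ok else None
    except requests.exceptions.RequestException:
        return None

with open('subdomain1k.txt') as f:
    words = [line.strip() for line in f if line.strip()]

with concurrent.futures.ThreadPoolExecutor(max_workers=MAX_WORKERS) as pool:
    for found in pool.map(check, words):
        if found:
            print(found)

Giving each worker its own requests.Session would additionally let connections be reused per thread, but I have not tried that here.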