I'm working on a scraping project and trying to scrape information from 13 pages. The pages all have the same structure; the only thing that differs is the URLs.
I'm able to scrape each page with a for loop, and I can see each page's information in the terminal. But when I save it to a CSV, only the last page's information (page 13) gets saved.
I'm sure I'm missing something, but I can't figure out what. Thanks!
I'm using Python 3.7 and BeautifulSoup for the scraping.
from urllib.request import urlopen as uReq
from bs4 import BeautifulSoup as soup
pages = [str(i) for i in range (1,14)]
for page in pages:
    my_url ='Myurl/=' + page
    uClient = uReq(my_url)
    page_html = uClient.read()
    uClient.close()
    page_soup = soup(page_html, "html.parser")
    containers = page_soup.findAll("table", {"class":"hello"})
    container = containers[0]
    filename = "Full.csv"
    f = open(filename, "w")
    headers= "Aa, Ab, Ac, Ad, Ba, Bb, Bc, Bd\n"
    f.write(headers)
    for container in containers:
        td_tags = container.find_all('td')
        A = td_tags[0]
        B=td_tags[2]
        Aa = A.a.text   
        Ab = A.span.text
        Ac = A.find('span', attrs = {'class' :'boxes'}).text.strip()
        Ad = td_tags[1].text
        Ba = B.a.text   
        Bb = B.span.text
        Bc = B.find('span', attrs = {'class' :'boxes'}).text.strip()
        Bd = td_tags[3].text
        print("Aa:" + Aa)
        print("Ab:" + Ab)
        print("Ac:" + Ac)
        print("Ad:" + Ad)
        print("Ba:" + Ba)
        print("Bb:" + Bb)
        print("Bc:" + Bc)
        print("Bd:" + bd)
        f.write(Aa + "," + Ab + "," + Ac.replace(",", "|") + "," + Ad + "," + Ba + "," + Bb + "," + Bc.replace(",", "|") + "," + Bd + "\n")
    f.close()
Edit: Also, if anyone has a good idea of how to confirm and record the page number for each container, that would be helpful too. Thanks again!
Posted on 2018-12-31 23:40:00
Do this to append to the file instead of overwriting it:
with open(filename, "a") as myfile:
    myfile.write(Aa + "," + Ab + "," + Ac.replace(",", "|") + "," + Ad + "," + Ba + "," + Bb + "," + Bc.replace(",", "|") + "," + Bd + "\n")https://stackoverflow.com/questions/53989074