
Exporting from Python to CSV with BeautifulSoup

Asked by a Stack Overflow user on 2020-12-11 22:17:12
1 answer · 53 views · 0 followers · 1 vote

I'm new to this and can't seem to get it to export correctly.

Code language: python
import csv
from bs4 import BeautifulSoup

# select document
with open('scrape1.html') as html_file:
    soup = BeautifulSoup(html_file, 'lxml')

# create/name csv
with open('speechengine_report.csv', 'w') as csv_file:
    writer = csv.writer(csv_file)
    writer.writerow(['computer', 'usagedata']) 

# tell bs4 to only look at x tags with a class of y
for licensedata in soup.find_all('div', class_='licensedata'):

    # scrape pc id
    computer = licensedata.p.b.text
    print(computer)

    # scrape usage stats for each id
    for usagedata in licensedata.find_all('td'):

        # minutes = usagedata.table.tbody
        print(usagedata.text)

    # blank line
    print()

    # writer.writerow([computer, usagedata])

    
csv_file.close()

1 Answer

Answered by a Stack Overflow user (accepted answer) on 2020-12-11 23:05:47

The rest of the code that writes data to the csv file should be inside the with block. You also don't need csv_file.close(), because the with statement handles that for you. Try the code below. See also: file handling in Python.

Code language: python
import csv
from bs4 import BeautifulSoup

with open('scrape1.html') as html_file:
    soup = BeautifulSoup(html_file, 'lxml')

# create/name csv
with open('speechengine_report.csv', 'w') as csv_file:
    writer = csv.writer(csv_file)
    writer.writerow(['computer', 'usagedata']) 
    # tell bs4 to only look at x tags with a class of y
    for licensedata in soup.find_all('div', class_='licensedata'):

        # scrape pc id
        computer = licensedata.p.b.text
        print(computer)

        # scrape usage stats for each id
        for usagedata in licensedata.find_all('td'):

            # minutes = usagedata.table.tbody
            print(usagedata.text)

        # blank line
        print()

        # writer.writerow([computer, usagedata])
Votes: 2
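
For completeness, here is a minimal sketch of how the accepted answer's version could actually write rows to the csv instead of only printing them. It assumes (this is not confirmed by the question) that each computer id should go on one row together with its td usage cells joined into a single field; newline='' is the usual way to open a csv file for writing in Python 3. No explicit close() call is needed, since the with block closes the file on exit.

Code language: python

import csv
from bs4 import BeautifulSoup

# parse the saved HTML document
with open('scrape1.html') as html_file:
    soup = BeautifulSoup(html_file, 'lxml')

with open('speechengine_report.csv', 'w', newline='') as csv_file:
    writer = csv.writer(csv_file)
    writer.writerow(['computer', 'usagedata'])

    for licensedata in soup.find_all('div', class_='licensedata'):
        # pc id from the <b> tag inside the first <p>
        computer = licensedata.p.b.text

        # collect the text of every <td> cell for this computer
        usage = [td.get_text(strip=True) for td in licensedata.find_all('td')]

        # one row per computer: id plus the joined usage cells
        writer.writerow([computer, '; '.join(usage)])
# the csv file is closed automatically when the with block ends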
Original content provided by Stack Overflow: https://stackoverflow.com/questions/65259370