I have a JSON file that is roughly 8GB in size. When I try to convert the file with the following script:
import csv
import json
infile = open("filename.json","r")
outfile = open("data.csv","w")
writer = csv.writer(outfile)
for row in json.loads(infile.read()):
    writer.writerow(row)

I get this error:
Traceback (most recent call last):
  File "E:/Thesis/DataDownload/PTDataDownload/demo.py", line 9, in <module>
    for row in json.loads(infile.read()):
MemoryError

I'm sure this has to do with the size of the file. Is there a way to make sure the file converts to CSV without the error?
Here is a sample of my JSON:
     {"id":"tag:search.twitter.com,2005:905943958144118786","objectType":"activity","actor":{"objectType":"person","id":"id:twitter.com:899030045234167808","link":"http://www.twitter.com/NAJajsjs3","displayName":"NAJajsjs","postedTime":"2017-08-19T22:07:20.000Z","image":"https://pbs.twimg.com/profile_images/905943685493391360/2ZavxLrD_normal.jpg","summary":null,"links":[{"href":null,"rel":"me"}],"friendsCount":23,"followersCount":1,"listedCount":0,"statusesCount":283,"twitterTimeZone":null,"verified":false,"utcOffset":null,"preferredUsername":"NAJajsjs3","languages":["tr"],"favoritesCount":106},"verb":"post","postedTime":"2017-09-08T00:00:45.000Z","generator":{"displayName":"Twitter for iPhone","link":"http://twitter.com/download/iphone"},"provider":{"objectType":"service","displayName":"Twitter","link":"http://www.twitter.com"},"link":"http://twitter.com/NAJajsjs3/statuses/905943958144118786","body":"@thugIyfe Beyonce do better","object":{"objectType":"note","id":"object:search.twitter.com,2005:905943958144118786","summary":"@thugIyfe Beyonce do better","link":"http://twitter.com/NAJajsjs3/statuses/905943958144118786","postedTime":"2017-09-08T00:00:45.000Z"},"inReplyTo":{"link":"http://twitter.com/thugIyfe/statuses/905942854710775808"},"favoritesCount":0,"twitter_entities":{"hashtags":[],"user_mentions":[{"screen_name":"thugIyfe","name":"dari.","id":40542633,"id_str":"40542633","indices":[0,9]}],"symbols":[],"urls":[]},"twitter_filter_level":"low","twitter_lang":"en","display_text_range":[10,27],"retweetCount":0,"gnip":{"matching_rules":[{"tag":null,"id":6134817834619900217,"id_str":"6134817834619900217"}]}}(不好意思格式太糟糕了)
Alternatively, I have roughly 8000 smaller json files that I combined to make this one. They are each in their own folder, with just the single json in each folder. Would it be easier to convert each of these individually and then combine them into one csv?
I ask because I have only very basic Python knowledge, and all the answers I've found to similar questions are far more complicated than I can understand. Please help this new python user read this json as a csv!
Posted on 2018-03-10 04:00:22
Would it be easier to convert each of these individually and then combine them into one csv?
Yes, it certainly would.
For example, this will put each JSON object/array (whatever gets loaded from each file) onto its own line of the CSV:
import json, csv
from glob import glob
with open('out.csv', 'w') as f:
    for fname in glob("*.json"):  # Reads all json from the current directory
        with open(fname) as j:
            f.write(str(json.load(j)))
            f.write('\n')

Use the glob pattern **/*.json to find all json files in nested folders.
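Note that Python's glob only expands ** across directories when you pass recursive=True, so a sketch of the nested-folder variant of the loop above might look like this (same assumptions as before, just a different pattern):

import json, csv
from glob import glob

with open('out.csv', 'w') as f:
    # recursive=True is needed for ** to descend into subdirectories
    for fname in glob("**/*.json", recursive=True):
        with open(fname) as j:
            f.write(str(json.load(j)))
            f.write('\n')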
Since you don't have an array, it isn't clear what for row in ... would be doing with your data. Unless you want each JSON key to become a CSV column?
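If each JSON key becoming a CSV column is what you're after, one possible sketch uses csv.DictWriter, assuming every file shares the same top-level keys (nested objects such as actor will simply land in their cell as a Python-style string):

import json, csv
from glob import glob

writer = None
with open('out.csv', 'w', newline='') as f:
    for fname in glob("*.json"):
        with open(fname) as j:
            record = json.load(j)  # one JSON object per file
        if writer is None:
            # take the column names from the first file's top-level keys
            writer = csv.DictWriter(f, fieldnames=list(record))
            writer.writeheader()
        writer.writerow(record)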
Posted on 2018-03-27 03:53:24
Yes, this can definitely be done in a very easy way. I opened a 4GB json file in a few seconds. In my case I didn't need to convert it to csv, but that can be done just as simply.
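Since the 8GB file was built by concatenating ~8000 smaller files, one easy route (assuming the combined file ends up with one JSON object per line, i.e. newline-delimited JSON) is to parse it one record at a time instead of calling json.loads on the whole thing, so only a single record is ever held in memory:

import json, csv

with open("filename.json") as infile, open("data.csv", "w", newline="") as outfile:
    writer = csv.writer(outfile)
    for line in infile:  # stream the file line by line
        line = line.strip()
        if not line:
            continue
        record = json.loads(line)  # parse just this one record
        # pick a few fields from the sample above; adjust to taste
        writer.writerow([record["id"], record["postedTime"], record["body"]])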
https://stackoverflow.com/questions/49204735