I want to append roughly 1 million rows to a dataframe. My current approach takes a very long time and is not feasible. Here is what I have done:
An example of the row appended in each iteration:

['Offer_5', 'Offer_4', 'Offer_12', 'Offer_8', 'Offer_10', 'Offer_2', 1000065]

Sample code:
cols = ['OFFER_CODE_1', 'OFFER_CODE_2', 'OFFER_CODE_3', 'OFFER_CODE_4', 'OFFER_CODE_5', 'OFFER_CODE_6', 'ID']
final_lst_appened = []
for index, row in df.iterrows():
    final_lst = []
    # some processing to get a row as stated above
    final_lst_appened.append(final_lst)
new_df = pd.DataFrame(columns=cols, data=final_lst_appened)

Posted on 2020-03-27 01:47:18
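For reference, the pattern above can be sketched end to end; the per-row processing and the input frame here are placeholder assumptions, since the original post elides them:

```python
import pandas as pd

cols = ['OFFER_CODE_1', 'OFFER_CODE_2', 'OFFER_CODE_3',
        'OFFER_CODE_4', 'OFFER_CODE_5', 'OFFER_CODE_6', 'ID']

# Hypothetical source frame; the real df comes from the asker's data.
df = pd.DataFrame({'ID': [1000065, 1000066]})

final_lst_appened = []
for index, row in df.iterrows():
    # Placeholder for the real processing that builds one output row.
    final_lst = ['Offer_%d' % i for i in range(1, 7)] + [row['ID']]
    final_lst_appened.append(final_lst)

# The DataFrame is constructed once, after the loop, which is already
# much cheaper than appending to a DataFrame inside the loop.
new_df = pd.DataFrame(columns=cols, data=final_lst_appened)
```

Accumulating plain Python lists and building the frame in one call keeps the loop cost dominated by the row processing itself, not by pandas bookkeeping.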
A small performance gain might come from changing iterrows() to itertuples(), as described here: https://medium.com/swlh/why-pandas-itertuples-is-faster-than-iterrows-and-how-to-make-it-even-faster-bc50c0edd30d. Otherwise, if the code in the for-loop that generates each row is computationally heavy, you may want to look at multiprocessing (https://docs.python.org/2/library/multiprocessing.html). Something along the lines of:
from multiprocessing import Pool
from os import cpu_count

with Pool(cpu_count()) as pool:
    pool.map(func, list(df.itertuples()))

where func is the function applied to generate a row from an original row.
https://stackoverflow.com/questions/60878542