I am not a Python person, but I occasionally have to write things like this. I wrote this code a few months ago and it did the job without any errors. But today, when I needed to use the same script on some newer csv files, it gave me errors that I cannot fix on my own. Please see the code below, followed by the error.
import pandas as pd
#import xlsxwriter
data_df = pd.read_excel("New2020Snap.xlsx")
data_df['MaxDate'] = data_df.groupby(['LeadId', 'LeadStatus'])['CreatedDate'].transform('max')
data_df['MinDate'] = data_df.groupby(['LeadId', 'LeadStatus'])['CreatedDate'].transform('min')
data_df['Difference'] = pd.to_datetime(data_df['MaxDate']) - pd.to_datetime(data_df['MinDate'])
agg_df = data_df.groupby(['LeadId', 'LeadStatus', 'Email']).agg(MaxDate=('CreatedDate', 'max'),
                                                                MinDate=('CreatedDate', 'min')).reset_index()
agg_df['Difference'] = pd.to_datetime(agg_df['MaxDate']) - pd.to_datetime(agg_df['MinDate'])
#data_df.to_json(orient='records')
with pd.ExcelWriter('../out/ComputedReport.xlsx', engine='XlsxWriter') as writer:
    data_df.to_excel(writer, sheet_name='New Computed Data', index=False)
    agg_df.to_excel(writer, sheet_name='Computed Agg Data', index=False)
print(data_df)

Below is the error I get when I run the script above.
Traceback (most recent call last):
File "C:\Users\w-s\IdeaProjects\PythonForEverybody\src\pandas_read_opps.py", line 6, in <module>
data_df['MaxDate'] = data_df.groupby(['OpportunityID', 'OpportunityName', 'ToStage'])['CloseDate'].transform('max')
File "C:\Users\w-s\AppData\Local\Programs\Python\Python39\lib\site-packages\pandas\core\groupby\generic.py", line 511, in transform
result = getattr(self, func)(*args, **kwargs)
File "C:\Users\w-s\AppData\Local\Programs\Python\Python39\lib\site-packages\pandas\core\groupby\groupby.py", line 1559, in max
return self._agg_general(
File "C:\Users\w-s\AppData\Local\Programs\Python\Python39\lib\site-packages\pandas\core\groupby\groupby.py", line 1017, in _agg_general
result = self.aggregate(lambda x: npfunc(x, axis=self.axis))
File "C:\Users\w-s\AppData\Local\Programs\Python\Python39\lib\site-packages\pandas\core\groupby\generic.py", line 255, in aggregate
return self._python_agg_general(
File "C:\Users\w-s\AppData\Local\Programs\Python\Python39\lib\site-packages\pandas\core\groupby\groupby.py", line 1094, in _python_agg_general
return self._python_apply_general(f, self._selected_obj)
File "C:\Users\w-s\AppData\Local\Programs\Python\Python39\lib\site-packages\pandas\core\groupby\groupby.py", line 892, in _python_apply_general
keys, values, mutated = self.grouper.apply(f, data, self.axis)
File "C:\Users\w-s\AppData\Local\Programs\Python\Python39\lib\site-packages\pandas\core\groupby\ops.py", line 213, in apply
res = f(group)
File "C:\Users\w-s\AppData\Local\Programs\Python\Python39\lib\site-packages\pandas\core\groupby\groupby.py", line 1062, in <lambda>
f = lambda x: func(x, *args, **kwargs)
File "C:\Users\w-s\AppData\Local\Programs\Python\Python39\lib\site-packages\pandas\core\groupby\groupby.py", line 1017, in <lambda>
result = self.aggregate(lambda x: npfunc(x, axis=self.axis))
File "<__array_function__ internals>", line 5, in amax
File "C:\Users\w-s\AppData\Local\Programs\Python\Python39\lib\site-packages\numpy\core\fromnumeric.py", line 2705, in amax
return _wrapreduction(a, np.maximum, 'max', axis, None, out,
File "C:\Users\w-s\AppData\Local\Programs\Python\Python39\lib\site-packages\numpy\core\fromnumeric.py", line 85, in _wrapreduction
return reduction(axis=axis, out=out, **passkwargs)
File "C:\Users\w-s\AppData\Local\Programs\Python\Python39\lib\site-packages\pandas\core\generic.py", line 11468, in stat_func
return self._reduce(
File "C:\Users\w-s\AppData\Local\Programs\Python\Python39\lib\site-packages\pandas\core\series.py", line 4248, in _reduce
return op(delegate, skipna=skipna, **kwds)
File "C:\Users\w-s\AppData\Local\Programs\Python\Python39\lib\site-packages\pandas\core\nanops.py", line 129, in f
result = alt(values, axis=axis, skipna=skipna, **kwds)
File "C:\Users\w-s\AppData\Local\Programs\Python\Python39\lib\site-packages\pandas\core\nanops.py", line 873, in reduction
result = getattr(values, meth)(axis)
File "C:\Users\w-s\AppData\Local\Programs\Python\Python39\lib\site-packages\numpy\core\_methods.py", line 39, in _amax
return umr_maximum(a, axis, None, out, keepdims, initial, where)
TypeError: '>=' not supported between instances of 'datetime.datetime' and 'str'
Process finished with exit code 1

So basically, I was working with two separate copies of the same code, each with minor variations. The copy I was able to fix is pasted below. I made the change from the first comment under my question, suggested by Guillaume Ansanay-Alex; the exact line of code came from the answer, which I will mark as correct after this edit. The same error was appearing in that code as well.
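For anyone hitting the same TypeError: it typically means the date column came out of read_excel() as dtype object, with genuine datetime objects and plain strings mixed together in one column. A minimal sketch (my own illustration with made-up values, not the original spreadsheet data) that reproduces the failure:

```python
from datetime import datetime
import pandas as pd

# A column read from Excel can come back as dtype=object when some cells
# are real datetimes and others are plain text; max() then has to compare
# the two types against each other and fails, as in the traceback above.
mixed = pd.Series([datetime(2021, 4, 8), "2021-04-09"])
try:
    mixed.max()
except TypeError as exc:
    print("comparison failed:", exc)
```

Converting the whole column with pd.to_datetime() up front makes every value the same type, so the comparison inside max()/min() becomes well defined.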
So the working copy of my code looks like this:
import pandas as pd
#import xlsxwriter
data_df = pd.read_excel("OppAvgStageDuration.xlsx")
#suggested by the first comment and answered by the accepted one.
data_df['CloseDate'] = pd.to_datetime(data_df['CloseDate'])
data_df['MaxDate'] = data_df.groupby(['OpportunityID', 'OpportunityName', 'ToStage'])['CloseDate'].transform('max')
data_df['MinDate'] = data_df.groupby(['OpportunityID', 'OpportunityName', 'ToStage'])['CloseDate'].transform('min')
data_df['Difference'] = pd.to_datetime(data_df['MaxDate']) - pd.to_datetime(data_df['MinDate'])
agg_df = data_df.groupby(['OpportunityID', 'OpportunityName', 'ToStage']).agg(MaxDate=('CloseDate', 'max'),
                                                                              MinDate=('CloseDate', 'min')).reset_index()
agg_df['Difference'] = pd.to_datetime(agg_df['MaxDate']) - pd.to_datetime(agg_df['MinDate'])
#data_df.to_json(orient='records')
with pd.ExcelWriter('../out/ComputedReportOpps.xlsx', engine='xlsxwriter') as writer:
    data_df.to_excel(writer, sheet_name='New Computed Data', index=False)
    agg_df.to_excel(writer, sheet_name='Computed Agg Data', index=False)
print(data_df)

Posted on 2021-04-08 15:01:00
Currently you are converting the derived MaxDate and MinDate columns with to_datetime(), but try converting the source CreatedDate column with to_datetime() from the start:
data_df = pd.read_excel("New2020Snap.xlsx")
data_df['CreatedDate'] = pd.to_datetime(data_df['CreatedDate'])

If that does not work, then per Guillaume's comment, I think the column contains mixed formats.
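If the formats really are mixed, a defensive way to find the offending rows (a sketch with made-up values, not the asker's data) is to parse with errors='coerce' and then inspect the entries that come back as NaT:

```python
import pandas as pd

# Made-up CreatedDate values standing in for the real spreadsheet column.
raw = pd.Series(["2021-04-08", "2021-04-09", "oops"], name="CreatedDate")

# errors='coerce' turns unparseable entries into NaT instead of raising,
# so the rest of the script keeps working on a proper datetime column.
parsed = pd.to_datetime(raw, errors="coerce")

# The NaT positions point straight at the source values that failed.
bad_rows = raw[parsed.isna()]
```

Once you can see which raw strings fail to parse, you can either clean them in the spreadsheet or pass an explicit format= to pd.to_datetime().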
https://stackoverflow.com/questions/67006452