Is it possible to iterate over a PySpark groupBy DataFrame without aggregation or count? For example, in pandas you can write for name, group in df2 (where df2 is a groupby object). Is iterating a groupby different in PySpark, or do you have to use aggregation and count?
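A minimal sketch of the difference: a pandas groupby object is directly iterable, while PySpark's GroupedData is not, so a common workaround is to collect the distinct keys and filter the DataFrame once per key. The PySpark part is shown in comments (it needs a running SparkSession); the same key-then-filter logic is demonstrated with pandas so the snippet runs anywhere. The column names `key`/`value` are just example data, not from the question.

```python
import pandas as pd

df = pd.DataFrame({"key": ["a", "a", "b"], "value": [1, 2, 3]})

# pandas: a groupby object is directly iterable, yielding (key, sub-frame) pairs
groups = {k: sub["value"].tolist() for k, sub in df.groupby("key")}

# PySpark analogue: GroupedData is NOT iterable, so one workaround is to
# collect the distinct keys, then filter the DataFrame per key, e.g.:
#   keys = [r["key"] for r in sdf.select("key").distinct().collect()]
#   for k in keys:
#       sub = sdf.filter(sdf["key"] == k)
#       ...  # process each per-key sub-DataFrame
# The same pattern, expressed in pandas:
keys = sorted(df["key"].unique())
filtered = {k: df[df["key"] == k]["value"].tolist() for k in keys}

print(groups)    # {'a': [1, 2], 'b': [3]}
print(filtered)  # same result via the key-then-filter pattern
```

Note that the filter-per-key pattern triggers one Spark job per key, so for many keys an aggregation or `applyInPandas` is usually the better fit.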
I tried to use a pandas DataFrame to reproduce the result of this SQL query: SELECT strftime('%m', date_report) as month, count(*) where has_travel_history = 't' and age >= '50' order by total_infector desc limit 2
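A rough pandas translation of the filter/month-count part of that query might look like the sketch below. The sample rows are invented for illustration; the query as given omits its FROM and GROUP BY clauses, so this only covers the pieces that are visible (the filter, the month extraction, the count, and the descending limit-2). Note that the SQL compares age against the string '50'; in pandas a numeric comparison is the usual choice.

```python
import pandas as pd

# Hypothetical sample data using the column names from the query
df = pd.DataFrame({
    "date_report": ["2020-03-01", "2020-03-15", "2020-04-02", "2020-05-09"],
    "has_travel_history": ["t", "t", "t", "f"],
    "age": [55, 62, 70, 51],
})

# WHERE has_travel_history = 't' AND age >= 50
mask = (df["has_travel_history"] == "t") & (df["age"] >= 50)
filtered = df[mask]

# strftime('%m', date_report) AS month, counted per month
months = pd.to_datetime(filtered["date_report"]).dt.strftime("%m")
counts = months.value_counts()  # value_counts sorts descending by count

# ORDER BY ... DESC LIMIT 2
top2 = counts.head(2)
print(top2)
```

With the sample data, month "03" appears twice and "04" once among the filtered rows, so those are the two months returned.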