I have the following code:
output = (assignations
.join(activations,['customer_id','external_id'],'left')
.join(redeemers,['customer_id','external_id'],'left')
.groupby('external_id')
.agg(f.expr('COUNT(DISTINCT(CASE WHEN assignation = 1 THEN customer_id ELSE NULL END))').alias('assigned'),
f.expr('COUNT(DISTINCT(CASE WHEN activation = 1 THEN customer_id ELSE NULL END))').alias('activated'),
f.expr('COUNT(DISTINCT(CASE WHEN redeemer = 1 THEN customer_id ELSE NULL END))').alias('redeemed'))
)
This code gives the following output:
external_id assigned activated redeemed
DISC0000089309 31968 901 491
DISC0000089428 31719 893 514
DISC0000089283 2617 60 39
My idea was to rewrite the `f.expr` SQL strings in a more Pythonic/PySpark style. That's why I tried the following code:
output = (assignations
.join(activations,['customer_id','external_id'],'left')
.join(redeemers,['customer_id','external_id'],'left')
.groupby('external_id')
.agg(f.count(f.when(f.col('assignation')==1,True).alias('assigned')),
f.count(f.when(f.col('activation')==1,True).alias('activated')),
f.count(f.when(f.col('redeemer')==1,True).alias('redeem'))
))
The problem is that the output is not the same: the numbers don't match. How can I convert the code to get the same output?
Posted on 2021-02-17 20:50:41
You can use `f.countDistinct` together with `f.when` to get the equivalent of `COUNT(DISTINCT CASE WHEN ...)` in Spark SQL. Your attempt differs for two reasons: `f.count` counts every non-null row without de-duplicating customers, and the `.alias()` was attached to the `when()` column instead of to the aggregate:
output = (assignations
.join(activations,['customer_id','external_id'],'left')
.join(redeemers,['customer_id','external_id'],'left')
.groupby('external_id')
.agg(
f.countDistinct(f.when(f.col('assignation') == 1, f.col('customer_id'))).alias('assigned'),
f.countDistinct(f.when(f.col('activation') == 1, f.col('customer_id'))).alias('activated'),
f.countDistinct(f.when(f.col('redeemer') == 1, f.col('customer_id'))).alias('redeemed')
)
)
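To see why the counts diverge, here is a plain-Python sketch (with hypothetical toy data, not the actual tables from the question) of what `COUNT(DISTINCT CASE WHEN ...)` computes versus a plain `COUNT` over the same condition:

```python
# Toy rows of (customer_id, assignation) -- hypothetical data for illustration only.
rows = [
    ("c1", 1),
    ("c1", 1),   # duplicate customer: counted once by COUNT(DISTINCT ...)
    ("c2", 1),
    ("c3", 0),   # not assigned: CASE WHEN yields NULL, excluded from both counts
]

# COUNT(CASE WHEN assignation = 1 THEN customer_id END):
# counts every non-null row, duplicates included -- what f.count(f.when(...)) does.
count_all = sum(1 for cid, flag in rows if flag == 1)

# COUNT(DISTINCT CASE WHEN assignation = 1 THEN customer_id END):
# de-duplicates customer_ids first -- what f.countDistinct(f.when(...)) does.
count_distinct = len({cid for cid, flag in rows if flag == 1})

print(count_all)       # 3  (c1 counted twice)
print(count_distinct)  # 2  (only c1 and c2)
```

Since joining `activations` and `redeemers` can produce multiple rows per customer, only the de-duplicated count reproduces the original numbers.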
https://stackoverflow.com/questions/66242250