I'm new to SFDC. I have a report that was created by a user, and I want to use Python to dump the report's data into a CSV/Excel file. I've seen several Python packages that can do this, but my code gives an error:
from simple_salesforce import Salesforce
sf = Salesforce(instance_url='https://cs1.salesforce.com', session_id='')
sf = Salesforce(password='xxxxxx', username='xxxxx', organizationId='xxxxx')

Could I get the basic steps for setting up the API, along with some sample code?
Posted on 2016-02-16 23:25:41
This worked for me:
import requests
import csv
from simple_salesforce import Salesforce
import pandas as pd
sf = Salesforce(username=your_username, password=your_password, security_token=your_token)
login_data = {'username': your_username, 'password': your_password_plus_your_token}
with requests.session() as s:
    d = s.get("https://your_instance.salesforce.com/{}?export=1&enc=UTF-8&xf=csv".format(reportid), headers=sf.headers, cookies={'sid': sf.session_id})

d.content will contain a string of comma-separated values, which you can read with the csv module.
From there I put the data into pandas, hence the function name and the pandas import. I stripped out the rest of the function, which put the data into a DataFrame, but if you're interested in how that's done, let me know.
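As a rough sketch of that last step, here is one way to parse the exported bytes with the csv module (the content bytes and column names below are hard-coded stand-ins for the real d.content from the request above):

```python
import csv
import io

# Stand-in for d.content returned by the report export request above
content = b"Name,Amount\nAcme,100\nGlobex,250\n"

# Decode the bytes and parse them with the csv module
rows = list(csv.reader(io.StringIO(content.decode("utf-8"))))
header, records = rows[0], rows[1:]

# Each record can then be zipped with the header into a dict,
# and a list of such dicts can be passed straight to pandas.DataFrame()
dicts = [dict(zip(header, r)) for r in records]
print(dicts[0])  # {'Name': 'Acme', 'Amount': '100'}
```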
Posted on 2018-08-17 05:01:32
In case it's useful, I wanted to write out the steps I used to answer this question now (August 2018), based on Obol's comment. For reference, I followed the README instructions for the salesforce_reporting package at https://github.com/cghall/force-retrieve/blob/master/README.md.
To connect to Salesforce:
from salesforce_reporting import Connection, ReportParser
sf = Connection(username='your_username', password='your_password', security_token='your_token')

Then, to get the report I wanted into a pandas DataFrame:
import pandas as pd

report = sf.get_report(your_reports_id)
parser = ReportParser(report)
report = parser.records_dict()
report = pd.DataFrame(report)

If you prefer, the four report lines above can also be condensed into one, like so:

report = pd.DataFrame(ReportParser(sf.get_report(your_reports_id)).records_dict())

One difference I ran into versus the README: sf.get_report('report_id', includeDetails=True) threw an error stating get_report() got an unexpected keyword argument 'includeDetails'. Simply removing it seemed to make the code work fine.
report can now be exported via report.to_csv('report.csv', index=False), or manipulated directly.
Edit: changed parser.records() to parser.records_dict(), as this lets the DataFrame come with the columns already named, rather than indexing them numerically.
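The difference that edit makes can be sketched with plain pandas (the row values here are made up; records() yields lists while records_dict() yields dicts):

```python
import pandas as pd

# Rows as plain lists (what records() yields) -> numeric column labels
df_lists = pd.DataFrame([["Acme", 100], ["Globex", 250]])

# Rows as dicts (what records_dict() yields) -> named columns
df_dicts = pd.DataFrame([{"Name": "Acme", "Amount": 100},
                         {"Name": "Globex", "Amount": 250}])

print(list(df_lists.columns))  # [0, 1]
print(list(df_dicts.columns))  # ['Name', 'Amount']
```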
Posted on 2020-02-26 16:15:39
The code below is fairly long and may only fit our use case, but the basic idea is as follows:
Find a date-interval length, plus any extra filtering needed, such that you never run into the "more than 2,000 rows" limit. In my case I could use a weekly date-range filter, but needed to apply some additional filters on top of that.
Then run it like this:
report_id = '00O4…'
sf = SalesforceReport(user, pass, token, report_id)
it = sf.iterate_over_dates_and_filters(datetime.date(2020,2,1),
    'Invoice__c.InvoiceDate__c', 'Opportunity.CustomField__c',
    [('a', 'startswith'), ('b', 'startswith'), …])
for row in it:
    # do something with the dict

The iterator goes through every week since 2020-02-01 (if you need a daily or monthly iterator you'd need to change the code, but the change should be minimal) and applies the filter CustomField__c.startswith('a'), then CustomField__c.startswith('b'), and so on. It acts as a generator, so you don't have to handle the filter looping yourself.
The iterator throws an exception if any single query returns more than 2,000 rows, to make sure the data isn't incomplete.
One caveat here: SF has a limit of at most 500 queries per hour. Say you have a year's worth of 52 weeks and 10 extra filters; you've already hit that limit.
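The weekly windowing that keeps each query under the row limit can be sketched on its own (the start date and cut-off below are arbitrary examples, not part of the class):

```python
import datetime

def weekly_windows(start, until):
    """Yield (start, end) date pairs covering consecutive 7-day windows."""
    runner = start
    while runner <= until:
        yield runner, runner + datetime.timedelta(days=6)
        runner += datetime.timedelta(days=7)

windows = list(weekly_windows(datetime.date(2020, 2, 1), datetime.date(2020, 2, 28)))
print(windows[0])    # (datetime.date(2020, 2, 1), datetime.date(2020, 2, 7))
print(len(windows))  # 4

# With 10 extra filters per window, a year of 52 weeks already means
# 52 * 10 = 520 queries, which exceeds SF's 500-queries-per-hour limit.
```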
Here's the class (it depends on simple_salesforce):
import simple_salesforce
import json
import datetime


class SalesforceReport(simple_salesforce.Salesforce):
    """
    Helper class to iterate over Salesforce report data
    and maneuver around the 2000-row max limit.
    """
    def __init__(self, username, password, security_token, report_id):
        super(SalesforceReport, self).__init__(username=username, password=password, security_token=security_token)
        self.report_id = report_id
        self._fetch_describe()

    def _fetch_describe(self):
        url = f'{self.base_url}analytics/reports/{self.report_id}/describe'
        result = self._call_salesforce('GET', url)
        self.filters = dict(result.json()['reportMetadata'])

    def apply_report_filter(self, column, operator, value, replace=True):
        """
        Adds/replaces a filter, example:
        apply_report_filter('Opportunity.InsertionId__c', 'startsWith', 'hbob').
        For date filters use apply_standard_date_filter.

        column: needs to correspond to a column in your report, AND the report
                needs to have this filter configured (so in the UI the filter
                can be applied)
        operator: equals, notEqual, lessThan, greaterThan, lessOrEqual,
                  greaterOrEqual, contains, notContain, startsWith, includes
                  see https://sforce.co/2Tb5SrS for an up-to-date list
        value: value as a string
        replace: if set to True, then if there's already a restriction on column
                 this restriction will be replaced, otherwise it's added additionally
        """
        filters = self.filters['reportFilters']
        if replace:
            filters = [f for f in filters if not f['column'] == column]
        filters.append(dict(
            column=column,
            isRunPageEditable=True,
            operator=operator,
            value=value))
        self.filters['reportFilters'] = filters

    def apply_standard_date_filter(self, column, startDate, endDate):
        """
        Replaces the date filter. The date filter needs to be available as a
        filter in the UI already.
        Example: apply_standard_date_filter('Invoice__c.InvoiceDate__c', d_from, d_to)

        column: needs to correspond to a column in your report
        startDate, endDate: instances of datetime.date
        """
        self.filters['standardDateFilter'] = dict(
            column=column,
            durationValue='CUSTOM',
            startDate=startDate.strftime('%Y-%m-%d'),
            endDate=endDate.strftime('%Y-%m-%d')
        )

    def query_report(self):
        """
        Returns a generator which yields one report row as a dict at a time.
        """
        url = self.base_url + "analytics/reports/query"
        result = self._call_salesforce('POST', url, data=json.dumps(dict(reportMetadata=self.filters)))
        r = result.json()
        columns = r['reportMetadata']['detailColumns']
        if not r['allData']:
            raise Exception('got more than 2000 rows! Quitting as data would be incomplete')
        for row in r['factMap']['T!T']['rows']:
            values = []
            for c in row['dataCells']:
                t = type(c['value'])
                if t == str or t == type(None) or t == int:
                    values.append(c['value'])
                elif t == dict and 'amount' in c['value']:
                    values.append(c['value']['amount'])
                else:
                    print(f"don't know how to handle {c}")
                    values.append(c['value'])
            yield dict(zip(columns, values))

    def iterate_over_dates_and_filters(self, startDate, date_column, filter_column, filter_tuples):
        """
        Returns a generator which iterates over every week and applies each
        filter in turn for the given column.
        """
        date_runner = startDate
        while True:
            print(date_runner)
            self.apply_standard_date_filter(date_column, date_runner, date_runner + datetime.timedelta(days=6))
            for val, op in filter_tuples:
                print(val)
                self.apply_report_filter(filter_column, op, val)
                for row in self.query_report():
                    yield row
            date_runner += datetime.timedelta(days=7)
            if date_runner > datetime.date.today():
                break

https://stackoverflow.com/questions/22853232
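Coming back to the original goal of dumping the rows to CSV: the dicts yielded by the iterator can be written out with csv.DictWriter. The sample rows below are made up; in practice you would loop over sf.iterate_over_dates_and_filters(...) instead:

```python
import csv

# Made-up rows standing in for the dicts the iterator yields
rows = [
    {"Invoice__c.InvoiceDate__c": "2020-02-03", "Opportunity.CustomField__c": "a1"},
    {"Invoice__c.InvoiceDate__c": "2020-02-04", "Opportunity.CustomField__c": "b2"},
]

# Write a header row from the dict keys, then one CSV row per dict
with open("report.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
```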