My goal is to download tweets from StockTwits using Python.
I found this script by Jason Haury. Unfortunately, I could not get it to run. What I have done so far is download the "api.py" and "requestors.py" scripts and replace the "ST_ACCESS_TOKEN" in the latter with my access token. But when I run the command get_watched_stocks(my_watchlist_id)
, I get the following error:
<ipython-input-12-b889976b3838> in get_watched_stocks(wl_id)
    115     """ Get list of symbols being watched by specified StockTwits watchlist
    116     """
--> 117     wl = R.get_json(ST_BASE_URL + 'watchlists/show/{}.json'.format(wl_id), params=ST_BASE_PARAMS)
    118     wl = wl['watchlist']['symbols']
    119     return [s['symbol'] for s in wl]

TypeError: unbound method get_json() must be called with Requests instance as first argument (got str instance instead)
Does anyone know what I might be doing wrong? If not: can someone explain, step by step, how I can use Mr. Haury's script, or any other script, to download tweets from StockTwits?
Posted on 2017-05-09 12:00:26
You can also use Selenium. My script targets CC_transcript, but you can apply it to any StockTwits account:
###########################################################################
### This script is a web scraper for stocktwits. ###
## applied specifically on cc_transcripts . ###
### To use it you need first to install Python 3.5.2 on your computer. ###
### Install the module "Selenium" 3.1.1, and "chromedriver.exe" ###
###########################################################################
from selenium import webdriver
import sys
import time
from selenium.webdriver.common.keys import Keys
#only for Chrome, for firefox need another driver
print("Loading... Please wait")
Pathwebdriver="D:\\Programs\\Python\\Python35-32\\Scripts\\chromedriver.exe"
driver = webdriver.Chrome(Pathwebdriver)
#website to analyse
driver.get("https://stocktwits.com/cctranscripts?q=cctranscripts")
#Scrolling of the webpage
ScrollNumber=3
print(str(ScrollNumber)+ " scrolldown will be done.")
for i in range(1,ScrollNumber): #scroll down X times
    print("Scrolling... #"+str(i))
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
    time.sleep(2) #delay between 2 scrolls so the page has loaded; 1 s is too short, some loading takes longer
#retrieving source code
html_source = driver.page_source
data = str(html_source.encode('utf-8'))
#driver.close() #close Chrome to allow opening new windows and to save the source code
#Saving source code (in the same folder as this script)
SaveSource = False
if SaveSource:
    text_file = open("SourceCode.html", "w")
    text_file.write(data)
    text_file.close()
#Analysis of the source code
PosScanning=1
GlobalList=[]
print("Processing data")
while data[PosScanning:].find("picked")>0:
    PosPick=data[PosScanning:].find("picked") +PosScanning
    List = [0, 0, 0, 0, 0, 0, 0] #Ticker, Nb of shares, Text of stocktwits, Link, Price of buying, Date, Text of CC_transcript
    #Quote
    dataBis=data[PosPick::-1] #reading the string backwards
    PosBegin=PosPick - dataBis.find(">") +1 #looking for the beginning of the text
    data=data[PosBegin:] #shortening the string each loop to speed up processing
    PosEnd=data.find("<") #looking for the end of the text
    #print(data[PosBegin:PosEnd])
    List[2]=data[:PosEnd].replace(","," ")
    #Nb of shares
    List[1]=List[2].split(' up', 1 )[1]
    List[1]=List[1].split('share', 1 )[0]
    List[1]=List[1].replace(" ","")
    #link to the transcript
    PosLinkBegin=data.find("href=")+6
    PosLinkend=data.find("\"",PosLinkBegin,PosLinkBegin+3000)
    #print(data[PosLinkBegin:PosLinkend])
    List[3]=data[PosLinkBegin:PosLinkend]
    #Symbol
    PosSymbolBegin=data.find("data-symbol=")+13
    PosSymbolEnd=data.find("\"",PosSymbolBegin,PosSymbolBegin+300)
    #print(data[PosSymbolBegin:PosSymbolEnd])
    List[0]=data[PosSymbolBegin:PosSymbolEnd]
    #data-body: "picked" is repeated 2 times, need to skip past both
    PosBody1=data.find("picked",PosSymbolEnd,PosSymbolEnd+10000)+100
    PosBody2=data.find("picked",PosBody1,PosBody1+10000)
    PosScanning=PosBody2 +100
    GlobalList.append(List)
#Opening Link to retrieve information
print("Opening links to retrieve detailed information from CC_transcript")
j=1
for item in GlobalList:
    print("Retrieving data: " +str(j)+"/"+str(len(GlobalList)))
    driver.find_element_by_tag_name('body').send_keys(Keys.COMMAND + 't') #open tab
    driver.get(item[3])
    html_source2 = driver.page_source
    data2 = str(html_source2.encode('utf-8'))
    #text of CC_transcript
    TextePos=data2.find("$(\"#meaning\").popover();")
    item[6] = data2[TextePos+40:TextePos+1000].replace(","," ")
    #price of shares
    BuyPos=item[6].find("place at")+10
    BuyPosend=item[6][BuyPos:].find("share")+BuyPos +6
    item[4]=item[6][BuyPos:BuyPosend]
    #date
    DatePos=item[6].find(" on ")
    DatePosEnd=item[6][DatePos:].find(".")+DatePos
    item[5]=item[6][DatePos+4:DatePosEnd]
    j=j+1
driver.close()
#output of final data
print("Writing data to .csv file")
f = open('stocktwits.csv','w')
f.write("Ticker")
f.write(' , ')
f.write("Nb of shares")
f.write(' , ')
f.write("Text of stocktwits")
f.write(' , ')
f.write("Link")
f.write(' , ')
f.write("Price of buying")
f.write(' , ')
f.write("Date")
f.write(' , ')
f.write("Text of CC_transcript")
f.write('\n')
for item in GlobalList:
    for elem in item:
        f.write(elem)
        f.write(' , ') # Excel change of column
    f.write('\n') # Excel change of line
f.close()
time.sleep(5)
print("Done")
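As an aside, the manual ' , ' separators above are the reason the script has to strip commas out of the scraped text; Python's built-in csv module quotes fields for you. A minimal sketch of just the output step, assuming the same seven-column GlobalList rows (the sample row below is invented for illustration):

```python
import csv

# Column names taken from the script's header writes above
header = ["Ticker", "Nb of shares", "Text of stocktwits", "Link",
          "Price of buying", "Date", "Text of CC_transcript"]

# Hypothetical example row with the same seven-element shape as List
GlobalList = [["AAPL", "100", "picked up 100 shares, nice", "http://example.com/t",
               "150$ a share", "May 9 2017", "transcript text"]]

with open('stocktwits.csv', 'w', newline='') as f:
    writer = csv.writer(f)          # handles quoting, so commas in the
    writer.writerow(header)         # scraped text no longer break columns
    writer.writerows(GlobalList)
```

With csv.writer doing the quoting, the .replace(","," ") calls in the scraping loop become unnecessary.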
Posted on 2018-06-08 04:22:52
@annach Have a look at pytwits, a Python wrapper for the StockTwits REST API.
Since the project is still very young, it is far from complete, but to get the list of symbols on a watchlist, all you have to do is:
pip install pytwits
Then:
import pytwits


def main():
    access_token = 'TOKEN'
    stocktwits = pytwits.StockTwits(access_token=access_token)
    watchlist = stocktwits.watchlists(path='show', id='WL_ID_HERE')
    print('\n\n'.join([symbol['symbol'] for symbol in watchlist.symbols]))


if __name__ == '__main__':
    main()
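If you would rather not add a dependency, the same watchlist lookup can be done with the standard library against the endpoint that appears in the question's traceback (the URL pattern comes from that traceback; the token and watchlist ID are placeholders):

```python
import json
from urllib.request import urlopen
from urllib.parse import urlencode

ST_BASE_URL = 'https://api.stocktwits.com/api/2/'


def extract_symbols(watchlist_json):
    # Same access pattern as the question's get_watched_stocks:
    # response -> 'watchlist' -> 'symbols' -> list of symbol dicts
    return [s['symbol'] for s in watchlist_json['watchlist']['symbols']]


def get_watched_stocks(wl_id, access_token):
    """Fetch the ticker symbols on a StockTwits watchlist."""
    url = (ST_BASE_URL + 'watchlists/show/{}.json?'.format(wl_id)
           + urlencode({'access_token': access_token}))
    with urlopen(url) as resp:
        return extract_symbols(json.load(resp))
```

Splitting the JSON parsing into extract_symbols keeps the response handling testable without hitting the network.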
Posted on 2017-02-04 11:54:45
If you take a look at the Requests class created in requestors.py, you can see that the author intended its methods to be static methods on the class, but forgot to actually make them static. If you go into that file and put an @staticmethod above both function definitions, it will work. For example,

def get_json(url, params=None):

now becomes

@staticmethod
def get_json(url, params=None):

Tested and confirmed.
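For illustration, here is a minimal sketch of the shape of the fix. The class and method names follow the answer; the method body is a stand-in that just echoes its arguments rather than performing the real HTTP call:

```python
class Requests:
    # Without @staticmethod, Python 2 treats get_json as an instance
    # method, so calling Requests.get_json(url) on the class raises the
    # "unbound method ... must be called with Requests instance" TypeError
    # from the question. The decorator lets it be called on the class.
    @staticmethod
    def get_json(url, params=None):
        # Stand-in body; the real method performs an HTTP GET on url
        # and returns the parsed JSON response.
        return {'url': url, 'params': params}


result = Requests.get_json('watchlists/show/42.json',
                           params={'access_token': 'TOKEN'})
print(result['url'])
```

With the decorator in place, get_watched_stocks can call R.get_json(...) on the class exactly as the original script does.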
https://stackoverflow.com/questions/42024747