I'm using the requests and cfscrape libraries to log in to https://kissanime.to/Login
import requests
import cfscrape

'''Login to website'''
def login(self, usr, pw):
    login_url = 'https://kissanime.to/Login'
    sess = requests.Session()
    # login credentials
    payload = {
        'username': usr,
        'password': pw,
        'redirect': ''
    }
    # create a cfscrape scraper wrapping the session
    scraper_sess = cfscrape.create_scraper(sess)
    a = scraper_sess.post(login_url, data=payload)
    print(a.text)
    print(a.status_code)
a.text gives me back the login page, and a.status_code is the same as before, which means my login isn't working at all. Am I missing something? According to Chrome's network monitor, I should also be getting status code 302.
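One thing worth knowing here: requests follows redirects by default, so even when a server does reply 302, `response.status_code` shows the *final* page and the 302 only appears in `response.history`. A minimal local demo of that behavior (the redirecting server below is a made-up stand-in, not kissanime's actual endpoint):

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

import requests


class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        # consume the request body, then simulate a successful
        # login by redirecting to the home page
        length = int(self.headers.get('Content-Length', 0))
        self.rfile.read(length)
        self.send_response(302)
        self.send_header('Location', '/home')
        self.send_header('Content-Length', '0')
        self.end_headers()

    def do_GET(self):
        body = b'home page'
        self.send_response(200)
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet


server = HTTPServer(('127.0.0.1', 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = 'http://127.0.0.1:%d/Login' % server.server_port
resp = requests.post(url, data={'username': 'u', 'password': 'p'})
print(resp.status_code)                       # 200 -- the final page
print([r.status_code for r in resp.history])  # [302] -- the redirect
server.shutdown()
```

So a 200 by itself doesn't prove the login failed; an empty `resp.history` after a POST that should redirect does.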
Image of the POST data:
Posted on 2016-06-26 06:41:43
I solved this with mechanicalsoup.
Code:
import cfscrape
import mechanicalsoup
from bs4 import BeautifulSoup

'''Login to website'''
def login(self, usr, pw):
    login_url = 'https://kissanime.to/Login'
    # create a cfscrape scraper instance
    self.r = cfscrape.create_scraper()
    login_page = self.r.get(login_url)
    # create a mechanicalsoup browser that reuses
    # the cfscrape session
    browser = mechanicalsoup.Browser(self.r)
    soup = BeautifulSoup(login_page.text, 'html.parser')
    # grab the login form
    login_form = soup.find('form', {'id': 'formLogin'})
    # fill in the username and password inputs
    login_form.find('input', {'name': 'username'})['value'] = usr
    login_form.find('input', {'name': 'password'})['value'] = pw
    browser.submit(login_form, login_page.url)
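The form-filling step in this answer can be exercised offline against a minimal stand-in for the login page. The HTML stub below is made up for illustration; kissanime's real markup may differ beyond the `formLogin` id and input names used above:

```python
from bs4 import BeautifulSoup

# hypothetical stub resembling the form the answer parses
html = '''
<form id="formLogin" action="/Login" method="post">
  <input type="text" name="username" />
  <input type="password" name="password" />
</form>
'''
soup = BeautifulSoup(html, 'html.parser')
login_form = soup.find('form', {'id': 'formLogin'})
# set the value attributes exactly as in the answer's login()
login_form.find('input', {'name': 'username'})['value'] = 'alice'
login_form.find('input', {'name': 'password'})['value'] = 's3cret'
print(login_form.find('input', {'name': 'username'})['value'])  # alice
```

When `browser.submit()` serializes the form, those `value` attributes become the POST fields, which is why writing them on the parsed inputs is enough.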
Posted on 2016-06-26 03:41:25
This is from the Requests documentation:

Many web services that require authentication accept HTTP Basic Auth. This is the simplest kind, and Requests supports it straight out of the box.

from requests.auth import HTTPBasicAuth
requests.get('https://api.github.com/user', auth=HTTPBasicAuth('user', 'pass'))
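For what it's worth, `HTTPBasicAuth('user', 'pass')` just sets an `Authorization: Basic …` header built from base64 of `user:pass`, and the tuple shorthand `auth=('user', 'pass')` is equivalent. Both can be verified without sending anything by preparing the request:

```python
import requests
from requests.auth import HTTPBasicAuth

# prepare (but don't send) the same request both ways
explicit = requests.Request('GET', 'https://api.github.com/user',
                            auth=HTTPBasicAuth('user', 'pass')).prepare()
shorthand = requests.Request('GET', 'https://api.github.com/user',
                             auth=('user', 'pass')).prepare()

print(explicit.headers['Authorization'])  # Basic dXNlcjpwYXNz
print(explicit.headers['Authorization'] == shorthand.headers['Authorization'])  # True
```

Note that Basic Auth only helps if the site actually uses it; a site with an HTML login form generally does not.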
You have to send the payload as JSON.
import requests, json
import cfscrape

'''Login to website'''
def login(self, usr, pw):
    login_url = 'https://kissanime.to/Login'
    sess = requests.Session()
    # login credentials
    payload = {
        'username': usr,
        'password': pw,
        'redirect': ''
    }
    # create a cfscrape scraper wrapping the session
    scraper_sess = cfscrape.create_scraper(sess)
    a = scraper_sess.post(login_url, data=json.dumps(payload))
    print(a.text)
    print(a.status_code)
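The difference between `data=payload` and `data=json.dumps(payload)` is purely in how the body is serialized on the wire, which can be seen with the standard library alone (whether kissanime actually expects JSON is this answer's claim, not something shown here):

```python
import json
from urllib.parse import urlencode

payload = {'username': 'alice', 'password': 's3cret', 'redirect': ''}

# data=payload -> form-encoded body; requests also sets
# Content-Type: application/x-www-form-urlencoded
form_body = urlencode(payload)
# data=json.dumps(payload) -> a raw JSON string; requests sets
# no Content-Type automatically for string data
json_body = json.dumps(payload)

print(form_body)  # username=alice&password=s3cret&redirect=
print(json_body)  # {"username": "alice", "password": "s3cret", "redirect": ""}
```

If the endpoint really wants JSON, `scraper_sess.post(login_url, json=payload)` is the cleaner option, since it also sets the `Content-Type: application/json` header that `data=json.dumps(payload)` omits.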
Reference: http://docs.python-requests.org/en/master/user/authentication/
https://stackoverflow.com/questions/38034984