I have a Beautiful Soup script like the one below: before visiting the main page I want to scrape, I first have to hit a login URL with a cookie and a payload. Is this possible using this integration? I suspect the login involves CSRF protection.
from bs4 import BeautifulSoup as bs
import requests

login = "xxx"
password = "xxx"
sitecode = "xxx"

lesson_url = "https://student.readingplus.com/seereader/api/dash/lessons"
login_url = "https://student.readingplus.com/seereader/api/j_spring_security_check"

with requests.Session() as s:
    # Fetch the login page first so the session picks up any cookies it sets
    html = bs(s.get(login_url).text, "html.parser")

    payload = {
        "j_username": login,
        "j_password": password,
    }
    headers = {"cookie": "school_code_4=" + sitecode + ";"}

    # Authenticate, then reuse the same session for the scrape
    res = s.post(login_url, data=payload, headers=headers)
    r = s.get(lesson_url)

    soup = bs(r.content, "html.parser")
    usernameDiv = soup.find("div", class_="name")
    print("Username: " + usernameDiv.getText())
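If the login form does use CSRF protection, the token is usually embedded as a hidden input in the page you already fetch with s.get(login_url), and it needs to be copied into the POST payload. A minimal sketch of extracting such a token with Beautiful Soup follows; the field name "_csrf" is an assumption (it is a common Spring Security default) and may differ on this site:

```python
from bs4 import BeautifulSoup


def extract_csrf_token(html, field_name="_csrf"):
    """Return the value of a hidden CSRF input, or None if it is absent.

    field_name="_csrf" is an assumed default; inspect the real login
    form to find the actual name used by the site.
    """
    soup = BeautifulSoup(html, "html.parser")
    tag = soup.find("input", attrs={"name": field_name})
    return tag["value"] if tag is not None and tag.has_attr("value") else None


# Example usage against a sample form:
sample = '<form><input type="hidden" name="_csrf" value="abc123"/></form>'
print(extract_csrf_token(sample))  # → abc123
```

In the script above, this would be called on the first response (the one currently parsed into html) and the result merged into payload, e.g. payload["_csrf"] = token, before posting to login_url.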