I have a Python web crawler on PythonAnywhere, but when I run my code I don't get any errors — I don't get any results either.
Then you probably have some code that looks at the results of the scrape and searches for something in the data. If it doesn't find that thing, it does nothing — silently. Put some logging into your code so you can see what you're actually scraping and what happens to it.
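That advice might look something like the sketch below. The HTML string is a stand-in for one scraped page (the real workfeed.php markup may differ); the point is where the logging calls go, so you can see the raw page and how many links matched:

```python
import logging
from bs4 import BeautifulSoup

logging.basicConfig(level=logging.DEBUG)

# Stand-in for the text of one scraped page -- an assumption, the real
# page's markup may differ. Swap in response.text from requests.
plain_text = '<a class="item-name" href="job1.php">First job</a>'

logging.debug("page starts with: %r", plain_text[:200])

soup = BeautifulSoup(plain_text, "html.parser")
links = soup.findAll('a', {'class': 'item-name'})
logging.debug("matching links found: %d", len(links))

for link in links:
    logging.debug("title=%r href=%r", link.string, link.get('href'))
```

If "matching links found: 0" shows up in the log, the loop body never runs, which would explain getting no output and no error.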
I don't think anything is wrong with my code. Okay, let me show you the code.
import requests
from bs4 import BeautifulSoup

def bedbux_spider(max_pages):
    pn = 1
    while pn <= max_pages:
        url = 'http://bedbux.com/workfeed.php?pn=' + str(pn)
        source_code = requests.get(url)
        plain_text = source_code.text
        soup = BeautifulSoup(plain_text, "html.parser")
        for link in soup.findAll('a', {'class': 'item-name'}):
            title = link.string
            href = "http://www.bedbux.com/" + link.get('href')
            print(title)
            print(href)
        pn += 1

bedbux_spider(1)
Yes. As I said: you're using soup.findAll, which returns an empty list if none of the things you're looking for are in the page, so the loop does nothing. Also, see http://help.pythonanywhere.com/pages/403ForbiddenError/
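To see why that fails silently, here's a minimal demonstration (the HTML string is made up for illustration — it stands in for whatever page the server actually returned instead of the expected listing):

```python
from bs4 import BeautifulSoup

# A page containing no <a class="item-name"> at all -- e.g. an error
# page returned in place of the expected job listing.
html = "<html><body><p>Forbidden</p></body></html>"
soup = BeautifulSoup(html, "html.parser")

links = soup.findAll('a', {'class': 'item-name'})
print(links)  # [] -- findAll found nothing, so this is an empty list

for link in links:
    # With an empty list, this body never executes: no output, no error.
    print(link.string)
```

So a scraper that "runs fine" but prints nothing usually means findAll matched zero elements on the page it was actually given.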
Okay, so what should I do now? Because the code runs fine on my local machine.
Look at the response you're getting from requests and read the link that I sent you.
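Concretely, checking the response before parsing would surface the problem. This sketch uses a hand-built Response object to simulate the 403 that the PythonAnywhere proxy returns for non-whitelisted sites on a free account (the `check_response` helper and the simulated response are illustrative, not from the thread):

```python
import requests

def check_response(response):
    # A scrape that yields nothing often means the HTTP request itself
    # failed quietly; surface that instead of parsing an error page.
    print("status:", response.status_code)
    print("body starts:", response.text[:200])
    response.raise_for_status()  # raises HTTPError for 4xx/5xx statuses

# Simulated response -- on a PythonAnywhere free account, requests to
# sites outside the whitelist come back 403 Forbidden from the proxy.
resp = requests.Response()
resp.status_code = 403
resp._content = b"Forbidden"  # what .text will decode

try:
    check_response(resp)
except requests.HTTPError as err:
    print("request failed:", err)
```

In the real spider, you'd call `check_response(requests.get(url))` — a 403 there, with a 200 on your local machine, would explain why the same code works locally but not on PythonAnywhere.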