Forums

Can't get my website functioning

So, I uploaded my Django website to PythonAnywhere and it shows me the website. The problem is that views.py uses a function to web-scrape with Beautiful Soup, and then it takes you to another page to show you the results. Instead, it just changes pages without showing any results. It works fine with manage.py runserver on my computer, but on PythonAnywhere I can't make it do anything besides show me the templates.
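One likely cause of a results page that comes up empty is that the scraping call raises an exception which never reaches the template. Below is a minimal sketch of surfacing the error instead of swallowing it, using a hypothetical `fetch_page` helper (a name not from the thread) that a view could call; it uses only the standard library:

```python
import urllib.error
import urllib.request


def fetch_page(url):
    # Hypothetical helper: return (html, None) on success, or (None, message)
    # on failure, so the template can display what went wrong instead of
    # rendering an empty results page.
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.read().decode("utf-8", errors="replace"), None
    except urllib.error.HTTPError as exc:
        # e.g. "HTTP Error 403: request disallowed by robots.txt"
        return None, f"HTTP Error {exc.code}: {exc.reason}"
    except urllib.error.URLError as exc:
        return None, f"Network error: {exc.reason}"
```

In a Django view you would then pass both values into the template context (e.g. `{"results": html, "error": error}`) and render the error in the template, so a failed fetch is visible rather than silent.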

What's the error message? When I try to search for a phrase using your site I get: HTTP Error 403: request disallowed by robots.txt. So it looks like you are trying to scrape a site that explicitly disallows it.

I am having the same problem. When I run all the scraping code on my local server (my computer) it works fine and SQLite is updated, but when I upload the same code to pythonanywhere.com it shows me an error like "HTTP Error 403: request disallowed by robots.txt". I can't understand what the problem is. If the target website restricts scraping, how can my local server extract all the information, and why not here?

Please help.

Free accounts are restricted to a whitelist of sites. If it is a public API you are trying to access, let us know the relevant endpoint and we can whitelist it. Otherwise you will have to upgrade.
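For context on that 403: the message "request disallowed by robots.txt" means something in the request path consulted the target site's robots.txt and refused the fetch, which is why the same code can behave differently locally and behind the proxy. You can run the same kind of check yourself with the standard library; this is a sketch with illustrative rules, not taken from any real site (a real check would call `rp.set_url(".../robots.txt")` and `rp.read()`):

```python
from urllib.robotparser import RobotFileParser

# Illustrative robots.txt rules (hypothetical, not from any real site).
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

print(rp.can_fetch("*", "https://example.com/search"))     # True: allowed
print(rp.can_fetch("*", "https://example.com/private/x"))  # False: disallowed
```

If `can_fetch` returns False for the page you want, the site has asked not to be scraped there, and a 403 from any robots-aware client or proxy is the expected result.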