So currently I use these for my webapp:
Flask-Cache
- to cache pages with a custom timeout and the ability to create a custom key for each cache entry (so I can cache by Flask's request.args).
cachetools
- basically the same functionality as Flask-Cache (I build a key from Flask's request.args), and it can also derive the cache key from the function's parameters.
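To make the "cache key from the function's parameters" idea concrete, here is a minimal stdlib-only sketch of what cachetools gives you (a TTL cache keyed on the argument tuple plus an optional prefix). The names `ttl_cache` and `fetch_guild` are illustrative, not from the actual project:

```python
import time
from functools import wraps

def ttl_cache(ttl_seconds, key_prefix=""):
    """Cache a function's result per argument tuple, expiring after ttl_seconds."""
    store = {}  # maps (prefix, args, sorted kwargs) -> (expiry_timestamp, value)

    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            key = (key_prefix, args, tuple(sorted(kwargs.items())))
            now = time.monotonic()
            hit = store.get(key)
            if hit is not None and hit[0] > now:
                return hit[1]  # still fresh: return the cached value
            value = func(*args, **kwargs)
            store[key] = (now + ttl_seconds, value)
            return value
        return wrapper
    return decorator

# Hypothetical usage: each distinct guild_id gets its own cache entry.
@ttl_cache(ttl_seconds=60, key_prefix="guild")
def fetch_guild(guild_id):
    return {"id": guild_id}
```

Note the `store` dict lives in one process's memory, which is the root of the multi-worker problem described below.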
and then there's flask_limiter
- which records users' attempts to access a resource so I can prevent them from overusing an API endpoint.
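For context, the fixed-window strategy (flask_limiter's default) boils down to counting hits per client per time window. A minimal sketch, with the counter held in process memory (class and method names are mine, not flask_limiter's API):

```python
import time
from collections import defaultdict

class FixedWindowLimiter:
    """Allow at most `limit` hits per `window_seconds` for each client key."""

    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.counts = defaultdict(int)  # (client_key, window_index) -> hit count

    def allow(self, client_key, now=None):
        now = time.time() if now is None else now
        bucket = (client_key, int(now // self.window))
        self.counts[bucket] += 1
        return self.counts[bucket] <= self.limit

# Hypothetical usage: 2 requests per minute per IP.
limiter = FixedWindowLimiter(limit=2, window_seconds=60)
```

Because `self.counts` is per process, each gunicorn/uWSGI worker enforces its own independent limit, which is exactly the deployment problem described below.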
So I have basically three things in place (though I am thinking of getting rid of Flask-Cache and using cachetools as my only cache). It all went smoothly during development on my personal server: I can quickly get the resource I want because it is "saved" in the process's memory.
Upon deploying to PythonAnywhere, I found out that each worker has its own cache/rate limiter, and these are not shared between workers. I have read the documentation for every cache module I use, and they all support backends like Redis and Memcached, none of which PythonAnywhere supports.
Is there a way to "trick" these tools into treating MySQL as the new Redis (if you get what I mean)? Or what other options could I use to implement caching (where I can create keys [cache prefixes/suffixes], TTLs, and/or cache based on a function's parameters) that are PythonAnywhere-friendly? What about rate limiting?
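One way the "MySQL as the new Redis" idea could look: a single SQL table holding key/value/expiry rows that all workers read and write. This is only a sketch of the concept, not a drop-in backend for Flask-Cache or flask_limiter (those would need a custom storage class wired in); it uses sqlite3 as a stand-in for MySQL, so with MySQL you would swap the connection and change the `?` placeholders to `%s`:

```python
import json
import sqlite3
import time

class SQLCache:
    """A tiny shared cache on top of a SQL table: one row per key, with an expiry."""

    def __init__(self, conn):
        self.conn = conn
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS cache ("
            " key TEXT PRIMARY KEY, value TEXT, expires_at REAL)"
        )

    def set(self, key, value, ttl_seconds):
        # REPLACE inserts or overwrites the row for this key.
        self.conn.execute(
            "REPLACE INTO cache (key, value, expires_at) VALUES (?, ?, ?)",
            (key, json.dumps(value), time.time() + ttl_seconds),
        )
        self.conn.commit()

    def get(self, key, default=None):
        row = self.conn.execute(
            "SELECT value, expires_at FROM cache WHERE key = ?", (key,)
        ).fetchone()
        if row is None or row[1] < time.time():
            return default  # missing or expired
        return json.loads(row[0])

# Hypothetical usage; the key string mirrors a request.args-based cache key.
cache = SQLCache(sqlite3.connect(":memory:"))
cache.set("page:/guilds?id=1", {"html": "..."}, ttl_seconds=300)
```

A rate limiter could use the same table shape (client key + window index as the key, a counter as the value), which makes the limits consistent across all workers since they share one database. Expired rows would need an occasional `DELETE ... WHERE expires_at < ?` sweep.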
Thank you so much for your help!
(PS: Here is my github repository if you would like to take a look at how I currently implement things https://github.com/EndenDragon/Titan)