Forums

How to maintain state during Flask AJAX requests

I created a Flask web page (within PythonAnywhere) that requests data via AJAX calls in a polling loop.

There are some complex object structures that cannot (easily) be jsonified and need to be maintained during the life of that page.

On my Windows PC I created an object at module level in the app and just read/updated it as needed. That paradigm does not work in PythonAnywhere (nor do I believe it is an optimal solution anyway). PythonAnywhere does not support redis or memcached, so that route is taken away. I could write it out to SQLite, but I have a feeling there are other options.

Any ideas on the best approach to access/modify my Flask objects during the AJAX calls?

My program is constructed like:

@app.route('/view_page')
@login_required
def view_page():
    create_object = cc(current_user)
    return render_template('myview.html', somevars=current_user.username)

@app.route('/_myajax_call')
@login_required
def ajax_call():
    # Need to get create_object here
    some_data = some_function(create_object)
    return jsonify(result=some_data)

Also posted on Stack Overflow: How to maintain state during Flask AJAX requests

There's a comment on the SO post that links to another SO post that looks like it covers most of the options.

The link from Stack Overflow recommends using Flask-Cache. PythonAnywhere will really only allow the 'simple' or 'filesystem' options, which rely on pickle. Pickle will not accept my objects (class instances, etc.), hence I will need to serialize those objects some other way. I was trying to get away from that .. did I interpret that correctly?

I thought the 'simple' option just keeps a Python dict in memory and doesn't rely on pickle. But note that if you are using multiple web workers, there would be multiple caches.

I'm not using multiple web workers .. but I need to save class instances which contain queues and a finite state machine object .. that will not work with Flask-Cache.

can't you save the object inside of the python dict?

Why can't you use pickle and save to the filesystem?

So as an example of what is failing ..

(in init.py)

from multiprocessing import Queue
from flask.ext.cache import Cache
myCache = Cache(app, config={'CACHE_TYPE': 'simple', 'CACHE_DEFAULT_TIMEOUT': 7200})
trans_que = Queue(-1)

(in routes.py)

myCache.set('test', 'hello')   # works fine
myCache.set('queue', {'q1': trans_que} )  # Fail

I have not tried pickle directly .. but from what I understand, the 'simple' cache is backed by pickle underneath.
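For what it's worth, the failure above is easy to reproduce outside of Flask-Cache entirely: a `multiprocessing.Queue` refuses to be pickled except during process spawning, so any pickle-backed cache backend will reject it. A minimal demonstration (not PA-specific):

```python
# Demonstration that a multiprocessing.Queue cannot be pickled,
# which is why a pickle-backed cache cannot store it.
import pickle
from multiprocessing import Queue

trans_que = Queue(-1)
try:
    pickle.dumps(trans_que)
    picklable = True
except RuntimeError:
    # multiprocessing raises: "Queue objects should only be shared
    # between processes through inheritance"
    picklable = False

print(picklable)  # False
```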

If you're defaulting to using threads for multiprocessing then it won't work. Threads will not work in a web app on PythonAnywhere.

I am using threads as I have a polling situation. Are you saying that using methods/functions from the threading module will not work in PythonAnywhere?

Yes. Importing works fine, but actually starting the thread will not.

I have a work-around. If I use WebSockets with Flask and gevent (as demonstrated by Miguel Grinberg) in place of polling, I would no longer have to poll, and could get rid of the need to use a cache of any kind. Would that work in PythonAnywhere?

OK, I downloaded the sample WebSockets app from Miguel Grinberg .. it worked great on my local Windows PC .. but when uploaded to PA, the results/responses were inconsistent. Can you let me know whether WebSockets and gevent perform acceptably on PA? If so, I'll start another thread on that subject.

Unfortunately WebSockets don't work on PA :(

bfg, you are correct; after some research, it seems that WebSockets are not available. I have a long polling situation .. (evidently there are issues with threads on PA as well) .. so my only option on PA seems to be the Long running tasks approach.

Is there a PA page that states what Python options are NOT advisable/permissible?

There isn't, unfortunately -- but I think you've basically found the two things we don't have solid support for, WebSockets and multi-threading in web apps. The way PythonAnywhere web apps work is that for scaling and load-balancing purposes, website code may be running in any one of many different processes at different times, and might move from machine to machine between requests, so anything in-memory or in-process should be treated as valid within the context of one specific request only.

The normal way people manage websites that can be running in different processes/machines from hit to hit is to use the database as storage for the session state; I'm guessing from your previous posts that that isn't really an option in this case. Could you give a bit more detail as to exactly what you need to store? Is it stuff that's needed for background processing, or something like that?

Hi Giles .. so my users need to (infrequently) run a long job (~15 mins) that scans 4000+ web pages to do some analysis. During that time, I was planning to give the user a real-time status of what is happening.

Originally I was going to spawn a thread to download those pages with queues and a FSM (finite state machine object) helpers, then poll my necessary processes.

My gut says that, to play in your sandbox, I can use the hint provided by the Long running tasks help page .. which would remove my need for queues (complicated ones, anyway) and the FSM, and allow for uncomplicated polling.

Does that sound right? Any suggestions?

That sounds like exactly the right solution :-)

I'd personally set things up so that the web app writes a line into a "stuff to do" DB table, then the long-running task picks that up, marks it as in-progress, does the work, then marks it as done. The front-end can poll a view that checks the status in the table so that the user gets feedback on progress. Essentially it's using the DB as a kind of queue.
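A minimal sketch of this "DB as a queue" pattern using Python's built-in sqlite3 (the table and column names here are my own illustration, not from this thread; on PA you would typically point the same queries at your MySQL database):

```python
# Sketch of the "DB as a queue" pattern: the web app inserts a job row,
# a long-running task picks it up and updates it, and the polled AJAX
# view just reads the status back.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE jobs (
        id INTEGER PRIMARY KEY,
        status TEXT NOT NULL DEFAULT 'pending',  -- pending / in-progress / done
        progress INTEGER NOT NULL DEFAULT 0      -- e.g. pages scanned so far
    )
""")

# 1. Web app: the view handling the user's request inserts a row.
job_id = conn.execute("INSERT INTO jobs DEFAULT VALUES").lastrowid

# 2. Long-running task: pick up the oldest pending job, mark it in-progress.
row = conn.execute(
    "SELECT id FROM jobs WHERE status = 'pending' ORDER BY id LIMIT 1"
).fetchone()
conn.execute("UPDATE jobs SET status = 'in-progress' WHERE id = ?", (row[0],))

# ... do the work, updating progress as pages are scanned ...
conn.execute("UPDATE jobs SET progress = 50 WHERE id = ?", (row[0],))

# 3. When finished, mark the job done.
conn.execute(
    "UPDATE jobs SET status = 'done', progress = 100 WHERE id = ?", (row[0],)
)

# 4. Web app: the AJAX view the page polls simply reads the row back.
status, progress = conn.execute(
    "SELECT status, progress FROM jobs WHERE id = ?", (job_id,)
).fetchone()
print(status, progress)  # done 100
```

Because each step is a single row update, it doesn't matter which process or machine serves any given request: all state lives in the database.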

OK, so I like your recommendation .. now I have a little (hopefully) re-engineering on my hands.

btw, please add WebSockets to the ever-so-growing wish list.

Thank you.

We've had WebSockets on the list for a while! I'll add a +1 from you. But I have to say, there are some infrastructure difficulties that mean it won't happen in the short term...

harry, I understand about the complexities. giles has given me a work-around .. so life will continue.

If I may add to your wish-list .. how about a redis server?

Also on the list. It might happen a little sooner than WebSockets, actually! Still months rather than weeks, though.