Hi,

I'm trying to host a prototype Keras ML predictor on my PythonAnywhere Hacker-level account. I'm serving it with Flask, and when I call the endpoint it takes ages to load and finally returns a PythonAnywhere error page, which I guess is a timeout.

When I test the same code locally (on an oldish MacBook Pro), the Flask server runs normally and my predictions are pretty much instant, since I load the model once when the server starts. So it can't be that the code execution itself is too slow. I also tracked that the predict method uses around 480 MB of memory, and I believe PythonAnywhere instances have around 3 GB, so I don't see how memory would be the problem either. Shouldn't the worker be killed immediately if it exceeded the memory limit? I do load the model into memory, but it's only about 150 MB.

Any help? Here's the relevant server log:
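For context, the load-once-then-predict setup I'm describing looks roughly like this. This is only a sketch with a stand-in loader (the real code calls Keras, e.g. something like keras.models.load_model, and the names below are illustrative, not my actual code); the point is that the expensive load happens once per worker process and every request reuses the cached model:

```python
import functools
import time

@functools.lru_cache(maxsize=1)
def get_model():
    """Stand-in for the expensive Keras model load (~150 MB in my case).

    With lru_cache, the load runs only on the first call in each worker
    process; later calls return the cached object, so requests stay fast.
    """
    time.sleep(0.1)  # simulate the slow one-time load
    return {"name": "dummy-model"}  # placeholder for the real model object

def predict(features):
    model = get_model()   # fast after the first call in this process
    # Placeholder for model.predict(...); here we just sum the inputs.
    return sum(features)
```

In the real app the Flask view would call predict() per request, while get_model() only pays the load cost the first time each worker handles a request.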
2018-05-13 08:39:25 Sun May 13 08:39:24 2018 - *** HARAKIRI ON WORKER 2 (pid: 97, try: 1) ***
2018-05-13 08:39:25 Sun May 13 08:39:24 2018 - HARAKIRI !!! worker 2 status !!!
2018-05-13 08:39:25 Sun May 13 08:39:24 2018 - HARAKIRI [core 0] 10.0.0.31 - POST //predict since 1526200163
2018-05-13 08:39:25 Sun May 13 08:39:24 2018 - HARAKIRI !!! end of worker 2 status !!!
2018-05-13 08:39:25 DAMN ! worker 2 (pid: 97) died, killed by signal 9 :( trying respawn ...