
MySQL database operations 2-3 times slower since Feb 26

Hello. As the subject says, all of my MySQL database tasks are now taking 2-3 times longer to complete as of sometime on Feb 26. I have made no code changes since Feb 21. Backing up the database, restoring it locally, and running our most intensive task against it shows no noticeable issue, so it does not appear to be a data problem.

We keep adding CPU seconds to avoid falling into the tarpit, but the issue happens whether or not we're in the tarpit. It also doesn't make sense that tasks are now taking so much longer with no new code changes.

Any insight as to what may have changed on PythonAnywhere's end would be greatly appreciated. Thanks.

UPDATE: It appears MySQL may not be the culprit; something is wrong with scheduled tasks instead. They are taking 3-4 minutes to start and then run 2-3 times slower than when I launch the same task from the command line in a console. I have a task that takes about 15-20 minutes to complete on average (let's call it Task A) and another that takes about 4-5 minutes (Task B).

Since Feb 26, Task A has been taking 45-50 minutes to complete and Task B 10-15 minutes. The two tasks are interdependent and were previously scheduled so that they would not overlap. Because whatever this is makes them run so slowly, they now overlap, which not only consumes all of my CPU time but also causes them to deadlock each other on the database write at the end of each task, forcing the entire task to roll back.

Could someone please take a look at my account, or let me know why my scheduled tasks are suddenly running so slowly even though there have been no changes on my end?

Thanks.

The servers that run scheduled tasks are generally more heavily loaded than console servers, so comparing timings between consoles and tasks is not particularly useful. Load on the task servers is also variable; sometimes they will be busier (in particular, running on the hour will place your tasks in the most heavily loaded periods). Relying on how long a task usually takes is not a reliable way to ensure that two tasks do not collide. Instead, you could have the task that runs first place a marker to indicate that it is busy, and have the task that runs second wait for that marker to disappear before it proceeds.
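For what it's worth, here's a minimal sketch of that marker idea, assuming both tasks are Python scripts; the lock path and the do_task_*_work functions below are placeholders, not anything from your account:

```python
import os
import time

LOCK_PATH = "/home/yourusername/task_a.lock"  # placeholder; any path both tasks can see works

def do_task_a_work():
    ...  # real Task A logic goes here

def do_task_b_work():
    ...  # real Task B logic goes here

def run_task_a():
    # O_CREAT | O_EXCL makes creation atomic: os.open fails if the marker
    # already exists, so two runs cannot both think they own the lock.
    fd = os.open(LOCK_PATH, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    os.close(fd)
    try:
        do_task_a_work()
    finally:
        os.remove(LOCK_PATH)  # always clear the marker, even if the task fails

def run_task_b(poll_seconds=30, timeout_seconds=3600):
    # Wait for Task A's marker to disappear before touching the database.
    waited = 0
    while os.path.exists(LOCK_PATH):
        if waited >= timeout_seconds:
            raise RuntimeError("gave up waiting for Task A to finish")
        time.sleep(poll_seconds)
        waited += poll_seconds
    do_task_b_work()
```

Note that time spent sleeping while waiting for the marker does not consume CPU seconds, so the waiting itself costs nothing.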

I appreciate the code suggestion. However, I am more concerned that the code is now taking so much longer to run when it had been running fine, with minimal changes, as a scheduled task since September 2022. It is suddenly taking 2-3 times longer. Instead of paying for 4,000 seconds, we would now need to pay for roughly 12,000 seconds of run time even though nothing has changed on our end. This is a significant and sudden drop in performance.

You do not need to pay more for the extra time that the task takes. CPU seconds are seconds that you spend actively using a CPU. If your task is waiting for its chance to use the CPU, that waiting is not counted as CPU time.
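As a quick illustration of the difference (a toy example, not your actual task):

```python
import time

wall_start = time.time()         # wall-clock time: includes any waiting
cpu_start = time.process_time()  # CPU time: counts only active computation

time.sleep(5)                     # waiting (e.g. for a busy server) uses no CPU
sum(i * i for i in range(10**7))  # actual computation does use CPU

print(f"wall-clock elapsed: {time.time() - wall_start:.1f}s")
print(f"CPU time used: {time.process_time() - cpu_start:.1f}s")
```

On a heavily loaded server the gap between those two numbers grows, but only the CPU figure counts against your quota.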

It's not waiting. It's actively working, but that work is now taking 2-3 times longer to complete than it did previously.

I'm seeing other issues too. My scheduled task that just runs a mysqldump of my database is now taking 1,000-3,000 seconds to complete, where it previously took 500-700 seconds. It has also started failing with an error stating that the connection to MySQL was lost.

I believe I'm experiencing whatever problem the user was experiencing in this issue: https://www.pythonanywhere.com/forums/topic/32505/

Do I need to email support?

If it's taking longer and using more CPU time, then that is because it is doing more work. If it's taking longer, but using the same amount of CPU time, then that is because it is doing the same amount of work, but it's taking longer because of load on the task server.

Try adding --compress and --quick to your mysqldump command. They will help with getting large datasets across the network.
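If the backup runs as a Python script, a sketch along these lines would apply those flags; the hostname, username, and database name below are placeholders, not your actual details:

```python
import subprocess

cmd = [
    "mysqldump",
    "--compress",  # compress traffic between mysqldump and the MySQL server
    "--quick",     # stream rows instead of buffering whole tables in memory
    "-h", "yourusername.mysql.pythonanywhere-services.com",
    "-u", "yourusername",
    "yourusername$yourdb",
]

# Credentials can live in ~/.my.cnf under a [client] section so the
# password never appears on the command line.
with open("/home/yourusername/backup.sql", "wb") as out:
    subprocess.run(cmd, stdout=out, check=True)
```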

You are not experiencing the same issue as the one in the forum post you mentioned. We know what caused that, and we fixed it.