Nope, it's not too big to fit in the margin, here it is :-) :
This is my solution to the problem of a Django app on PythonAnywhere running VERY slowly with SQLite, because PythonAnywhere accesses the SQLite database file over the network every time it is read (see the discussion above).
NB1: This solution only works if your DB is read-only for the app. I guess it could be made to write any changes back to the file DB, or to use a separate DB for read/write data, with .using('otherdb') for example.
NB2: This works for me, but I only came to it by googling and playing around.
It is highly suboptimal and nasty, but it runs MUCH faster than accessing the Sqlite DB file every time.
Caveat developer.
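To sketch what NB1 hints at, a Django database router could send reads to the in-memory DB and writes to a separate writable alias. This is a hypothetical sketch (the class name and the 'otherdb' alias are mine, matching the example in NB1), not something from my working setup:

```python
class ReadOnlySplitRouter:
    """Hypothetical router: reads hit the in-memory 'default' DB,
    writes go to a separate, persistent 'otherdb' alias."""

    def db_for_read(self, model, **hints):
        return 'default'   # fast in-memory copy

    def db_for_write(self, model, **hints):
        return 'otherdb'   # writable file-backed DB

    def allow_migrate(self, db, app_label, **hints):
        # Only migrate the writable DB; the memory DB is rebuilt per connection
        return db == 'otherdb'
```

You would register it via DATABASE_ROUTERS in settings.py, pointing at wherever you put the class.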
You need two DBs declared in settings.py: one default in-memory DB, and one pointing at the original DB file, which is full of data:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': ':memory:',
    },
    'dbfile': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': os.path.join(BASE_DIR, 'sourcedata.db.sqlite3'),
    },
}
Then you need to define a function to be called on EVERY DB connection.
I put this code in apps.py:
import sqlite3
from django.conf import settings
try:
    from StringIO import StringIO  # Python 2
except ImportError:
    from io import StringIO        # Python 3
from django.dispatch import receiver
from django.db.backends.signals import connection_created

# Load the default (memory) DB from the file DB on EVERY new connection to the default DB
strDbDump = None

@receiver(connection_created)
def onDbConnectionCreate(connection, **kwargs):
    global strDbDump
    # The signal fires for every alias; only populate the in-memory default DB
    if connection.alias != 'default':
        return
    if strDbDump is None:
        # Read the file DB into a string of SQL statements, cached for reuse
        connectionDbFile = sqlite3.connect(settings.DATABASES['dbfile']['NAME'])
        stringIoDbDump = StringIO()
        for lineDbDump in connectionDbFile.iterdump():
            stringIoDbDump.write('%s\n' % lineDbDump)
        connectionDbFile.close()
        stringIoDbDump.seek(0)
        strDbDump = stringIoDbDump.read()
    # Replay the SQL dump into the fresh in-memory DB
    with connection.cursor() as cursor:
        cursor.executescript(strDbDump)
    connection.commit()
This means the string with the DB contents will be loaded into the memory database EVERY TIME you open a new connection.
As I say, horribly inefficient. But it works.
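If it helps to see the core trick in isolation, here is a stdlib-only sketch of the same dump-and-replay step, outside Django (the table, data and temp file are made up for the demo):

```python
import sqlite3
import tempfile
import os

# Create a throwaway file-backed DB standing in for sourcedata.db.sqlite3
fd, path = tempfile.mkstemp(suffix=".sqlite3")
os.close(fd)
file_conn = sqlite3.connect(path)
file_conn.execute("CREATE TABLE colour (name TEXT)")
file_conn.executemany("INSERT INTO colour VALUES (?)", [("red",), ("green",)])
file_conn.commit()

# Dump the file DB to a string of SQL, exactly as the signal handler does
sql_dump = "\n".join(file_conn.iterdump())
file_conn.close()

# Replay the dump into a fresh in-memory DB
mem_conn = sqlite3.connect(":memory:")
mem_conn.executescript(sql_dump)

rows = mem_conn.execute("SELECT name FROM colour ORDER BY name").fetchall()
print(rows)  # → [('green',), ('red',)]
os.remove(path)
```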
Comments very welcome.
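One comment of my own, in case it is useful: on Python 3.7+ the SQL text round-trip can probably be skipped with sqlite3's page-level backup API. Django exposes the raw DB-API connection as connection.connection inside the signal handler, so something along these lines should do the same copy faster (stdlib-only sketch, connection names made up):

```python
import sqlite3

# Stand-ins for the file DB and the in-memory default DB
file_conn = sqlite3.connect(":memory:")  # pretend this is sourcedata.db.sqlite3
file_conn.execute("CREATE TABLE colour (name TEXT)")
file_conn.execute("INSERT INTO colour VALUES ('red')")
file_conn.commit()

mem_conn = sqlite3.connect(":memory:")

# Page-level copy of the whole DB; no iterdump()/executescript() round-trip
file_conn.backup(mem_conn)

row = mem_conn.execute("SELECT name FROM colour").fetchone()
print(row)  # → ('red',)
```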