@giles: Thanks for clarifying, it's useful to know they'll always be on the same machine for the time being.
If anyone does need a slightly more platform-agnostic approach, there's always fcntl locking. I know the reliability of this has come up before, but it's the variety of file locking that seems to work on the widest array of platforms (even where the atomicity of creat() isn't guaranteed, for example).
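For contrast, the creat()-style approach being alluded to looks roughly like the sketch below - relying on O_CREAT | O_EXCL failing when the lock file already exists. (The temp-directory path here is purely for illustration; a real script would use a fixed path.) Its weakness is that a process that dies uncleanly leaves a stale lock file behind, which is exactly what the fcntl approach avoids.

```python
import errno
import os
import tempfile

# Throwaway path for demonstration only.
path = os.path.join(tempfile.mkdtemp(), "scheduled-script.lock")

def try_create_lock(path):
    # Atomically create the lock file; fail if it already exists.
    try:
        fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except OSError as e:
        if e.errno == errno.EEXIST:
            return None  # someone else already holds the lock
        raise
    return fd

first = try_create_lock(path)   # succeeds: file did not exist
second = try_create_lock(path)  # fails: file now exists
print(first is not None, second is None)  # → True True
```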
To adapt the example above:
import errno
import fcntl
import logging
import os
import sys
lock_filename = os.path.expanduser("~/scheduled-script.lock")
try:
    lock_fd = open(lock_filename, "w")
    fcntl.lockf(lock_fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
    logging.debug("Acquired lock on %r", lock_filename)
except IOError as e:
    if e.errno in (errno.EAGAIN, errno.EACCES):
        logging.info("FAILED to acquire lock on %r", lock_filename)
        sys.exit(2)
    logging.error("Error acquiring lock on %r: %s", lock_filename, e, exc_info=True)
    sys.exit(1)
It's a bit more verbose - of course, you can get away without all that logging and the differentiated error codes if you don't fancy it. One benefit of this is that you can write things like the PID and start time of the instance into the lock file once you've acquired the lock, which can be useful for debugging.
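For instance, a small helper along these lines (a sketch: write_lock_info is a hypothetical name, and the temp-directory path stands in for the real lock file) records who holds the lock and when it was taken:

```python
import datetime
import fcntl
import os
import tempfile

def write_lock_info(lock_fd):
    # Overwrite any stale contents from a previous run, then record
    # the holder's PID and start time for anyone debugging later.
    lock_fd.seek(0)
    lock_fd.truncate()
    lock_fd.write("pid: %d\nstarted: %s\n"
                  % (os.getpid(), datetime.datetime.now().isoformat()))
    lock_fd.flush()

# Demonstration against a throwaway lock file:
path = os.path.join(tempfile.mkdtemp(), "scheduled-script.lock")
lock_fd = open(path, "w")
fcntl.lockf(lock_fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
write_lock_info(lock_fd)
print(open(path).read().splitlines()[0])  # e.g. "pid: 12345"
```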
Note that the lack of lock_fd.close() (or equivalent use of with) is quite intentional - the lock file is only closed when the script exits and Linux closes the file descriptor. This works even if the process terminates ungracefully, so there should be zero chance of a dead process leaving the file locked.
You could equally well put this in, say, a main() function, but for obvious reasons it needs to be near the top-level scope of the script - as soon as lock_fd goes out of scope, your script is no longer protected by the lock.
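One wrinkle worth knowing when testing this: fcntl/lockf locks are owned by the process, so a second lockf() on the same file from within the same process will happily succeed. To actually see the conflict you need a second process. The self-contained sketch below (throwaway lock path, with a child interpreter standing in for a second run of the script) shows both the conflict and the fact that closing the descriptor releases the lock immediately:

```python
import fcntl
import os
import subprocess
import sys
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.lock")

# Parent takes the lock, exactly as in the script above.
lock_fd = open(path, "w")
fcntl.lockf(lock_fd, fcntl.LOCK_EX | fcntl.LOCK_NB)

# A child process attempting the same lock exits 2 on failure, 0 on success.
child = ("import fcntl, sys\n"
         "fd = open(%r, 'w')\n"
         "try:\n"
         "    fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)\n"
         "    sys.exit(0)\n"
         "except IOError:\n"
         "    sys.exit(2)\n" % path)

rc_while_held = subprocess.call([sys.executable, "-c", child])  # blocked

# Closing the descriptor drops the lock, so a retry now succeeds.
lock_fd.close()
rc_after_close = subprocess.call([sys.executable, "-c", child])

print(rc_while_held, rc_after_close)  # → 2 0
```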