I'm getting a couple of unfortunate errors:
Traceback (most recent call last):
  File "/home/tjw0000/.local/lib/python3.6/site-packages/apscheduler/schedulers/base.py", line 958, in _process_jobs
    executor.submit_job(job, run_times)
  File "/home/tjw0000/.local/lib/python3.6/site-packages/apscheduler/executors/base.py", line 71, in submit_job
    self._do_submit_job(job, run_times)
  File "/home/tjw0000/.local/lib/python3.6/site-packages/apscheduler/executors/pool.py", line 22, in _do_submit_job
    f = self._pool.submit(run_job, job, job._jobstore_alias, run_times, self._logger.name)
  File "/usr/lib/python3.6/concurrent/futures/thread.py", line 115, in submit
    self._adjust_thread_count()
  File "/usr/lib/python3.6/concurrent/futures/thread.py", line 134, in _adjust_thread_count
    t.start()
  File "/usr/lib/python3.6/threading.py", line 846, in start
    _start_new_thread(self._bootstrap, ())
RuntimeError: can't start new thread
2017-12-27 18:23:22 -- Task failed to start. Too many processes/threads
These are in my Task log.
I'm running 8 instances of the same script, each executing slightly different code. The scripts run hourly, stay running for 56 minutes, and then shut down.
About half of the scripts give me one of the errors above, and I can't tell why those particular ones are throwing errors while the others aren't (or, for that matter, why I get two different thread errors).
Is there a setting I can increase, or something else I can do, to avoid these thread errors? The script is really lightweight; it just happens to be always running and potentially hogging threads (I don't know much about that part).
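Would capping the scheduler's worker pool be the right direction? Here's a minimal sketch of what I mean, assuming a plain APScheduler BlockingScheduler setup; the poll job, the interval, and the numbers are placeholders, not my actual code:

    # Minimal sketch, assuming a plain APScheduler setup.
    # `poll`, the interval, and the worker counts are placeholders.
    from apscheduler.schedulers.blocking import BlockingScheduler
    from apscheduler.executors.pool import ThreadPoolExecutor

    def poll():
        # placeholder for the lightweight work each run does
        pass

    scheduler = BlockingScheduler(
        # Cap the executor at a couple of worker threads instead of the
        # default 10, so each of the 8 script instances claims fewer threads.
        executors={'default': ThreadPoolExecutor(max_workers=2)},
        # Don't pile up extra concurrent copies if a run overlaps or is missed.
        job_defaults={'coalesce': True, 'max_instances': 1},
    )
    scheduler.add_job(poll, 'interval', minutes=1)
    scheduler.start()

If that's not the right knob to turn, I'd appreciate a pointer to whatever setting actually controls the thread limit here.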