Forums

Can I schedule an IPython notebook to run automatically on PA?

What the title says - I have a notebook that grabs data, transforms and formats it and emails it out. Do I have to dump the code into a .py script to run it on the PA scheduler? Thanks!

Unfortunately you can't run notebooks as scheduled tasks -- like you say, the best solution is probably to extract the relevant code to a normal Python file and run that.
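For illustration, here is a minimal sketch of what such an extracted script might look like for a grab/transform/email job. The data URL, addresses, and SMTP settings are all placeholders rather than anything specific to your setup:

# report.py - a hypothetical standalone script extracted from a notebook.
# Everything below (URL, addresses, credentials) is a placeholder.
import smtplib
from email.message import EmailMessage

import pandas as pd


def build_report():
    # Grab and transform the data (replace with the notebook's real logic).
    df = pd.read_csv("https://example.com/data.csv")
    return df.describe().to_string()


def send_report(body):
    # Format and send the report by email.
    msg = EmailMessage()
    msg["Subject"] = "Daily report"
    msg["From"] = "me@example.com"
    msg["To"] = "team@example.com"
    msg.set_content(body)
    with smtplib.SMTP("smtp.example.com", 587) as server:
        server.starttls()
        server.login("me@example.com", "app-password")
        server.send_message(msg)


if __name__ == "__main__":
    send_report(build_report())

You could then point a scheduled task at something like python3 /home/yourusername/report.py.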

Hello

Is it still the same answer today? I would like to schedule a Jupyter notebook because I find it easier to develop in that environment (autocompletion, the ability to run line by line without copy-pasting, helpful syntax highlighting and so on), which the script environment doesn't have. I know that you can download the Jupyter notebook as a script, but this doesn't really work because then you would have to upload the file and delete the per-cell output comments that get added to the code.

My issue would be solved if 1. I could schedule a Jupyter notebook, or 2. I could have a more functional interface in the script editor (autocompletion and the ability to send a line or a selection to the console using a hotkey).

Any ideas on this? :) By the way, I love this product :) PythonAnywhere rules!!

Executing Jupyter notebooks from the command line

The same functionality of executing notebooks is exposed through a command line interface or a Python API interface. As an example, a notebook can be executed from the command line with:

jupyter nbconvert --to notebook --execute mynotebook.ipynb

Executing notebooks using the Python API interface

This section will illustrate the Python API interface.

Example

Let’s start with a complete quick example, leaving detailed explanations to the following sections.

Import: First we import nbformat and the ExecutePreprocessor class:

import nbformat
from nbconvert.preprocessors import ExecutePreprocessor

Load: Assuming that notebook_filename contains the path of a notebook, we can load it with:

with open(notebook_filename) as f:
    nb = nbformat.read(f, as_version=4)

Configure: Next, we configure the notebook execution mode:

ep = ExecutePreprocessor(timeout=600, kernel_name='python3')

We specified two (optional) arguments, timeout and kernel_name, which define respectively the cell execution timeout and the execution kernel.

The option to specify kernel_name is new in nbconvert 4.2. When not specified, or when using nbconvert <4.2, the default Python kernel is chosen.

Execute/Run (preprocess): To actually run the notebook we call the method preprocess:

ep.preprocess(nb, {'metadata': {'path': 'notebooks/'}})

Hopefully, we will not get any errors during the notebook execution (see the last section for error handling). Note that path specifies in which folder to execute the notebook.

Save: Finally, save the resulting notebook with:

with open('executed_notebook.ipynb', 'wt') as f:
    nbformat.write(nb, f)

That’s all. Your executed notebook will be saved in the current folder in the file executed_notebook.ipynb.
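The "last section" on error handling mentioned above wasn't included in this paste; the pattern in the nbconvert docs is roughly the following, which catches a failed cell and still saves the partially executed notebook for inspection (the file name is just a placeholder):

from nbconvert.preprocessors import CellExecutionError

try:
    ep.preprocess(nb, {'metadata': {'path': 'notebooks/'}})
except CellExecutionError:
    # A cell raised an exception; the traceback is recorded in the notebook output.
    print('Error executing the notebook; see the saved file for the traceback.')
    raise
finally:
    # Save whatever was executed, errors included, so it can be inspected.
    with open('executed_notebook.ipynb', 'wt') as f:
        nbformat.write(nb, f)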

Interesting -- thanks for posting that!

Thanks for the info, rajesh1997!

That information comes from this link.

So, to be more specific, for generations to come: I was able to schedule a Jupyter notebook by going to the normal task scheduler and, in the run part, inputting:

jupyter nbconvert --to notebook --inplace --execute development3.ipynb

the "inplace" means that the code won't create a new notebook file with the output. without it the task will create a new jupyternotebook with the suffix "nbconvert" on it. something that I don't want.

Thank you! This really helped

Yes, it was helpful for more than one user!