
output_value = func(*func_args, ...) - Dash callback error

Hello,

I am running an app that performs a query, receives a JSON object, parses it, creates a dataframe from it, and then displays the results in the browser via Dash. I get the following error when I run certain queries, but not for smaller data queries:

    2022-05-03 02:14:36 Exception on /_dash-update-component [POST]
    Traceback (most recent call last):
      File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 2051, in wsgi_app
        response = self.full_dispatch_request()
      File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1501, in full_dispatch_request
        rv = self.handle_user_exception(e)
      File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1499, in full_dispatch_request
        rv = self.dispatch_request()
      File "/usr/local/lib/python3.7/site-packages/flask/app.py", line 1485, in dispatch_request
        return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args)
      File "/home/shussai2/.local/lib/python3.7/site-packages/dash/dash.py", line 1345, in dispatch
        response.set_data(func(*args, outputs_list=outputs_list))
      File "/home/shussai2/.local/lib/python3.7/site-packages/dash/_callback.py", line 151, in add_context
        output_value = func(*func_args,

I don't think I am even seeing the complete traceback in the error log. In any case, the app works fine locally (Python 3.7.4, Spyder, Anaconda 3.0, dash 1.7.0, dash-bootstrap-components 1.0.0, dash-core-components 1.6.0, dash-html-components 1.0.2, Flask 1.1.1); there I do not see this error, and the query is performed and displayed correctly.

However, when I run it on my PythonAnywhere site, I see this error for certain queries. It typically throws the exception at the following lines in queries.py:

    for col_name in all_df.columns:
        print(col_name)
        # Works fine and prints out all the columns in the df

    print("\nBefore RECALL shorter_recall_df creation")

    # Throws the exception when I try to create a
    # df with specific columns of all_df
    shorter_recall_df = all_df[['recalling_firm',
                        'product_res_number',
                        'k_numbers',
                        'recall_status',
                        'event_date_initiated',
                        'product_quantity',
                        'reason_for_recall',
                        'root_cause_description',
                        'product_description',
                        'distribution_pattern',
                        'action',
                        'firm_fei_number',
                        'cfres_id',
                        'other_submission_description',
                        'event_date_terminated'
                        ]]
    print("\nPrint shorter_recall_df:")
    print(shorter_recall_df)
    print("\nPrint column names of shorter_recall_df:")
    for col_name in shorter_recall_df.columns:
        print(col_name)

Please let me know where I am making a mistake and how to resolve this issue.

Since you have logging, you can compare the columns that were available (from the printing of all_df.columns), which will be in the server log, against the error shown in the error log (you may need to refresh the page if you don't see the full traceback). Make sure you're comparing the same timestamps in both log files. In any case, if you're getting an exception on a look-up, you're probably using wrong keys in the shorter_recall_df definition.
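A quick way to spot the bad key is to diff the list you are requesting against the columns the dataframe actually has. A minimal sketch (the dataframe and the wanted list here are illustrative stand-ins for all_df and your column list):

```python
import pandas as pd

# Toy stand-in for all_df; in the real app this comes from the query
all_df = pd.DataFrame(columns=["recalling_firm", "k_numbers", "recall_status"])

wanted = ["recalling_firm", "k_numbers", "other_submission_description"]

# Columns requested but not actually present in the dataframe;
# any name in this list would make all_df[wanted] raise a KeyError
missing = [col for col in wanted if col not in all_df.columns]
print(missing)  # → ['other_submission_description']
```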

Thank you for the response. When I print out the columns of the bigger dataframe (all_df), I get the following:

    cfres_id
    product_res_number
    event_date_initiated
    event_date_created
    recall_status
    res_event_number
    product_code
    k_numbers
    product_description
    code_info
    recalling_firm
    reason_for_recall
    root_cause_description
    action
    product_quantity
    distribution_pattern
    firm_fei_number
    openfda

And right after that, when I try to create a subset df (shorter_recall_df) from all_df, it throws the exception. I am quite sure the columns I am selecting exist in the bigger all_df that I am subsetting. Also, note it works fine locally.

Could it be that the order of the columns I list in shorter_recall_df is incorrect? Also, can you please explain why it works locally but not on PythonAnywhere? Is it a versioning difference? Thank you for your help.

other_submission_description is not in that list of columns, so that is probably the issue. (The order you list the columns in does not matter; the look-up only fails for labels that are missing.)
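To the order question: reordering existing columns never fails; only an absent label raises the KeyError. A small illustration (df here is a toy stand-in for all_df):

```python
import pandas as pd

df = pd.DataFrame({"a": [1], "b": [2]})

# Reordering existing columns is fine; you get them in the order requested
print(df[["b", "a"]].columns.tolist())  # → ['b', 'a']

# A single absent label makes the whole selection raise a KeyError
try:
    df[["a", "missing"]]
except KeyError as exc:
    print("KeyError raised:", exc)
```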

That did fix the issue, thank you. Just to share: I now check that each column name exists in the query response before I create a subset dataframe, so no KeyError is thrown. The code is as follows:

    # dataframe created containing all the data received from the query
    all_df = pd.read_json(json.dumps(alldata))

    #create a new dataframe only consisting of some of the columns
    #from the entire dataframe of the enforcement report
    #first create a list of possible columns that you are expecting to receive

    selected_columns = [
                        'recalling_firm',
                        'recall_number',
                        'event_id',
                        'classification',
                        'center_classification_date',
                        'recall_initiation_date',
                        'initial_firm_notification',
                        'reason_for_recall',
                        'code_info',
                        'voluntary_mandated',
                        'product_type',
                        'product_quantity',
                        'product_description',
                        'distribution_pattern',
                        'report_date',
                        'status',
                        'termination_date',
                        'more_code_info'           
                       ]

    #empty list declared
    sel_cols_list = list()
    for sel_col in selected_columns:
        if sel_col in all_df.columns:
            print("sel_col EXIST and is: " + sel_col)
            sel_cols_list.append(sel_col)
        else:
            print("sel_col NOT EXIST and is: " + sel_col)

    print("\nsel_cols_list is:")
    print(sel_cols_list)

    #create a subset df from the verified columns that do
    #exist in the larger all_df
    shorter_enforce_df = all_df[sel_cols_list]
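For what it's worth, pandas can do that existence filter in one step with Index.intersection, so the explicit loop isn't needed. A sketch of the same idea (the dataframe and column names below are illustrative stand-ins):

```python
import pandas as pd

# Toy stand-in for the query response dataframe
all_df = pd.DataFrame(columns=["recalling_firm", "status", "report_date"])

selected_columns = ["recalling_firm", "recall_number", "status"]

# Index.intersection keeps only the labels that actually exist in all_df,
# so the subset never raises a KeyError for absent columns
shorter_enforce_df = all_df[all_df.columns.intersection(selected_columns)]
print(list(shorter_enforce_df.columns))
```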