Hi @fede, I barely changed anything. I just ran the program and got these numbers. Also, at the end, it raised this error:
```
Traceback (most recent call last):
  File "/Users/lingvisa/Nlp/chatbot/rasa_lookup_demo/run_lookup.py", line 234, in <module>
    run_demo(key, disp_bar=disp_bar)
  File "/Users/lingvisa/Nlp/chatbot/rasa_lookup_demo/run_lookup.py", line 127, in run_demo
    plot_metrics(metric_list)
  File "/Users/lingvisa/Nlp/chatbot/rasa_lookup_demo/run_lookup.py", line 178, in plot_metrics
    print_metrics(metric_list)
  File "/Users/lingvisa/Nlp/chatbot/rasa_lookup_demo/run_lookup.py", line 156, in print_metrics
    key = metric_list[0]["key"]
IndexError: list index out of range
```
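If I read the traceback right, the failing line just indexes the first element of `metric_list`, so it looks like that list is empty by the time `print_metrics` runs. A stripped-down illustration of the failure (only the two names from the traceback, nothing else from the repo):

```python
# Minimal illustration: if strip_metrics() returned an empty list,
# indexing its first element raises exactly this IndexError.
metric_list = []
key = metric_list[0]["key"]  # IndexError: list index out of range
```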
@twittmin - what exact commands are you running? Since the numbers are exactly the same, maybe double-check that you are really evaluating two different models?
@amn41 I simply ran this, following the instructions on GitHub:
```
python run_lookup.py food
```
Here is the code of the function:
```python
def run_demo(key, disp_bar=True):
    # runs the demo specified by key

    # get the data for this key and the configs
    training_data, training_data_lookup, test_data, model_dir = get_path_dicts(key)
    config_file = "configs/config.yaml"
    config_baseline = "configs/config_no_features.yaml"

    # run a baseline
    model_loc = train_model(training_data, config_baseline, model_dir)
    evaluate_model(test_data, model_loc)

    # run with more features in CRF
    model_loc = train_model(training_data, config_file, model_dir)
    evaluate_model(test_data, model_loc)

    # run with the lookup table
    model_loc = train_model(training_data_lookup, config_file, model_dir)
    evaluate_model(test_data, model_loc)

    # get the metrics
    metric_list = strip_metrics(key)

    # either print or plot them
    if disp_bar:
        plot_metrics(metric_list)
    else:
        print_metrics(metric_list)
```
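Given the traceback, `strip_metrics(key)` seems to come back empty, so `metric_list[0]` blows up inside `print_metrics`. Purely as a sketch of where a guard could go (same function names as above; the check and the error message are my own addition, not something from the repo), this would at least fail with a clearer message:

```python
# Sketch only: bail out with a readable message instead of an IndexError
# when strip_metrics() finds nothing for this key.
metric_list = strip_metrics(key)
if not metric_list:
    raise RuntimeError(f"No metrics were collected for key '{key}'")

if disp_bar:
    plot_metrics(metric_list)
else:
    print_metrics(metric_list)
```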