api.metrics
Module Contents
Classes
- ProjectMetrics – Class to analyse MedCATtrainer exports
Functions
- calculate_metrics(project_ids, report_name) – Computes metrics in a background task
Attributes
- api.metrics._dt_fmt = '%Y-%m-%d %H:%M:%S.%f'
- api.metrics.logger
- api.metrics.calculate_metrics(project_ids, report_name)
Computes metrics in a background task for the given projects.
Uses the first project's CDB / vocab or ModelPack. The projects should share the same CDB, but metrics will still be computed regardless.
- Returns:
computed metrics results
- Parameters:
project_ids (List[int]) – list of projects to compute metrics for
report_name (str) –
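The flow described above can be sketched in plain Python. `load_export` and `build_metrics` below are hypothetical stand-ins for the real MedCATtrainer internals, used only to show how the first project supplies the model while every project's export is still processed:

```python
# Hedged sketch of the calculate_metrics flow, assuming hypothetical
# load_export / build_metrics callables (not the real MedCATtrainer API).

def calculate_metrics_sketch(project_ids, report_name, load_export, build_metrics):
    """Compute metrics for several projects using the first project's model."""
    exports = [load_export(pid) for pid in project_ids]
    # The CDB / vocab or ModelPack is taken from the first project only;
    # metrics are still built for every export regardless.
    results = [build_metrics(export, model_from=project_ids[0]) for export in exports]
    return {"report_name": report_name, "results": results}
```

The returned dict mirrors the "computed metrics results" shape only loosely; the real task persists a report object.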
- class api.metrics.ProjectMetrics(mct_export_data, cat)
Bases: object
Class to analyse MedCATtrainer exports
- Parameters:
mct_export_data (dict) –
cat (medcat.cat.CAT) –
- __init__(mct_export_data, cat)
- Parameters:
mct_export_data (dict) – MedCATtrainer export data
cat (medcat.cat.CAT) –
- _annotations()
- annotation_df()
DataFrame of all annotations created.
- Returns:
DataFrame
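Building such a DataFrame amounts to flattening the nested export into one row per annotation. The field names below (`projects` / `documents` / `annotations`, `cui`, `value`, `correct`) mirror the usual MCT export layout but are assumptions here, not a guaranteed schema; the resulting `rows` list can be passed straight to `pandas.DataFrame(rows)`:

```python
# Illustrative flattening of a MedCATtrainer export dict into row dicts.
# Field names are assumptions about the MCT export schema.

def flatten_annotations(mct_export_data):
    rows = []
    for project in mct_export_data.get("projects", []):
        for doc in project.get("documents", []):
            for ann in doc.get("annotations", []):
                rows.append({
                    "project": project.get("name"),
                    "document": doc.get("name"),
                    "cui": ann.get("cui"),
                    "value": ann.get("value"),
                    "correct": ann.get("correct"),
                })
    return rows
```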
- concept_summary(extra_cui_filter=None)
Summary of only correctly annotated concepts from an MCT export.
- Returns:
DataFrame summary of annotations.
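At its core, a per-concept summary of correct annotations is a filtered count by CUI. A minimal sketch, assuming flattened annotation rows with `cui` and `correct` keys (an assumption about the data shape, not the class's internals):

```python
from collections import Counter

# Hedged sketch: count correct annotations per CUI, optionally restricted
# to an extra CUI filter, as a per-concept summary might aggregate.

def concept_counts(rows, extra_cui_filter=None):
    counts = Counter()
    for row in rows:
        if not row.get("correct"):
            continue  # only correctly annotated concepts contribute
        cui = row.get("cui")
        if extra_cui_filter is not None and cui not in extra_cui_filter:
            continue
        counts[cui] += 1
    return dict(counts)
```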
- enrich_medcat_metrics(examples)
Add the user prop to the MedCAT output metrics. More properties can potentially be added later for each of the categories.
- user_stats(by_user=True)
Summary of user annotation work done
- Parameters:
by_user (bool) – group stats by user rather than by day
- Returns:
DataFrame of user annotation work done
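Grouping by user versus by day is just a choice of key function over the annotation rows. A sketch under the assumption that each row carries `user` and a `last_modified` timestamp string (both hypothetical field names):

```python
from collections import Counter

# Hedged sketch of per-user vs per-(user, day) annotation counts.
# The `user` / `last_modified` field names are assumptions.

def user_work(rows, by_user=True):
    if by_user:
        key = lambda r: r["user"]
    else:
        # Take the date part (first 10 chars of an ISO timestamp) as the day.
        key = lambda r: (r["user"], r["last_modified"][:10])
    return Counter(key(r) for r in rows)
```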
- rename_meta_anns(meta_anns2rename=dict(), meta_ann_values2rename=dict())
TODO: meta_ann_values2rename has issues.
- Parameters:
meta_anns2rename – example input: {'Subject/Experiencer': 'Subject'}
meta_ann_values2rename – example input: {'Subject': {'Relative': 'Other'}}
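The two mappings compose: first the meta-annotation name is renamed, then its value is remapped via the table keyed on the new name. A self-contained sketch, assuming each meta annotation is stored as a dict with a `value` key (an assumption about the export layout, and a hypothetical helper, not the class method itself):

```python
# Hedged sketch of renaming meta-annotation names and values.
# The {name: {'value': ...}} structure is an assumption.

def rename_meta_anns_sketch(meta_anns, meta_anns2rename=None, meta_ann_values2rename=None):
    meta_anns2rename = meta_anns2rename or {}
    meta_ann_values2rename = meta_ann_values2rename or {}
    renamed = {}
    for name, ann in meta_anns.items():
        new_name = meta_anns2rename.get(name, name)
        # Value remapping is keyed on the *renamed* meta-annotation name.
        value_map = meta_ann_values2rename.get(new_name, {})
        new_ann = dict(ann)
        new_ann["value"] = value_map.get(ann.get("value"), ann.get("value"))
        renamed[new_name] = new_ann
    return renamed
```

With the documented example inputs, `{'Subject/Experiencer': {'value': 'Relative'}}` becomes `{'Subject': {'value': 'Other'}}`.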
- _eval_model(model, data, config)
- Parameters:
model (torch.nn.Module) –
data (List) –
config (medcat.config.config_meta_cat.ConfigMetaCAT) –
- Return type:
Dict
- _eval(metacat_model, mct_export)
- full_annotation_df()
DataFrame of all annotations created, including meta-annotation predictions. Similar to annotation_df, with the addition of meta-annotation predictions from the MedCAT model.
Prerequisite: MedcatTrainer_export([mct_export_paths], model_pack_path=&lt;path to medcat model&gt;)
- Return type:
pandas.DataFrame
- meta_anns_concept_summary()
Calculate performance metrics for meta annotations per concept.
- Returns:
List of dictionaries containing concept-level meta annotation metrics
- Return type:
List[Dict]
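A minimal sketch of such a per-concept aggregation, assuming the predictions are available as `(cui, predicted, gold)` triples (that triple format, and accuracy as the metric, are simplifying assumptions; the real summary may report richer per-class metrics):

```python
from collections import defaultdict

# Hedged sketch: per-concept accuracy for one meta-annotation task,
# returning a List[Dict] shaped like the documented return value.

def meta_concept_metrics(triples):
    per_cui = defaultdict(lambda: {"tp": 0, "total": 0})
    for cui, predicted, gold in triples:
        per_cui[cui]["total"] += 1
        if predicted == gold:
            per_cui[cui]["tp"] += 1
    return [
        {"cui": cui, "accuracy": c["tp"] / c["total"], "n": c["total"]}
        for cui, c in per_cui.items()
    ]
```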
- generate_report(meta_ann=False)