
Prints a formatted summary of the evaluation metrics for a prognostic model, computed either on the training/internal validation data or on new, unseen data.

Usage

print_model_summary_pro(model_name, results_list, on_new_data = FALSE)

Arguments

model_name

A character string, the name of the model (e.g., "lasso_pro").

results_list

A list containing model evaluation results, typically an element of the output of run_models_pro(), or the result of bagging_pro() or stacking_pro(). It must contain evaluation_metrics and, if applicable, model_object.
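For orientation, the sketch below shows the general shape such a list takes; the metric name inside evaluation_metrics is a hypothetical placeholder, not E2E's exact field layout.

# Hypothetical sketch of a results_list for a trained model;
# the c_index entry is illustrative only:
example_results <- list(
  evaluation_metrics = list(c_index = 0.74),  # model performance metrics
  model_object       = NULL                   # the fitted model, if applicable
)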

on_new_data

Logical, indicating whether the results are from applying the model to new, unseen data (TRUE) or from the training/internal validation data (FALSE).

Value

NULL. Prints the summary to the console.
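The function is called for its printed side effect; the returned NULL can be ignored or checked, as in this minimal sketch (the model name "demo_model" is arbitrary, and the list reuses the failed-model structure from the Examples below):

res <- print_model_summary_pro(
  "demo_model",
  list(evaluation_metrics = list(error = "Training failed"))
)
is.null(res)  # TRUE; the summary itself goes to the console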

Examples

# \donttest{
if (requireNamespace("E2E", quietly = TRUE) &&
 "train_pro" %in% utils::data(package = "E2E")$results[,3]) {
  data(train_pro, package = "E2E")
  initialize_modeling_system_pro()
  results <- models_pro(data = train_pro, model = "lasso_pro")

  # Print summary for the trained model
  print_model_summary_pro("lasso_pro", results$lasso_pro, on_new_data = FALSE)

  # Example for a failed model
  failed_results <- list(evaluation_metrics = list(error = "Training failed"))
  print_model_summary_pro("MyFailedModel", failed_results)
}
#> Prognosis modeling system already initialized.
#> Running model: lasso_pro
#> 
#> --- lasso_pro Prognosis Model (on Training Data) Metrics ---
#> C-index: 0.7400
#> Time-dependent AUROC (years 1, 3, 5): 0.5322, 0.6498, 0.6411
#> Average Time-dependent AUROC: 0.6077
#> KM Group HR (High vs Low): 3.1559 (p-value: 2.597e-08, Cutoff: 0.3937)
#> --------------------------------------------------
#> Prognosis Model: MyFailedModel   | Status: Failed (Training failed)
# }
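To label a summary as coming from held-out data, pass on_new_data = TRUE. The new_results object in this hedged sketch is a hypothetical placeholder for results produced by applying the model to new data; it is left commented out because no such object is built in this example.

# Assuming `new_results` holds metrics computed on held-out data
# (a hypothetical object; produce it with your own evaluation step):
# print_model_summary_pro("lasso_pro", new_results, on_new_data = TRUE)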