Upload your model and dataset. We automatically evaluate robustness, consistency, and prediction stability.
How it works
Three simple steps to evaluate the dependability of your machine learning model.
1
Upload your trained model
Provide your existing model file (.pkl, .joblib, or .pt).
AssureML never trains, only evaluates.
2
Attach a small dataset
Upload a representative dataset (CSV, TXT, or ZIP of images).
We use it to probe your model under controlled disturbances.
3
Get dependability scores
We run automated tests and give you metrics on robustness, consistency, and variance,
helping you understand how stable your model really is.
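The steps above can be sketched in code. This is a minimal, illustrative probe loop, not AssureML's actual internals: the toy model and the metric formulas are assumptions chosen only to show the idea of scoring a model over micro-perturbed copies of an input.

```python
import random
import statistics

# Toy stand-in for an uploaded model (.pkl/.joblib/.pt): a simple
# threshold classifier over a numeric feature vector.
def toy_model(x):
    return 1 if sum(x) > 0.0 else 0

def probe(model, sample, noise_scale=0.01, runs=100, seed=0):
    """Run the model on many slightly noisy copies of one sample and
    summarise how stable its predictions are."""
    rng = random.Random(seed)
    baseline = model(sample)
    preds = []
    for _ in range(runs):
        noisy = [v + rng.gauss(0.0, noise_scale) for v in sample]
        preds.append(model(noisy))
    return {
        # Fraction of perturbed runs that agree with the clean prediction.
        "robustness": sum(p == baseline for p in preds) / runs,
        # Spread of the outputs across the perturbed runs.
        "variance": statistics.pvariance([float(p) for p in preds]),
    }

report = probe(toy_model, [0.5, -0.2, 0.9])
```

For this toy input the clean sum (1.2) is far from the decision boundary, so tiny noise never flips the prediction and the report comes back fully stable.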
What does AssureML measure?
We focus on three core dimensions of AI dependability inspired by upcoming AI regulations and best practices.
Metric
Robustness
Measures how sensitive your model is to small input noise. A reliable model should not
change its predictions drastically when the input is slightly perturbed.
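As a hedged sketch of what such a noise-sensitivity check could look like (the model and the scoring rule here are hypothetical, not AssureML's implementation):

```python
import random

# Hypothetical uploaded model: a simple threshold classifier.
def toy_model(features):
    return 1 if sum(features) > 0.0 else 0

def robustness_score(model, x, noise_scale=0.01, trials=200, seed=0):
    """Fraction of noisy copies of `x` whose prediction matches the clean one."""
    rng = random.Random(seed)
    baseline = model(x)
    same = 0
    for _ in range(trials):
        noisy = [v + rng.gauss(0.0, noise_scale) for v in x]
        if model(noisy) == baseline:
            same += 1
    return same / trials

score = robustness_score(toy_model, [0.5, -0.2, 0.9])
```

A score near 1.0 means predictions rarely flip under small perturbations; a low score flags a brittle model.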
Metric
Consistency
Evaluates whether the model gives similar outputs for inputs that should mean the same thing
(e.g., rounded numbers, lowercased text, or lightly transformed images).
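One way such an equivalence check might be framed, using a hypothetical toy text classifier and hand-picked pairs of inputs that should be treated identically (again an illustrative assumption, not the real probe):

```python
# Toy classifier keyed on a lowercased keyword, so casing should not matter.
def toy_sentiment(text):
    return "pos" if "good" in text.lower() else "neg"

def consistency_score(model, pairs):
    """Fraction of equivalent input pairs that receive the same prediction."""
    agree = sum(1 for a, b in pairs if model(a) == model(b))
    return agree / len(pairs)

# Each pair differs only in casing, so a consistent model should agree.
pairs = [("Good movie", "good movie"),
         ("BAD plot", "bad plot"),
         ("So GOOD", "so good")]
score = consistency_score(toy_sentiment, pairs)
```

A score below 1.0 would indicate the model reacts to surface changes that should be irrelevant.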
Metric
Variance
Looks at the spread of predictions over repeated, micro-perturbed runs. Lower variance means
more stable behaviour, which is crucial for safety-critical and regulated applications.
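A minimal sketch of measuring that spread, assuming a toy regression model and Gaussian micro-perturbations (both are stand-ins, not AssureML's actual procedure):

```python
import random
import statistics

# Hypothetical uploaded regression model.
def toy_regressor(features):
    return 2.0 * features[0] + 0.5 * features[1]

def prediction_spread(model, x, noise_scale=0.01, runs=100, seed=0):
    """Standard deviation of outputs over repeated micro-perturbed copies of `x`."""
    rng = random.Random(seed)
    outputs = []
    for _ in range(runs):
        noisy = [v + rng.gauss(0.0, noise_scale) for v in x]
        outputs.append(model(noisy))
    return statistics.pstdev(outputs)

spread = prediction_spread(toy_regressor, [1.0, 3.0])
```

Here the spread stays close to the injected noise level; a model whose spread greatly exceeds the perturbation scale is amplifying noise, which is a warning sign in safety-critical settings.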