This is not an official Matbench repository, but it may eventually be incorporated into Matbench.
matbench-genmetrics
This repository provides standardized benchmarks for evaluating generative models for crystal structures. Each benchmark has a fixed dataset, a predefined split, and a notion of best (i.e., metrics) associated with it.
NOTE: This project is separate from https://matbench-discovery.materialsproject.org/, which provides a slick leaderboard and package for benchmarking ML models on crystal stability prediction from unrelaxed structures. This project instead assesses the quality of generative models for crystal structures.
Getting Started
This section covers installation, a dummy example, the output metrics for that example, and descriptions of the benchmark metrics.
Installation
pip install matbench-genmetrics
See Advanced Installation for more information.
Example
NOTE: be sure to set `dummy=False` for the real/full benchmark run. `MPTSMetrics10` is intended for fast prototyping and debugging, as it assumes only 10 generated structures.
>>> from matbench_genmetrics.mp_time_split.utils.gen import DummyGenerator
>>> from matbench_genmetrics.core.metrics import MPTSMetrics10, MPTSMetrics100, MPTSMetrics1000, MPTSMetrics10000
>>> mptm = MPTSMetrics10(dummy=True)
>>> for fold in mptm.folds:
...     train_val_inputs = mptm.get_train_and_val_data(fold)
...     dg = DummyGenerator()
...     dg.fit(train_val_inputs)
...     gen_structures = dg.gen(n=mptm.num_gen)
...     mptm.evaluate_and_record(fold, gen_structures)
>>> print(mptm.recorded_metrics)
{
0: {
"validity": 0.4375,
"coverage": 0.0,
"novelty": 1.0,
"uniqueness": 0.9777777777777777,
},
1: {
"validity": 0.4390681003584229,
"coverage": 0.0,
"novelty": 1.0,
"uniqueness": 0.9333333333333333,
},
2: {
"validity": 0.4401197604790419,
"coverage": 0.0,
"novelty": 1.0,
"uniqueness": 0.8222222222222222,
},
3: {
"validity": 0.4408740359897172,
"coverage": 0.0,
"novelty": 1.0,
"uniqueness": 0.8444444444444444,
},
4: {
"validity": 0.4414414414414415,
"coverage": 0.0,
"novelty": 1.0,
"uniqueness": 0.9111111111111111,
},
}
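For a real benchmark run, replace `DummyGenerator` with your own model wrapped in the same `fit`/`gen` interface shown above. Here is a minimal sketch; the `MyGenerator` class is hypothetical, and its stand-in bodies merely resample the training data where your training and sampling code would go.

import random

from matbench_genmetrics.core.metrics import MPTSMetrics10000

class MyGenerator:
    """Hypothetical wrapper; replace fit() and gen() with real training/sampling code."""

    def fit(self, structures):
        # train your generative model on the pymatgen Structure objects;
        # here we merely store them as a stand-in
        self.train_structures = list(structures)

    def gen(self, n):
        # sample n structures from your model; here we just resample the training set
        return random.sample(self.train_structures, n)

mptm = MPTSMetrics10000(dummy=False)  # full benchmark run
for fold in mptm.folds:
    train_val_inputs = mptm.get_train_and_val_data(fold)
    model = MyGenerator()
    model.fit(train_val_inputs)
    gen_structures = model.gen(n=mptm.num_gen)
    mptm.evaluate_and_record(fold, gen_structures)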
Metrics
Metric | Description
---|---
Validity | A loose measure of how "valid" the set of generated structures is, computed by comparing the space group number distribution of the generated structures with that of the benchmark data. Formally, this is one minus (Wasserstein distance between the distributions of space group numbers for train and generated structures, divided by the distance for the dummy case between train and uniformly random space group numbers).
Coverage | A form of "rediscovery", in which held-out structures from the future are "discovered" by the generative model, i.e., the generative model "predicts the future". Formally, this is the number of matches between held-out test structures and generated structures divided by the number of test structures.
Novelty | A measure of how novel the generated structures are relative to the structures used to train the generative model. Formally, this is one minus (the number of matches between train structures and generated structures divided by the number of generated structures).
Uniqueness | A measure of whether the generative model suggests repeat structures. Formally, this is one minus (the number of non-self-comparing matches within the generated structures divided by the total possible number of non-self-comparing matches).
A match is when `StructureMatcher(stol=0.5, ltol=0.3, angle_tol=10.0).fit(s1, s2)` evaluates to `True`.
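To make these formulas concrete, here is a minimal sketch of how the metrics can be computed from plain lists of pymatgen `Structure` objects. The helper names and the exact aggregation are illustrative assumptions, not the matbench-genmetrics implementation.

from itertools import combinations

from pymatgen.analysis.structure_matcher import StructureMatcher
from scipy.stats import wasserstein_distance

matcher = StructureMatcher(stol=0.5, ltol=0.3, angle_tol=10.0)

def match_count(structures_a, structures_b):
    # number of structures in structures_a matching at least one structure in structures_b
    return sum(any(matcher.fit(a, b) for b in structures_b) for a in structures_a)

def coverage(test_structures, gen_structures):
    # matched held-out test structures / number of test structures
    return match_count(test_structures, gen_structures) / len(test_structures)

def novelty(train_structures, gen_structures):
    # one minus (matches between train and generated / number of generated)
    return 1 - match_count(gen_structures, train_structures) / len(gen_structures)

def uniqueness(gen_structures):
    # one minus (non-self-comparing matches within generated / total possible pairs)
    pairs = list(combinations(gen_structures, 2))
    n_matching = sum(matcher.fit(a, b) for a, b in pairs)
    return 1 - n_matching / len(pairs)

def validity(train_spg_numbers, gen_spg_numbers, dummy_spg_numbers):
    # one minus (Wasserstein distance between train and generated space group
    # number distributions, normalized by the train-vs-dummy distance)
    return 1 - (
        wasserstein_distance(train_spg_numbers, gen_spg_numbers)
        / wasserstein_distance(train_spg_numbers, dummy_spg_numbers)
    )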
Detailed descriptions of the metrics are given on the Metrics page.
We performed a "slow march of time" benchmarking study, which uses the `mp-time-split` data from a future fold as the "generated" structures for the previous fold. The results are presented in the charts below. See the corresponding notebook for details.
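As a rough illustration of the idea (the real implementation lives in the linked notebook; the fold pairing and data access below are assumptions built only from the API shown in the example above), each fold's "generated" set is drawn from training data that first appears in a later fold:

mptm = MPTSMetrics10(dummy=True)
folds = list(mptm.folds)
fold_data = {fold: list(mptm.get_train_and_val_data(fold)) for fold in folds}
for earlier, later in zip(folds[:-1], folds[1:]):
    # structures new to the later fold act as the "generated" set for the earlier fold
    future_only = [s for s in fold_data[later] if s not in fold_data[earlier]]
    mptm.evaluate_and_record(earlier, future_only[: mptm.num_gen])
print(mptm.recorded_metrics)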
Advanced Installation
PyPI (`pip`) installation
Create and activate a new conda environment named matbench-genmetrics (`-n`) with `python==3.11.*` or your preferred Python version, then install matbench-genmetrics via `pip`.
conda create -n matbench-genmetrics python==3.11.*
conda activate matbench-genmetrics
pip install matbench-genmetrics
Editable installation
In order to set up the necessary environment:
clone and enter the repository via:
git clone https://github.com/sparks-baird/matbench-genmetrics.git
cd matbench-genmetrics
create and activate a new conda environment (optional, but recommended)
conda create --name matbench-genmetrics python==3.11.*
conda activate matbench-genmetrics
perform an editable (`-e`) installation in the current directory (`.`):
pip install -e .
NOTE: Some changes, e.g. in `setup.cfg`, might require you to run `pip install -e .` again.
Optional and needed only once after `git clone`:
install several pre-commit git hooks with:
pre-commit install # You might also want to run `pre-commit autoupdate`
and check out the configuration under `.pre-commit-config.yaml`. The `-n, --no-verify` flag of `git commit` can be used to deactivate pre-commit hooks temporarily.
install nbstripout git hooks to remove the output cells of committed notebooks with:
nbstripout --install --attributes notebooks/.gitattributes
This is useful to avoid large diffs due to plots in your notebooks. A simple `nbstripout --uninstall` will revert these changes.
Then take a look into the `scripts` and `notebooks` folders.
Dependency Management & Reproducibility
1. Always keep your abstract (unpinned) dependencies updated in `environment.yml` and eventually in `setup.cfg` if you want to ship and install your package via `pip` later on.
2. Create concrete dependencies as `environment.lock.yml` for the exact reproduction of your environment with:
conda env export -n matbench-genmetrics -f environment.lock.yml
For multi-OS development, consider using `--no-builds` during the export.
3. Update your current environment with respect to a new `environment.lock.yml` using:
conda env update -f environment.lock.yml --prune
Project Organization
├── AUTHORS.md              <- List of developers and maintainers.
├── CHANGELOG.md            <- Changelog to keep track of new features and fixes.
├── CONTRIBUTING.md         <- Guidelines for contributing to this project.
├── Dockerfile              <- Build a docker container with `docker build .`.
├── LICENSE.txt             <- License as chosen on the command-line.
├── README.md               <- The top-level README for developers.
├── configs                 <- Directory for configurations of model & application.
├── data
│   ├── external            <- Data from third party sources.
│   ├── interim             <- Intermediate data that has been transformed.
│   ├── processed           <- The final, canonical data sets for modeling.
│   └── raw                 <- The original, immutable data dump.
├── docs                    <- Directory for Sphinx documentation in rst or md.
├── environment.yml         <- The conda environment file for reproducibility.
├── models                  <- Trained and serialized models, model predictions,
│                              or model summaries.
├── notebooks               <- Jupyter notebooks. Naming convention is a number (for
│                              ordering), the creator's initials and a description,
│                              e.g. `1.0-fw-initial-data-exploration`.
├── pyproject.toml          <- Build configuration. Don't change! Use `pip install -e .`
│                              to install for development or to build `tox -e build`.
├── references              <- Data dictionaries, manuals, and all other materials.
├── reports                 <- Generated analysis as HTML, PDF, LaTeX, etc.
│   └── figures             <- Generated plots and figures for reports.
├── scripts                 <- Analysis and production scripts which import the
│                              actual PYTHON_PKG, e.g. train_model.
├── setup.cfg               <- Declarative configuration of your project.
├── setup.py                <- [DEPRECATED] Use `python setup.py develop` to install for
│                              development or `python setup.py bdist_wheel` to build.
├── src
│   └── matbench_genmetrics <- Actual Python package where the main functionality goes.
├── tests                   <- Unit tests which can be run with `pytest`.
├── .coveragerc             <- Configuration for coverage reports of unit tests.
├── .isort.cfg              <- Configuration for git hook that sorts imports.
└── .pre-commit-config.yaml <- Configuration of pre-commit git hooks.
Note
This project has been set up using PyScaffold 4.2.2.post1.dev2+ge50b5e1 and the dsproject extension 0.7.2.post1.dev2+geb5d6b6.