To perform benchmarking, ground truth annotations must be encoded in a format specific to the associated problem class, and BIA workflows are expected to output their results in the same format.
Nine problem classes are currently supported in BIAFLOWS; their annotation formats and computed benchmark metrics are described below.
Note: each problem class has a long (explicit) name and a short name (e.g. Object Segmentation / ObjSeg). The same holds for metrics (e.g. DICE / DC).
A description of each benchmark is available in the workflow runs result table by clicking on the symbol.
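As an illustration of how such a benchmark metric is computed, the DICE coefficient (DC) mentioned above measures the overlap between a predicted and a ground truth binary mask, as 2|A∩B| / (|A| + |B|). A minimal sketch (the function name is illustrative, not part of the BIAFLOWS API):

```python
def dice_coefficient(gt_mask, pred_mask):
    """Dice coefficient between two binary masks of identical shape."""
    # Flatten the two masks and compare pixel-wise.
    gt = [bool(v) for row in gt_mask for v in row]
    pred = [bool(v) for row in pred_mask for v in row]
    intersection = sum(g and p for g, p in zip(gt, pred))
    total = sum(gt) + sum(pred)
    if total == 0:
        return 1.0  # both masks empty: treated as perfect agreement
    return 2.0 * intersection / total

gt = [[0, 1, 1],
      [0, 1, 0]]
pred = [[0, 1, 0],
        [0, 1, 1]]
print(dice_coefficient(gt, pred))  # 2*2 / (3+3) ≈ 0.667
```

A value of 1.0 indicates perfect overlap and 0.0 no overlap at all.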
| Problem class | Description | Ground truth annotation format |
| --- | --- | --- |
| Object Segmentation | Delineate objects or isolated regions | |
| Pixel Classification | Estimate the class of each pixel | |
| Object Counting | Estimate the number of objects | |
| Object Detection | Detect objects in an image (e.g. nuclei) | |
| Filament Tree Tracing | Estimate the medial axis of a connected filament tree network (one per image) | SWC format |
| Filament Networks Tracing | Estimate the medial axis of one or several connected filament networks | Skeleton binary masks |
| Landmark Detection | Estimate the position of specific feature points | |
| Particle Tracking | Estimate the tracks followed by particles (no division) | |
| Object Tracking | Estimate object tracks and segmentation masks (with possible divisions) | Label masks + division text file |
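The SWC format used for filament tree tracing stores one node per line with seven whitespace-separated fields (id, structure type, x, y, z, radius, parent id; parent −1 marks the root), and lines starting with `#` are comments. A minimal parser sketch, assuming that convention (the `SWCNode` and `parse_swc` names are illustrative, not BIAFLOWS functions):

```python
from collections import namedtuple

SWCNode = namedtuple("SWCNode", "id type x y z radius parent")

def parse_swc(text):
    """Parse SWC content into a list of nodes; parent == -1 marks the root."""
    nodes = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank and comment lines
        f = line.split()
        nodes.append(SWCNode(int(f[0]), int(f[1]),
                             float(f[2]), float(f[3]), float(f[4]),
                             float(f[5]), int(f[6])))
    return nodes

sample = """# id type x y z radius parent
1 1 0.0 0.0 0.0 1.0 -1
2 3 1.0 0.0 0.0 0.5 1
3 3 2.0 0.5 0.0 0.5 2
"""
tree = parse_swc(sample)
print(len(tree), tree[0].parent)  # 3 nodes; the root has parent -1
```

The parent ids make the tree topology explicit, which is what tracing metrics compare against the ground truth.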