mediaire AT THE ECR '22

#youandAI

The most innovative suite for AI-based MRI diagnosis

Visit
our booth

Try our bestseller or test the brand new mdknee and mdprostate tools.

JOIN OUR
INDUSTRY TALKS

What’s next in AI-based diagnosis? Our leadership team will share a new perspective.

CHECK OUT
OUR PUBLICATIONS

Gain insights into our most recent studies, investigating how our software improves the day-to-day work of radiologists.

FIND US HERE:
Expo Hall X1,
Booth #AI.19

OUR 3 T's FOR THE ECR '22

Test

Exclusive product demos for mdbrain, mdknee and mdprostate

Team

Meet our founders, medical practitioners and AI experts

Timeout

Enjoy popcorn in our chill-out lounge

Test our bestseller and NEW prototypes now at ECR '22

Book in advance and experience the world's first AI tool
for MRI knee diagnosis

OUR INDUSTRY TALKS at ECR

INDUSTRY KEYNOTE

AI-BASED DIAGNOSIS AT SCALE: OVERCOMING BARRIERS FOR MAINSTREAM ADOPTION

The clinical implementation of AI in medical imaging has reached a breakthrough moment. How do we overcome barriers to scale AI-based diagnosis?

Thursday,
July 15

15:00-15:20

AI THEATRE
(EXPO X1)

INDUSTRY PITCH

MDBRAIN -
THE MOST COMPREHENSIVE SOLUTION IN NEURO-MRI

Rethink neuroimaging: mdbrain provides significant, clinically relevant value to the radiologist's practice. Learn how you can drive efficiency and quality.

Friday,
July 16

11:10-11:20

AI THEATRE
(EXPO X1)

EVIDENCE-BASED BENEFITS: OUR ECR PUBLICATIONS

Background and Objectives

A commonly used approach for segmenting medical imaging data is to process the 3D volumes slice-wise in each orientation and to merge the predictions. Our goal was to find an optimal merging strategy for the detection and segmentation of new Multiple Sclerosis (MS) lesions.

Methods

Our training data consisted of 40 pairs (baseline and follow-up scan, 1-3 years apart) of 3D FLAIR MR images from the MSSEG-2 challenge (https://portal.fli-iam.irisa.fr/msseg-2/data/); our internal test dataset contained 25 pairs with new lesions and 21 without. For each pair, a consensus segmentation of new MS lesions was provided. We used a 2D U-Net to segment new lesions on the axial, coronal and sagittal slices. We then evaluated three lesion-wise merging strategies: a lesion was predicted if it was detected in a) at least one orientation (union); b) at least two orientations (majority); or c) all orientations (unanimous voting). The F1 score was computed for all methods. Additionally, we submitted a model with the optimal merging strategy, trained on 65 datasets (40+25 with new lesions), to the MSSEG-2 challenge.
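The three voting schemes can be sketched in a few lines of Python. This is an illustrative sketch only; the function name, the lesion-ID representation, and the example detections are our own assumptions, not mediaire's implementation:

```python
# Hypothetical sketch of the lesion-wise merging strategies.
# Assumption: after spatial matching, each orientation yields a set of
# detected lesion IDs; merging then reduces to a voting threshold.
from collections import Counter
from typing import List, Set

def merge_lesions(per_orientation: List[Set[int]], min_votes: int) -> Set[int]:
    """Keep lesions detected in at least `min_votes` orientations."""
    votes = Counter(lesion for detections in per_orientation for lesion in detections)
    return {lesion for lesion, n in votes.items() if n >= min_votes}

# Example lesion IDs detected on axial, coronal and sagittal slices
views = [{1, 2, 3}, {1, 3, 4}, {1, 4}]

union = merge_lesions(views, 1)      # at least one orientation: {1, 2, 3, 4}
majority = merge_lesions(views, 2)   # at least two orientations: {1, 3, 4}
unanimous = merge_lesions(views, 3)  # all three orientations: {1}
```

Raising the voting threshold trades sensitivity (union) for precision (unanimous), which is why the merging strategy matters for the final F1 score.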

Results

The highest F1 score on our test data was achieved using unanimous voting with an F1 of 0.64 (union: 0.31, majority: 0.55). 

The model with unanimous voting ranked highest among all submissions of the MSSEG-2 challenge (F1=0.541) and outperformed one of the four human experts (F1=0.524).

Discussion

White matter lesions are often indistinguishable from gray matter when assessed in only one orientation. In this work, we showed that keeping only lesions detected in all three orientations made our model comparable to human assessment and achieved the best F1 score among all competing methods.

Join our research presentation @ECR about ‘Artificial intelligence in neuroimaging’ on the 15th of July in RPS 1505 from 14:00 to 15:30.

Purpose

To assess whether the AI-based system mdbrain leads to efficiency gains when used in a real-life setting. 

Methods and Background

We asked 7 radiologists from 5 sites to assess consecutive radiological images as part of their daily routine, 285 in total (128 with and 157 without the system's support). Diagnosis (dementia/MS) and reading time were documented. Additionally, the system's subjective influence on the radiological report was surveyed.

Results

The median assessment time was significantly reduced by 25% when mdbrain was used (p < 0.001), equivalent to 1:56 min. This reduction was significant for both diagnoses and more pronounced for dementia (-57%) than for MS cases (-13%). We further observed a strong correlation between years of experience in radiology and the reduction in reading time (R=0.76, p=0.05).

Radiologists reported that mdbrain had a diagnostic impact in 118 of 128 AI-aided assessments. Among these cases, radiologists reported that mdbrain “reinforced their original assessment” in 76 cases, “enabled a clearer diagnosis” in 25 cases, “reported anomalies that could have been missed” in 7 cases, and “led to confusion or a less clear diagnosis” in 9 cases. Therefore, mdbrain had a positive diagnostic impact in 108 out of 128 cases (84%).

Conclusion

In the majority of cases (84%), the AI-based system had a positive qualitative impact on the diagnostic process. In terms of efficiency, we observed a clear drop in reading times (median -25%), with the effect more pronounced in dementia cases (median -57%).

Join our research presentation @ECR about ‘Artificial intelligence in brain imaging’ on the 16th of July in RPS 1705b from 08:00 to 09:00.

Background and Objectives

The diagnosis of multiple sclerosis (MS) requires the assessment of lesion load from brain MRIs. Traditionally, MS lesions are manually annotated by radiologists, a process that is inefficient and error-prone. The AI software mdbrain leverages deep learning to automatically segment MS lesions. Here, we assess the accuracy of the lesion-segmentation algorithm to be released in mdbrain 4.5, compared to SPM-SLS (http://atc.udg.edu/salem/slsToolbox/) and to the inter-rater performance of 4 experts.

Methods

mdbrain uses a deep neural network to segment lesions from a FLAIR scan. The network was trained on 280 annotated FLAIRs. Performance was tested on a separate dataset of 30 FLAIRs annotated by 4 experts. To assess segmentation accuracy, we computed the lesion-wise F1 score between each algorithm (mdbrain and SPM-SLS) and each rater, averaged across raters. The inter-rater F1 was computed by comparing each rater's annotation against those of the remaining 3. F1 scores were also computed separately for different lesion classes.
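A lesion-wise F1 score of this kind can be sketched as follows. This is a generic illustration under an assumed representation (lesions as IDs after spatial matching, with made-up rater annotations); it is not the actual mdbrain evaluation code:

```python
# Illustrative lesion-wise F1: a true positive is a predicted lesion that
# matches an annotated one. Lesion IDs and rater sets below are invented.

def lesion_f1(pred: set, truth: set) -> float:
    """Harmonic mean of lesion-wise precision and recall."""
    tp = len(pred & truth)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(truth) if truth else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Algorithm score: F1 against each rater, averaged across raters
raters = [{1, 2, 3}, {1, 2, 4}, {1, 2, 3, 4}, {1, 2}]
algorithm = {1, 2, 3}
mean_f1 = sum(lesion_f1(algorithm, r) for r in raters) / len(raters)
```

The inter-rater baseline follows the same pattern, scoring each rater's set against the annotations of the remaining three instead of the algorithm's output.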

Results

mdbrain achieved an F1 score of 0.72, which was larger than SPM-SLS (F1=0.55) but slightly smaller than the inter-rater score (F1=0.75). F1 scores of mdbrain were larger than the inter-rater scores for juxtacortical (mdbrain F1=0.75; inter-rater F1=0.72) and infratentorial lesions (mdbrain F1=0.58; inter-rater F1=0.55), but smaller for periventricular (mdbrain F1=0.74; inter-rater F1=0.77) and deep white matter lesions (mdbrain F1=0.70; inter-rater F1=0.76). mdbrain required an average of 2 minutes to process a single scan on a GPU-equipped machine.

Discussion

The AI-software mdbrain 4.5 achieved a lesion-segmentation accuracy comparable to a pool of human experts and considerably higher than SPM-SLS.

Join our research presentation @ECR about ‘Artificial intelligence in brain imaging’ on the 16th of July in RPS 1705b from 08:00 to 09:00.

Purpose:

To test and compare the repeatability and diagnostic accuracy of different academic and commercial brain volumetry solutions, with and without deep learning (DL).

Methods and Materials:

Brain volumetry measurements were carried out with the open-source software packages FreeSurfer (v6.0.0) and SPM (v12) and compared against the commercially available, DL-based software solution mdbrain (v4.4). The MIRIAD study, including 45 patients with confirmed Alzheimer’s disease and 23 healthy controls followed over 2 years, served as the dataset. Furthermore, back-to-back scans (n=178) carried out on the same day were included. Images were acquired on a 1.5T MR scanner using standard 3D T1w sequences. Brain volumetry was performed for several regions, including whole brain, grey and white matter, all lobes, hippocampus and all ventricles. All systems were compared in terms of repeatability and performance. Performance was quantified using ROC analysis by calculating the corresponding AUCs.
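The AUC behind such a ROC analysis can be sketched with the rank-based (Mann-Whitney) formulation. The function and the patient/control scores below are illustrative assumptions, not the study's data or tooling:

```python
# Generic rank-based AUC: the probability that a randomly chosen positive
# case scores higher than a randomly chosen negative one (ties count half).
# Scores are invented for illustration.

def roc_auc(scores_pos, scores_neg):
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in scores_pos
        for n in scores_neg
    )
    return wins / (len(scores_pos) * len(scores_neg))

# E.g., an atrophy score (higher = more atrophy) for AD patients vs. controls
patients = [0.9, 0.8, 0.7]
controls = [0.4, 0.5, 0.75]
auc = roc_auc(patients, controls)  # 8/9, i.e. ~0.89
```

An AUC of 1.0 would mean the volumetric measure separates patients from controls perfectly; 0.5 means it is no better than chance.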

Results:

In the repeatability tests, the DL-based mdbrain showed significantly better stability than FreeSurfer and SPM for all analyzed regions (e.g., mean deviation from reference for whole brain: 0.06±0.09% vs. 0.43±0.87% vs. 0.12±0.06%). Performance analysis also yielded higher AUC values for mdbrain and SPM (mean whole-brain value 0.96 and 0.95) than for FreeSurfer (0.77).

Conclusion:

Compared to FreeSurfer and SPM, mdbrain showed significantly better repeatability for all evaluated regions, reflected by improved mean values and a lower overall error. Taking into account the shorter evaluation time of <5 min for the DL-based product vs. ~10 h for FreeSurfer and ~30 min for SPM, mdbrain appears to be a valuable tool for enabling the routine application of brain volumetry in clinical practice.

STAY UP-TO-DATE - FOLLOW US
#youandAI