Deep learning models for radiology, pathology, genomics, dermatology, mammography, colonoscopy, endoscopy, electrocardiography, ophthalmology, and pharmacology.


Empowering medical AI researchers to share fully reproducible and portable model implementations.

Crowdsourced from the scientific research community, Modelhub is a repository of self-contained deep learning models pretrained for a wide variety of medical applications. Modelhub highlights recent trends in deep learning applications, enables transfer learning approaches, and promotes reproducible science.

Proven methods.

Scientifically Tested

All models are accompanied by peer-reviewed studies published in scientific journals.

All inclusive.

Diverse Data

Find models for many types of medical data, from CT and ECG to gene expression.

Supports the Open Neural Network Exchange (ONNX).

Tool Agnostic

Interchangeable ONNX models with sample data and pre-/post-processing pipelines.
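To illustrate what "pre-/post-processing pipelines" means in practice, here is a minimal sketch of such a pipeline in Python. The model step is a hypothetical stand-in function; a real Modelhub package would run an ONNX graph at that point (for example with onnxruntime). The function names and shapes below are illustrative, not part of the Modelhub API.

```python
import numpy as np

def preprocess(image):
    """Normalize a grayscale image to zero mean, unit variance."""
    image = image.astype(np.float32)
    return (image - image.mean()) / (image.std() + 1e-8)

def model(x):
    """Hypothetical stand-in for an ONNX inference session.
    Returns a per-pixel score map of the same spatial size."""
    return 1.0 / (1.0 + np.exp(-x))  # sigmoid scores in [0, 1]

def postprocess(scores, threshold=0.5):
    """Binarize per-pixel scores into a segmentation mask."""
    return (scores > threshold).astype(np.uint8)

image = np.random.rand(64, 64) * 255
mask = postprocess(model(preprocess(image)))
print(mask.shape, mask.dtype)
```

Packaging these three steps together with the ONNX weights is what makes a model self-contained: the same container produces the same mask regardless of the host framework.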

Statistical analysis across all models.

Live Meta-analysis

Explore the latest trends, styles, architectures and best practices.

Open access models.

Open-Source

Download fully-documented Docker containers for transfer learning and benchmarking.

Community-sourced.

Contribute today!

Upload your deep learning model today and retain full rights with the license of your choice.

Collection

CardiacFCN

A Fully Convolutional Neural Network for Cardiac Segmentation in Short-Axis MRI
Phi Vu Tran

Automated cardiac segmentation from magnetic resonance imaging datasets is an essential step in the timely diagnosis and management of cardiac pathologies. We propose to tackle the problem of automated left and right ventricle segmentation through the application of a deep fully convolutional neural network architecture. Our model is efficiently trained end-to-end in a single learning stage from whole-image inputs and ground truths to make inference at every pixel. To our knowledge, this is the first application of a fully convolutional neural network architecture for pixel-wise labeling in cardiac magnetic resonance imaging. Numerical experiments demonstrate that our model is robust to outperform previous fully automated methods across multiple evaluation measures on a range of cardiac datasets. Moreover, our model is fast and can leverage commodity compute resources such as the graphics processing unit to enable state-of-the-art cardiac segmentation at massive scales. The models and code are available at https://github.com/vuptran/cardiac-segmentation.

arXiv | Test Drive
Description
The proposed FCN architecture is efficiently trained end-to-end on a graphics processing unit (GPU) in a single learning stage from whole image inputs and ground truths to make inference at every pixel, a task commonly known as pixel-wise labeling or per-pixel classification.
Architecture
Fully Convolutional Neural Network (FCN)
Application
Cardiac Imaging
Type
Supervised learning
License
MIT
* Only Linux is currently supported.

1. Install Docker (CE is sufficient).
2. Download the bash script.
$ curl -O https://raw.githubusercontent.com/modelhub-ai/modelhub/master/start_scripts/start_cardiacfcn.sh
3. Change permissions.
$ chmod +x start_cardiacfcn.sh
4. Run.
For the basic test-drive version:
$ sudo ./start_cardiacfcn.sh
For the Jupyter notebook version:
$ sudo ./start_cardiacfcn.sh -e
To explore the Docker container:
$ sudo ./start_cardiacfcn.sh -b
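The pixel-wise labeling described above can be stated compactly: the network emits a score for every class at every pixel, and the predicted label map is the per-pixel argmax over the class axis. A minimal NumPy illustration, with a random score map standing in for the FCN's output and three classes chosen purely for illustration:

```python
import numpy as np

num_classes, height, width = 3, 4, 5  # e.g. background, left ventricle, right ventricle
scores = np.random.rand(num_classes, height, width)  # stand-in for FCN output

# Per-pixel classification: pick the highest-scoring class at each pixel.
label_map = scores.argmax(axis=0)

print(label_map.shape)  # (4, 5): one label per pixel
```

Training end-to-end "from whole-image inputs and ground truths" means the loss is computed over exactly such a label map against a ground-truth map of the same shape, with no patch extraction or separate classification stage.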

Cascaded-FCN Liver

Automatic Liver and Lesion Segmentation in CT Using Cascaded Fully Convolutional Neural Networks and 3D Conditional Random Fields
Patrick Ferdinand Christ, Mohamed Ezzeldin A. Elshaer, Florian Ettlinger, Sunil Tatavarty, Marc Bickel, Patrick Bilic, Markus Rempfler, Marco Armbruster, Felix Hofmann, Melvin D'Anastasi, Wieland H. Sommer, Seyed-Ahmad Ahmadi, Bjoern H. Menze

Automatic segmentation of the liver and its lesion is an important step towards deriving quantitative biomarkers for accurate clinical diagnosis and computer-aided decision support systems. This paper presents a method to automatically segment liver and lesions in CT abdomen images using cascaded fully convolutional neural networks (CFCNs) and dense 3D conditional random fields (CRFs). We train and cascade two FCNs for a combined segmentation of the liver and its lesions. In the first step, we train a FCN to segment the liver as ROI input for a second FCN. The second FCN solely segments lesions from the predicted liver ROIs of step 1. We refine the segmentations of the CFCN using a dense 3D CRF that accounts for both spatial coherence and appearance. CFCN models were trained in a 2-fold cross-validation on the abdominal CT dataset 3DIRCAD comprising 15 hepatic tumor volumes. Our results show that CFCN-based semantic liver and lesion segmentation achieves Dice scores over 94% for liver with computation times below 100s per volume. We experimentally demonstrate the robustness of the proposed method as a decision support system with a high accuracy and speed for usage in daily clinical routine.

arXiv | Test Drive
Description
This model showcases the first step of the automatic liver and lesion segmentation pipeline: it segments the liver on a single CT slice. The trained model for the second step (lesion segmentation) is included but not executed in the demo.
Architecture
Fully Convolutional Network (FCN)
Application
CT Abdomen
Type
Supervised learning
License
Unrestricted use for research and educational purposes.
* Only Linux is currently supported.

1. Install Docker (CE is sufficient).
2. Download the bash script.
$ curl -O https://raw.githubusercontent.com/modelhub-ai/modelhub/master/start_scripts/start_CascadedFCN_Liver.sh
3. Change permissions.
$ chmod +x start_CascadedFCN_Liver.sh
4. Run.
For the basic test-drive version:
$ sudo ./start_CascadedFCN_Liver.sh
For the Jupyter notebook version:
$ sudo ./start_CascadedFCN_Liver.sh -e
To explore the Docker container:
$ sudo ./start_CascadedFCN_Liver.sh -b
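The two-step cascade from the abstract above, and the Dice score used to evaluate it, can be sketched in a few lines. The two FCNs are replaced here by hypothetical thresholding stand-ins (the real trained models live inside the container); the point is the data flow, where the second network only ever sees the liver ROI predicted by the first.

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Hypothetical stand-ins for the two trained FCNs.
def fcn_liver(ct_slice):
    return ct_slice > 0.5            # step 1: segment the liver

def fcn_lesion(roi):
    return roi > 0.8                 # step 2: segment lesions inside the ROI

ct_slice = np.random.rand(8, 8)
liver = fcn_liver(ct_slice)
lesions = fcn_lesion(ct_slice * liver)  # second FCN sees only the liver ROI

print(dice(liver, liver))  # identical masks -> 1.0
```

Masking the input with the first stage's prediction guarantees the lesion mask is a subset of the liver mask, which is the spatial prior the cascade is designed to exploit.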

SqueezeNet

SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size
Forrest N. Iandola, Song Han, Matthew W. Moskewicz, Khalid Ashraf, William J. Dally, Kurt Keutzer

Recent research on deep neural networks has focused primarily on improving accuracy. For a given accuracy level, it is typically possible to identify multiple DNN architectures that achieve that accuracy level. With equivalent accuracy, smaller DNN architectures offer at least three advantages: (1) Smaller DNNs require less communication across servers during distributed training. (2) Smaller DNNs require less bandwidth to export a new model from the cloud to an autonomous car. (3) Smaller DNNs are more feasible to deploy on FPGAs and other hardware with limited memory. To provide all of these advantages, we propose a small DNN architecture called SqueezeNet. SqueezeNet achieves AlexNet-level accuracy on ImageNet with 50x fewer parameters. Additionally, with model compression techniques we are able to compress SqueezeNet to less than 0.5MB (510x smaller than AlexNet).

arXiv | Test Drive
Description
SqueezeNet begins with a standalone convolution layer (conv1), followed by 8 Fire modules (fire2-9), ending with a final conv layer (conv10). We gradually increase the number of filters per Fire module from the beginning to the end of the network. SqueezeNet performs max-pooling with a stride of 2 after layers conv1, fire4, fire8, and conv10.
Architecture
Convolutional Neural Network (CNN)
Application
ImageNet
Type
Supervised learning
License
BSD 2-Clause 'Simplified'
* Only Linux is currently supported.

1. Install Docker (CE is sufficient).
2. Download the bash script.
$ curl -O https://raw.githubusercontent.com/modelhub-ai/modelhub/master/start_scripts/start_squeezenet.sh
3. Change permissions.
$ chmod +x start_squeezenet.sh
4. Run.
For the basic test-drive version:
$ sudo ./start_squeezenet.sh
For the Jupyter notebook version:
$ sudo ./start_squeezenet.sh -e
To explore the Docker container:
$ sudo ./start_squeezenet.sh -b
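SqueezeNet's parameter savings come largely from the Fire module design described above: a cheap 1x1 "squeeze" layer shrinks the channel count before the parallel 1x1 and 3x3 "expand" layers. A back-of-the-envelope comparison against a plain 3x3 convolution, using illustrative channel sizes rather than the exact ones from the paper (biases ignored):

```python
def conv_params(c_in, c_out, k):
    """Weights in a k x k convolution layer (biases ignored)."""
    return c_in * c_out * k * k

def fire_params(c_in, squeeze, expand1, expand3):
    """Fire module: 1x1 squeeze, then parallel 1x1 and 3x3 expand layers."""
    return (conv_params(c_in, squeeze, 1)
            + conv_params(squeeze, expand1, 1)
            + conv_params(squeeze, expand3, 3))

c_in, c_out = 128, 128                       # illustrative channel counts
fire = fire_params(c_in, squeeze=16, expand1=64, expand3=64)
plain = conv_params(c_in, c_out, 3)

print(fire, plain, plain / fire)
```

With these numbers the Fire module uses 12,288 weights versus 147,456 for the plain 3x3 layer, a 12x reduction at the same input/output width; stacking many such modules is how the 50x whole-network saving accumulates.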

How it Works

modelhub infrastructure

Sponsors

Modelhub is proudly sponsored by:
HMS
BWH
DFCI
NIH
NCI

Contact

Send us an email or fill in a quick contribution form.