Automated Medical Image Modality Recognition by Fusion of Visual and Text Information
- 1 January 2014
- book chapter
- Published by Springer Nature in Lecture Notes in Computer Science
- Vol. 17, 487-495
- https://doi.org/10.1007/978-3-319-10470-6_61
Abstract
In this work, we present a framework for medical image modality recognition based on a fusion of visual and text classification methods. Experiments are performed on the public ImageCLEF 2013 medical image modality dataset, which provides figure images and the associated full-text PubMed articles as components of the benchmark. The visual subsystem builds ensemble models over a broad set of visual features using a multi-stage learning approach that optimizes feature selection per class while still using all available data for training. The text subsystem uses a pseudo-probabilistic scoring method based on the detection of suggestive patterns, analyzing both the figure captions and the mentions of the figures in the main text. The proposed system yields state-of-the-art performance on all three tasks: visual-only (82.2%), text-only (69.6%), and fusion (83.5%).
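The fusion step described above can be illustrated with a minimal late-fusion sketch: per-class scores from the visual and text subsystems are combined by a weighted sum and the highest-scoring class is chosen. The modality labels, the score values, and the weight `alpha` below are illustrative assumptions, not values from the paper.

```python
# Hypothetical late-fusion sketch: combine per-class scores from a visual
# and a text classifier via a weighted sum, then take the argmax class.
# Class names and the weight alpha are illustrative, not from the paper.

def fuse_scores(visual_scores, text_scores, alpha=0.7):
    """Weighted late fusion of two per-class score dictionaries."""
    classes = set(visual_scores) | set(text_scores)
    fused = {
        c: alpha * visual_scores.get(c, 0.0)
           + (1 - alpha) * text_scores.get(c, 0.0)
        for c in classes
    }
    return max(fused, key=fused.get), fused

visual = {"XR": 0.6, "CT": 0.3, "MR": 0.1}  # visual subsystem scores
text = {"XR": 0.2, "CT": 0.7, "MR": 0.1}    # text subsystem scores
label, fused = fuse_scores(visual, text)    # label -> "XR"
```

In practice the weight would be tuned on held-out data, and the per-class scores would come from the ensemble and pseudo-probabilistic scorers described in the abstract.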