ORIGINAL ARTICLE
Year : 2018  |  Volume : 4  |  Issue : 3  |  Page : 65-69

A multisource adaptive magnetic resonance image fusion technique for versatile contrast magnetic resonance imaging


1 Medical Physics Graduate Program, Duke University; Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
2 Medical Physics Graduate Program, Duke University; Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA; Medical Physics Graduate Program, Duke Kunshan University, Kunshan, Jiangsu, China
3 Medical Physics Graduate Program, Duke University; Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA; Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong, China

Date of Submission: 24-May-2018
Date of Acceptance: 11-Jun-2018
Date of Web Publication: 29-Jun-2018

Correspondence Address:
Dr. Jing Cai
Department of Radiation Oncology, Duke University Medical Center, Durham, NC 27710


Source of Support: None, Conflict of Interest: None


DOI: 10.4103/ctm.ctm_21_18

Abstract


Aim: Magnetic resonance imaging (MRI) has been widely used in radiation therapy (RT) treatment planning. The current practice for capturing clinical indications, such as tumors, from MRI is to review multiple types of MR images separately, which can be inefficient, and the achievable tumor contrast is limited by the existing images. This study presents a novel approach to effectively integrate the clinically meaningful information of multiple MR images to produce a set of fused MR images with versatile image contrasts. A multisource adaptive fusion technique was developed in this approach using a limited number of standard MR images as input.
Methods: The multisource adaptive MRI fusion technique is designed with five key components: multiple input MR images, image preprocessing, a fusion algorithm, adaptation methods, and output fused MR images. A linear-weighting fusion algorithm is used as a proof of concept. Fusion options (weighting parameters and image features) are precalculated and saved in a database for fast fusion operation. Input-driven and output-driven approaches were developed for MRI contrast adaptation. The technique was tested on the 4D extended cardiac-torso (XCAT) digital human phantom for versatile contrast MRI generation.
Results: A graphical user interface (GUI) was developed in the Matlab environment. Input-driven and output-driven adaptation methods were implemented for interactive user operation to achieve different clinical goals. Using four input MR images (T1W, T2W, T2/T1W, and diffusion weighted), the fusion technique generated hundreds of fused MR images with versatile image contrasts.
Conclusion: A novel multisource adaptive image fusion technique capable of generating versatile contrast MRI from a limited number of standard MR images was demonstrated. This method has the potential to enhance the effectiveness and efficiency of MR applications in RT.

Keywords: Fusion, liver cancer, magnetic resonance imaging, target volume, tumor contrast


How to cite this article:
Zhang L, Yin FF, Moore B, Han S, Cai J. A multisource adaptive magnetic resonance image fusion technique for versatile contrast magnetic resonance imaging. Cancer Transl Med 2018;4:65-9

How to cite this URL:
Zhang L, Yin FF, Moore B, Han S, Cai J. A multisource adaptive magnetic resonance image fusion technique for versatile contrast magnetic resonance imaging. Cancer Transl Med [serial online] 2018 [cited 2018 Sep 19];4:65-9. Available from: http://www.cancertm.com/text.asp?2018/4/3/65/235602




Introduction


Accurate target volume definition is essential for precision radiation therapy (RT). It is especially important for hypofractionated treatments such as stereotactic radiosurgery and stereotactic body RT. Multimodality medical images (computed tomography [CT], magnetic resonance imaging [MRI], positron emission tomography [PET], etc.) are often utilized to assist tumor volume delineation. Compared with CT images, which are the clinical standard for RT simulation and treatment planning, MR images have superior soft-tissue contrast and are becoming widely used in conjunction with CT images to provide the necessary soft-tissue information, especially for cancers in the brain, abdomen, and pelvis.

MR images have different weighting contrasts (T1W, T2W, T2/T1W, diffusion weighted, etc.), and each presents a unique set of image characteristics that can be desirable for a particular clinical application. For example, T2W MRI and diffusion-weighted imaging (DWI) provide excellent tumor contrast for liver and pancreatic cancers,[1],[2],[3] and T2/T1W MR images feature high blood vessel signal.[4] In current clinical practice, although MR images of different contrasts are often acquired, they are used in a rather isolated manner: physicians can only review a single set of MR images at a time and must switch between different sets to comprehend the information from all of them. This is an inefficient and ineffective approach to information collection and poses a number of limitations and potential problems. First, it can be affected by large interpatient variations in tumor contrast, as the performance of an MR sequence may vary significantly between patients (unpublished data). Second, it can be affected by large interobserver variations in tumor volume delineation due to suboptimal tumor contrast in some patients. Third, the image information (such as organ contrast) is limited by the individual MR images, and potential new information is not utilized.

It is impossible for a single MR image set to capture all the features needed to ensure clinical accuracy and robustness of the analysis. A desirable approach is to integrate the effective information of multiple MR images to make a more reliable and accurate assessment. Research on combining information from multiple MR images has been conducted, mainly for the purpose of generating synthetic CT images from MRI for treatment planning.[5],[6] However, these methods are highly task specific and are not designed for information fusion across broad RT applications.

In this article, we present a novel multisource adaptive MR fusion technique capable of producing a large number of fused MR images with versatile image contrasts for RT applications using a limited number of standard MR images as input. This technique allows for application-specific adaptation and optimization of image contrast and can potentially enhance MR use in a number of RT applications such as contouring, segmentation, MRI-based treatment planning, and response assessment. This article demonstrates the design and feasibility of the new fusion technique with two adaptation methods through studies on a digital human phantom.


Methods


Adaptive multisource MR fusion method

[Figure 1] shows the design and workflow of the adaptive multisource MRI fusion method. There are five key components/steps: input multisource MR images, image preprocessing, the fusion algorithm, adaptation methods, and output fused MR images. Input images are volumetric MR data with different weighting contrasts and should be acquired at the same respiratory phase, such as end of inhalation (EOI) or end of exhalation (EOE), or with respiratory gating. In our study, we used T1W, T2W, T2/T1W, and DWI MR images acquired during breath-hold at the EOI phase.
Figure 1: Design and workflow of the adaptive multisource magnetic resonance imaging fusion method



All input MR images are first preprocessed for inhomogeneity correction, image registration (e.g., deformable image registration [DIR]), image intensity normalization, and denoising. In this study, we performed inhomogeneity correction using the MR fade correction function in the Velocity software (Varian Medical Systems, Palo Alto, CA, USA). Multiple MR images of the same subject were registered to the T2W image using the DIR algorithm implemented in the MIM Maestro software (MIM Software Inc., Cleveland, OH, USA). Image intensity was clipped at its 99th percentile and normalized to between 0 and 1. Optional denoising was applied using an adaptive Wiener low-pass filter in Matlab with a 3 × 3 window.
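As a rough illustration, the intensity clipping/normalization and optional denoising steps could be sketched in Matlab as follows (a minimal sketch with our own function and variable names; prctile and wiener2 are standard Matlab toolbox functions, and the inhomogeneity correction and DIR steps are assumed to have been performed beforehand):

    function img = preprocessMR(img, doDenoise)
    % Sketch of the intensity normalization and optional denoising steps.
    % Inhomogeneity correction and deformable registration are assumed to
    % be done already (Velocity and MIM Maestro in the original study).
    img = double(img);
    p99 = prctile(img(:), 99);          % 99th percentile intensity
    img = min(img, p99) / p99;          % clip at p99 and normalize to [0, 1]
    if nargin > 1 && doDenoise
        for s = 1:size(img, 3)          % slice-wise adaptive Wiener low-pass
            img(:, :, s) = wiener2(img(:, :, s), [3 3]);   % 3 x 3 window
        end
    end
    end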

Fusion algorithm

As a proof of concept, a linear weighting fusion algorithm was applied in this preliminary study. [Figure 2] shows the implementation of the linear weighting fusion algorithm and its database structure. The input MR images are denoted as $X_k$, $k = 1, 2, \ldots, K$, where $K$ is the number of MR source images, and the output fused MR images are denoted as $Y_i = f(w_{ki}, X_k)$, $i = 1, 2, \ldots, I$, where $I$ is the number of fusion options and $f$ is the fusion algorithm. For the linear weighting algorithm,
Figure 2: Data structure of multisource magnetic resonance imaging fusion method





$$Y_i = \sum_{k=1}^{K} w_{ki} X_k, \quad i = 1, 2, \ldots, I$$

where $w_{ki}$ is the weighting coefficient for the $k$th source MR image in the $i$th fusion option.
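In code, the linear weighting fusion is a voxel-wise weighted sum of the coregistered source images. A minimal Matlab sketch (the array layout and function name are our own assumptions):

    function Y = linearFuse(X, w)
    % X : H x W x D x K array of K preprocessed, coregistered source MR images
    % w : 1 x K vector of weighting coefficients for one fusion option
    Y = zeros(size(X, 1), size(X, 2), size(X, 3));
    for k = 1:size(X, 4)
        Y = Y + w(k) * X(:, :, :, k);   % Y_i = sum over k of w_ki * X_k
    end
    end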

The database structure shown in [Figure 2] is designed to achieve high clinical efficiency. To make the operation fast, we predetermine all possible fusion options for the given input MR images, creating a database of fused images with the corresponding weighting parameters (w1, w2, …) as well as the image feature metrics (M1, M2, …) measured from the fused MR images. Examples of image feature metrics include the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) of organs of interest, and image similarity measures. Once the database is created, the fusion operation becomes a simple process of selecting the desired fused image(s) by finding the right combination of weighting parameters or image features in the database, which can be done interactively in real time.
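The database could, for instance, be organized as a Matlab struct array pairing each weighting option with its fused image and feature metrics. A hedged sketch (the paper does not spell out the metric definitions; simple mean-difference-over-noise CNR and mean-over-noise SNR are assumed here, with organ masks taken as given):

    % weights : I x K matrix of fusion options (one row per option)
    % tumorMask, liverMask, bgMask : organ/background masks (assumed available)
    db = struct('w', cell(1, size(weights, 1)), 'Y', [], 'M', []);
    for i = 1:size(weights, 1)
        Y = linearFuse(X, weights(i, :));
        noiseSD  = std(Y(bgMask));         % noise from a background region
        tumorCNR = abs(mean(Y(tumorMask)) - mean(Y(liverMask))) / noiseSD;
        liverSNR = mean(Y(liverMask)) / noiseSD;
        db(i).w  = weights(i, :);          % weighting parameters (w1, w2, ...)
        db(i).Y  = Y;                      % presaved fused image
        db(i).M  = [tumorCNR, liverSNR];   % feature metrics (M1, M2, ...)
    end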

Adaptation methods

The fusion operation needs to be robust and adaptive for clinical applications. Several adaptation strategies can be implemented to facilitate the process, such as input-driven, output-driven, and knowledge-driven adaptation. The input-driven method allows users to customize the fusion by adjusting the relative contributions of the input images. The output-driven method achieves the desired fusion by regulating the image features of the fused images. The knowledge-driven method automatically determines an optimal set of fused images for specific radiotherapy applications (contouring, planning, motion modeling, etc.) through machine learning of expert experience. In this study, we implemented the input-driven and output-driven methods in Matlab (MathWorks, Natick, MA, USA) with a user-friendly interface, as shown in [Figure 3].
Figure 3: Matlab graphical user interface of the multisource magnetic resonance imaging fusion method



For the input-driven method, the linear weighting coefficient for each source MR image can be interactively adjusted by the user to generate the fused image $Y_i$ in real time. In our study, $w_{ki}$ was set to real numbers ranging from −1 to 1 with an interval of 0.2, resulting in a total of $11^K$ fusion options. The weighting factors in this case are determined through iterative manual adjustment until the desired fused images are obtained. The weights of each image are adjusted manually in the Matlab GUI shown in [Figure 3], and the fused image updates in real time as the weights of the input images change. The fast update is enabled by searching and displaying the presaved fused image database described in the fusion algorithm section.
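For illustration, the weight grid and the real-time lookup might look as follows in Matlab (a sketch under the stated settings; for $K = 4$ the grid holds $11^4$ = 14,641 options, and wUser stands for a hypothetical GUI slider state):

    vals = -1:0.2:1;                           % 11 weight levels per source image
    [w1, w2, w3, w4] = ndgrid(vals, vals, vals, vals);
    weights = [w1(:), w2(:), w3(:), w4(:)];    % 11^4 = 14,641 fusion options

    % Real-time update: serve the presaved option nearest the slider values.
    wUser = [0.4, -0.2, 1.0, 0.6];             % hypothetical slider state
    [~, idx] = min(sum((weights - wUser).^2, 2));
    imshow(db(idx).Y(:, :, sliceNo), []);      % display the stored fused slice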

For the output-driven method, we included six image feature metrics to regulate the fusion process as a preliminary test in this study: the CNR of tumor, blood vessel, and bone, and the SNR of liver, lung, and muscle. The six image features were calculated and saved for all fusion options, and the program allows users to adjust the feature values to achieve the desired fusion results. For example, for tumor volume delineation, the tumor CNR of the fused image is set to high while the other metrics are set to normal. It should be noted that changing one metric may result in changes in other metrics; therefore, not all combinations of metrics in the output-driven method are possible. The weighting factors in this case are determined by searching the fusion database, as described in the fusion algorithm section, for the set of fusion parameters that produces the desired image features.
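In code, the output-driven selection reduces to a nearest-match search over the presaved feature vectors. A hedged sketch, extending the two example metrics above to the six used in the study (the column order and the z-score matching are our assumptions):

    % M : I x 6 matrix of presaved metrics per fusion option, columns assumed
    % as [tumorCNR, vesselCNR, boneCNR, liverSNR, lungSNR, muscleSNR].
    M  = vertcat(db.M);
    Mz = (M - mean(M)) ./ std(M);              % z-score each metric column
    target = [2 0 0 0 0 0];                    % "high" tumor CNR, others nominal
    [~, best] = min(sum((Mz - target).^2, 2));
    fusedForContouring = db(best).Y;           % fused image favoring tumor CNR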

Evaluation of the fusion method

To evaluate the adaptive multisource MR fusion technique, initial tests on the 4D extended cardiac-torso (XCAT) digital human phantom were performed.[7] T1W, T2W, T2/T1W, and DWI MR images of the abdominal area with a hypothetical spherical tumor (diameter: 30 mm) in the liver were simulated at the EOI phase using the XCAT phantom. The digital phantom images are intrinsically coregistered, which avoids the potential misalignment issues of patient data. Image resolution was 1.67 mm isotropic for all images. To be realistic, the organ and tumor intensity values and image noise of these simulated MR images were set to the average values of the same parameters measured from real MR images of eight liver cancer patients.
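As a hedged illustration of the tumor insertion (the XCAT software generates the anatomy itself; the tumor center and the intensity/noise values below are placeholders for the patient-derived averages):

    res = 1.67;                                % mm, isotropic voxel size
    r   = (30 / 2) / res;                      % 30 mm diameter -> ~9-voxel radius
    [x, y, z] = ndgrid(1:size(vol, 1), 1:size(vol, 2), 1:size(vol, 3));
    tumorMask = (x - cx).^2 + (y - cy).^2 + (z - cz).^2 <= r^2;  % sphere at (cx, cy, cz)
    vol(tumorMask) = tumorMean;                % mean tumor intensity from patient MRIs
    vol = vol + noiseSD * randn(size(vol));    % noise level matched to patient MRIs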

Input-driven and output-driven methods were applied to generate versatile contrast-fused MR images. In the input-driven method, the weighting of each input image was adjusted and the fused image updated in real time until an image satisfying the user's application was generated. In the output-driven method, high tumor CNR, high blood vessel CNR, and high bone CNR were set to achieve enhanced tumor, blood vessel, and bony structures, respectively.


Results


Input-driven adaptation

[Figure 4] shows an example of the multisource adaptive MR fusion using the input-driven method. The top panel shows the coronal views of the input images for the fusion method: simulated T1W, T2W, T2/T1W, and DWI MR images. The bottom panel shows examples of 16 fused MR images (out of hundreds of fused options) with different image contrasts that are achieved through the input-driven method, demonstrating the capability of the multisource MR fusion method to generate a large number of fused MR images with versatile image contrast and image features.
Figure 4: Illustration of the adaptive multisource MR fusion method applied to the 4D extended cardiac torso digital human phantom. With four input MR images (T1W, T2W, T2/T1W, and diffusion-weighted image) and using the input-driven approach, the fusion method generated hundreds of fused images with versatile image contrasts (only 16 are shown here)



Output-driven adaptation

[Figure 5] shows an example of the multisource adaptive MR fusion using the output-driven method. Different fusion options are plotted as colored lines connecting the values of the six image feature metrics [Figure 5]a. This line map provides a good overview of all possible fusion options; generating a fused image with a unique set of combined image features is achieved by selecting the proper line. [Figure 5]b shows examples of three fused images corresponding to three different clinical applications: tumor enhancement, blood vessel enhancement, and vertebral body enhancement.
Figure 5: Output-driven approach applied to 4D extended cardiac-torso digital human phantom. (a) Plot of image feature lines. Each line represents a possible fusion option that has a unique set of signal-to-noise ratio or contrast-to-noise ratio values of organs. (b) Three representative fused images created using the output-driven approach. They are created for enhanced contrast-to-noise ratio of tumor, vessel, and bone, respectively




Discussion


In this study, we have developed a multisource adaptive MR fusion method that is capable of generating a large number of fused MR images with versatile image contrasts using only a limited number of input MR images. It essentially creates a new dimension in the patient image dataset: the image contrast. To the best of our knowledge, this concept and method are the first of their kind and have not been reported before. We have demonstrated in this article that the resulting images of the fusion method can be adaptively tuned to encompass specific image features based on the clinical application. Once fully developed, this method may allow for continuous image contrast adjustment across a broad range and for automatic generation of optimal fused images for a variety of clinical applications such as autosegmentation, MR-to-CT conversion (synthetic CT), and multiparametric MR analysis. It should be noted that in this study we only demonstrated that the proposed fusion method is capable of generating various image contrasts. How to choose the proper fused image for a particular clinical application is beyond the focus of this study and should be determined based on the requirements of that application. For example, if the fused image were to be used for MRI-based radiotherapy treatment planning, high tumor contrast and high geometric integrity would be desired, while blood vessel signals would be less important. In our future study, we will develop machine learning-based methods, together with the multisource adaptive fusion method, to automatically generate the desired fused images for different clinical applications.

The multisource adaptive MR fusion method differs from the various image processing methods used to improve image quality, contrast, etc. Fusion is a process of incorporating the inherent information of different image sets into one, whereas image processing enhances certain desired image features of a single image set (and thus includes no new information). In addition, the enhancement of image quality/contrast through image processing is largely dependent on, and often limited by, the original image set. In the multisource adaptive MR fusion method, by contrast, the degree of freedom for adjusting image quality/contrast is much greater owing to the versatile information from the different source images.

This study demonstrated the feasibility of generating a large number of fused images with versatile image contrasts using a limited number of source images. There are many potential clinical applications of the proposed method that are yet to be studied, including but not limited to MRI-based radiotherapy treatment planning, autosegmentation, and treatment response assessment. The added dimension of image contrast, as compared to 3D-MRI, is expected to enhance the efficacy of MR images in these clinical applications.

The multisource adaptive MR fusion method is designed as an open platform, wherein each of its components can be independently adjusted or replaced for performance enhancement. For instance, the linear weighting-based fusion algorithm used in this study as a proof-of-concept demonstration can be replaced or complemented by other fusion algorithms, such as pyramid decomposition-based or wavelet transform-based algorithms,[8],[9],[10],[11],[12],[13] which might potentially improve the performance of the multisource fusion method. In addition, the fusion adaptation methods shown in this study (input-driven and output-driven) are based on manual operations. While this allows for the best possible guidance of the fusion process by clinicians, the process can still be time-consuming and sometimes challenging. To enhance clinical efficiency, a knowledge-based adaptation approach is highly desirable, wherein experts' prior knowledge of optimal fusion is learned through machine learning and used to determine the optimal fusion parameters for new cases. This is a topic of interest planned for our next study.

It should be noted that although we used four MR image sets as the input source images for fusion in this study, there is no limitation on the number and type of input images. More input images can be expected to yield a greater range of image contrasts and more fusion options, and subsequently more and better clinical applications. Other candidate input images include dynamic contrast-enhanced and proton-density-weighted MR images. In addition, the fusion method can potentially be used for multimodality image applications. For instance, PET images and CT images with or without contrast can be fused with MR images, taking advantage of the different and rich functional or anatomical information from each image modality. Since there has been extensive work on MRI/CT, PET/CT, and MRI/PET fusion, multimodality multisource adaptive fusion could be achieved relatively quickly in the near future.

It is critically important to align the input images as well as possible to achieve optimal fusion results; for that reason, close attention must be paid to the imaging process and the image registration, as they may adversely affect the anatomical accuracy. Efforts should be made to minimize patient motion between different image acquisitions and to maintain the same breathing status if at all possible. In addition, MRI distortion should be corrected for all input images. MR images with large distortions should be avoided, and the MR images with the least distortion, such as fast spin-echo T2W MRI, should be used as the registration target to correct for residual misalignment between the input images. The accuracy of the DIR algorithms needs to be evaluated and validated for the specific applications and types of images, as their performance can be highly dependent on image quality. This is a related but separate research topic that is outside the scope of the current work. We are currently developing an improved, physiologically based DIR algorithm in a parallel project that we plan to incorporate into the fusion method in the next step of our research.[14]

In conclusion, we demonstrated a novel multisource adaptive image fusion technique capable of generating versatile contrast MRI from a limited number of standard MR images. The applications include tumor contrast enhancement, blood vessel enhancement, and bony structure enhancement. This method holds great promise to enhance the effectiveness and efficiency of MR applications in radiotherapy.

Financial support and sponsorship

The work was financially supported by the National Institutes of Health (grant R21CA195317) and Varian Medical Systems.

Conflicts of interest

There are no conflicts of interest.



 
References

1. Albiin N. MRI of focal liver lesions. Curr Med Imaging Rev 2012; 8 (2): 107–16.
2. Heerkens HD, Hall WA, Li XA, Knechtges P, Dalah E, Paulson ES, van den Berg CA, Meijer GJ, Koay EJ, Crane CH, Aitken K, van Vulpen M, Erickson BA. Recommendations for MRI-based contouring of gross tumor volume and organs at risk for radiation therapy of pancreatic cancer. Pract Radiat Oncol 2017; 7 (2): 126–36.
3. Sankowski AJ, Ćwikla JB, Nowicki ML, Chaberek S, Pech M, Lewczuk A, Walecki J. The clinical value of MRI using single-shot echoplanar DWI to identify liver involvement in patients with advanced gastroenteropancreatic-neuroendocrine tumors (GEP-NETs), compared to FSE T2 and FFE T1 weighted image after i.v. Gd-EOB-DTPA contrast enhancement. Med Sci Monit 2012; 18 (5): MT33–40.
4. Malayeri AA, Johnson WC, Macedo R, Bathon J, Lima JA, Bluemke DA. Cardiac cine MRI: quantification of the relationship between fast gradient echo and steady-state free precession for determination of myocardial mass and volumes. J Magn Reson Imaging 2008; 28 (1): 60–6.
5. Jacobs MA, Zhang ZG, Knight RA, Soltanian-Zadeh H, Goussev AV, Peck DJ, Chopp M. A model for multiparametric MRI tissue characterization in experimental cerebral ischemia with histological validation in rat – Part 1. Stroke 2001; 32 (4): 943–9.
6. Ren S, Hara W, Wang L, Buyyounouski MK, Le QT, Xing L, Li R. Robust estimation of electron density from anatomic magnetic resonance imaging of the brain using a unifying multi-atlas approach. Int J Radiat Oncol Biol Phys 2017; 97 (4): 849–57.
7. Segars WP, Sturgeon G, Mendonca S, Grimes J, Tsui BM. 4D XCAT phantom for multimodality imaging research. Med Phys 2010; 37 (9): 4902–15.
8. Zhang Z, Blum RS. A categorization of multiscale-decomposition-based image fusion schemes with a performance study for a digital camera application. Proc IEEE 1999; 87 (8): 1315–26.
9. Du J, Li W, Xiao B, Nawaz Q. Union Laplacian pyramid with multiple features for medical image fusion. Neurocomputing 2016; 194: 326–39.
10. Sahu A, Bhateja V, Krishn A. Medical image fusion with Laplacian pyramids. 2014 International Conference on Medical Imaging, m-Health and Emerging Communication Systems (MedCom). 2015. p. 448–53.
11. Amolins K, Zhang Y, Dare P. Wavelet based image fusion techniques – an introduction, review and comparison. ISPRS J Photogramm Remote Sens 2007; 62 (4): 249–63.
12. Du LY, Yin J, Zhan X. A new adaptive image fusion technique of CT and MRI images based on dual-tree complex wavelet transform. Appl Mech Mater 2013; 411–414: 1189–92.
13. Sulochana S, Vidhya R, Mohanraj K, Vijayasekaran D. Effect of wavelet based image fusion techniques with principal component analysis (PCA) and singular value decomposition (SVD) in supervised classification. Indian J Geo Mar Sci 2017; 46 (2): 338–48.
14. Liang X, Chang Z, Yin F, Cai J. Development of a deformable image registration (DIR) error correction method employing Kolmogorov-Zurbenko (KZ) filter [abstract]. Med Phys 2016; 43 (6): 3737.

