Neural network for sonogram gap filling

Abstract
In duplex imaging both an anatomical B-mode image and a sonogram are acquired, and the data-acquisition time is divided between the two. This causes problems when a rapid B-mode display is needed, since too little time remains for acquiring the velocity data. Gaps then appear in the sonogram and in the audio signal, rendering the audio signal useless and thus making diagnosis difficult. The current goal for ultrasound scanners is to maintain a high B-mode refresh rate while attaining a high maximum velocity in the sonogram display. This precludes intermixing the B-mode and sonogram pulses, so time must be shared between the two, and gaps appear frequently in the sonogram when, e.g., half the time is spent on B-mode acquisition. The missing segments can be filled in from the available data by interpolation. One possibility is to use a neural network to predict the mean frequency of the velocity signal and its variance. The network predicts the evolution of the mean and variance across each gap, and the sonogram and audio signal are reconstructed from these. The technique is applied to in-vivo data from the carotid artery. The network is trained on part of the data and pruned by the optimal brain damage procedure to reduce the number of parameters and thereby the risk of overfitting. The neural predictor is compared with a linear filter applied to the mean and variance time series and is shown to give better results, i.e., lower prediction variance. The ability of the neural predictor to reconstruct both the sonogram and the audio signal when only 50% of the time is used for velocity data acquisition is demonstrated on the in-vivo data.
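
As a rough illustration of the prediction step described above, the sketch below fits a small one-hidden-layer network that maps the last few samples of a mean-frequency (or variance) time series to the next sample and then iterates the predictor to synthesise the samples of a gap. All names, layer sizes, training details, and the synthetic input series are assumptions made for illustration; the paper's actual architecture, training procedure, and the optimal brain damage pruning step are not reproduced here.

```python
import numpy as np

# Minimal sketch (assumed configuration): a one-step predictor for the
# mean-frequency or variance series, iterated to fill a sonogram gap.

P = 8          # number of past samples fed to the network (assumed)
HIDDEN = 4     # hidden units (assumed; the paper prunes its network with OBD)
LR = 0.01
EPOCHS = 2000
rng = np.random.default_rng(0)

def make_patterns(series, p=P):
    """Turn a 1-D time series into (input, target) pairs for one-step prediction."""
    X = np.array([series[i:i + p] for i in range(len(series) - p)])
    y = series[p:]
    return X, y

def train_mlp(X, y):
    """Fit a tanh network with one hidden layer by batch gradient descent."""
    W1 = rng.normal(0, 0.1, (X.shape[1], HIDDEN))
    b1 = np.zeros(HIDDEN)
    W2 = rng.normal(0, 0.1, HIDDEN)
    b2 = 0.0
    for _ in range(EPOCHS):
        h = np.tanh(X @ W1 + b1)      # hidden activations
        pred = h @ W2 + b2            # linear output unit
        err = pred - y                # one-step prediction error
        # Backpropagate the mean-squared error.
        gW2 = h.T @ err / len(y)
        gb2 = err.mean()
        dh = np.outer(err, W2) * (1 - h ** 2)
        gW1 = X.T @ dh / len(y)
        gb1 = dh.mean(axis=0)
        W2 -= LR * gW2; b2 -= LR * gb2
        W1 -= LR * gW1; b1 -= LR * gb1
    return W1, b1, W2, b2

def fill_gap(history, gap_len, params):
    """Iterate the one-step predictor to synthesise gap_len missing samples."""
    W1, b1, W2, b2 = params
    buf = list(history[-P:])
    out = []
    for _ in range(gap_len):
        x = np.array(buf[-P:])
        nxt = float(np.tanh(x @ W1 + b1) @ W2 + b2)
        out.append(nxt)
        buf.append(nxt)
    return np.array(out)

# Toy usage: a synthetic periodic curve standing in for the mean-frequency
# series of the carotid velocity signal (not real data).
t = np.linspace(0, 4 * np.pi, 400)
mean_freq = 0.5 + 0.4 * np.sin(t)
X, y = make_patterns(mean_freq[:300])
params = train_mlp(X, y)
predicted = fill_gap(mean_freq[:300], gap_len=50, params=params)
print("first predicted samples:", predicted[:5])
```

In the same spirit, the linear-filter baseline mentioned in the abstract would replace the network with an autoregressive predictor on the same patterns, and the reconstructed mean and variance would then drive regeneration of the sonogram and audio signal in the gaps.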
