Transcranial Assessment and Visualization of Acoustic Cavitation: Modeling and Experimental Validation

Abstract
The interaction of ultrasonically controlled microbubble oscillations with tissues and biological media has been shown to induce a wide range of bioeffects that may have a significant impact on the therapy and diagnosis of brain diseases and disorders. However, the inherently nonlinear microbubble oscillations, the micrometer and microsecond scales involved in these interactions, and the limited methods available to assess and visualize them transcranially hinder both their optimal use and their clinical translation. To overcome these challenges, we present a framework that combines numerical simulations with multimodality imaging to assess and visualize microbubble oscillations transcranially. In the present work, microbubble oscillations were studied with a clinical focused ultrasound (FUS) system integrated with ultrasound (US) and magnetic resonance (MR) imaging guidance. A high-resolution brain CT scan was also co-registered to the US and MR images, and the derived acoustic properties were used as inputs to two- and three-dimensional finite-difference time-domain (FDTD) simulations that matched the experimental conditions and geometry. Synthetic point sources, defined by either a Gaussian function or the output of a microbubble dynamics model, were numerically excited and propagated through the skull toward a virtual US imaging array. Using passive acoustic mapping (PAM) refined to incorporate a variable speed of sound, we corrected the aberrations introduced by the skull and substantially improved the PAM resolution. The good agreement between the simulations incorporating microbubble emissions and the experimentally determined PAMs suggests that this integrated approach can provide a clinically relevant framework and more control over this nonlinear and dynamic process.
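The abstract refers to point sources driven by the output of a microbubble dynamics model. As an illustration only (the specific model used in the paper is not stated here), a minimal sketch of such a model is the classical Rayleigh-Plesset equation for a free gas bubble in water, integrated with fixed-step RK4. All parameter values below (bubble radius, drive pressure and frequency, polytropic exponent) are assumptions for the sketch, and shell properties of contrast agents are neglected:

```python
import numpy as np

# Assumed properties of water at ~20 C; a real contrast agent also has a
# lipid shell, which this free-bubble sketch deliberately ignores.
RHO, SIGMA, MU = 998.0, 0.072, 1.0e-3   # density, surface tension, viscosity
P0, KAPPA = 101325.0, 1.07              # ambient pressure, polytropic exponent

def rp_acceleration(t, R, Rdot, R0, pa, f):
    """Radial acceleration R'' from the Rayleigh-Plesset equation:
    R R'' + 1.5 R'^2 = (1/rho) [p_gas - 2 sigma/R - 4 mu R'/R - p0 - p_ac(t)].
    """
    p_gas = (P0 + 2 * SIGMA / R0) * (R0 / R) ** (3 * KAPPA)
    p_ext = P0 + pa * np.sin(2 * np.pi * f * t)   # sinusoidal drive
    return ((p_gas - 2 * SIGMA / R - 4 * MU * Rdot / R - p_ext) / RHO
            - 1.5 * Rdot ** 2) / R

def simulate_bubble(R0=1e-6, pa=100e3, f=1e6, dt=1e-10, n=20000):
    """Integrate R(t) with classical RK4; returns the radius time series."""
    R, Rdot = R0, 0.0            # start at rest at the equilibrium radius
    radii = np.empty(n)
    for i in range(n):
        t = i * dt
        k1r, k1v = Rdot, rp_acceleration(t, R, Rdot, R0, pa, f)
        k2r = Rdot + 0.5 * dt * k1v
        k2v = rp_acceleration(t + 0.5 * dt, R + 0.5 * dt * k1r, k2r, R0, pa, f)
        k3r = Rdot + 0.5 * dt * k2v
        k3v = rp_acceleration(t + 0.5 * dt, R + 0.5 * dt * k2r, k3r, R0, pa, f)
        k4r = Rdot + dt * k3v
        k4v = rp_acceleration(t + dt, R + dt * k3r, k4r, R0, pa, f)
        R += dt * (k1r + 2 * k2r + 2 * k3r + k4r) / 6
        Rdot += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
        radii[i] = R
    return radii
```

In a workflow like the one described, the simulated radius history (or a quantity derived from it, such as the radiated pressure) would be injected as a point-source waveform into the FDTD grid.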
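The PAM refinement mentioned above accounts for a spatially varying speed of sound. As a simplified illustration of that idea (not the authors' implementation), the sketch below computes per-channel travel times by integrating slowness along straight rays through a sound-speed map, then forms a time-exposure-acoustics image by delaying, summing, squaring, and integrating the passively received channel data. All function and parameter names are hypothetical:

```python
import numpy as np

def travel_time(src, elem, c_map, grid_x, grid_z, n_steps=200):
    """Straight-ray travel time from a source point to an array element,
    integrating slowness (1/c) through a variable sound-speed map."""
    xs = np.linspace(src[0], elem[0], n_steps)
    zs = np.linspace(src[1], elem[1], n_steps)
    ix = np.clip(np.searchsorted(grid_x, xs) - 1, 0, len(grid_x) - 2)
    iz = np.clip(np.searchsorted(grid_z, zs) - 1, 0, len(grid_z) - 2)
    seg = np.hypot(elem[0] - src[0], elem[1] - src[1]) / n_steps
    return np.sum(seg / c_map[iz, ix])

def pam(rf, fs, elems, pixels, c_map, grid_x, grid_z):
    """Time-exposure-acoustics PAM: delay, sum, square, and integrate the
    passively received RF channel data for each candidate source pixel."""
    n_elem, n_samp = rf.shape
    image = np.zeros(len(pixels))
    t_ref = np.array([[travel_time(p, e, c_map, grid_x, grid_z)
                       for e in elems] for p in pixels])
    for k, delays in enumerate(t_ref):
        rel = delays - delays.min()          # relative delays per channel
        idx = (rel * fs).astype(int)
        n_valid = n_samp - idx.max()
        summed = np.zeros(n_valid)
        for i in range(n_elem):
            summed += rf[i, idx[i]:idx[i] + n_valid]
        image[k] = np.mean(summed ** 2)      # integrated beamformed energy
    return image
```

With a homogeneous map this reduces to conventional constant-speed PAM; replacing the map values inside the skull region with CT-derived sound speeds is what shifts the reconstructed source back toward its true location.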
Funding Information
  • National Institutes of Health (Grants K99EB016971, R25CA089017, P01CA174645, P41EB015898)