The UC Berkeley System for Interactive Visualization of Large Architectural Models

Abstract
Realistic-looking architectural models with furniture may consist of millions of polygons and require gigabytes of data, far more than today's workstations can render at interactive frame rates or store in physical memory. We have developed data structures and algorithms for identifying a small portion of a large model to load into memory and render during each frame of an interactive walkthrough. Our algorithms rely upon an efficient display database that represents a building model as a set of objects, each of which can be described at multiple levels of detail, and contains an index of spatial cells with precomputed cell-to-cell and cell-to-object visibility information. As the observer moves through the model interactively, a real-time visibility algorithm traces sightline beams through transparent cell boundaries to determine a small set of objects potentially visible to the observer. An optimization algorithm dynamically selects a level of detail and rendering algorithm with which to display each potentially visible object so as to meet a user-specified target frame time. Throughout, memory management algorithms predict observer motion and prefetch from disk objects that may become visible during imminent frames. This paper describes an interactive building walkthrough system that uses these data structures and algorithms to maintain interactive frame rates during visualization of very large models. So far, the implementation supports models whose major occluding surfaces are axis-aligned rectangles (e.g., typical buildings). The system maintains over twenty frames per second, with little noticeable detail elision, during interactive walkthroughs of a building model containing over one million polygons.
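The per-frame detail selection described above can be viewed as a constrained optimization: choose one level of detail per potentially visible object to maximize visual benefit subject to a rendering-cost budget derived from the target frame time. The sketch below illustrates that idea with a simple greedy upgrade heuristic; all names (`LOD`, `select_lods`), the cost/benefit fields, and the heuristic itself are illustrative assumptions, not the system's actual implementation.

```python
# Hypothetical sketch of frame-budgeted level-of-detail selection.
# Each object has a list of LODs ordered from coarsest (index 0) to
# finest; we greedily upgrade whichever object gives the best
# benefit-per-cost gain while the frame-time budget allows.
from dataclasses import dataclass

@dataclass
class LOD:
    cost: float     # estimated render time (ms) at this level (assumed model)
    benefit: float  # estimated visual contribution at this level (assumed model)

def select_lods(objects, frame_budget_ms):
    """Pick one LOD index per object, staying within frame_budget_ms."""
    # Start every object at its cheapest (coarsest) level.
    choice = [0] * len(objects)
    spent = sum(obj[0].cost for obj in objects)
    while True:
        best_gain, best_i = 0.0, None
        for i, lods in enumerate(objects):
            nxt = choice[i] + 1
            if nxt >= len(lods):
                continue  # already at finest detail
            dc = lods[nxt].cost - lods[choice[i]].cost
            db = lods[nxt].benefit - lods[choice[i]].benefit
            # Consider only affordable upgrades with a strictly better ratio.
            if dc > 0 and spent + dc <= frame_budget_ms and db / dc > best_gain:
                best_gain, best_i = db / dc, i
        if best_i is None:
            return choice  # no affordable upgrade remains
        spent += (objects[best_i][choice[best_i] + 1].cost
                  - objects[best_i][choice[best_i]].cost)
        choice[best_i] += 1
```

In a real walkthrough system the cost and benefit estimates would come from calibrated rendering-time models and screen-space importance metrics, and the choice would also range over rendering algorithms, not just geometric detail levels.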