Interactive data analysis: the Control project
- 1 August 1999
- journal article
- Published by Institute of Electrical and Electronics Engineers (IEEE) in Computer
- Vol. 32 (8), 51-59
- https://doi.org/10.1109/2.781635
Abstract
Data analysis is fundamentally an iterative process in which you issue a query, receive a response, formulate the next query based on the response, and repeat. You usually don't issue a single, perfectly chosen query and get the information you want from a database; indeed, the purpose of data analysis is to extract unknown information, and in most situations, there is no one perfect query. People naturally start by asking broad, big-picture questions and then continually refine their questions based on feedback and domain knowledge. In the Control (Continuous Output and Navigation Technology with Refinement Online) project at the University of California, Berkeley, the authors are working with collaborators at IBM, Informix, and elsewhere to explore ways to improve human-computer interaction during data analysis. The Control project's goal is to develop interactive, intuitive techniques for analyzing massive data sets.
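To make the abstract's idea of continuous refinement concrete, here is a minimal sketch (not from the article) in the spirit of online aggregation: a running estimate over a randomly sampled column is refined batch by batch, so the analyst can stop as soon as the answer looks good enough rather than waiting for a full scan. The data, function name, and batch size are illustrative assumptions.

```python
import random
import statistics

def progressive_average(rows, batch_size=1_000):
    """Yield (rows_seen, running_mean) after each batch of sampled rows.

    Sketch of a progressive, online-aggregation-style estimate: the answer
    is refined continuously instead of being delivered only at the end.
    """
    random.shuffle(rows)               # process rows in random order (sampling)
    total, seen = 0.0, 0
    for start in range(0, len(rows), batch_size):
        batch = rows[start:start + batch_size]
        total += sum(batch)
        seen += len(batch)
        yield seen, total / seen       # estimate improves as more rows arrive

if __name__ == "__main__":
    # Synthetic "column" of 100,000 values standing in for a massive table.
    data = [random.gauss(100.0, 15.0) for _ in range(100_000)]
    for seen, estimate in progressive_average(data):
        print(f"after {seen:6d} rows: mean ~ {estimate:.2f}")
        if seen >= 10_000:             # the analyst can cut the query off early
            break
    print(f"exact mean over all rows: {statistics.fmean(data):.2f}")
```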