Multiresolution Grids and CUBE V2/CHRT

Development work on the new multiresolution grid structure at the core of CUBE V2 is proceeding under the direction of Dr. Brian Calder. We have adopted the working name CUBE with Hierarchical Resolution Techniques, or CHRT. The code base is now essentially complete and preliminary testing has taken place. As part of a more rigorous testing process than was used for CUBE V1, considerable time has gone into providing formal test vectors and unit-testing code for the algorithm and data structure, designed to show that they function as expected and generate results without untoward side effects. These tests are strongest on the data structure itself, exercising its basic functionality, but also cover data processing, and are being extended as the algorithm's functionality grows. The source code has also been fully documented using the Doxygen application. This documentation is intended primarily for internal use, but should prove useful to any of our Industrial Associates who choose to implement the algorithm at a later date.
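As a flavor of the unit-testing approach described above, the sketch below shows how basic functionality and absence of side effects might be checked on a toy hierarchical grid cell. The `QuadCell` class and its `refine` method are purely illustrative stand-ins, not CHRT's actual data structure or test suite.

```python
import unittest

class QuadCell:
    """Toy hierarchical grid cell (illustrative only, not CHRT's structure):
    a cell holds a value until it is refined into four children."""
    def __init__(self, value=None):
        self.value = value
        self.children = None

    def refine(self):
        # Split once; repeated calls must not discard existing children.
        if self.children is None:
            self.children = [QuadCell(self.value) for _ in range(4)]

class TestQuadCell(unittest.TestCase):
    def test_refine_creates_four_children(self):
        c = QuadCell(value=10.0)
        c.refine()
        self.assertEqual(len(c.children), 4)
        self.assertTrue(all(ch.value == 10.0 for ch in c.children))

    def test_refine_has_no_side_effects_on_repeat(self):
        c = QuadCell()
        c.refine()
        first_children = c.children
        c.refine()
        # Refining an already-refined cell leaves it untouched.
        self.assertIs(c.children, first_children)
```

Tests of this kind make the expected behavior explicit, so that later extensions to the algorithm can be checked against the same vectors.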

Preliminary tests of the algorithm with field data indicate that the mechanisms for estimating data density from the raw data are working adequately, and that the density estimates can be transformed into resolution estimates that vary smoothly enough for the system to operate as expected. The algorithm has also been extended to support multiple methods of resolution determination, user-specified preferred resolution bands, and a restriction to a dyadic resolution scheme.
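The density-to-resolution transformation and the dyadic restriction can be sketched as follows. This is a minimal illustration, not CHRT's actual method: the minimum-sample threshold, the base resolution, and all function names are assumptions made for the example.

```python
import math

def density_to_resolution(n_soundings, area_m2, min_samples=5.0):
    """Turn an observed data density into the finest grid resolution (m)
    that still captures at least `min_samples` soundings per estimation
    node. The threshold of 5 samples is illustrative only."""
    density = n_soundings / area_m2          # soundings per square meter
    node_area = min_samples / density        # area needed per node
    return math.sqrt(node_area)              # side length of a square node

def snap_dyadic(resolution_m, base_m=0.5):
    """Round a resolution estimate up to the next coarser power-of-two
    multiple of a base resolution, i.e. a dyadic resolution scheme."""
    k = max(0, math.ceil(math.log2(resolution_m / base_m)))
    return base_m * 2.0 ** k
```

For example, 20 soundings over 4 m&#178; supports roughly a 1 m node, which a dyadic scheme with a 0.5 m base would round up to 1 m; a 1.3 m estimate would snap to 2 m. Snapping to a small set of preferred resolutions keeps neighboring grid tiles compatible even when the underlying density varies.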

In addition to the base algorithm, Calder has been working on mechanisms for distributing the algorithm over multiple machines within a networked environment. This effort reflects an adopted design goal: the algorithm should not necessarily have to reside and operate on the same machine as the user's client software. Users can now poll the network for computational resources and utilize any that are currently available and appropriately configured for their needs. This approach implements a ‘headless data processing computer’ and is also adaptable to parallel processing schemes. The protocol is designed so that a number of servers could be clustered together under a single ‘head’ controller, allowing multiple machines to be aggregated as one logical server for more heavily dedicated processing.
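The polling and aggregation ideas above can be sketched in a few lines. This is a schematic model only: the real protocol operates over the network, whereas here discovered hosts are stood in for by plain Python objects, and all names and the capability-matching rule are assumptions made for the illustration.

```python
from dataclasses import dataclass, field

@dataclass
class ProcessingServer:
    """One processing node as a polling client would see it
    (illustrative fields, not the actual CHRT protocol)."""
    name: str
    available: bool
    capabilities: set  # e.g. {"chrt"} for CHRT-configured servers

def select_server(servers, needed):
    """Poll for resources (here, a list standing in for discovered
    hosts) and return the first server that is currently available
    and configured for the client's needs."""
    for s in servers:
        if s.available and needed <= s.capabilities:
            return s
    return None

@dataclass
class HeadController:
    """A 'head' that presents several clustered servers to clients
    as one logical server."""
    workers: list = field(default_factory=list)

    def poll(self, needed):
        # The head answers on behalf of the cluster, reporting
        # every idle worker that satisfies the request.
        return [w for w in self.workers
                if w.available and needed <= w.capabilities]
```

A client asking for `{"chrt"}` capability would skip busy or unconfigured machines, while a head controller could fan the same request out across its workers for parallel processing.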