One of the main selling points of mCerebrum is its high-rate data handling, and the paper presents impressive improvements over other architectures such as the AWARE framework. However, it is less clear to me how likely it is that such high throughput is actually needed.
Concretely, of the studies that currently rely on the mCerebrum framework, which collect the most data samples per second? How many samples is that, and would running those studies have been possible without the described optimizations?
To better interpret 'number of samples', it would also be useful to have an overview of the concrete sensors and the number of data samples each contributes per second. The SenSys paper states that one sample is a double, and lists 300Hz for the accelerometer, gyroscope, magnetometer, GPS, light sensor, microphone, and barometer. How exactly should this be interpreted? There is room for ambiguity here: for example, does 300Hz count the x, y, and z axes independently, or the whole accelerometer measurement as one sample?
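To make the ambiguity concrete, here is a small sketch of the two readings. The sensor list, the 300Hz rate, and "one sample is a double" come from the SenSys paper as cited above; the per-sensor channel counts (three axes for the inertial sensors, one channel for the rest) are my own assumptions for illustration:

```python
# Illustrative arithmetic only. The sensor list and 300 Hz rate are from the
# SenSys paper; the channel counts below are assumptions made to show how the
# two interpretations of "number of samples" diverge.
RATE_HZ = 300
BYTES_PER_SAMPLE = 8  # one sample is a double

# Assumed channels per sensor (3 axes for inertial sensors, 1 otherwise).
channels = {
    "accelerometer": 3, "gyroscope": 3, "magnetometer": 3,
    "gps": 1, "light": 1, "microphone": 1, "barometer": 1,
}

# Interpretation A: one whole reading (e.g. an x/y/z vector) = one sample.
samples_a = len(channels) * RATE_HZ            # 7 * 300 = 2100 samples/s

# Interpretation B: every axis counts as its own sample.
samples_b = sum(channels.values()) * RATE_HZ   # 13 * 300 = 3900 samples/s

print(samples_a, samples_a * BYTES_PER_SAMPLE)  # 2100 samples/s, 16800 B/s
print(samples_b, samples_b * BYTES_PER_SAMPLE)  # 3900 samples/s, 31200 B/s
```

Even for this small sensor set, the two readings differ by almost a factor of two in samples per second, which is why an explicit definition would help when comparing throughput claims.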