I understood from the webinar (and recent publication) that the data upload window was introduced to save battery life. I am impressed by the many other non-functional requirements that mCerebrum also seems to address.
My question was primarily aimed at understanding how the architecture is structured. As of now, I do not have an actual use case that needs continuous data upload (as it comes in) at the cloud interface. One could imagine several, though I can't judge how likely they are:
- Based on 'suspicious' data analyzed locally, switching to continuous data upload to perform more advanced analysis on a server that would be impossible to run on the mobile device (e.g., expensive computations, or combining the data with other data streams).
- Time-sensitive interventions in which such 'suspicious' activity would need to be reviewed in real time by someone (e.g., clinicians).
Although I have not looked into this closely, I doubt a REST-based API would perform well with such high-frequency data. In that sense, am I correct in presuming that introducing the functionality I describe would require quite a severe change to the architecture?
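To make my concern about per-sample REST calls concrete, here is a back-of-the-envelope sketch comparing one HTTP POST per sample against batched uploads. All the numbers (sample rate, payload size, header overhead, batch window) are my own illustrative assumptions, not measurements from mCerebrum:

```python
# Rough comparison of per-sample REST POSTs vs batched uploads for a
# high-frequency sensor stream. Every constant below is an assumption
# chosen for illustration only.

SAMPLE_RATE_HZ = 64          # assumed accelerometer-like stream
PAYLOAD_BYTES = 16           # assumed bytes per sample (timestamp + values)
HTTP_OVERHEAD_BYTES = 400    # assumed per-request header/framing overhead
BATCH_SECONDS = 60           # assumed batching window


def bytes_per_hour(requests_per_hour: float, samples_per_hour: float) -> float:
    """Total bytes on the wire: sample payloads plus per-request overhead."""
    return samples_per_hour * PAYLOAD_BYTES + requests_per_hour * HTTP_OVERHEAD_BYTES


samples_per_hour = SAMPLE_RATE_HZ * 3600

# One POST per sample vs one POST per 60-second batch
per_sample = bytes_per_hour(samples_per_hour, samples_per_hour)
batched = bytes_per_hour(3600 / BATCH_SECONDS, samples_per_hour)

print(f"per-sample: {per_sample / 1e6:.1f} MB/h, batched: {batched / 1e6:.1f} MB/h")
```

Under these assumptions the per-sample approach spends well over twenty times the bandwidth of the batched one, almost all of it on request overhead, which is why I suspect continuous upload would call for batching or a streaming protocol rather than a naive REST endpoint.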