How to enable low-power sensor context in mobiles
Know the techniques for ensuring effective implementation of sensor context in
a power-conscious mobile environment.
By Debbie Meduna, Dev Rajnarayan and Jim Steele
Sensor Platforms Inc.
Context awareness among mobile platforms is a hot topic in today’s information-rich world. Using the sensors
available on these platforms, one can infer the context of the device, its user, and its environment. This can
enable new useful applications that make smart devices smarter. It is common to utilise machine learning to
determine the underlying meaning in the large amount of sensor data being generated all around us. However,
traditional machine learning methods (such as neural networks, polynomial regression, and support vector
machines) do not directly lend themselves to application in a power-conscious mobile environment. This article discusses the techniques necessary for implementing sensor context effectively under these constraints.
Figure 1: A general tiered architecture suitable for a mobile
platform. The always-on sensing is done in an embedded core
utilising low-power inertial sensors. The application processor is
only awakened when something interesting happens, at which point it can
utilise additional contextual information from non-inertial sensors
such as GPS and calendar.
Current state of mobile context awareness
Context is the inference of the state of the user or the environment from sensor measurements on mobile devices. Sensors are in
all smart devices such as phones and tablets, as well as many wearable devices such as activity wrist bands
[1], shoes [2], headphones [3], and even glasses [4]. Interpreting this sensor data is getting more sophisticated
as well. For example: an accelerometer can detect that a user is biking [5]; a front-facing camera can detect
whether a user is sceptical of what he is reading [6]; GPS and Wi-Fi positioning can detect which store a user
entered [7]; audio can detect whether a user is in a meeting or a night club [8]; and blood pressure sensors can
detect whether a person is relaxed or agitated [9].
Several major players have also begun using sensors for mobile context: Samsung's context-awareness features use the camera to improve the user experience, for example eye-tracking to scroll the screen while reading; and Google introduced a new location API at Google I/O 2013 which, in part, provides an activity recognition result [10].
Tiered architecture to enable always-on sensing
In the above examples, a common technique used to extract information out of sensor data is machine
learning. Whether it is neural networks, polynomial regression, support vector machines, expert systems, or
another technique, machine learning typically comprises resource-hungry algorithms. To make matters
worse, context-aware systems are most effective when always-on, but continuous use of most hardware
depletes the battery rapidly.
Take Google's activity recognition as an example. This functionality is available on any Android phone with Google Play services installed, but there are many complaints about it draining the
battery [11]. Furthermore, the detection latency is often observed to be at least 30 seconds (and often much
longer). In addition, Google does not specify the expected classification accuracy of its results. These
deficiencies need to be rectified to encourage developer adoption of context-aware functionality.
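For developers who want to experiment with that baseline, the sketch below shows how activity results from the Google Play services API referenced in [10] are typically received. The service class name is illustrative; the service must be registered in the application manifest and targeted by the PendingIntent passed to ActivityRecognitionClient.requestActivityUpdates().

```java
import android.app.IntentService;
import android.content.Intent;
import android.util.Log;

import com.google.android.gms.location.ActivityRecognitionResult;
import com.google.android.gms.location.DetectedActivity;

// Illustrative IntentService registered as the PendingIntent target for
// ActivityRecognitionClient.requestActivityUpdates().
public class ActivityRecognitionService extends IntentService {

    public ActivityRecognitionService() {
        super("ActivityRecognitionService");
    }

    @Override
    protected void onHandleIntent(Intent intent) {
        if (!ActivityRecognitionResult.hasResult(intent)) {
            return;
        }
        ActivityRecognitionResult result = ActivityRecognitionResult.extractResult(intent);
        DetectedActivity activity = result.getMostProbableActivity();

        // getConfidence() is a 0-100 score; getType() is one of the DetectedActivity
        // constants (STILL, ON_FOOT, ON_BICYCLE, IN_VEHICLE, ...).
        Log.d("ActivityRecognition", "type=" + activity.getType()
                + " confidence=" + activity.getConfidence());
    }
}
```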
We believe that it is possible to achieve low latency, high accuracy, and low power consumption in sensor-based context results. The key is to adopt a hierarchical approach: lower-power systems trigger successively higher-power systems when necessary to improve accuracy (figure 1). This requires a rigorous design
approach to quantify the available accuracy of each sub-system, and empower developers to specify the
desired level of accuracy required for their applications. The following sections describe the key components of
the tiered architecture developed by Sensor Platforms.
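As a rough illustration of the hierarchy itself (not the Sensor Platforms implementation), the sketch below escalates to a more expensive tier only when the cheaper tier cannot meet the accuracy target specified by the application; all class and method names are illustrative.

```java
import java.util.List;

// Each tier reports a classification with a confidence estimate; a higher-power
// tier runs only when the tier below it cannot meet the accuracy the
// application asked for.
public class TieredContextPipeline {

    /** One level of the hierarchy: cheap tiers first, expensive tiers last. */
    interface ContextTier {
        String name();
        double powerMilliwatts();             // nominal cost of running this tier
        Classification classify(float[] window);
    }

    static class Classification {
        final String label;
        final double confidence;              // 0.0 .. 1.0
        Classification(String label, double confidence) {
            this.label = label;
            this.confidence = confidence;
        }
    }

    private final List<ContextTier> tiers;    // ordered low power -> high power
    private final double requiredConfidence;  // set by the application developer

    TieredContextPipeline(List<ContextTier> tiers, double requiredConfidence) {
        this.tiers = tiers;
        this.requiredConfidence = requiredConfidence;
    }

    /** Run the cheapest tier first; escalate only while confidence is too low. */
    Classification classify(float[] sensorWindow) {
        Classification best = new Classification("unknown", 0.0);
        for (ContextTier tier : tiers) {
            best = tier.classify(sensorWindow);
            if (best.confidence >= requiredConfidence) {
                break;                        // no need to wake anything hungrier
            }
        }
        return best;
    }
}
```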
Figure 2: Comparing the Sensor Platforms context change detector (blue) to a
simple motion change detector (red) for a variety of typical activities. The context change
detector is insensitive to latency, which shows it identifies the actual signal,
whereas the simple motion detector merely wakes the CPU on any motion.
Assumptions for this plot: 5000mWh battery, minimum CPU wake time of 1
second before lapsing back to sleep, and power during that 1-second of
computation is 250mW [13]. Further optimisation can be achieved through a
context engine as discussed in the next section.
Embedded change detector to minimise AP wakes
One of the lowest-power sensors in a smartphone is the accelerometer, which can be sampled at 50Hz for less than 0.02mWh of energy. This is a negligible fraction of a typical phone battery, which generally has a capacity of more than 5000mWh. However, keeping the main application processor awake just to sample this data consumes around 250mW, completely defeating the purpose of low-power sensing: at that power level, a full day of continuous sampling would draw roughly 6000mWh, more than the entire battery. There needs to be a smart always-on trigger system that can continuously
analyse incoming sensor data using very low power and memory resources, and wake up a more powerful
processor whenever something interesting happens. At Sensor Platforms, we have developed just such a
system called a “context change detector” [12].
This architecture enables both low power and low latency. This can be shown by running the change detector at
different latencies on various user activities (figure 2). Comparison against a simple motion detector (such as the inertial wake functionality built into an accelerometer) shows that a smart trigger system mitigates, if not eliminates, the trade-off between latency and power consumption. Note that the blue bars
are relatively insensitive to latency, indicating that the change detector is responding to changes in the signals,
whereas the red bars are inversely proportional to latency, indicating that the simple motion detector is
triggering every single time.
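Reference [12] does not spell out the algorithm, but the idea can be illustrated with a simple sketch: track a running baseline of accelerometer-magnitude statistics and raise a wake-up only when a new window deviates from that baseline, rather than on every motion sample. The thresholds and smoothing factor below are placeholders, not the published detector.

```java
// Rough illustration of a change detector: compare short-window statistics of
// the acceleration magnitude against a slowly adapting baseline and flag a
// context change only when they shift.
public class ContextChangeDetector {

    private double baselineMean = Double.NaN;
    private double baselineVar = Double.NaN;
    private final double meanThreshold;   // m/s^2
    private final double varThreshold;    // (m/s^2)^2
    private final double smoothing;       // 0..1, how quickly the baseline adapts

    public ContextChangeDetector(double meanThreshold, double varThreshold, double smoothing) {
        this.meanThreshold = meanThreshold;
        this.varThreshold = varThreshold;
        this.smoothing = smoothing;
    }

    /** @param magnitudes one window (e.g. 1 s at 50 Hz) of |accel| samples */
    public boolean update(float[] magnitudes) {
        double mean = 0.0;
        for (float m : magnitudes) mean += m;
        mean /= magnitudes.length;

        double var = 0.0;
        for (float m : magnitudes) var += (m - mean) * (m - mean);
        var /= magnitudes.length;

        if (Double.isNaN(baselineMean)) {          // first window: just initialise
            baselineMean = mean;
            baselineVar = var;
            return false;
        }

        boolean changed = Math.abs(mean - baselineMean) > meanThreshold
                || Math.abs(var - baselineVar) > varThreshold;

        // Slowly track the baseline so steady activities (walking, driving)
        // stop looking like changes after a few windows.
        baselineMean = smoothing * mean + (1.0 - smoothing) * baselineMean;
        baselineVar = smoothing * var + (1.0 - smoothing) * baselineVar;
        return changed;                            // true -> wake the AP
    }
}
```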
Lean motion sensor context engine to ensure low power
A low-level trigger system, such as the context change detector, greatly reduces how often the sensor data is
processed, enabling always-on systems. A further optimisation is to abstract out the relevant context
information from the raw sensor data, allowing higher-level context algorithms to use these abstracted results
rather than the sensor data directly for further inferences. This inertial sensor “context engine” reduces the
amount of data that must be processed at higher levels. For example, raw inertial sensor data is typically
sampled at 50-100Hz on Android systems, whereas motion context updates typically occur on the order of
seconds. Consequently, this middle layer can be thought of as a second-level trigger system for higher level
resource-intensive context algorithms.
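A minimal sketch of such a middle layer (illustrative, not the FreeMotion engine) is shown below: it consumes raw windows at the sensor rate and forwards an event upward only when the coarse motion context changes, turning a 50-100Hz stream into updates on the order of seconds.

```java
// Illustrative middle layer: raw windows in, abstracted context events out.
// Higher-level algorithms see only the (much rarer) context changes.
public class MotionContextEngine {

    public interface ContextListener {
        void onContextChanged(String newContext, long timestampMillis);
    }

    private final ContextListener listener;
    private String currentContext = "unknown";

    public MotionContextEngine(ContextListener listener) {
        this.listener = listener;
    }

    /** Called roughly once per second with the latest window of |accel| samples. */
    public void onWindow(float[] magnitudes, long timestampMillis) {
        String label = classify(magnitudes);
        if (!label.equals(currentContext)) {
            currentContext = label;
            listener.onContextChanged(label, timestampMillis);  // the only upward traffic
        }
    }

    // Placeholder classifier: a real engine would use trained features rather
    // than a single hand-tuned variance threshold.
    private String classify(float[] magnitudes) {
        double mean = 0.0;
        for (float m : magnitudes) mean += m;
        mean /= magnitudes.length;
        double var = 0.0;
        for (float m : magnitudes) var += (m - mean) * (m - mean);
        var /= magnitudes.length;
        return var > 0.5 ? "moving" : "still";
    }
}
```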
The ability to abstract meaningful context information out of inertial sensor data may not be readily apparent,
but at Sensor Platforms we have successfully demonstrated that this is both possible and accurate. Consider
the following example: a device rests on a table and is then picked up and held in the hand at the same orientation. The accelerometer signals in the x-, y-, and z-directions for these two contexts look indistinguishable to
the human eye, but not to a properly trained classifier (figure 3). Our machine-learning algorithms correctly
distinguish off-body and in-hand within a few seconds even for this difficult classification scenario.
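As an illustration of how lean such a classifier can be (the actual FreeMotion features and weights are not published), a two-feature logistic model on signal variance and sample-to-sample jitter is enough to capture the faint hand tremor that separates in-hand from off-body; the weights below are placeholders that would normally come from offline training.

```java
// Illustrative lean classifier for the on-table vs in-hand example: two cheap
// features of the acceleration magnitude pick up the hand tremor that is
// invisible to the eye in Figure 3.
public class InHandClassifier {

    // Placeholder logistic-regression weights (would be learned offline).
    private static final double W_VARIANCE = 400.0;
    private static final double W_JITTER = 250.0;
    private static final double BIAS = -3.0;

    /** Returns the probability that the device is in-hand rather than off-body. */
    public static double probabilityInHand(float[] magnitudes) {
        double mean = 0.0;
        for (float m : magnitudes) mean += m;
        mean /= magnitudes.length;

        double variance = 0.0;
        for (float m : magnitudes) variance += (m - mean) * (m - mean);
        variance /= magnitudes.length;

        double jitter = 0.0;
        for (int i = 1; i < magnitudes.length; i++) {
            jitter += Math.abs(magnitudes[i] - magnitudes[i - 1]);
        }
        jitter /= (magnitudes.length - 1);

        double score = W_VARIANCE * variance + W_JITTER * jitter + BIAS;
        return 1.0 / (1.0 + Math.exp(-score));    // logistic link
    }
}
```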
Not only is the Sensor Platforms’ FreeMotion Library capable of producing accurate context inferences from
inertial sensors, it does so with efficient, lean machine learning algorithms. This leanness is a key aspect of this
middle context layer for achieving lower power, and requires thoughtful feature selection. The entirety of the
motion-based context engine in the FreeMotion Library can fit on an embedded system.
Figure 3: The measured acceleration for two distinct context
states looks identical to the eye (upper plot), but lean machine
learning can distinguish them (Sensor Platforms' FreeMotion
Library result is in the lower plot).
Usability for developers ensures adoption
With such a low-power mobile context architecture in place, the context information can be exposed to developers as additional virtual sensors. In much the same way that developers can currently request SCREEN_ORIENTATION in Android rather than interpreting raw accelerometer data, they should also be able to query different context sensors. For motion-based context, we see at least three fundamental virtual
sensors:
CARRY: describes how the device is being held or moved relative to the user (e.g. inHand, offBody, inPocket),
POSTURE: indicates how the user is positioned or is moving relative to the environment (e.g. sitting, walking,
lyingDown),
TRANSPORT: indicates the immediate environment containing the user (e.g. inVehicle, onConveyance).
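A sketch of how these virtual sensors might be exposed to developers is shown below; the sensor names follow the list above, while the interfaces and confidence field are illustrative rather than an existing Android or FreeMotion API.

```java
// Illustrative virtual-sensor API: each reading pairs a context value with a
// confidence estimate, so applications can decide how much to trust it.
public final class VirtualSensors {

    public enum Carry { IN_HAND, OFF_BODY, IN_POCKET, UNKNOWN }
    public enum Posture { SITTING, STANDING, WALKING, RUNNING, LYING_DOWN, UNKNOWN }
    public enum Transport { IN_VEHICLE, ON_CONVEYANCE, NONE, UNKNOWN }

    /** A single context reading, paired with a confidence estimate (0..1). */
    public static final class Reading<T> {
        public final T value;
        public final double confidence;
        public Reading(T value, double confidence) {
            this.value = value;
            this.confidence = confidence;
        }
    }

    public interface ContextProvider {
        Reading<Carry> carry();
        Reading<Posture> posture();
        Reading<Transport> transport();
    }

    // Example of how an application might consume the virtual sensors.
    public static void onContextUpdate(ContextProvider provider) {
        Reading<Posture> posture = provider.posture();
        if (posture.value == Posture.WALKING && posture.confidence > 0.8) {
            // e.g. switch the UI into a glanceable, large-font mode
        }
    }
}
```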
Algorithms can combine these virtual sensors with each other, and possibly with other system-level sensors
(e.g. application data, location data, audio signals, etc.), to generate even more complex context inferences.
For example, the accelerometer can detect that a user is biking, but a higher-level context could distinguish
between exercising and commuting to work, based on location and time of day. Therefore, the concept of virtual
context sensors is one that extends beyond motion-based context to contexts derived from any set of sensors
available on the device.
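Following the biking example, a higher-level inference might combine a motion-based virtual sensor with location and time of day along these lines; the caller is assumed to supply the route and time-of-day information from location services and the clock.

```java
// Illustrative fusion of a motion-based virtual sensor with other system data:
// biking detected from inertial sensors, with location and time of day
// deciding between exercise and commuting.
public class HigherLevelContext {

    public enum BikingPurpose { EXERCISING, COMMUTING, UNKNOWN }

    public static BikingPurpose classifyBiking(boolean onBicycle,
                                               boolean nearCommuteRoute,
                                               int hourOfDay) {
        if (!onBicycle) {
            return BikingPurpose.UNKNOWN;
        }
        boolean commuteHours = (hourOfDay >= 7 && hourOfDay <= 9)
                || (hourOfDay >= 16 && hourOfDay <= 19);
        if (nearCommuteRoute && commuteHours) {
            return BikingPurpose.COMMUTING;
        }
        return BikingPurpose.EXERCISING;
    }
}
```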
First and foremost, making these new virtual sensors available to developers will enable the next level of
context-aware platforms. Second, making these new virtual sensors effective for developers requires the API to
report a confidence level. Just as location-based services provide uncertainty information (best known to the end
user as the size of the blue circle on map applications), context results should report reliability and accuracy.
Finally, standardising these confidence levels requires a methodology to measure confidence on context
results. Context accuracies in the industry are currently given as a single pre-determined percentage, e.g. 95%
accuracy, but this accuracy only reflects the population for which the algorithm was trained and, more
importantly, does not account for real-time fluctuations in the accuracy. For example, the accuracy of a walking
context algorithm would likely drop if the user missteps or fumbles the device, and this should be
reflected in the sensor output. Determining a consistent standard for context awareness platforms is a difficult
but important task. Sensor Platforms is working with industry standards bodies to address this.
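One way to express such real-time confidence (a sketch, not a proposed standard) is to report, with every result, the trained accuracy scaled down by a novelty score that measures how far the current window lies from the data the classifier was trained on. The scaling rule below is illustrative.

```java
// Sketch of a per-result confidence that reacts to the live signal rather than
// quoting a fixed training-set accuracy.
public class ConfidenceAwareResult {

    public final String label;
    public final double confidence;   // 0..1, reported alongside every result

    private ConfidenceAwareResult(String label, double confidence) {
        this.label = label;
        this.confidence = confidence;
    }

    /**
     * @param label           classifier output (e.g. "walking")
     * @param trainedAccuracy accuracy measured on the training population
     * @param noveltyScore    0 = window looks like training data, 1 = far outside it
     */
    public static ConfidenceAwareResult of(String label,
                                           double trainedAccuracy,
                                           double noveltyScore) {
        double liveConfidence = trainedAccuracy * (1.0 - noveltyScore);
        return new ConfidenceAwareResult(label, Math.max(0.0, liveConfidence));
    }
}
```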
Conclusion
The mobile architecture presented here uses machine learning to estimate sensor context, with strong
emphasis on low latency and low power. The key enablers of this architecture are the tiered trigger systems
developed by Sensor Platforms: the context change detector and the motion context engine, which together
minimise AP wakes and reduce triggering of more resource-intensive context algorithms. In addition, providing
context information in the form of virtual sensors with corresponding confidence levels gives the developer more
flexibility. Adoption of the additional virtual sensors introduced here provides an extensible, easy-to-use
framework for combining a multitude of sensor information to enable the next level of context awareness
apps.
References
[1] http://www.nytimes.com/2012/11/15/technology/personaltech/a-review-of-new-activity-tracking-bands-from-nike-and-jawbone.html?pagewanted=all&_r=0
[2] http://bestrunningwatches.co.uk/reviews/garmin-foot-pod-for-forerunner-review/
[3] http://www.cnet.com/8301-33373_1-57359020/cheap-sensors-enabling-new-smartphone-fitness-gadgets/
[4] http://en.wikipedia.org/wiki/Google_Glass
[5] Mannini, A. and A.M. Sabatini, “Machine Learning Methods for Classifying Human Physical Activity from On-Body Accelerometers,” Sensors 2010, 10, 1154-1175.
[6] http://www.affectiva.com/assets/Measuring-Emotion-Through-Mobile-Esomar.pdf
[7] http://scobleizer.com/2012/07/11/mobile-3-0-arrives-how-qualcom-just-showed-us-the-future-of-the-cell-phone-and-why-iphone-sucks-for-this-new-contextual-age/
[8] http://www.informatik.uni-freiburg.de/~spinello/storkROMAN12.pdf
[9] http://www.cooking-hacks.com/index.php/documentation/tutorials/ehealth-biometric-sensor-platform-arduino-raspberry-pi-medical
[10] http://developer.android.com/reference/com/google/android/gms/location/ActivityRecognitionClient.html
[11] http://forum.xda-developers.com/showthread.php?t=2217982 and
http://productforums.google.com/forum/#!msg/mobile/0CQTG4PWp24/BrqxhGSeOmkJ
[12] Sensor Platforms Blog article: http://www.sensorplatforms.com/activity-transitions-context-aware-systems/
[13] http://www.anandtech.com/show/5559/qualcomm-snapdragon-s4-krait-performance-preview-msm8960-adreno-225-benchmarks/4
EDN Asia | ednasia.com
Copyright © 2013 eMedia Asia Ltd.