Steve Jobs did a wonderful thing when he put sensors (GPS, accelerometers, gyroscopes, compasses, etc.) into the iPhone and the iPad family. Others have followed, adding barometers, light sensors and more besides. But there is a downside, battery usage, as well as an upside: the potential to cut that battery usage by making more intelligent use of the sensors themselves.
These points were brought home at MWC by Kevin Shaw, the CTO at Sensor Platforms (of San Jose, CA). Sensor Platforms specializes in producing platform-agnostic software that enables SoC manufacturers and device manufacturers to make the best use of the sensors on mobile devices. One objective is to disable sensors, and even the main power hog, the CPU, when they are not needed.
For example, consider GPS. When you go into a building you lose the GPS signal, but on most mobile devices that does not mean the GPS function switches itself off until you go outside again. Rather the opposite: the GPS tries and tries to find satellites and in doing so greedily consumes the battery. In different ways this is true of all sensors: left running when they are not in use, they drain that valuable battery resource.
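One common remedy for the indoor-GPS problem is to back off the polling rate when fixes keep failing rather than searching continuously. The sketch below is illustrative only (the function name and intervals are my assumptions, not Sensor Platforms' design): double the wait after each failed fix, up to a ceiling, and snap back to frequent polling as soon as a fix succeeds.

```python
def next_gps_interval(current_s, got_fix, base_s=1.0, max_s=512.0):
    """Return the next GPS polling interval in seconds.

    A successful fix resets to frequent polling; a failure (e.g. the
    device is indoors) doubles the wait, capped at max_s, so the radio
    spends most of its time asleep instead of hunting for satellites.
    """
    if got_fix:
        return base_s
    return min(current_s * 2.0, max_s)
```

A scheduler would call this after every fix attempt, so ten consecutive failures already stretch the polling gap from one second to over eight minutes.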
In the mobile world, and especially in the enterprise mobile world, where a battery must last at least a long working day, battery performance is at a premium. With each new device release, whether laptop, smartphone or tablet, reviewers crawl over the mAh ratings to forecast longevity in use. Yet the irony is that the quickest and least expensive way to lengthen battery life is to use less electricity: turning off parts of the system that are not being used reduces consumption.
At MWC Kevin went further and produced two Samsung SIIIs, one with Sensor Platforms’ Sensor Fusion/Context Aware platform and one without. The Context Aware software uses the device’s sensors even when the user is not directly interacting with it, so that apps can know the user’s context (what he or she is doing). He argues that “This empowers a new generation of smart devices that will improve people’s lives without intruding on their activities”.
Sensor Platforms optimizes by:
- providing a layer of basic user contexts: device motion, carry, posture and transport
- implementing a Resource Manager which directs computing sources when contexts change, while minimizing power consumption when contexts remain unchanged
- enabling contexts to drive system power management: for instance, skipping GPS refreshes when the user has not moved, or turning off the backlight when the device is in a pocket
- presenting app developers with an API that treats contexts as virtual sensors.
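The "contexts as virtual sensors" idea in the last bullet can be sketched as follows. All names here are illustrative assumptions, not Sensor Platforms' actual API: the point is that an app subscribes to a context (say, posture) exactly as it would to a physical sensor, and listeners are only woken when the context actually changes, echoing the Resource Manager's rule of doing nothing while contexts remain unchanged.

```python
class VirtualSensor:
    """A context (e.g. posture or transport) exposed like a sensor."""

    def __init__(self, name):
        self.name = name
        self.value = None
        self._listeners = []

    def subscribe(self, callback):
        """Register callback(name, value) for context changes."""
        self._listeners.append(callback)

    def publish(self, value):
        # Notify listeners only on an actual change: unchanged
        # contexts cost no callbacks and no wake-ups.
        if value != self.value:
            self.value = value
            for cb in self._listeners:
                cb(self.name, value)

posture = VirtualSensor("posture")
events = []
posture.subscribe(lambda name, value: events.append((name, value)))
posture.publish("sitting")
posture.publish("sitting")    # no change, so listeners stay asleep
posture.publish("standing")
```

After this runs, `events` holds only the two genuine transitions, which is precisely what keeps an always-on context layer cheap.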
He then showed how intelligent application of the sensors enables greater context awareness. The SIII with the software could detect when he sat down or stood up (the other could not). It could also detect when he put the equipped SIII in his pocket and when he took it out. He even showed that the increased sensitivity could detect a climb of one floor or a descent of three (using the barometric change in pressure); this ability to measure vertical movement was already accurate to about 30 cm, and constantly improving. All of this was accomplished while reducing power consumption, for instance by switching off unnecessary power consumers when the device is in a pocket or briefcase, or by handing work off from the main CPU to less power-hungry components that can do the same job for less.
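The floor-change trick rests on standard physics rather than anything proprietary: air pressure falls by roughly 12 Pa per metre of ascent near sea level, so the international barometric formula turns a pressure delta into an altitude delta. The sketch below (my own illustration, with an assumed 3 m storey height) shows the conversion; Sensor Platforms' actual filtering to reach ~30 cm accuracy would of course be more sophisticated.

```python
def altitude_m(pressure_pa, sea_level_pa=101325.0):
    """Altitude above the reference pressure, via the international
    barometric formula (troposphere approximation)."""
    return 44330.0 * (1.0 - (pressure_pa / sea_level_pa) ** (1.0 / 5.255))

def floors_changed(p_before_pa, p_after_pa, floor_height_m=3.0):
    """Whole floors climbed (+) or descended (-) between two readings.
    The 3 m storey height is an assumption, not a measured value."""
    delta = altitude_m(p_after_pa) - altitude_m(p_before_pa)
    return round(delta / floor_height_m)
```

In practice the reference pressure drifts with the weather, so a real implementation would track relative changes over short windows rather than trust an absolute sea-level constant.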
What might this mean for an enterprise?
The first, and possibly the most important, is that devices whose vendors incorporate “sensor control”, like that provided by Sensor Platforms, will have longer usage times. Unfortunately, this is not an easy capability to establish before you buy, and it will not be until SoC manufacturers and device vendors publicize what is possible.
The second is more interesting for the enterprise. With increased context awareness, new capabilities can be added not only to the device but potentially to apps. For example, a medical application might want to know, from a combination of sensors, that someone has fallen over, which could in turn trigger an alarm. Similarly, if the device knows where it is (from GPS) before you enter a building, accelerometers, compasses, barometers and other sensors can maintain accurate positioning within that building, even down to the floor you are on. This sort of capability could open up new types of accurate indoor solutions that GPS cannot deliver today.
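The fall-alarm example can be approximated with a classic two-threshold detector: a near-free-fall dip in accelerometer magnitude followed shortly by an impact spike. The thresholds, window and function name below are illustrative assumptions for the sake of the sketch, not values from any medical product.

```python
import math

G = 9.81  # standard gravity, m/s^2

def detect_fall(samples, free_fall_g=0.35, impact_g=2.5, window=20):
    """Return True if a near-free-fall dip (magnitude < free_fall_g)
    is followed within `window` samples by an impact spike
    (magnitude > impact_g). `samples` is a list of (ax, ay, az)
    accelerometer readings in m/s^2; thresholds are in units of g."""
    mags = [math.sqrt(x * x + y * y + z * z) / G for (x, y, z) in samples]
    for i, m in enumerate(mags):
        if m < free_fall_g and any(v > impact_g for v in mags[i + 1:i + 1 + window]):
            return True
    return False
```

A device sitting on a desk reads a steady 1 g and never trips the detector; a drop (magnitude collapsing towards 0 g) followed by a hard landing (several g) does. A production system would add posture and stillness contexts afterwards to cut false alarms from, say, a phone tossed onto a sofa.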
There is a caveat. Sensor Platforms essentially sells only to manufacturers. To this analyst an opportunity may be begging: to provide the software tools (or platform) that help app developers build more intelligent apps that use sensors better, even when the device manufacturer has not integrated Sensor Platforms-like capabilities into a device. While doing it this way might not be as energy efficient as embedding the capabilities at the SoC or device level, the opportunities it enables might be all the more startling for enterprises.