Sensory Grammars and the BehaviorScope


Microprocessors, radios and sensors are already widely used in our everyday lives. Each person today owns and uses several such devices embedded in PDAs, cell phones, GPS units, Bluetooth headsets and other gadgets. Despite their wide adoption, it is still very difficult for a human to instruct multiple devices to collaborate and operate autonomously based on what they observe of their environment through their sensors. This calls for a new form of computing in which devices distributed in space and equipped with sensors and actuators develop their own model of the world, use it to parse a set of observations into distinguishable actions, and act on them. For practicality and cost reasons, this computational framework should also interpret and summarize the data generated by the network into higher-level, more meaningful representations that are also more compact for the sensor network to propagate.

The goal of this project is to leverage the latest technological innovations to design and build a distributed asynchronous computer out of a large number of intelligent wireless sensors, called the BehaviorScope. The BehaviorScope architecture promotes a new form of computing in time and space in which the instructions come directly from the physical world: sensors observe behaviors unfolding in time and space, interpret them, and act on them to provide applications and services. To develop such systems, our research focuses on three main components:

  1. Sensory grammar frameworks
  2. Middleware infrastructures for grammar frameworks
  3. Custom, motion-discriminating, privacy-preserving image sensors

While we expect the BehaviorScope architecture to be applicable to many domains, we have chosen an automated assisted living system as the current driver application to put some of the concepts to the test.

Using the BehaviorScope to Monitor Activities in Assisted Living

In our assisted living application, a home sensor network is tasked with monitoring and helping elders who live alone. A small set of custom, privacy-preserving image sensors (fewer than 10 in a typical home) is deployed inside the house to localize the motion of its occupants. The resulting motion patterns are processed together with other sensor and building information to infer the activities taking place and to provide the appropriate responses. Example videos of a home setup and some of the initial outputs from one of our testbed deployments are shown below.


Collected trace playback from a home deployment with 6 sensor nodes (deployed at the positions marked by the orange letters)

Simple BehaviorScope playback outputs from an initial home deployment (25-day data trace)
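To make the first processing step concrete, the sketch below illustrates in Python how raw (x, y) position estimates from the camera nodes could be collapsed into room-level symbols using a floor-plan map. This is a simplified illustration, not the deployed system's code; the room names, coordinates and track are invented for the example.

    # Minimal sketch (hypothetical floor plan): map raw (x, y) position
    # estimates to room-level symbols, the low-level tokens that later
    # interpretation stages reason about.

    # Axis-aligned room boundaries: name -> (x_min, y_min, x_max, y_max)
    FLOOR_PLAN = {
        "kitchen":  (0.0, 0.0, 4.0, 3.0),
        "bedroom":  (4.0, 0.0, 8.0, 3.0),
        "bathroom": (0.0, 3.0, 2.0, 5.0),
        "living":   (2.0, 3.0, 8.0, 5.0),
    }

    def room_of(x, y):
        """Return the room containing point (x, y), or None if outside."""
        for name, (x0, y0, x1, y1) in FLOOR_PLAN.items():
            if x0 <= x < x1 and y0 <= y < y1:
                return name
        return None

    def symbolize(track):
        """Collapse a timestamped (t, x, y) track into (room, enter_time) symbols."""
        symbols, current = [], None
        for t, x, y in track:
            room = room_of(x, y)
            if room is not None and room != current:
                symbols.append((room, t))
                current = room
        return symbols

    # A short track that moves from the kitchen into the living room.
    track = [(0, 1.0, 1.0), (5, 3.5, 2.0), (9, 3.0, 4.0), (14, 6.0, 4.5)]
    print(symbolize(track))  # [('kitchen', 0), ('living', 9)]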

Sensory Grammars

In our architecture, low-level sensor measurements are interpreted into high-level semantics using a hierarchy of sensory grammars. The goal of this hierarchy is to provide a structured way of translating macroscopic spatial and temporal gestures into predefined actions, in a process akin to the recognition of speech in today's automated speech recognition systems. This interpretation takes place in a bottom-up manner, starting at the sensor nodes and continuing into the network, as shown in the figure below.

 Raw data collected by the network is fed into a hierarchical behavior interpretation framework
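To make the grammar idea concrete, here is a toy sketch of one level of such a hierarchy; the symbols, productions and activity names are invented for illustration and are not the project's actual grammar. It rewrites a stream of room-level symbols into activity tokens, much as a parser reduces words into phrases:

    import re

    # Toy one-level sensory grammar (hypothetical rules). Room symbols are
    # encoded as single characters so that productions can be written as
    # regular expressions over the symbol stream.
    ROOM_CODES = {"kitchen": "k", "living": "l", "bedroom": "b", "bathroom": "t"}

    # Each production rewrites a pattern of room codes into an activity token.
    PRODUCTIONS = [
        (re.compile(r"k(lk)+"), "C"),  # C = cooking: kitchen/living alternation
        (re.compile(r"b{2,}"),  "S"),  # S = sleeping: a long bedroom stay
        (re.compile(r"t"),      "H"),  # H = hygiene: a bathroom visit
    ]

    def parse(rooms):
        """Rewrite a room-symbol sequence into activity tokens, left to right."""
        s = "".join(ROOM_CODES[r] for r in rooms)
        activities, i = [], 0
        while i < len(s):
            for pattern, token in PRODUCTIONS:
                m = pattern.match(s, i)
                if m:
                    activities.append(token)
                    i = m.end()
                    break
            else:
                i += 1  # no production matched; skip this symbol
        return activities

    rooms = ["kitchen", "living", "kitchen", "living", "kitchen",
             "bathroom", "bedroom", "bedroom", "bedroom"]
    print(parse(rooms))  # ['C', 'H', 'S']

Because the output is itself a symbol stream, the same machinery stacks: activity tokens become the terminals of the next grammar level, which is how the hierarchy scales from short gestures to daily routines.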

Custom Motion Discriminative Image Sensors & Lightweight Camera Sensor Networks

To capture high-fidelity motion information, we use a custom bio-mimetic imager architecture in combination with sensory grammars to derive a new family of motion sensors. Such sensors do not exist today: the motion detection provided by passive infrared (PIR) sensors is too coarse for behavior interpretation, while the elaborate video processing required by COTS camera technologies makes the task expensive in both computation and power. The custom sensors we pursue provide a middle ground between cameras and PIR sensors. They support highly accurate motion tracking without requiring image processing and without exposing image information, which makes them ultra-low-power and privacy preserving. More details on the imager architecture can be found on the AERNets website.


A set of wide-angle-lens camera sensor nodes observes the home, localizing, counting and tracking its occupants without revealing their images
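As a rough software analogy for what the imager does (the actual sensor performs this at the pixel level in hardware; see the AERNets website), the sketch below recovers motion centroids from a stream of address events, i.e. bare (x, y) addresses of pixels that detected a change. The event format and window size are assumptions made for the example:

    # Minimal sketch (assumed event format): track motion from an
    # address-event stream. Each event is just the (t, x, y) address of a
    # pixel that fired on a change; no intensities or frames are stored,
    # which is what keeps the sensor low-power and privacy preserving.

    def track_centroids(events, window=10):
        """Group time-sorted (t, x, y) events into fixed windows and emit
        one motion centroid per non-empty window."""
        centroids, bucket, window_end = [], [], None
        for t, x, y in events:
            if window_end is None:
                window_end = t + window
            if t >= window_end:               # close the current window
                if bucket:
                    n = len(bucket)
                    centroids.append((sum(p[0] for p in bucket) / n,
                                      sum(p[1] for p in bucket) / n))
                bucket, window_end = [], t + window
            bucket.append((x, y))
        if bucket:                            # flush the last window
            n = len(bucket)
            centroids.append((sum(p[0] for p in bucket) / n,
                              sum(p[1] for p in bucket) / n))
        return centroids

    # A person moving left to right fires change events along the path.
    events = [(0, 10, 20), (2, 12, 20), (4, 11, 21),
              (12, 30, 22), (13, 31, 22), (15, 29, 23)]
    print(track_centroids(events))  # ~[(11.0, 20.3), (30.0, 22.3)]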


Papers and Reports

J. Fang, A. Bamis and A. Savvides, Discovering Routine Events and their Periods in Sensor Time Series Data, under submission

T. Teixeira, D. Jung, G. Dublon and A. Savvides, PEM-ID: Identifying People by Gait-Matching using Cameras and Wearable Accelerometers, Proceedings of the Third ACM/IEEE International Conference on Distributed Smart Cameras, ICDSC, Como, Italy, August - September 2009

T. Teixeira and A. Savvides, Recognizing Activities from Context and Arm Pose using Finite State Machines, Proceedings of the Third ACM/IEEE International Conference on Distributed Smart Cameras, ICDSC, Como, Italy, August - September 2009

A. Bamis, J. Fang and A. Savvides, Detecting Interleaved Sequences and Groups in Camera Streams for Human Behavior Sensing, Proceedings of the Third ACM/IEEE International Conference on Distributed Smart Cameras, ICDSC, Como, Italy, August - September 2009

A. Bamis and A. Savvides, STFL: A Spatio-Temporal Filtering Language with Applications to Assisted Living, Proceedings of the 2nd International Conference on Pervasive Technologies Related to Assistive Environments, Corfu, Greece, June 2009

T. Teixeira, D. Jung and G. Dublon, Identifying People in Camera Networks using Wearable Accelerometers, Proceedings of the 2nd International Conference on Pervasive Technologies Related to Assistive Environments, Corfu, Greece, June 2009

D. Lymberopoulos, A. Bamis and A. Savvides, A Methodology for Extracting Temporal Properties from Sensor Data Streams, Proceedings of the 7th Annual International Conference on Mobile Systems, Applications and Services (MobiSys), June 2009

A. Yu, A. Bamis, D. Lymberopoulos, T. Teixeira and A. Savvides, Personalized Awareness and Safety with Mobile Phones as Sources and Sinks, Proceedings of Urbansense 2008, a workshop held in conjunction with ACM SenSys 2008

D. Lymberopoulos, A. Bamis and A. Savvides, Extracting Spatiotemporal Human Activity Patterns in Assisted Living using a Home Sensor Network, to appear in a Special Issue of the International Journal on Personal and Ubiquitous Computing, 2009

A. Bamis, D. Lymberopoulos, T. Teixeira and A. Savvides, The BehaviorScope Framework for Enabling Ambient Assisted Living, to appear in a Special Issue of the International Journal on Personal and Ubiquitous Computing, 2009

A. Bamis, D. Lymberopoulos, T. Teixeira and A. Savvides, Towards Precision Monitoring of Elders for Providing Assistive Services, Proceedings of the First International Conference on Pervasive Technologies Related to Assistive Environments, Athens, Greece, July 2008

D. Lymberopoulos, T. Teixeira and A. Savvides, Macroscopic Human Behavior Interpretations Using Distributed Imagers and Other Sensors, Proceedings of the IEEE, October 2008

D. Lymberopoulos, T. Teixeira and A. Savvides, Detecting Patterns for Assisted Living Using Sensor Networks, Proceedings of SensorComm 2007, Valencia, Spain, October 2007

A. Bamis, N. Singh and A. Savvides, (hidden title) how we handle data flows in BScope..., ENALAB Technical Report 090701

D. Lymberopoulos, T. Teixeira and A. Savvides, BScope: A Scalable, Run-Time Architecture for Activity Recognition Using Wireless Sensor Networks, ENALAB Technical Report 040701

T. Teixeira and A. Savvides, Lightweight People Counting and Localizing in Indoor Spaces using Camera Sensor Nodes, Proceedings of ICDSC, Vienna, Austria, September 2007

T. Teixeira, D. Lymberopoulos, E. Culurciello, Y. Aloimonos and A. Savvides, A Lightweight Camera Sensor Network Operating on Symbolic Information, Proceedings of  First Workshop on Distributed Smart Cameras 2006, held in conjunction with ACM SenSys 2006

D. Lymberopoulos, A. Barton-Sweeney, T. Teixeira and A. Savvides, An Easy-to-Program Sensor System for Parsing Human Activities, ENALAB Technical Report 090601, May 2006

D. Lymberopoulos, A. Ogale, A. Savvides and Y. Aloimonos, A Sensory Grammar for Inferring Behaviors in Sensor Networks, Proceedings of Information Processing in Sensor Networks, IPSN 2006, April 2006

T. Teixeira, E. Culurciello, J.H. Park, D. Lymberopoulos and A. Savvides, Address-Event Imagers for Sensor Networks: Evaluation and Modeling, Proceedings of Information Processing in Sensor Networks, Platforms session, IPSN/SPOTS 2006

Jonathan Schwarz, A Grammar-Based System for Game Playing with a Sensor Network, undergraduate project, May 2006


This project is sponsored in part by: