3.1. Concepts
The Titan framework for pervasive applications is shown in Figure 1 and has the following three components.
(i) Mobile Device. A mobile device (typically the user's mobile phone, but it could also be another kind of wearable computer) acts as the central point of the system and the interface with the user. The mobile device discovers available resources in the user's PAN. The user can then query available Pervasive Apps that can be offered with the available resources. The mobile device offers interaction possibilities with the user. It is also one instance of a Titan node (see below) and can similarly execute services (typically those requiring higher computational capabilities than what is available on a sensor node). In addition, it allows for dynamic service download (in the form of Java code). Such services typically form the core logic of the Pervasive Apps.
(ii) Internet Application Repositories. Application templates are hosted on Internet application repositories. They are represented by a set of interconnected services which must be present in the user's PAN for the application to function. Substitutions between services, as well as alternative implementations, are also provided to best exploit the available resources. The composition of the effective service graph to instantiate is also carried out by the Internet application repositories according to the available resources.
(iii) Titan Nodes. This is the sensor networking part of Titan. It consists of firmware on the sensor nodes of the network. It allows the instantiation, reconfiguration, and execution of interconnected services on the sensor nodes, together with the communication in the network and with the mobile device. It essentially realizes the distributed execution of activity recognition algorithms represented as interconnected services in the PAN of the user. It is built upon TinyOS—a common sensor network operating system.
The process of finding suitable Pervasive Apps is shown in Figure 1. The top part shows the PAN of the user and the Titan nodes (in objects or on the body). The mobile phone runs a service directory, which acts as a database for the services available in the service pools of the Titan Nodes. Upon querying an application, the service directory's content is sent to application servers on the Internet to determine possible applications for the given PAN configuration.
Typically, services offered by sensor nodes are related to the typical use of the elements in which they are embedded. However, it is important to note that custom Titan Nodes can be programmed (statically) with custom sets of services, and these services may be of varying complexity. Figure 2 is an example, where nodes 1 and 2 contain sensors. Node 1 is a motion sensor placed on the wrist. It provides services delivering low-level information (raw acceleration). A typical activity recognition chain consists of sensor data acquisition, segmentation, feature extraction, and classification. Here, node 1 has been instructed to execute a service subgraph that splits the sensor data into windows, computes mean and standard deviation features, and locally classifies these features to indicate whether the gesture corresponds to a movement of the hand going to the mouth. Node 2, on the other hand, is a smart cup that provides a manufacturer-supplied high-level service that directly delivers detected activities, such as that the cup has been tilted. Here, no other services are used internally within the node because a specific sensor (e.g., a tilt sensor) delivers readily usable information. Node 3 is only capable of processing. It receives data across the network from the first two nodes and performs decision fusion by correlating movements of the wrist with the tilting of the cup to detect that the user's gesture corresponds to drinking from the cup. The communication between services within a node or across nodes is handled transparently by Titan and is hidden from the programmer.
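The recognition chain executed on node 1 can be illustrated with a short sketch (Python rather than the C firmware actually running on the node; window size, threshold values, and sample data are hypothetical):

```python
import statistics

def windows(samples, size, step):
    """Segmentation service: split the sensor stream into windows."""
    return [samples[i:i + size] for i in range(0, len(samples) - size + 1, step)]

def features(window):
    """Feature extraction service: mean and standard deviation of one window."""
    return statistics.mean(window), statistics.pstdev(window)

def classify(mean, std, mean_range=(0.2, 0.8), std_max=0.3):
    """Toy threshold classifier: flag a 'hand to mouth' gesture when the
    mean lies in a plausible range and the movement is smooth."""
    return mean_range[0] <= mean <= mean_range[1] and std <= std_max

# Hypothetical normalized wrist acceleration samples.
stream = [0.1, 0.3, 0.5, 0.6, 0.5, 0.4, 0.5, 0.6]
detections = [classify(*features(w)) for w in windows(stream, size=4, step=2)]
```

Each function stands in for one service of the subgraph; in Titan, the outputs would travel between services as packets rather than return values.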
While in this work we describe sensor nodes programmed with general-purpose services composed according to the application scenario's needs, we envision that, in the future, some services in sensor nodes will be provided by manufacturers of components of ambient intelligence environments.
3.2. Titan Nodes
Titan defines a programming model where applications, such as activity recognition applications, are described by an interconnected service graph. We refer to Titan Nodes as the nodes of the wireless sensor network that contain the Titan firmware, built on TinyOS [37]. The Titan nodes form the sensor networking component of the Titan framework. They allow the run-time instantiation of distributed applications represented as service graphs. Each Titan node typically executes a subgraph of the entire service graph making up the application.
The architecture of the Titan nodes is shown in Figure 3, and its elements are as follows.
3.2.1. Services and Service Pool
Titan nodes provide a set of services stored within a service pool. Services can implement signal processing functions, classification tasks, sensor readout, or other kinds of processing. Not all Titan nodes implement the same kinds of services. For instance, nodes that do not contain sensors would not offer sensor readout services, while nodes with higher computational capability may offer more computationally intensive services. Services are flashed into the Titan nodes at design time.
Services have a set of input ports, from which they read data, process it, and deliver it to a set of output ports. Connections deliver data from a service output port to a service input port and store the data as packets in FIFO queues.
The services go through the following phases when they are used.
(1) Configuration. At this point, the service manager instantiates a service. To each service, it passes configuration data, which adapts the service to application needs. Configuration data may include, for example, sampling frequency and window size in signal processing services. The service can allocate dynamic memory to store state information.
(2) Runtime. Every time a service receives a packet, a callback function is executed to process the data. Titan provides the service with the state information it set up during configuration. Services are executed in the order in which they receive packets, and each service runs to completion before the next service starts.
(3) Shutdown. This phase is executed when the service subgraph is terminated on the node. All services have to free the resources they have reserved.
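This lifecycle can be modeled with a minimal sketch (Python with hypothetical class and parameter names; the real services are C callbacks in the node firmware):

```python
from collections import deque

class Service:
    """Skeleton of a Titan service: configure, process packets, shut down."""
    def __init__(self):
        self.out_queue = deque()   # FIFO connection toward the next service

    def configure(self, config):
        """Configuration phase: the service manager passes parameters and
        the service sets up its per-instance state."""
        self.state = dict(config)  # e.g. sampling frequency, window size

    def on_packet(self, packet):
        """Runtime phase: callback run to completion for each packet."""
        raise NotImplementedError

    def shutdown(self):
        """Shutdown phase: free the resources the service has reserved."""
        self.state = None

class Scaler(Service):
    """Example service: scales each incoming value by a configured factor."""
    def on_packet(self, packet):
        self.out_queue.append(packet * self.state["factor"])

svc = Scaler()
svc.configure({"factor": 2})
svc.on_packet(21)                  # runtime: process one packet
result = svc.out_queue.popleft()   # downstream service would consume this
svc.shutdown()
```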
3.2.2. Service Manager
The service manager is the component that allows a Titan node to be reconfigured. It instantiates the executed services according to the network manager's requests (see Section 3.3). The service manager is responsible for reorganizing the service subgraph executed on the local sensor node during a reconfiguration.
3.2.3. Dynamic Memory
The dynamic memory module allows services to be instantiated multiple times and reduces the static memory requirements of the implementation. The services can allocate memory in this space for their individual state information. This module is needed because TinyOS does not provide its own dynamic memory management.
3.2.4. Packet Memory
The Packet Memory module stores the packets used by the services to communicate with each other. The packets are organized in FIFO queues, from which services can allocate packets before sending them. This data space is shared among the services.
3.2.5. Connections
Packets exchanged between the services carry a timestamp and information about the length and type of the data they contain. Services reading the packets can decide what to do with different data types. If unknown data types are received, they may issue an error to the service manager, which may forward it to the network manager to take appropriate actions.
To send a packet from one Titan Node to another, Titan provides a communication service, which can be instantiated on both network nodes to transmit packets over a wireless link protocol as shown in Figure 4. During configuration time, the communication service is told which one of its input ports is connected to which output port of the receiving service on the other node. The two communication services ensure a reliable transmission of the packet data. The communication service is automatically instantiated by the network manager to distribute a service graph over multiple sensor nodes. Thus, for the programmer, there is no distinction when a service graph is mapped on one or more Titan nodes.
The recommended maximum size of a packet for Titan Nodes is 24 bytes, as it fits, together with a 5-byte header, into a TinyOS active message. The active message is used to transmit data over wireless links and offers 29 bytes of payload.
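The size budget can be checked with a small sketch (Python; the header layout shown is illustrative, not the actual Titan header format):

```python
import struct

AM_PAYLOAD = 29                      # bytes offered by a TinyOS active message
HEADER_LEN = 5                       # Titan header carried inside the message
MAX_DATA = AM_PAYLOAD - HEADER_LEN   # 24 bytes left for packet data

def frame(src_port, dst_port, dtype, timestamp, data):
    """Pack an illustrative 5-byte header (two port bytes, a type byte,
    a 16-bit timestamp) followed by the payload; reject oversized packets."""
    if len(data) > MAX_DATA:
        raise ValueError("payload exceeds 24-byte recommendation")
    header = struct.pack("!BBBH", src_port, dst_port, dtype, timestamp)
    return header + data

msg = frame(1, 2, 0x10, 1000, b"\x00" * 24)   # exactly fills an active message
```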
3.2.6. Service Manager and Service Discovery
A programmer designs his application by interconnecting services in the form of a service graph. Service parameters as well as location constraints can also be defined.
The mapping of a service graph into executed services is controlled by the network manager. In order to support the network manager, the Titan nodes respond to broadcast service discovery messages originating from the network manager by providing a list of matching services available in the service pool and by providing status information about the node.
The network manager then decides on a partitioning of the full service graph realizing the application and provides the service manager of the Titan nodes with the specific subsets of the service graph to instantiate.
When data needs to be exchanged across nodes, communication services (see Section 3.2.5) are automatically inserted. The resulting service subgraphs containing the services to be executed on every sensor node are then sent to each participating node's service manager, which takes care of the local instantiation as shown in Figure 4. After the configuration has been issued, the network manager keeps polling the service managers about their state and changes the network configuration if needed. On node failures, the network manager recomputes a working configuration and updates the subgraphs on the individual sensor nodes where changes need to be made, resulting in a dynamic reorganization of the network as a whole.
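The insertion of communication services on edges that cross node boundaries can be sketched as follows (Python; service names and node identifiers are illustrative):

```python
def insert_comm_services(edges, placement):
    """For each edge (src, dst) of the service graph, keep node-local edges
    unchanged and split remote edges with a pair of communication services."""
    result = []
    for src, dst in edges:
        if placement[src] == placement[dst]:
            result.append((src, dst))                   # same node: direct FIFO
        else:
            tx = f"comm_tx@{placement[src]}"            # sender-side comm service
            rx = f"comm_rx@{placement[dst]}"            # receiver-side comm service
            result += [(src, tx), (tx, rx), (rx, dst)]  # wireless hop in between
    return result

# Hypothetical placement: sensing and features on node 1, fusion on node 3.
edges = [("acc", "features"), ("features", "fusion")]
placement = {"acc": 1, "features": 1, "fusion": 3}
wired = insert_comm_services(edges, placement)
```

The individual services never see the inserted pair, which is what makes a mapping onto one or several nodes indistinguishable to the programmer.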
3.2.7. Synchronization
When sensors are sampled at two sensor nodes and their data is delivered to a third node for processing, the data streams may not be synchronized due to differing processing and communication delays in the data path. As a consequence, a single event measured at the two nodes can be mistaken for two.
If the two sensor nodes are synchronized by a timing synchronization protocol, a timestamp can be added to the data packet when it is measured. The data streams can then be synchronized by matching incoming packets with corresponding timestamps. Timing protocols with an accuracy of a few tens of microseconds have been implemented on TinyOS [38, 39].
If the two sensor nodes are not synchronized, the sensor data can be examined as in [40]. The idea is to wait until an event occurs that all sensors can measure, for example, a jump for accelerometers on the body. Subsequent packets reference their timestamp to the last occurrence of the event. This functionality is provided in the Synchronizer service.
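The event-based approach can be sketched as follows (Python; sample timestamps, clock offsets, and the tolerance value are hypothetical):

```python
def relative_times(timestamps, event_time):
    """Re-reference local timestamps to the last common event, so two
    unsynchronized nodes agree on a shared time origin."""
    return [t - event_time for t in timestamps]

def align(stream_a, stream_b, event_a, event_b, tolerance):
    """Pair up packets from two nodes whose event-relative times differ
    by at most `tolerance` (all values in local clock ticks)."""
    rel_a = relative_times(stream_a, event_a)
    rel_b = relative_times(stream_b, event_b)
    pairs = []
    for i, ta in enumerate(rel_a):
        for j, tb in enumerate(rel_b):
            if abs(ta - tb) <= tolerance:
                pairs.append((i, j))
    return pairs

# Node clocks differ by 500 ticks; the common event (e.g. a jump) is
# observed locally at t=1000 on node A and t=1500 on node B.
pairs = align([1010, 1100], [1510, 1650], event_a=1000, event_b=1500, tolerance=20)
```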
3.3. Mobile Device
The mobile device is the interface between the user, the sensor network, and the Internet application repositories. The mobile device contains a network manager that controls the mapping and execution of the service graph on the Titan nodes, a service directory that contains a list of all available services discovered in the PAN, and a set of service graphs (representing various applications) waiting to be mapped to the sensor network. In addition, it can execute custom application logic services downloaded from the Internet application repositories, in the form of Java code.
3.3.1. Mapping Services to Network Nodes
When the execution of a specific service graph is requested, the network manager first inspects the capabilities of the sensor nodes in the environment by broadcasting a service discovery message containing a list of services to be found. Every node within a certain hop-count responds with the matching services it has in its service pool. From this information, the network manager builds the service directory.
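Building the service directory from the discovery responses can be sketched as (Python, with hypothetical node addresses and service names):

```python
def build_directory(responses):
    """Merge discovery responses into a directory mapping each service
    name to the addresses of the nodes offering it."""
    directory = {}
    for node_addr, services in responses:
        for service in services:
            directory.setdefault(service, []).append(node_addr)
    return directory

# Hypothetical responses to a broadcast service discovery message.
responses = [
    (1, ["acc_sensor", "mean", "stddev"]),   # wrist-worn motion sensor
    (2, ["tilt_detect"]),                    # smart cup
    (3, ["mean", "fusion"]),                 # processing-only node
]
directory = build_directory(responses)
```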
The network manager then optimizes service allocation such that the overall energy consumption is minimized. For this purpose, it uses a metric summing up the main energy consumers, namely wireless communication, sensors and actuators, and the processing resources needed. The result of this allocation is communicated to the service manager of the concerned Titan nodes in the form of service subgraphs. Each node typically receives a subset of the overall service graph, thereby leading to a distributed execution of the entire service graph on multiple Titan nodes.
The service manager on the Titan Nodes then takes care of service instantiation and ensures that the data generated by one service is delivered to the next service according to the specification of the service graph. This occurs transparently, such that individual services are not aware of whether the next service is executed locally or whether the data first has to be transmitted to another sensor node.
Titan nodes can also invoke the network manager at run time to ask for a reconfiguration (e.g., if the battery runs low). During the execution of the service graph, the network manager monitors the network via the service manager on the Titan nodes to determine whether problems occur. In case a node fails, a new mapping of the service graph can be issued.
The task of the network manager can be formally described as mapping a service graph G_S = (S, E_S), where S is the set of services and E_S ⊆ S × S is the set of interconnections between them, onto a network graph G_N = (N, E_N). The network graph is described by a set of nodes N and communication links E_N ⊆ N × N. The network manager's goal is to find a mapping M: S → N such that a given cost function C(M) is minimized.
Various cost functions targeting different tradeoffs have been proposed for such a task, such as the minimization of transmission cost or total energy consumed, or the maximization of network lifetime [41]. We use here a metric targeting the minimization of the total energy used in the network. The cost function makes use of a model of the sensor node, using values stemming from benchmarking the Titan implementation on real sensor nodes (see Section 4 and [21]) with a TI MSP430 microcontroller and a CC2420 transceiver. The metric used for the evaluation relies on three main cost functions.
(i) Processing Cost C_P(s, n). The cost of processing service s on node n. This cost also provides a measure of whether enough CPU cycles are available to execute all services of the subset assigned to the given node. To obtain an energy value, the time for processing on the node's microcontroller is determined and multiplied by the power consumption difference between active and standby mode.
(ii) Sensor Cost C_Sens(s, n). The cost of using the sensor required by service s on node n to collect data for the algorithm. As sensors can usually be turned off when not sampling, this cost value describes the additional energy dissipated on the node while sampling and includes possible duty cycling.
(iii) Communication Cost C_Comm(e, n). The cost of communicating data over interconnection e for node n. The communication cost is zero for two services communicating within the same node. For external communication, it prioritizes intracluster communication and introduces penalties for cross-cluster communication. The cost is determined per message and includes the energy dissipated at the sending and receiving side.
The mapping is constrained by the maximum processing power P_max(n) and communication rate R_max(n) a node n can support. These limits ensure the executability of the tasks on the nodes and guarantee that the maximum transmission capacity is not exceeded, without modeling node load and scheduling overhead explicitly. Consequently, there is no guarantee that latency requirements on the algorithm can be met. The constraints are given for the service graph subset S_n ⊆ S assigned to a node n:

Σ_{s ∈ S_n} C_P(s, n) ≤ P_max(n),    Σ_{e ∈ E_out(n) ∪ E_in(n)} r(e) ≤ R_max(n),

where r(e) denotes the data rate over interconnection e. Each interconnection e ∈ E_S is mapped to an edge of the network graph and added to the two sets E_out(n) and E_in(n) as an outgoing and an incoming connection of the nodes involved. Failure to meet the constraints means that the service graph is not implementable; in such a case, the execution cost is set to infinity.

The total execution cost of the network is obtained by summing up all costs incurred at the nodes participating in the execution:

C(M) = Σ_{n ∈ N} ( Σ_{s ∈ S_n} [ C_P(s, n) + C_Sens(s, n) ] + Σ_{e ∈ E_out(n) ∪ E_in(n)} C_Comm(e, n) ).
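This cost summation can be sketched as follows (Python; the cost tables and the processing limit are hypothetical values, not measured node parameters):

```python
import math

def mapping_cost(services, edges, placement, proc_cost, sensor_cost,
                 comm_cost, proc_limit):
    """Sum processing, sensor, and communication costs over all nodes;
    return infinity if any node exceeds its processing budget."""
    per_node_proc = {}
    total = 0.0
    for s in services:
        n = placement[s]
        per_node_proc[n] = per_node_proc.get(n, 0.0) + proc_cost[s]
        total += proc_cost[s] + sensor_cost.get(s, 0.0)
    for src, dst in edges:
        if placement[src] != placement[dst]:   # node-local edges cost nothing
            total += comm_cost[(src, dst)]
    if any(load > proc_limit for load in per_node_proc.values()):
        return math.inf                        # constraint violated
    return total

services = ["acc", "mean"]
edges = [("acc", "mean")]
placement = {"acc": 1, "mean": 3}              # cross-node edge: comm cost applies
cost = mapping_cost(services, edges, placement,
                    proc_cost={"acc": 1.0, "mean": 2.0},
                    sensor_cost={"acc": 0.5},
                    comm_cost={("acc", "mean"): 10.0},
                    proc_limit=5.0)
```

The dominance of the communication term in this toy instance mirrors the observation below that mappings tend to keep communication-intensive connections on a single node.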
The costs introduced above depend on the device type to which they apply. The parameters for the device model and service models are sent to the service directory along with the node address upon service discovery. The service model in particular includes a mapping to determine the output data rate given a certain input data rate and the service parameters in the service graph description. When determining execution cost, the network manager first derives an estimate of the data communicated from service to service by propagating the data rates generated by each service to its successors. The individual cost functions make use of the service models and device models to produce the total mapping cost.
The contributions of the individual cost components vary with the application that is executed and the network it is running on. Typically, communication costs dominate, as a microcontroller can perform roughly 1000 instructions for the energy required to send 1 bit over the air [42]. Sensor costs, on the other hand, are usually constant as long as the sensors actually used have similar energy consumption per sample. The mapping thus tries to keep communication-intensive connections between services on a single node. In most applications, this means drawing as much processing as possible toward the data source, as processing in most cases reduces the communication rate. In the case of activity recognition algorithms, this means that processing such as data filtering and feature extraction is preferably run on the Titan node containing the sensors.
An exhaustive search for the best mapping is intractable for service graphs and networks of moderate size, as the search space grows with |N|^|S| (see [43]). Therefore, we use a genetic algorithm (GA) to optimize the mapping, as GAs are known to be robust optimization tools for complex search spaces [44]. The GA parameters are selected to favor convergence to the global optimum by choosing a large population size, avoiding premature convergence, and performing several runs; the best result obtained across the runs is retained.
The service graph is encoded for the GA as a chromosome with one gene for every service in the service graph. Each gene contains the set of nodes in the network providing the corresponding service. Mutations are applied by moving services from one node to another. Crossovers arbitrarily select two chromosomes, randomly pick a gene, and swap that gene and all its successors between the two chromosomes, which are then added to the population. The fitness of the chromosomes is evaluated using the cost metric given above.
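The mutation and crossover operators can be sketched as follows (Python; service names, provider sets, and the flat service ordering are illustrative):

```python
import random

def mutate(chromosome, providers, rng):
    """Move one randomly chosen service to another node offering it.
    `chromosome` maps service -> node; `providers` maps service -> the
    candidate nodes, i.e. the gene's allowed values."""
    child = dict(chromosome)
    service = rng.choice(sorted(child))
    child[service] = rng.choice(providers[service])
    return child

def crossover(a, b, order, rng):
    """Pick a random gene and swap it and all its successors (taken in
    service graph order) between the two parent chromosomes."""
    cut = rng.randrange(len(order))
    child_a, child_b = dict(a), dict(b)
    for service in order[cut:]:
        child_a[service], child_b[service] = b[service], a[service]
    return child_a, child_b

rng = random.Random(0)
providers = {"acc": [1], "mean": [1, 3], "fusion": [3]}
parent = {"acc": 1, "mean": 1, "fusion": 3}
child = mutate(parent, providers, rng)
```

Offspring fitness would then be evaluated with the cost metric of Section 3.3.1, and infeasible mappings (infinite cost) die out of the population.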
Once the implementation of the service graph with the lowest cost has been found, the service graph subsets are sent to the service managers of the individual Titan nodes for execution. Additional aspects related to modeling and convergence speed are discussed in [43].
3.3.2. Application Logic as Services
The logic of Pervasive Apps is likely unique to each application. Thus, it does not lend itself to being realized by generic services, such as the ones provided by Titan nodes. To enable a large variety of Pervasive Apps, Titan allows application-specific services to be downloaded from application repositories to the mobile device, in the form of Java code (this is the "control service" in Figure 1).
This Java code can access all the features of the mobile device (usually a mobile phone), such as the screen, touch input, and audio output.
In other respects, the downloaded Java services follow the same service model as the Titan nodes and can interact with them. Thus, the service running on the mobile device forms part of the service graph describing the application, exactly like any other sensor node. In particular, the Java services have access to packet communication methods to exchange data with the other services running on the Titan nodes. Since the Titan nodes use an 802.15.4 radio, we have built a custom Bluetooth to 802.15.4 gateway to allow communication between the mobile device and the Titan nodes. The Java service thus communicates over Bluetooth to the gateway, and the gateway relays the data to the 802.15.4 interface.
The Titan network manager additionally provides a Java API that can be used by the Pervasive App to dynamically reconfigure the network with new service graphs. This allows tailoring the processing to the current Pervasive App state and turning unneeded sensors to low-power states.
3.4. Internet Application Repositories
Upon query by the user for available Pervasive Apps, the mobile device transfers the content of the available services in the user's PAN (i.e., the service directory) to the Internet application repository. The Internet application server then returns the applications that are possible given the available services and composes at run time the service graph to be effectively executed.
The application servers are databases storing application templates as service graphs. These templates use services that may or may not exist in the PAN. Each individual service in the application template may have multiple, functionally equivalent implementation possibilities involving one or more services. For instance, if a sensor node is not capable of executing an FFT, features such as zero crossings and amplitude range might be used instead. At runtime, the application servers use service composition algorithms to create a feasible application by combining libraries of template service graphs in their database. An efficient implementation has been shown in [45]. Figure 1 shows one example application template containing a service M, which is not available in the PAN. Consequently, it is replaced by a functionally equivalent service graph containing the services E, F, and G, which are all available in the service pool of the smart dice.
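The substitution step can be sketched as follows (Python; the service names mirror the Figure 1 example, and the flat service-list representation is a simplification of the actual graph composition):

```python
def compose(template, available, alternatives):
    """Return a service list for the template, substituting each service
    missing from the PAN with a functionally equivalent subgraph; return
    None if no feasible composition exists."""
    result = []
    for service in template:
        if service in available:
            result.append(service)
        elif service in alternatives and all(s in available
                                            for s in alternatives[service]):
            result.extend(alternatives[service])   # e.g. M -> [E, F, G]
        else:
            return None                            # application not offerable
    return result

available = {"A", "B", "E", "F", "G"}              # services found in the PAN
alternatives = {"M": ["E", "F", "G"]}              # equivalent subgraph for M
plan = compose(["A", "M", "B"], available, alternatives)
```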
Replacements are also possible by allowing new services to be added to the service pool at runtime, for example, by means of wireless reprogramming or virtual machines. In this case, the application server may offer to download a particular service rather than compose its alternatives. This feature is especially useful for application-specific services which are not easily modeled by generalized services, which is usually the case for the main application logic. We use this approach in Section 4 to download a specific Java monitoring service to the mobile phone.
A composed application consists of one or more service graphs and a control service (application logic). The control service runs on the mobile device and instructs the network manager when to exchange the service graph currently executed in the PAN for another one. Using multiple service graphs in an application allows restricting the processing to only what is needed in the moment and turning sensor nodes that do not participate into power save mode until they are needed again.