
    Synthetic Sensors for Smart Spaces

    The Internet of Things (IoT) has opened up new possibilities by connecting any “thing” – an object, a living being, or a place – to the Internet. It has allowed billions of physical devices around the world to exchange data with each other by adding a level of digital intelligence to them, making things smarter and more responsive and creating value in new ways. As a result, spaces or facilities that were previously unmonitored or lacked telemetry can be transformed into smart spaces that provide a better understanding of their condition. The real-time and historical data collected can be used to improve safety, operations, and experiences, as well as to increase cost savings, mitigate risks, and reduce carbon footprints.

    Different Approaches in Environmental Sensing

    The promise of smart homes, factories, offices, classrooms and public areas largely depends on robust sensors that monitor the environment. Therefore, sensor technology plays an important role in transforming a space through connectivity.

    The most common approach is to upgrade the space with smart devices, machines, and appliances. However, these devices are expensive and carry significant upgrade costs. Most smart devices also have limited interoperability and rarely interact with each other, so they create isolated silos of smartness and hinder a holistic connected experience.

    A more flexible option to sidestep this issue is to use different sensor tags to capture a variety of events and states in the environment directly. Special-Purpose Sensing and Distributed Sensing are two popular direct sensing methods that utilise physically coupled sensors to monitor one or multiple environmental facets of interest.

    Special-Purpose Sensing

    Most often, a special-purpose sensor is used to monitor a specific element of the environment. Such sensors can be robust for well-defined, low-dimensional sensing problems, but because of their one-to-one relationship with the facet being sensed, there is no notion of generality. For example, an occupancy sensor can only detect occupancy, while a door sensor can only detect whether a door is open or closed. A single space can contain hundreds of complex environmental facets worth sensing, and since each facet requires its own dedicated sensor, this approach can be costly to deploy and maintain, as well as aesthetically and socially obtrusive.

    Distributed Sensing

    With an array of sensors that are networked together, distributed sensing allows users to increase sensing area and fidelity. A distributed sensor system can be homogeneous (an array of identical sensor types) or heterogeneous (a mix of different sensor types). As shown in Figure 1, these sensors can be used to sense either a single facet (many-to-one) or multiple facets (many-to-many). The downside of distributed sensor systems is that they can be expensive and are highly dependent on the quality of their sensor distribution.


    Figure 1: Approaches in environmental sensing

    Since the direct sensing approach requires many sensor tags to monitor an entire environment, the overall cost of sensor supply, deployment, and maintenance is enormous. In addition, powering direct sensors can be problematic. Because there is rarely a dedicated power outlet for each sensor, they are usually battery-powered, which is a burden as the batteries need to be recharged periodically. Alternatively, laying wires across the room to power multiple sensors from an existing outlet limits possible sensor locations and is aesthetically displeasing.

    General-Purpose Sensing

    A general-purpose sensor can be attached to different objects without any modification or can sense multiple facets without being physically connected to objects. In an ideal situation, a single, omniscient sensor is able to capture a large context of the environment. In order to achieve this one-to-many relationship, the sensors must be inherently indirect.

    Indirect sensors offer greater flexibility in sensor placement and are less aesthetically and socially disruptive. As they do not need to be physically coupled to objects, they can be mounted near a power outlet, eliminating the need for batteries.

    Computer vision is a close example of general-purpose sensing, where machine learning (ML) is used to process the video feed captured by a camera and monitor multiple environmental facets indirectly (e.g. whether a room is occupied, the lights are on, or a faucet is running). Although camera-based approaches are accurate, versatile, and powerful, they are also known for their high level of privacy invasion and social intrusiveness.

    Synthetic Sensors

    Virtualisation of low-level data into an interpretation of human semantics is one of the key aspects of general-purpose sensing. To achieve this, researchers from the Human-Computer Interaction Institute at Carnegie Mellon University have developed a novel prototype sensor board that combines distinct sensory capabilities into a single sensor unit and becomes the eyes and ears of the room. Their device can thereby transform different types of ordinary household items and appliances into smart devices.

    As shown in Figure 2, their sensor board consists of nine physical sensors (a PIR motion sensor, EMI sensor, accelerometer, microphone, combined temperature/barometric pressure/humidity sensor, magnetometer, Grid-EYE IR array sensor, colour/illumination sensor, and Wi-Fi RSSI) that can capture twelve unique sensor dimensions. All sensors are strategically placed on the board, with the analogue and digital components separated to isolate electrical noise and ensure optimal performance.

    For data processing, the board has an STM32F205 microcontroller with a 120 MHz CPU, and the sensor tag connects to the Internet via Wi-Fi. It is powered via a USB 2.0 connector so that it can be easily plugged into an existing power outlet. The sampling rate of each sensor was tuned experimentally to capture different environmental events.

    Instead of transmitting raw data from the sensor tag, only selected features of sensor data are transmitted to the cloud. Sensor signals are also anonymised, preventing the original signal from being reconstructed. This helps minimise network overheads and data privacy issues. They also leverage sensor activation groups (subsets of sensors that are activated during an event) to improve system robustness in the classification of different events.
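
    To make the idea of transmitting features rather than raw data concrete, below is a minimal sketch of on-device featurisation. The specific features (statistical summaries and coarse FFT magnitude bins) and names are illustrative assumptions, not the actual firmware of the CMU board; the point is that only irreversible summaries of the raw signal leave the device.

```python
import numpy as np

def featurise_window(samples: np.ndarray, n_fft_bins: int = 16) -> dict:
    """Summarise one window of raw sensor samples into a compact feature vector."""
    spectrum = np.abs(np.fft.rfft(samples))        # magnitude spectrum of the window
    # Pool the spectrum into coarse bins; the raw signal cannot be reconstructed.
    fft_features = [float(b.mean()) for b in np.array_split(spectrum, n_fft_bins)]
    return {
        "min": float(samples.min()),
        "max": float(samples.max()),
        "mean": float(samples.mean()),
        "std": float(samples.std()),
        "fft_bins": fft_features,
    }

# Example: featurise a one-second window of (stand-in) microphone samples.
window = np.random.randn(4000)
payload = featurise_window(window)   # this summary, not the raw audio, is uploaded
```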


    Figure 2: Customised general-purpose sensor tag

    These general-purpose sensors, which abstract low-level data (e.g. vibration, light, temperature, and EMI) into high-level, user-centred environmental facets (e.g. coffee ready, faucet running, oven door open, gas stove burner on), are called “Synthetic Sensors”.

    These sensors capture rich, nuanced data and translate them into context-specific information that reveals what is happening in the room. Although they are virtual, they can be treated as conventional physical sensor feeds to trigger user-defined functions and develop responsive applications.
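
    As a rough illustration of how a virtual feed can be consumed like a physical one, the sketch below shows user code subscribing to a named synthetic sensor and reacting to its events. The SyntheticSensorBus class and the feed names are hypothetical, not part of the original system.

```python
from collections import defaultdict
from typing import Callable

class SyntheticSensorBus:
    """A minimal publish/subscribe hub for synthetic sensor events."""
    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, sensor_name: str, handler: Callable[[dict], None]):
        self._handlers[sensor_name].append(handler)

    def publish(self, sensor_name: str, event: dict):
        for handler in self._handlers[sensor_name]:
            handler(event)

bus = SyntheticSensorBus()
bus.subscribe("coffee_ready", lambda e: print("Coffee is ready:", e))

# When the classifier fires, downstream code is triggered like any sensor feed.
bus.publish("coffee_ready", {"confidence": 0.97, "timestamp": "08:02:15"})
```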

    Figure 3: List of synthetic sensors studied in different environment settings and weighted breakdown of their feature merit

    Figure 3 displays a list of synthetic sensors tested across a range of environmental contexts. The sensors exhibited high accuracy and discriminative power: an average accuracy of 96% was achieved across 38 different synthetic sensors tested at five different locations. The clear majority of sensors performed at nearly 100% accuracy, and only three sensors performed in the 60% accuracy range. If their activation groups are mutually exclusive, synthetic sensors can also differentiate simultaneous events occurring in the same room.

    How it works

    As shown in Figure 4, raw data collected from the low-level sensors is fed into a featurisation layer in the embedded software, which extracts features. The features of each sensor are then concatenated and sent securely to a cloud server.


    Figure 4: How synthetic sensors work

    Next, the activated sensor channels are identified using an adaptive background model (a rolling mean and standard deviation for each sensor). An activation group is then formed from all triggered sensors and fed to the machine learning (ML) layer for classification. The ML layer supports two learning modalities: manual training (supervised, labelled data) and automatic learning (unsupervised, two-stage clustering).
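
    A minimal sketch of the activation-detection idea is shown below: each channel keeps a rolling mean and standard deviation, and a reading that deviates strongly from that background marks the channel as activated. The window size, the 3-sigma threshold, and the channel names are illustrative assumptions rather than the system's actual parameters.

```python
from collections import deque
import random
import statistics

class ChannelBackground:
    """Adaptive background model for one sensor channel (rolling mean and std)."""
    def __init__(self, window: int = 300, sigma: float = 3.0):
        self.history = deque(maxlen=window)
        self.sigma = sigma

    def update(self, value: float) -> bool:
        """Add a reading and report whether it deviates from the background."""
        activated = False
        if len(self.history) > 30:                     # need some history first
            mean = statistics.fmean(self.history)
            std = statistics.pstdev(self.history) or 1e-9
            activated = abs(value - mean) > self.sigma * std
        self.history.append(value)
        return activated

backgrounds = {"vibration": ChannelBackground(), "audio": ChannelBackground()}
for _ in range(100):                                   # warm up with baseline noise
    for bg in backgrounds.values():
        bg.update(random.gauss(0.0, 0.05))

readings = {"vibration": 1.5, "audio": 0.01}           # a strong vibration spike arrives
activation_group = {name for name, bg in backgrounds.items()
                    if bg.update(readings[name])}      # e.g. {"vibration"}
# The activation group is then passed to the ML layer for event classification.
```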

    Higher-Order Synthetic Sensors

    First-order synthetic sensors can only provide binary output (e.g. on or off, open or closed, available or not). Hence, the researchers have also explored the possibility of building second-order synthetic sensors that provide richer (non-binary) semantics. Here, the outputs of first-order sensors act as inputs (ML features) for the second-order synthetic sensors.

    They found that second-order sensors can capture higher-level semantics of objects and environments such as state, count, and duration.

    1. State – With several first-order synthetic sensors (microwave running, microwave door status, keypad press, completion chime), they created a “microwave state” sensor to detect whether a microwave is available, in use, interrupted, or finished.
    2. Count – Second-order synthetic sensors can count the occurrences of first-order events. For example, a “count sensor” was built to track the number of paper towels dispensed without placing a physical sensor on the dispenser.
    3. Duration – Similarly, second-order synthetic sensors can track the cumulative duration of a first-order event. They built sensors to estimate microwave usage time (in seconds) and tap water consumption (in millilitres). A minimal sketch of count and duration sensors follows this list.
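
    The sketch below illustrates how hypothetical second-order count and duration sensors could be built purely from the binary outputs of first-order synthetic sensors. The event format and sensor names are assumptions for illustration, not the original implementation.

```python
from dataclasses import dataclass

@dataclass
class FirstOrderEvent:
    sensor: str        # e.g. "towel_dispensed", "microwave_running"
    state: bool        # binary first-order output
    timestamp: float   # seconds

class CountSensor:
    """Counts rising edges (False -> True) of one first-order sensor."""
    def __init__(self, sensor: str):
        self.sensor, self.count, self._last = sensor, 0, False

    def update(self, event: FirstOrderEvent):
        if event.sensor == self.sensor:
            if event.state and not self._last:
                self.count += 1
            self._last = event.state

class DurationSensor:
    """Accumulates how long one first-order sensor has been True."""
    def __init__(self, sensor: str):
        self.sensor, self.total, self._on_since = sensor, 0.0, None

    def update(self, event: FirstOrderEvent):
        if event.sensor != self.sensor:
            return
        if event.state and self._on_since is None:
            self._on_since = event.timestamp
        elif not event.state and self._on_since is not None:
            self.total += event.timestamp - self._on_since
            self._on_since = None

towels = CountSensor("towel_dispensed")
microwave = DurationSensor("microwave_running")
for e in [FirstOrderEvent("towel_dispensed", True, 1.0),
          FirstOrderEvent("towel_dispensed", False, 1.5),
          FirstOrderEvent("microwave_running", True, 10.0),
          FirstOrderEvent("microwave_running", False, 70.0)]:
    towels.update(e)
    microwave.update(e)
# towels.count == 1, microwave.total == 60.0 seconds
```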

    Feeds from second-order sensors can be used to monitor complex states of objects that go beyond ON and OFF to schedule alerts and notifications (notify when paper towels are running low) and monitor behavioural changes (water consumption patterns of users).

    Likewise, first-order and second-order synthetic sensors can be fed into third-order synthetic sensors to encapsulate even richer semantics. There is no reason to stop at this level; composition can continue up to the Nth order to monitor even more sophisticated events. For instance, appliance-level sensors can be fed into a kitchen-level sensor, then a home-level sensor, and so on. Such sensors can be used to observe more complex human activities and the health of assets.

    Figure 5: Levels of Synthetic Sensors
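
    As a toy illustration of this hierarchical composition, a hypothetical kitchen-level sensor could be derived from a handful of appliance-level feeds; the feed names below are assumptions.

```python
from typing import Dict

def kitchen_in_use(appliance_states: Dict[str, bool]) -> bool:
    """Kitchen-level sensor: the kitchen is in use if any appliance-level feed is active."""
    return any(appliance_states.values())

print(kitchen_in_use({"microwave_in_use": False,
                      "stove_burner_on": True,
                      "faucet_running": False}))   # -> True
```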

    A Future with Synthetic Sensors

    Synthetic sensors offer a high level of versatility for general-purpose sensing while minimising privacy implications. They work across many different environments, such as houses, offices, public spaces, workshops, and industrial settings. They can answer many questions about environmental facets, except those that require a camera (e.g. what is written on the whiteboard, or who is sleeping on the couch).

    Synthetic sensors empower a wide range of applications that rely on event-based, object-oriented models rather than declarative models driven by sensor thresholds. They can potentially connect environmental facets and enable us to monitor them like never before.

    Human environments are noisy and complex, making it difficult to accurately identify what is going on. A truly universal sensor should be able to recognise changing input parameters and distinguish between true events and false triggers, and the system should remain robust even when a new device is added or an existing device is moved from its original location. Synthetic sensors show promising results in solving these problems, although further work is needed to advance the machine learning models towards accurate classification with minimal false positives. Synthetic sensors with advanced machine learning algorithms will power digital twins and be a starting point for Industry 4.0.
