
    Dawn of a New Era for Artificial Vision

    How does the human eye work? When an animal is camouflaged against its background, it is hard to spot; but the moment it moves, we see it immediately and its invisibility is lost. This is because our eyes are more sensitive to movement than to static imagery. When the view is static, the retinal cells are relatively quiet, but when objects in the view move, the retinal cells become strongly active. In other words, retinal cells transmit signals in response to change.

    The retinomorphic sensor, proposed by Cinthya Trujillo Herrera and John Labram of Oregon State University in the US, is a photosensitive capacitor that mimics certain aspects of a biological retina. The capacitor has a bottom layer of silicon dioxide, an insulator that does not respond to light, and a top layer of the light-sensitive perovskite methylammonium lead iodide. The perovskite changes the device's capacitance under changing illumination; when the capacitor was placed in series with a resistor and exposed to light, large voltage spikes appeared across the resistor. Notably, the spikes decayed quickly when the light intensity was held constant. The sensor therefore generates a signal only when the light intensity changes, just like the biological retina, a behaviour not seen in existing cameras and photosensitive sensors. The developers also built a model to simulate an array of retinomorphic sensors. One test used footage of a bird flying into view, resting on a bird feeder and taking off: the scene vanished from the sensor output while the bird sat still at the feeder and reappeared when it flew off. Labram says, “The new design thus inherently filters out unimportant information, such as static images, providing a voltage solely in response to movement.”
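    The capacitor-in-series-with-a-resistor readout described above behaves like a first-order high-pass filter: the resistor voltage spikes when illumination changes and decays back toward zero when it is constant. The following is a minimal sketch of that qualitative behaviour; the time constant and light values are illustrative, not measured device parameters from the Oregon State work.

```python
# Toy model of the retinomorphic readout: a first-order high-pass
# filter driven by a light-intensity signal. Output spikes on a
# change in illumination and decays under constant illumination.

def retinomorphic_response(light, dt=0.001, tau=0.01):
    """Return the resistor voltage for a sequence of light intensities.

    `tau` is an assumed RC time constant; `dt` is the sample period.
    """
    alpha = tau / (tau + dt)          # discrete high-pass coefficient
    v, out = 0.0, []
    prev = light[0]
    for x in light:
        v = alpha * (v + x - prev)    # spike on change, then decay
        prev = x
        out.append(v)
    return out

# Darkness, then a step to bright light that is held constant:
signal = [0.0] * 100 + [1.0] * 400
v = retinomorphic_response(signal)
print(round(v[100], 3))   # large spike right after the step
print(round(v[499], 6))   # decays toward zero under constant light
```

    Held-constant light produces no sustained output, which is the filtering of static imagery that Labram describes.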

    This sensor is the product of research in the evolving field of neuromorphic engineering, which develops analog and asynchronous digital electronic circuits that imitate the neural architectures found in biological systems. Such systems promise to be adaptive, fault tolerant and scalable, processing information with the same energy-efficient, asynchronous, event-driven methods as the biological models they mimic. Retinomorphic sensors should give artificial vision systems a significant boost.

    For years, the video cameras we use have generated video by capturing a fixed number of frames each second. With frame-based vision, some of an object's motion between frames is lost (depending on the frame rate), while the static background is captured over and over. Natural vision does not work this way: as discussed above, the cells in the retina report to the brain only when a change in the view is detected. This captures the important information without wasting time and energy reprocessing images of the unchanged objects and backgrounds in the view. This is known as event-based vision, and the retinomorphic sensor's output is purely event-based. These sensors could be highly disruptive in fields such as autonomous vehicles, IoT, artificial intelligence and deep learning, industrial automation and healthcare.
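    The difference between frame-based and event-based readout can be sketched as follows: instead of re-transmitting every pixel of every frame, an event-based sensor emits an (x, y, polarity) event only for pixels whose intensity changed beyond a threshold. The frames and threshold below are made up for illustration and are not from the article.

```python
# Event-based readout sketch: compare two frames and emit events only
# where intensity changed by more than `threshold`. Polarity records
# whether the pixel got brighter (+1) or darker (-1).

def events_between(prev_frame, frame, threshold=0.1):
    events = []
    for y, (row_prev, row) in enumerate(zip(prev_frame, frame)):
        for x, (p, c) in enumerate(zip(row_prev, row)):
            if abs(c - p) > threshold:
                polarity = 1 if c > p else -1
                events.append((x, y, polarity))
    return events

# A static 3x3 background, then one "pixel" brightens (e.g. a moving object):
f0 = [[0.2, 0.2, 0.2]] * 3
f1 = [[0.2, 0.2, 0.2], [0.2, 0.9, 0.2], [0.2, 0.2, 0.2]]

print(events_between(f0, f0))   # [] -- static scene produces no events
print(events_between(f0, f1))   # [(1, 1, 1)] -- only the changed pixel
```

    A static scene produces no events at all, while a moving object produces a sparse stream of changed pixels, which is why event-based readout saves bandwidth and power compared with resending full frames.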

    The biggest change when ordinary sensors are replaced with retinomorphic sensors is that processes which run in discrete time shift to continuous time, something inaccessible with traditional cameras. Cameras built with these sensors can therefore detect and track features even during the blind time between the frames of a standard camera, a great advantage for tasks such as object segmentation, visual odometry and scene understanding. Faster detection makes automatic emergency braking faster, which in turn makes autonomous vehicles safer. And with their low computational overhead and cost, many such sensors can be used together to increase redundancy in autonomous driving. These low-power sensors are well suited to next-generation ‘always-on’ devices; indeed, they could let our mobile phones, wearables and IoT devices last longer on battery power while still offering advanced vision processing. – Written by Manisha Hettiaratchi.
