
Sony Announces World’s First Intelligent Vision Sensors with AI Processing Functionality

Sony today announced the upcoming release of two models of intelligent vision sensors, the first image sensors in the world to be equipped with AI processing functionality. The new sensors feature a stacked configuration consisting of a pixel chip and a logic chip, and are the first to carry AI image analysis and processing functionality on the logic chip itself. The signal acquired by the pixel chip is processed by AI directly on the sensor, eliminating the need for high-performance processors or external memory and enabling the development of edge AI systems.

The sensor outputs metadata (semantic information belonging to the image data) instead of image information, reducing data volume and addressing privacy concerns. The AI capability delivers diverse functionality for versatile applications, such as real-time object tracking with high-speed AI processing. Different AI models can also be selected by rewriting the internal memory in accordance with user requirements or the conditions of the location where the system is being used.


The pixel chip is back-illuminated and has approximately 12.3 effective megapixels for capturing information across a wide angle of view. In addition to the conventional image sensor operation circuit, the logic chip is equipped with Sony's original DSP (Digital Signal Processor) dedicated to AI signal processing, along with memory for the AI model. This configuration eliminates the need for high-performance processors or external memory, making the sensor well suited to edge AI systems.

Signals acquired by the pixel chip are run through an ISP (Image Signal Processor), and AI processing is performed during the processing stage on the logic chip; only the extracted information is output as metadata, reducing the amount of data handled. Because no image information needs to leave the sensor, security risks are reduced and privacy concerns addressed.
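The flow described above, raw pixels in, metadata out, can be sketched as a simple pipeline. The sketch below is purely illustrative: `isp_process` and `run_model` are hypothetical stand-ins (Sony has not published an API for the sensor), and the brightness-threshold "detector" merely mimics the idea that the image itself never leaves the chip.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Detection:
    """One metadata record: semantic information, not pixels."""
    label: str
    confidence: float
    bbox: Tuple[int, int, int, int]  # (x, y, w, h)

def isp_process(raw_frame: List[List[int]]) -> List[List[int]]:
    """Stand-in ISP stage: normalize raw values to 0-255 (placeholder)."""
    peak = max(max(row) for row in raw_frame) or 1
    return [[p * 255 // peak for p in row] for row in raw_frame]

def run_model(image: List[List[int]]) -> List[Detection]:
    """Stand-in AI stage: flag bright pixels as 'person' (placeholder)."""
    dets = []
    for y, row in enumerate(image):
        for x, p in enumerate(row):
            if p >= 200:
                dets.append(Detection("person", p / 255.0, (x, y, 1, 1)))
    return dets

def on_sensor_pipeline(raw_frame: List[List[int]]) -> List[Detection]:
    image = isp_process(raw_frame)   # ISP stage on the logic chip
    detections = run_model(image)    # DSP runs the embedded AI model
    return detections                # only metadata leaves the sensor

frame = [[10, 10, 90], [10, 95, 10], [10, 10, 10]]
meta = on_sensor_pipeline(frame)
```

The key design point mirrored here is the return type: the caller receives structured `Detection` records, never the `image` array, which is discarded inside the pipeline.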


Users can write the AI models of their choice to the embedded memory and can rewrite and update them according to their requirements or the conditions of the location where the system is being used. For example, when multiple cameras employing this product are installed in a retail location, a single type of camera can be used with versatility across different locations, circumstances, times, or purposes: the AI model in a given camera can be rewritten from one used to generate heat maps to one for identifying consumer behavior, and so on.
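The retail example above amounts to swapping the model held in the sensor's memory while the hardware stays the same. The sketch below is a hypothetical illustration of that idea; the class, method names, and the toy heat-map and behavior "models" are all invented for the example, not part of any published Sony interface.

```python
class IntelligentSensor:
    """Hypothetical sensor whose embedded AI model can be rewritten."""
    def __init__(self):
        self._model = None
    def write_model(self, model):
        # Rewriting the embedded memory replaces the active AI model
        self._model = model
    def process(self, frame):
        # Same pixel chip, same camera; output depends on the loaded model
        return self._model(frame)

def heatmap_model(frame):
    """Toy stand-in: count occupied cells for a store heat map."""
    return {"type": "heatmap", "hot_cells": sum(1 for p in frame if p > 128)}

def behavior_model(frame):
    """Toy stand-in: count dwell events for consumer-behavior analysis."""
    return {"type": "behavior", "dwell_events": sum(1 for p in frame if p > 200)}

sensor = IntelligentSensor()
sensor.write_model(heatmap_model)
morning = sensor.process([50, 150, 220])   # camera used for heat maps

sensor.write_model(behavior_model)          # same camera, retasked
evening = sensor.process([50, 150, 220])   # now reports behavior metadata
```

One camera type serves both purposes; only the contents of its memory change, which is the versatility the announcement highlights.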