
An Event-based Vision Sensor (EVS) achieves high-speed, low-latency data output by limiting its output to luminance changes detected at each pixel, combined with the pixel coordinates and a timestamp.

EVS captures movements (luminance changes)

The EVS is designed to emulate how the human eye senses light.
In the human eye, photoreceptors on the retina convert incoming light into visual signals to be sent to the brain. Subsequent neuronal cells distinguish light and shade, and the information is conveyed to the visual cortex via the retinal ganglion cells.

In the EVS, incident light is converted into an electric signal in the imager's light-receiving circuit. The signal passes through an amplifier to a comparator, where the luminance differences are separated into positive and negative signals that are then processed and output as events.


EVS mechanism

In the Event-based Vision Sensor, the luminance changes detected by each pixel are filtered to extract only those that exceed a preset threshold. This event data is then combined with the pixel coordinates, a timestamp, and polarity information before being output. Each pixel operates asynchronously, independently of all others.
The figures below illustrate how the sensor captures the movement of a ball.

Event-based Vision Sensor Data Output
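To make the output format concrete, the sketch below models a single event as a tuple of timestamp, coordinates, and polarity. This is an illustrative Python structure; the field names and types are assumptions, not Sony's actual data format.

    from typing import NamedTuple

    class Event(NamedTuple):
        """One EVS event (illustrative field names, not the real format)."""
        t_us: int      # timestamp in microseconds
        x: int         # pixel column
        y: int         # pixel row
        polarity: int  # +1 (luminance increased) or -1 (decreased)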


Each pixel consists of a light-receiving unit and a luminance detection unit. Incident light is converted into a voltage in the light-receiving unit. A differential detection circuit in the luminance detection unit detects the change between a reference voltage and the voltage converted from the incident light. If the change exceeds the set threshold in either the positive or the negative direction, the comparator identifies it as an event and the data is output.


The circuit is reset with the luminance at the detected event as the new reference, and the threshold values are set from this new reference voltage in the positive (light) and negative (dark) directions. If the incident light changes in luminance by more than the positive threshold (i.e. the output voltage surpasses the positive threshold), a positive event is output; conversely, if the voltage drops below the negative threshold, a negative event is output. The steps below walk through this cycle; a code sketch of the same logic follows the list.

  • (1) The reference voltage and positive/negative thresholds are set.
  • (2) A negative event is output when the incident light luminance goes below the negative threshold value.
  • (3) The reference voltage and positive/negative threshold values are reset based on the value at the event output.
  • (4) Another negative event is output if the incident light luminance falls further, below the new negative threshold.
  • (5) The reference voltage and positive/negative threshold values are reset again based on the value at the second event output.
  • (6) If subsequently the luminance increases and surpasses the positive threshold value, a positive event is output.
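A minimal software sketch of this per-pixel cycle, assuming one pixel whose log-converted luminance has already been sampled into a list (a real pixel operates continuously and asynchronously). The threshold value and the reset-by-one-threshold-step behavior are simplifying assumptions:

    def generate_events(log_intensity, timestamps, threshold=0.2):
        """Sketch of the per-pixel event cycle for a single pixel.

        log_intensity: samples of the pixel's log-converted luminance.
        timestamps:    matching sample times in microseconds.
        threshold:     contrast threshold (illustrative value).
        Returns a list of (t_us, polarity) events, polarity +1 or -1.
        """
        events = []
        reference = log_intensity[0]              # step (1): initial reference
        for t, v in zip(timestamps[1:], log_intensity[1:]):
            while v >= reference + threshold:     # step (6): positive event
                events.append((t, +1))
                reference += threshold            # steps (3)/(5): reset reference
            while v <= reference - threshold:     # steps (2)/(4): negative event
                events.append((t, -1))
                reference -= threshold
        return events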

Because each pixel converts the incident light luminance into a voltage logarithmically, the sensor can detect subtle changes in the low-luminance range while responding only to large luminance changes in the high-luminance range, preventing event saturation, as illustrated in the diagram. In this way the sensor realizes a high dynamic range.
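Put another way, a fixed threshold in the log domain corresponds to a constant relative (percentage) change in luminance, whatever the absolute brightness. The snippet below illustrates this with an assumed threshold of 0.2 in natural-log units; all numbers are arbitrary:

    import math

    theta = 0.2  # assumed log-contrast threshold (natural-log units)
    for base in (0.1, 10.0, 1000.0):      # dark, mid, bright (arbitrary units)
        trigger = base * math.exp(theta)  # luminance at which an event fires
        print(f"{base:g} -> event at {trigger:g} "
              f"(+{(math.exp(theta) - 1) * 100:.0f}% relative change)")

The same roughly +22% relative change triggers an event in a dark scene and in a bright one, which is why events do not saturate at high luminance.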


This mechanism produces EVS images as shown below (right).

Frame-based image sensor (left) / EVS (right)

In the EVS image, the outlines of moving subjects appear to be extracted, because pixel brightness changes wherever a subject moves.
The above images were taken by a camera equipped with an EVS mounted on the dashboard of a car.

Core technology behind EVS

The industry’s smallest pixel: a miniaturized, high-resolution sensor made possible by a stacked structure

Unlike conventional designs that place the light-receiving circuit and the luminance detection circuit on the same layer, this technology puts them on different layers: a pixel chip (upper layer) and a logic chip (lower layer) containing the integrated signal-processing circuits. The chips are stacked and connected with Cu-Cu bonding within each pixel. The industry's smallest pixel (4.86 μm) is combined with a logic chip fabricated in a 40 nm process, resulting in a 1/2.5-type sensor with HD resolution of 1,280 x 720.

High speed and low latency

Each pixel detects luminance changes asynchronously and outputs event data immediately. When multiple pixels generate events at the same time, an arbitration circuit orders the output starting from the earliest-received event. In this way the sensor outputs events as they are generated, delivering only the necessary data with microsecond-order latency while keeping power consumption low. A software analogy of this arbitration is sketched below.
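As a software analogy only (the real arbiter is a hardware circuit, and the per-pixel queue structure here is an assumption), merging per-pixel event streams so that the earliest event is output first looks like this:

    import heapq

    def arbitrate(pixel_queues):
        """Software analogy of the readout arbiter.

        pixel_queues: one time-ordered list of (t_us, x, y, polarity)
        events per pixel. Yields all events, earliest first.
        """
        yield from heapq.merge(*pixel_queues, key=lambda ev: ev[0])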

Built-in H/W event filter

To cater to various applications, the sensor is equipped with several filter functions designed specifically for event data.
These filters can remove unnecessary events, such as the periodic events caused by flickering LEDs and events that are unlikely to belong to the outline of a moving object. They can also restrict the data volume when necessary, so that it stays below the event rate that downstream systems can process; a simplified rate-limiting filter is sketched after the example images below.

  • (Results of a software simulation. The data in the left image is reduced by approx. 92%.)

Images with event data accumulated for the equivalent of a single frame at 30 fps (left: filter off, right: filter on).
Turning the filter on (right image) reduces the overall data volume while retaining the information required for a given application; in the right image, the white lines on the road remain as the significant information.
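A minimal sketch of one such function, an event-rate limiter, assuming time-ordered (t_us, x, y, polarity) tuples and a fixed 1 ms window. This illustrates the idea, not the sensor's actual filter logic:

    def rate_limit_events(events, max_events_per_ms):
        """Drop events once the budget for the current 1 ms window is spent.

        events: iterable of (t_us, x, y, polarity) tuples sorted by time.
        """
        out, window_start, count = [], None, 0
        for ev in events:
            t_us = ev[0]
            if window_start is None or t_us - window_start >= 1000:
                window_start, count = t_us, 0  # start a new 1 ms window
            if count < max_events_per_ms:
                out.append(ev)
                count += 1
        return out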

Use cases where EVS can be leveraged

Areas of application for EVS and use cases

  • Industry: fast-moving object detection/tracking, machine movement monitoring, machine malfunction detection, 3D inspection/measurement, etc.
  • Robotics: position estimation (SLAM) for autonomous driving/flight, cooperative operation with people, obstacle detection, etc.
  • Security: movement detection/analysis, people counting, intrusion detection, privacy-conscious monitoring of people, etc.
  • Medical and scientific measurement: observation using fluorescence microscopes, particle detection in flowing liquids, other medical purposes, etc.
  • Gaming: gesture control, posture recognition, eye tracking, etc.

Use cases

(1) Liquid monitoring

  • Frame-based sensor image

    The flow of water appears as continuous lines due to motion blur,
    making it impossible to discern individual drops.

  • EVS image

    Each drop of water can be captured [high frame rate].

  • EVS (super slow motion)

    The data is retained chronologically and output seamlessly, making it possible to extract super-slow-motion footage or images at a specific timestamp; a sketch of this frame reconstruction follows below.
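Because every event carries its own timestamp, frames can be reconstructed for any time window after the fact. A minimal sketch, assuming time-ordered (t_us, x, y, polarity) tuples; the window choice and the signed accumulation are illustrative:

    import numpy as np

    def events_to_frame(events, t_start_us, t_end_us, width, height):
        """Accumulate events from [t_start_us, t_end_us) into one frame.

        Positive events add +1 and negative events add -1 per pixel,
        giving a simple visualization of motion within the window.
        """
        frame = np.zeros((height, width), dtype=np.int32)
        for t_us, x, y, polarity in events:
            if t_start_us <= t_us < t_end_us:
                frame[y, x] += polarity
        return frame

Shrinking the window (for example to 100 μs) yields the super-slow-motion view described above.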

(2) Human tracking

  • Frame-based sensor image

    The person in dark-colored clothes is harder to detect.
    (In a high-illuminance environment, the frame-based sensor can also capture the scene.)

  • EVS image

    The outlines of the people are extracted; the color and texture of their clothes do not affect detection.
    Non-significant data (background, etc.) is not output
    [minimal data output]

  • Application output

    Detection results are enhanced by machine learning, and the figures are recognized as pedestrians.
    Recognition is sustained continuously even when the targets run, overtake one another, and so on.

(3) Metal process monitoring

  • Frame-based sensor image

    The sparks are overexposed due to their high luminance,
    leaving linear trails in the image.

  • EVS image

    Each fast-moving spark is captured individually [high frame rate].
    Data other than the sparks (such as the machinery) is not output
    [high efficiency: minimal data output]

  • Application output

    Each spark is tagged with an ID and tracked
    ⇒ analyzable in terms of number, size, speed, etc.; a simple tracking sketch follows below.
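One common way to assign and maintain such IDs is nearest-neighbor association between consecutive time slices. The sketch below is an illustration under stated assumptions (detections would come from clustering events per slice, and the distance gate is arbitrary), not the actual application's method:

    import math

    def track_sparks(prev_tracks, detections, max_dist_px=20.0):
        """Nearest-neighbor ID tracking for spark centroids.

        prev_tracks: dict of id -> (x, y) from the previous time slice.
        detections:  list of (x, y) centroids in the current time slice.
        Returns an updated dict; unmatched detections get new IDs.
        """
        next_id = max(prev_tracks, default=-1) + 1
        tracks, unmatched = {}, dict(prev_tracks)
        for x, y in detections:
            best = min(unmatched.items(),
                       key=lambda kv: math.hypot(kv[1][0] - x, kv[1][1] - y),
                       default=None)
            if best and math.hypot(best[1][0] - x, best[1][1] - y) <= max_dist_px:
                tid = best[0]
                del unmatched[tid]                   # this track is now matched
            else:
                tid, next_id = next_id, next_id + 1  # start a new track
            tracks[tid] = (x, y)
        return tracks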

(4) 3D measurement

  • Test environment

    A laser is projected onto a box moving on a conveyor belt,
    and the reflected light is captured by the EVS camera.

  • EVS image

    The difference between the laser reflected from the box surface
    and that from the belt surface (the reference surface) is combined with
    temporal information to generate a 3D image.

  • Application output

    The sensor can be configured so that height information is acquired only when the reflected laser beam enters the EVS. The high temporal resolution makes it possible to obtain detailed height information with enhanced accuracy; a simplified triangulation sketch follows below.
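For illustration only, laser-line triangulation in its simplest form converts the line's displacement on the sensor into a height. The geometry assumed below (a camera looking straight down, a laser tilted from the vertical, heights small relative to the working distance) and all parameter names are assumptions, not the actual system's design:

    import math

    def height_from_shift(delta_y_px, pixel_pitch_mm, focal_length_mm,
                          working_distance_mm, laser_angle_deg):
        """Simplified laser-triangulation height estimate.

        delta_y_px: displacement of the laser line on the sensor, in
        pixels, relative to where it falls on the reference (belt)
        surface. Valid for heights much smaller than the working distance.
        """
        # Back-project the pixel shift to a lateral shift on the surface.
        shift_mm = (delta_y_px * pixel_pitch_mm
                    * working_distance_mm / focal_length_mm)
        # A point raised by height h shifts the line by h * tan(angle).
        return shift_mm / math.tan(math.radians(laser_angle_deg))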

(5) Vibration monitoring

  • Frame-based sensor image

    The vibration of the model car on the platform
    cannot be discerned with the naked eye.

  • EVS image

    Only the vibrating areas are processed and output
    so that the vibration is clearly visualized.

  • Application output

    The frequency is analyzed per pixel and can be mapped out in two dimensions, as sketched below.
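A minimal sketch of such per-pixel frequency analysis, assuming (t_us, x, y, polarity) tuples: each pixel's event train is binned into a time series, and the peak of its FFT magnitude (DC excluded) is taken as the dominant frequency. The bin width and the dense array layout are illustrative choices:

    import numpy as np

    def dominant_frequency_map(events, width, height, duration_s, bin_ms=1.0):
        """Return an (height, width) array of dominant frequencies in Hz."""
        n_bins = int(duration_s * 1000 / bin_ms)
        trains = np.zeros((height, width, n_bins), dtype=np.float32)
        for t_us, x, y, polarity in events:
            b = min(int(t_us / (bin_ms * 1000)), n_bins - 1)
            trains[y, x, b] += 1.0                 # events per bin, per pixel
        spectra = np.abs(np.fft.rfft(trains, axis=2))
        spectra[..., 0] = 0.0                      # ignore the DC component
        freqs = np.fft.rfftfreq(n_bins, d=bin_ms / 1000.0)
        return freqs[np.argmax(spectra, axis=2)]   # dominant frequency per pixel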

* Metavision® Intelligence Suite is the Event-Based Vision software developed by Prophesee.
Metavision® is a registered trademark of PROPHESEE S.A.
