AI could add color to night vision

Night vision is generally monochromatic – everything the wearer can see is rendered in a single hue, usually shades of green. But by using different wavelengths of infrared light and a relatively simple AI algorithm, scientists at the University of California, Irvine were able to restore color to these desaturated images. Their findings are published in the journal PLOS ONE this week.

Visible light, like an FM radio signal, consists of many different frequencies; both are part of the electromagnetic spectrum. But light, unlike radio waves, is usually measured in nanometers (characterizing its wavelength) rather than megahertz (characterizing its frequency). The light that the average human eye can perceive ranges from about 400 to 700 nanometers in wavelength.
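For readers who want to see the relationship, wavelength and frequency are two descriptions of the same wave, linked by the speed of light. Here is a minimal sketch of that conversion; the function name and rounded constant are illustrative, not from the paper.

```python
# Wavelength and frequency describe the same electromagnetic wave:
# frequency = speed_of_light / wavelength.
SPEED_OF_LIGHT = 3.0e8  # meters per second (approximate)

def wavelength_nm_to_frequency_thz(wavelength_nm):
    """Convert a wavelength in nanometers to a frequency in terahertz."""
    wavelength_m = wavelength_nm * 1e-9
    frequency_hz = SPEED_OF_LIGHT / wavelength_m
    return frequency_hz / 1e12

# Visible light spans roughly 400-700 nm, i.e. about 430-750 THz.
print(wavelength_nm_to_frequency_thz(700))  # ~429 THz (red end)
print(wavelength_nm_to_frequency_thz(400))  # ~750 THz (violet end)
```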

The typical security camera equipped with night vision uses a single color and wavelength of infrared light, greater than 700 nanometers, to illuminate a scene. Infrared light is a part of the electromagnetic spectrum that is invisible to the naked eye. Scientists have long used these waves to study thermal energy; infrared signals are also what many remote controls use to communicate with a television.

Previously, to teach night vision cameras to see in color, researchers took a photo of the same scene with both an infrared camera and a normal camera, then trained a machine learning model to predict the color image from the infrared image using these paired inputs. But in this experiment, the UC Irvine team wanted to see if night vision cameras using multiple wavelengths of infrared light could help an algorithm make better color predictions.

To test this, they used a monochrome camera that responds to light across both the visible and infrared spectrum. Most color cameras capture three different colors of light: red (604 nm), green (529 nm), and blue (447 nm). In addition to capturing the sample images under these colors of light, the experimental setup also took pictures in the dark under three different wavelengths of infrared light: 718, 777, and 807 nm.
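One way to picture the data this setup produces: each scene yields six monochrome frames, one per illumination wavelength. Below is a rough sketch of how those captures might be organized into a training pair; the loader, array shapes, and variable names are assumptions for illustration, not details taken from the study.

```python
# Sketch of organizing six monochrome captures of one scene into a training pair.
import numpy as np

visible_nm = [604, 529, 447]   # red, green, blue illumination wavelengths
infrared_nm = [718, 777, 807]  # infrared illumination (invisible to the eye)

def load_frame(wavelength_nm):
    """Placeholder loader: a real pipeline would read the monochrome capture
    taken under this illumination wavelength."""
    return np.zeros((256, 256), dtype=np.float32)

# Stack the three visible-light captures as the ground-truth color target,
# and the three infrared captures as the model input.
target_rgb = np.stack([load_frame(w) for w in visible_nm], axis=0)  # shape (3, H, W)
input_ir = np.stack([load_frame(w) for w in infrared_nm], axis=0)   # shape (3, H, W)
```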

“The monochromatic camera is sensitive to all photons reflected from the scene it is looking at,” says Andrew Browne, professor of ophthalmology at UC Irvine and an author of the PLOS ONE paper. “So we used an adjustable light source to illuminate the scene and a monochromatic camera to capture the photons reflected from that scene under all the different lighting colors.”

[Related: Stanford engineers made a tiny LED display that stretches like a rubber band]

The scientists then used the three infrared images, paired with color images, to train an artificial intelligence neural network to predict what the colors of the scene should be. After the team trained the network and tuned its performance, it was able to reconstruct color images from the three infrared images that looked quite close to the real thing.
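As a rough illustration of that training step, the sketch below fits a small convolutional network to map three infrared channels to an RGB image. The architecture, loss function, and placeholder data are assumptions made here for clarity; they are not the authors' actual model or code.

```python
# Minimal sketch (not the paper's implementation): train a small convolutional
# network to map a 3-channel infrared image (e.g. 718, 777, 807 nm) to RGB.
import torch
import torch.nn as nn

class IRToColorNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=3, padding=1), nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, x):
        return self.net(x)

# Hypothetical paired training data: N images, 3 IR channels in, 3 RGB channels out.
ir_images = torch.rand(8, 3, 64, 64)   # placeholder for real infrared captures
rgb_images = torch.rand(8, 3, 64, 64)  # placeholder for ground-truth color photos

model = IRToColorNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()  # pixel-wise error between predicted and true color

for epoch in range(5):
    optimizer.zero_grad()
    predicted_rgb = model(ir_images)
    loss = loss_fn(predicted_rgb, rgb_images)
    loss.backward()
    optimizer.step()
```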

Browne et al., PLOS ONE

“When we increase the number of infrared channels, or infrared colors, it provides more data and we can make better predictions that actually look quite similar to what the real image should be,” says Browne. “This paper demonstrates the feasibility of this approach to acquiring an image in three different infrared colors, three colors that we cannot see with the human eye.”

For this experiment, the team only tested its algorithms and technique on printed color photos. However, Browne says they are looking to apply this to videos and eventually to real-world objects and human subjects.

“There are certain situations where you can’t use visible light, either because you don’t want to see something or because visible light can be harmful,” says Browne. This may apply, for example, to people who work with light-sensitive chemicals, researchers who wish to study the eye, or military personnel. “The ability to see in color vision, or something resembling our normal vision, could be useful in low light conditions.”
