From NVIDIA, a new AI technique to rebuild damaged images

NVIDIA has presented a deep learning method that can edit images, reconstructing damaged regions or missing pixels.

The method comes from a team of researchers led by Guilin Liu and builds on deep learning techniques NVIDIA has been developing for some time. It can also be used by manually removing parts of an image and letting the program rebuild them.

The method, which performs a process called "image inpainting", could be integrated into photo-editing software to remove unwanted content and let the artificial intelligence fill in the missing parts.

"Our model can effectively handle missing regions of any shape, size, or distance from the image borders. Previous deep learning approaches have focused on rectangular regions around the center of the image and often rely on costly post-processing," the NVIDIA researchers say in their technical documentation. "In addition, our model can handle larger holes."

To train the neural network underlying the technology, the researchers generated 55,116 masks made up of random streaks and shapes of various sizes. The network was trained with the PyTorch framework, with cuDNN acceleration on NVIDIA Tesla V100 GPUs, by applying the masks to images from the ImageNet, Places2, and CelebA-HQ datasets.

During the training phase, parts of the dataset images are removed by applying the masks, so that the system learns how to fill the resulting holes. The researchers noted that existing deep learning methods struggle here because the output for the missing pixels necessarily depends on the input values supplied to the neural network for those pixels.
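As a rough illustration of this masking step, here is a minimal NumPy sketch. The stripe-shaped hole below is a hypothetical stand-in for NVIDIA's irregular streak masks, chosen only for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((64, 64))        # stand-in for a dataset image

# Binary mask: 1 = valid pixel, 0 = hole. The paper's masks are random
# streaks of many shapes; a single rectangular stripe is used here.
mask = np.ones((64, 64))
mask[20:28, 10:50] = 0.0

masked_image = image * mask         # network input: image with a missing region
# During training, the original `image` serves as the reconstruction target,
# so the network learns to fill in the zeroed-out hole.
```

The key point is that the pixel values under the hole are simply zeroed, which is exactly why a standard convolution (whose output depends on those zeros) behaves poorly there.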

To solve this problem, the NVIDIA team developed a method, called "partial convolution", that ensures the output for the missing pixels does not depend on the input values supplied for those pixels.
