One of the key problems in computer vision is adaptation: models are too rigid to follow the variability of their inputs. The canonical computation that explains adaptation in sensory neuroscience is divisive normalization, which has appealing effects on image manifolds. In this work we show that including divisive normalization in current deep networks makes them more invariant to non-informative changes in the images. In particular, we illustrate this concept in U-Net architectures for image segmentation.
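As a rough illustration of the canonical computation, a minimal sketch of divisive normalization over the channels of a feature map is given below. The parameter names (`sigma`, `p`) and the channel-wise pooling are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def divisive_normalization(x, sigma=0.1, p=2.0):
    """Divide each response by a pooled measure of local activity.

    x: array of shape (channels, H, W), e.g. one layer's feature maps.
    sigma, p: hypothetical saturation constant and pooling exponent.
    """
    # Pool rectified activity across channels at each spatial location.
    pool = np.mean(np.abs(x) ** p, axis=0, keepdims=True)
    # Each response is divided by the (regularized) pooled activity,
    # so a global rescaling of the input is largely cancelled out.
    return x / (sigma ** p + pool) ** (1.0 / p)
```

Because the denominator grows with overall input magnitude, the normalized responses change little under uniform rescaling of the input, which is the kind of non-informative variability the text refers to.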
Artemisa provides a high-performance computing infrastructure that operates continuously thanks to its batch system.
The facility currently comprises 23 servers hosting one NVIDIA Volta V100 GPU each, 11 servers with one NVIDIA Ampere A100 GPU, and one server with eight GPUs of the same model. These servers are particularly well suited to artificial-intelligence workloads. In addition to the servers, which must be used in batch mode, there are two interactive front-end machines where users can test their software before submitting jobs.