Bioinspired Divisive Normalization improves segmentation

One of the key problems in computer vision is adaptation: models are too rigid to follow the variability of the inputs. The canonical computation that explains adaptation in sensory neuroscience is divisive normalization, and it has appealing effects on image manifolds. In this work we show that including divisive normalization in current deep networks makes them more invariant to non-informative changes in the images. In particular, we illustrate this concept in U-Net architectures for image segmentation.
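
As a rough illustration of this computation, the sketch below shows a minimal divisive-normalization layer in PyTorch: each activation is divided by a pooled measure of the surrounding activity plus a semisaturation constant. The pooling scheme, parameter names, and defaults are assumptions for illustration, not the exact formulation used in this work.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DivisiveNormalization2d(nn.Module):
    """Minimal divisive-normalization layer (illustrative sketch).

    Each activation is divided by a learned local pool of squared
    activations plus a per-channel semisaturation constant:
        y = x / sqrt(sigma^2 + pool(x^2))
    """

    def __init__(self, channels, kernel_size=3, eps=1e-6):
        super().__init__()
        padding = kernel_size // 2
        # Learned pooling over a local spatial neighbourhood of each channel.
        self.pool = nn.Conv2d(channels, channels, kernel_size,
                              padding=padding, bias=False)
        nn.init.constant_(self.pool.weight,
                          1.0 / (channels * kernel_size ** 2))
        # Learned per-channel semisaturation constant.
        self.sigma = nn.Parameter(torch.ones(1, channels, 1, 1))
        self.eps = eps

    def forward(self, x):
        # Softplus keeps the pooled "surround energy" non-negative even if
        # the learned pooling weights become negative during training.
        surround = F.softplus(self.pool(x ** 2))
        return x / torch.sqrt(self.sigma ** 2 + surround + self.eps)
```

One natural way to probe the claimed invariance is to place such a layer after the convolutions of each U-Net encoder and decoder block, in place of or alongside batch normalization.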

Monitoring War Destruction from Space

This project is joint work by Clement Gorin, Andre Groeger, Arogya Koyrala and Hannes Mueller. It builds on the authors' previous work using satellite images and Deep Learning for the automated detection of war-related building destruction in Syria. Satellite imagery is becoming ubiquitous and is released with ever higher frequency, and research has demonstrated that Deep Learning applied to it holds promise for automating the detection of such destruction.

Solar storms and the Spanish critical infrastructures

In recent decades, our society has become more interdependent and complex than ever before. Local impacts can cause global issues, as the current pandemic clearly shows, affecting the health of millions of people. Society is also highly dependent on critical technological infrastructures, such as communications, transport, or power distribution networks, which can be very vulnerable to the effects of Space Weather.

Vision Technologies for Monitoring of Fishing Catches

The SICAPTOR 2.0 project intends to redesign the iObserver remote electronic monitoring (REM) system in order to minimize its size and maximize its performance. The device photographs every fish passing along the conveyor belt in the fishing park, identifies in real time the species of each specimen in each photograph, and also estimates the height and weight of each individual as well as the total amount of captured biomass. To attain these objectives, we use a set of specifically developed deep learning algorithms.
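
Conceptually, the per-frame processing amounts to detecting each specimen on the belt, classifying its species, and converting the detection geometry into a size estimate. The following is a hypothetical sketch of that loop; the off-the-shelf detector, calibration constant, and score threshold are placeholders, not the actual iObserver implementation.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

# Placeholder calibration: pixels per centimetre on the conveyor belt,
# which in practice comes from the camera geometry.
PIXELS_PER_CM = 12.5

# Off-the-shelf COCO detector used only as a stand-in for the project's
# custom species detection and classification networks.
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
detector.eval()

def analyse_frame(image):
    """Detect specimens in one belt photograph and estimate their size."""
    with torch.no_grad():
        out = detector([to_tensor(image)])[0]
    results = []
    for box, score in zip(out["boxes"], out["scores"]):
        if score < 0.5:  # placeholder confidence threshold
            continue
        x1, y1, x2, y2 = box.tolist()
        # Longest box side as a crude length proxy.
        length_cm = max(x2 - x1, y2 - y1) / PIXELS_PER_CM
        # A dedicated species classifier and a length-to-weight model
        # would run on each crop here.
        results.append({"length_cm": length_cm, "score": float(score)})
    return results
```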

Non-invasive screening for meningitis

Every day, 165 newborns die of Bacterial Meningitis (BM), an aggressive infection that leaves severe sequelae in 30% of survivors. Rapid detection, particularly in this age group, is difficult due to the low specificity of its symptoms and their overlap with those of more common and less severe diseases. The current strategy to improve prognosis is prompt antibiotic treatment following an early diagnosis by means of a lumbar puncture (LP), an invasive and potentially harmful procedure.

Identifying weed species in crop fields

Convolutional Neural Networks (CNNs) are currently being deployed in a wide variety of applications. This subdomain of Artificial Intelligence shows powerful performance in machine vision and may be used to categorise and classify objects, amongst other image processing tasks.

In the Artificial Perception Group of the Centre for Automation and Robotics (CAR) we are interested in identifying and classifying weed species within crop fields, which is a very specific problem, as the system will only need to process images of soil and plants.
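
As a toy illustration only (the class list, patch size, and architecture are assumptions, not the group's actual model), a small patch-level classifier separating soil, crop, and weed classes could look like this:

```python
import torch
import torch.nn as nn

class WeedClassifier(nn.Module):
    """Toy CNN for patch-level classification of field images."""

    def __init__(self, num_classes=3):  # e.g. soil, crop, weed (illustrative)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# Example: classify a batch of 64x64 RGB patches cropped from field images.
model = WeedClassifier(num_classes=3)
logits = model(torch.randn(8, 3, 64, 64))
print(logits.argmax(dim=1))
```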

Machine Learning @ ATLAS Experiment

The ATLAS experiment at the Large Hadron Collider (LHC) looks beyond the Standard Model of particle physics, searching for signs of unknown new physics. A key requirement for finding this new physics is identifying the interesting events among all the events available. Interesting events are called “signal”, while the others are “background”. Isolating these signal events, which are extremely rare, is a very challenging task: the LHC has delivered billions of collisions, which have been recorded by the ATLAS detector.
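
As a toy illustration of the signal-versus-background separation problem, the sketch below trains a weighted classifier on synthetic features that merely stand in for reconstructed event variables; nothing here is ATLAS data or the analysis actually used in the experiment.

```python
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for event-level features (masses, momenta, angles, ...).
rng = np.random.default_rng(0)
n_background, n_signal = 100_000, 1_000   # signal is rare by construction
X = np.vstack([rng.normal(0.0, 1.0, size=(n_background, 10)),
               rng.normal(0.5, 1.0, size=(n_signal, 10))])
y = np.concatenate([np.zeros(n_background), np.ones(n_signal)])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)

# Up-weight the rare signal class so the classifier does not ignore it.
weights = np.where(y_tr == 1, n_background / n_signal, 1.0)
clf = HistGradientBoostingClassifier(max_iter=200)
clf.fit(X_tr, y_tr, sample_weight=weights)

print("ROC AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```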