
Development of visualization and interpretation tools for convolutional neural networks

One of the main open problems in deep learning is the lack of explainability of its models. These algorithms unquestionably outperform traditional techniques, but there is a real risk of introducing unintended biases during training. The goal of this project is to evaluate different visualization techniques that have been proposed in the literature for convolutional neural networks and to try to make sense of what these models learn.
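As a point of reference for the kind of technique the project would evaluate, the sketch below shows one of the simplest visualization methods from the literature, a vanilla gradient saliency map, implemented with PyTorch and a pretrained ResNet-18. The model choice and the ImageNet preprocessing values are assumptions for illustration only, not part of the project description.

# Minimal sketch of a gradient-based saliency map (assumed setup: PyTorch,
# torchvision, pretrained ResNet-18 with standard ImageNet preprocessing).
import torch
from torchvision import models, transforms
from PIL import Image

# Load a pretrained CNN and switch to evaluation mode.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# Standard ImageNet preprocessing (assumed input pipeline).
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def saliency_map(image: Image.Image) -> torch.Tensor:
    """Return an (H, W) map: gradient magnitude of the top-class score
    with respect to the input pixels."""
    x = preprocess(image).unsqueeze(0)   # shape (1, 3, 224, 224)
    x.requires_grad_(True)

    scores = model(x)                    # class logits, shape (1, 1000)
    top_class = scores.argmax(dim=1).item()
    # Backpropagate the top-class score to the input image.
    scores[0, top_class].backward()

    # Maximum absolute gradient across colour channels as the saliency value.
    return x.grad.abs().max(dim=1).values.squeeze(0)

Plotting the resulting map (for example with matplotlib's imshow) highlights which input pixels most influence the predicted class, which is the sort of interpretation the project aims to assess.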

Student

Óscar Llorente González

Offered in

  • Bachelor's Degree in Telecommunication Technologies Engineering (GITT)