At the moment I'm reading a book about different types of models and the trade-off between their flexibility and interpretability.
I was wondering how we can achieve such impressive performance with deep learning models without understanding how they work internally. I would also like to ask whether it is fair to consider the poor interpretability of these models as their main limitation.
(I'm a beginner, sorry for the trivial questions)