"Explainable AI" for deep learning is required for some applications, at least explainable enough to get some intuition for how a model works.

The recent paper "Axiomatic Attribution for Deep Networks" describes how to determine which input features have the most effect on a specific prediction by a deep learning classification model. I used the library IntegratedGradients that works with Keras and another version is available for TensorFlow.

Today I modified my two-year-old example model that uses the University of Wisconsin cancer data set. If you want to experiment with the ideas in this paper and the IntegratedGradients library, my modified example might save you some time and effort.