A Theory of AI
White-boxing and Interpretability
We pursue the ultimate goal of white-boxing, i.e., gaining a quantitative understanding of models that is accessible to human intuition. In parallel, our research advances the interpretability of models and their results.