The necessity of explainability in AI and the importance of establishing causal relations
Updated: Sep 26, 2020
(after seeing the post by Yann LeCun)
Integrating causality into AI, so as to increase interpretability, is not easy and is even thought by some to be impossible. The "how" has been questioned by many people, but the discussions of "why" were so controversial across fields that the conversations rarely made it to the "how." I remember when this topic started getting more traction, at NIPS 2017. I enjoyed the panel discussion at the Interpretable ML Symposium, which is available here if you're interested. Since then I've been seeing more and more people get interested in this topic and express opinions, which I think is a good phenomenon. Like other machine learning folks, I hope we can figure out concrete explanations of what's going on inside deep CNNs during learning.
That being said, I was intrigued to see Yann LeCun's recent post about it on Facebook (shown left). (Following his posts is one of the reasons I'm still on Facebook.)
Although the example he drew from the airplane article (link) was criticized by some people, I thought his point was pretty clear: we do care about causality and we know it's important, but insisting on it can be counterproductive, and there are many practical ways to work around it. I think the drug development example aligns better with AI systems than the airplane example does.
Then, while I was in a meeting before finishing up this post, I realized I was not the only one who saw it! Two people had a short discussion about the post before the meeting started, and apparently one of them didn't much like the airplane example. Yes, he was a physicist and also had a deep background in neuroscience. I was surprised by the immediate effect of Yann LeCun's online posting, and enjoyed hearing different opinions, especially from outside the CS field. This is exactly why I love residing at Bldg 46!
Another thing worth mentioning is that someone told me today about Thinkers and Doers. We don't really have enough time to fully do both, so at first you have to do more work than thinking (especially as an early-year PhD student, haha), and then shift your balance toward thinking more than doing. I agree with him a lot, though I'd like to actively switch back and forth between the two. I was a Thinker for a portion of this week, enjoying various conversations with people around me on this topic. TGIF!