Knowledge-infused Learning (KiL) is a class of Neuro-Symbolic AI techniques that incorporates broader forms of knowledge (lexical, domain-specific, commonsense, and constraint-based) to address limitations of purely symbolic or purely statistical AI approaches, such as the lack of model interpretability and user-level explanations. Whereas powerful statistical AI exploits data alone, KiL benefits from both data and knowledge.
My Ph.D. Defense investigated the knowledge-infusion strategy in two important ways. The first is infusing knowledge to make any classification task explainable. The second is achieving explainability in any natural language generation task. The Defense demonstrated effective knowledge-infusion strategies that bring four characteristic properties to any statistical AI model:
- Context Sensitivity,
- Handling of Uncertainty and Risk,
- Interpretability in Learning,
- User-level Explainability,
across natural language understanding (NLU) tasks. Along with its proven methodological contributions to AI, the dissertation also introduced Knowledge-intensive Language Understanding tasks, a variant of the General Language Understanding Evaluation (GLUE) tasks that challenges AI and NLU research on explainability and interpretability.
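To make the idea concrete, the following is a minimal, hypothetical sketch of shallow knowledge infusion in a classification setting: a hand-built lexicon stands in for external knowledge, and the matched concepts double as a user-level explanation for the prediction. The lexicon entries, function names, and threshold rule are illustrative assumptions, not the dissertation's actual method.

```python
# Hypothetical sketch: shallow knowledge infusion for an explainable classifier.
# KNOWLEDGE_LEXICON stands in for an external knowledge source (e.g., a
# domain-specific ontology); entries here are purely illustrative.
KNOWLEDGE_LEXICON = {
    "insomnia": "sleep-disorder",
    "panic": "anxiety",
    "hopeless": "depression",
}

def knowledge_features(text):
    """Map tokens to knowledge concepts; the matches double as an explanation."""
    hits = {}
    for token in text.lower().split():
        if token in KNOWLEDGE_LEXICON:
            hits[token] = KNOWLEDGE_LEXICON[token]
    return hits

def classify(text, threshold=1):
    """Flag the text when enough knowledge concepts are present, and
    return the matched concepts as a user-level explanation."""
    hits = knowledge_features(text)
    label = "flagged" if len(hits) >= threshold else "not-flagged"
    return label, hits

label, explanation = classify("I feel hopeless and have insomnia")
print(label)        # flagged
print(explanation)  # {'hopeless': 'depression', 'insomnia': 'sleep-disorder'}
```

Because each prediction is tied to explicit knowledge entries rather than opaque weights, the same mechanism that drives the decision also yields the context-sensitive, user-level explanation the properties above call for.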
Furthermore, the Defense showcased the utility of incorporating diverse forms of knowledge: linguistic, commonsense, broad-based, and domain-specific. While the Defense illustrated successes across various domains, state-of-the-art results in specific applications, and significant contributions toward improving machine intelligence, it also highlighted the careful steps needed to prevent errors arising from knowledge infusion. Finally, the Defense laid out future research directions toward Deep Knowledge Infusion, which would be pivotal in propelling machine understanding.