Guest Talk "Transparent Natural Language Understanding"


Gábor Recski 

Date/Time: 13.03.2024, 12:00-13:00 

Location: D2.2.094 


Transformer-based deep learning models have become the most commonly used tools in natural language processing (NLP). When the goal is to extract structured information from text, nearly all solutions involve training end-to-end neural networks on human-annotated data and then using the resulting models directly for text processing. But the black-box nature of these solutions greatly limits their applicability in domains that require transparency, predictability, or configurability. Rule-based solutions, on the other hand, are generally seen as unscalable and too costly to build and maintain. Our research attempts to combine the best of both worlds by using human-in-the-loop (HITL) learning for the semi-automatic creation of rule-based solutions. Our approach allows domain experts to build white-box solutions in highly technical domains such as legal or medical NLP. In the talk we shall introduce the approach and demonstrate its use via some recent use cases.
