Dr. Jad Kabbara
Research Scientist
Primary DLC
MIT Media Lab
MIT Room: E14-526B
jkabbara@mit.edu
https://www.ccc.mit.edu/person/jad-kabbara/
Research Summary
Dr. Jad Kabbara is a research scientist at the MIT Center for Constructive Communication (CCC). He received his Ph.D. in Computer Science in May 2022 from McGill University and the Montreal Institute for Learning Algorithms (Mila). Before that, he received his Master's degree from McGill University in 2014 and his Bachelor's degree from the American University of Beirut in 2011. His Ph.D. research was in the broad area of Natural Language Processing, specifically at the intersection of computational pragmatics, natural language generation, and natural language understanding.
During his Ph.D., Kabbara worked on the computational modeling of presuppositions in natural language. Presuppositions are shared assumptions and facts that are taken for granted and not explicitly stated in the context (whether in texts or conversations). For example, saying in a conversation "Roger Federer won the match" presupposes that he played a match (which he won), even though that fact is not explicitly stated. Kabbara presented various neural models for learning presupposition effects in language (e.g., definite descriptions, adverbial presuppositions) and showed how such models can be used to improve the quality of extractive summaries. He also investigated large transformer-based models (e.g., BERT, RoBERTa) in the context of natural language inference (NLI) to understand how well they perform on hard cases of presupposition, and presented learning frameworks to improve their performance on such cases. His work was recognized with the ACL 2018 Best Paper Award and the COLING 2022 Best Short Paper Award.
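To make the NLI probing concrete, here is a minimal sketch (not Kabbara's actual experimental setup) of testing whether an off-the-shelf transformer NLI model recognizes the presupposition in the Federer example above. It assumes the Hugging Face transformers library and the publicly available roberta-large-mnli checkpoint; a well-calibrated model should assign high probability to the "entailment" label here.

```python
# Hypothetical probe of a pretrained NLI model on a presupposition case.
# Assumes: pip install transformers torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

# The premise presupposes the hypothesis: winning a match entails playing one.
premise = "Roger Federer won the match."
hypothesis = "Roger Federer played a match."

inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# roberta-large-mnli label order: 0 = contradiction, 1 = neutral, 2 = entailment
probs = logits.softmax(dim=-1).squeeze()
for label, p in zip(["contradiction", "neutral", "entailment"], probs.tolist()):
    print(f"{label}: {p:.3f}")
```

The harder cases studied in this line of work involve contexts that embed the trigger, for instance under negation: "Roger Federer didn't win the match" still presupposes that he played one, and probes of this form can reveal whether a model's predictions track that behavior.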
Related Faculty
Jan Philipp Schmidt
Research Scientist
Dr. Andrew B Lippman
Senior Research Scientist
Mark C Feldmeier
Lecturer