Mohammed Aldawsari

School of Computing and Information Sciences


Lecture Information:
  • October 30, 2020
  • 10:00 AM
  • Zoom: https://fiu.zoom.us/j/98469511985?pwd=b2l0Q25vUUFrZnBOaDlUK011cVBidz09

Speaker Bio

Mohammed Aldawsari is a Ph.D. candidate in the Cognition, Narrative, and Culture (Cognac) Lab at the School of Computing and Information Sciences. He received the B.S. degree in Computer Science from Prince Sattam Bin Abdulaziz University, Saudi Arabia, in 2011 and the M.S. degree in Computer Science from DePaul University, USA, in 2016. His research interests are in Natural Language Processing, with a focus on understanding events in text.

Description

Stories often appear in textual form; news stories, for example, are found in newspaper articles, blogs, broadcast transcripts, and so forth. These texts contain descriptions of current, past, or future events. Automatically extracting knowledge from these event descriptions is an important natural language processing (NLP) task, and understanding event structure aids in this extraction. Event structure refers to the relationships events hold and the internal structure they may have: for example, an event mention can co-refer with another event mention or be composed of subevents.
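
To make the notion of event structure concrete, here is a minimal illustrative sketch in Python, not drawn from the speaker's system: an EventMention record with hypothetical fields linking a mention to its co-referring mentions, its subevents, and its parent event.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class EventMention:
        """One event mention in a document (hypothetical representation)."""
        mention_id: str
        trigger: str           # word or phrase evoking the event, e.g., "bombing"
        sentence_index: int    # where the mention occurs in the document
        coreferent_with: List[str] = field(default_factory=list)  # ids of co-referring mentions
        subevents: List[str] = field(default_factory=list)        # ids of component subevents
        parent_event: Optional[str] = None                        # id of the containing event, if any

    # Example: an "attack" event composed of a "bombing" and a "shooting" subevent.
    attack = EventMention("e1", "attack", 0, subevents=["e2", "e3"])
    bombing = EventMention("e2", "bombing", 1, parent_event="e1")
    shooting = EventMention("e3", "shooting", 2, parent_event="e1")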

Understanding event structure has received less attention in NLP than it deserves. This work develops computational methods to automatically understand events found in narrative text and reveal their structure. In particular, I address four problems related to event structure understanding: (1) detecting when one event is a subevent of another; (2) identifying foreground and background events, as well as the general temporal position of background events relative to the foreground period (past, present, future, and their combinations); (3) leveraging foreground and background event knowledge to improve the extraction of event relations, specifically subevent, co-reference, and discourse-level temporal relations; and (4) developing an event-based approach to the story fragment stitching problem, i.e., aligning a set of story fragments into a full, ordered, end-to-end list of story events. The last problem is similar to cross-document event co-reference but is more challenging because the overall timeline of the story's events needs to be preserved across all fragments.
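
As a rough illustration of how problem (1) can be framed, the sketch below treats subevent detection as binary classification over ordered pairs of event mentions. The extract_features helper and its toy features are hypothetical stand-ins; the model presented in the talk relies on richer discourse and narrative features.

    from sklearn.feature_extraction import DictVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    def extract_features(parent, child):
        # Hypothetical pair features; not the feature set used in this work.
        return {
            "same_sentence": parent["sent"] == child["sent"],
            "sentence_distance": abs(parent["sent"] - child["sent"]),
            "trigger_pair": parent["trigger"] + "_" + child["trigger"],
        }

    # Toy training pairs: (candidate parent, candidate subevent, label).
    pairs = [
        ({"trigger": "attack", "sent": 0}, {"trigger": "bombing", "sent": 1}, 1),
        ({"trigger": "attack", "sent": 0}, {"trigger": "election", "sent": 5}, 0),
    ]
    X = [extract_features(p, c) for p, c, _ in pairs]
    y = [label for _, _, label in pairs]

    model = make_pipeline(DictVectorizer(), LogisticRegression())
    model.fit(X, y)
    print(model.predict([extract_features({"trigger": "attack", "sent": 0},
                                          {"trigger": "shooting", "sent": 2})]))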

For the first problem, I present a supervised machine learning model that outperforms prior models on this task and show the effectiveness of discourse and narrative features in modeling subevent relations. For the second and third problems, I demonstrate a featurized supervised model for detecting foreground and background events and illustrate the usefulness of foreground and background knowledge in event relation tasks, namely subevent, co-reference, and discourse-level temporal relations. Lastly, I introduce a graph-based unsupervised approach and apply an adapted model-merging technique to solve the story fragment stitching problem.
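
The following sketch conveys only the flavor of a graph-based stitching step, not the adapted model-merging algorithm presented in the talk: identical triggers across fragments are merged into a single event node (an assumed similarity test), a precedence graph is built from within-fragment order, and one end-to-end ordering is read off by topological sort.

    from collections import defaultdict
    from graphlib import TopologicalSorter  # Python 3.9+

    # Toy input: each story fragment is an ordered list of event triggers.
    fragments = [
        ["meet", "argue", "leave"],
        ["argue", "leave", "reconcile"],
    ]

    # Merge step (simplified): identical triggers across fragments are treated as
    # the same event node; each adjacent pair within a fragment adds a precedence edge.
    predecessors = defaultdict(set)
    for frag in fragments:
        for earlier, later in zip(frag, frag[1:]):
            predecessors[later].add(earlier)

    # Read off one consistent end-to-end ordering of the merged story events.
    print(list(TopologicalSorter(predecessors).static_order()))
    # -> ['meet', 'argue', 'leave', 'reconcile']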