Abstract
Verbal humour brings a playful flexibility to our sober notions of meaning, truth and the mental lexicon. But before we can knowingly subvert this sober world view, we must learn the rules and conventions that define it. Traditionally, we have done this by building models that make explicit claims about words and meanings, but a new AI paradigm, large language models (LLMs), allows our models to tacitly learn much more about lexical semantics and pragmatics than anything we could hope to encode in a symbolic model. LLMs build context-specific encodings of words that are unique to, yet still generalizable from, their every attested use in a corpus. Humour, and the ability to generate novel jokes, stretch this ability to contextualize word meanings, just as they have severely tested our existing symbolic approaches. In this paper we explore whether LLMs like GPT-3.5 or GPT-4, which underpin the application ChatGPT, can “get” the intent of jokes that use familiar words in non-obvious ways, or whether, indeed, they can craft meaningful new jokes of their own.
BibTeX
@inproceedings{euralex_2024_paper_2,
  address    = {Cavtat},
  title      = {You Talk Funny! Someday Me Talk Funny Too! – On Learning to See the Humorous Side of Familiar Words},
  isbn       = {978-953-7967-77-2},
  shorttitle = {Euralex 2024},
  url        = {},
  language   = {eng},
  booktitle  = {Lexicography and Semantics. Proceedings of the XXI EURALEX International Congress},
  publisher  = {Institut za hrvatski jezik},
  author     = {Veale, Tony},
  editor     = {Despot, Kristina Š. and Ostroški Anić, Ana and Brač, Ivana},
  year       = {2024},
  pages      = {27--41}
}