Lecture Series by Prof. Roni Katzir

At MIT

06 March 2025

Among the lectures:

 

On the roles of anaphoricity and questions in free focus

 

Abstract:

The sensitivity of focus to context has often been analyzed in terms of anaphoric relations between sentences and surrounding discourse. I will suggest that we abandon this anaphoric view. Instead of anaphoric felicity conditions, I propose that focus leads to infelicity only indirectly, when the processes that it feeds — in particular, exhaustification and question formation — make an inappropriate contribution to discourse. I outline such an account, incorporating insights from Büring (2019) and Fox (2019). A challenge to this account comes from cases where anaphoricity seems needed either to block deaccenting that would be licensed by a question or to allow local deaccenting that is not warranted by a question. Such cases appear to support recent anaphoric proposals such as Schwarzschild (2020) and Goodhue (2022). I argue that this potential motivation for anaphoricity is only apparent and that, where anaphoric conditions on focus are not inert, they are in fact harmful.

 

--------------------------------------------------------------------------------------------------------------------------

 

Large language models and human linguistic cognition

(As part of the Breakstone Speaker Series in Language, Computation and Cognition)

 

Abstract:

Several recent publications in cognitive science have suggested that the performance of current Large Language Models (LLMs) challenges arguments that linguists use to support their theories (in particular, arguments from the poverty of the stimulus and from cross-linguistic variation). I will review this line of work, starting from proposals that take LLMs themselves to be good theories of human linguistic cognition. I will note that the architectures behind current LLMs lack the distinctions between competence and performance and between correctness and probability, two fundamental distinctions of human cognition. Moreover, these architectures fail to acquire key aspects of human linguistic knowledge — in fact, they make inductive leaps that are not just non-human-like but would be surprising in any kind of rational learner. These observations make current LLMs inadequate theories of human linguistic cognition. Still, LLMs can in principle inform cognitive science by serving as proxies for theories of cognition whose representations and learning are more linguistically neutral than those of most theories within generative linguistics. I will illustrate this proxy use of LLMs in evaluating learnability and typological arguments and show that, at present, these models provide little support for linguistically neutral theories of cognition.

 

--------------------------------------------------------------------------------------------------------------------------

 

Gaps, doublets, and rational learning

(As part of the Phonology Circle at the MIT Department of Linguistics)

 

Abstract:

Inflectional gaps (??forgoed/??forwent) and doublets (✓dived/✓dove) can seem surprising in light of common assumptions about morphology and learning. Perhaps understandably, morphologists have troubled themselves with such cases (especially with gaps) and have offered different ways in which they can be accommodated within the morphological component of the grammar, often by making major departures from those common assumptions. I will suggest that these worries and the proposed remedies are premature. The worries arise from the unmotivated assumption that all gaps and doublets are necessarily derived within individual grammars. Once this assumption is abandoned, the observed properties of gaps and doublets become much less puzzling. The only new assumption that is needed is that speakers only use forms that they know (and not just believe) to be correct.

 

