Jean-Philippe Bernardy
Department of Philosophy, Linguistics and Theory of Science, Centre for Linguistic Theory and Studies in Probability, Gothenburg University, Sweden
Rasmus Blanck
Department of Philosophy, Linguistics and Theory of Science, Centre for Linguistic Theory and Studies in Probability, Gothenburg University, Sweden
Stergios Chatzikyriakidis
Department of Philosophy, Linguistics and Theory of Science, Centre for Linguistic Theory and Studies in Probability, Gothenburg University, Sweden
Shalom Lappin
Department of Philosophy, Linguistics and Theory of Science, Centre for Linguistic Theory and Studies in Probability, Gothenburg University, Sweden
Aleksandre Maskharashvili
Department of Philosophy, Linguistics and Theory of Science, Centre for Linguistic Theory and Studies in Probability, Gothenburg University, Sweden
Published in: Proceedings of the 22nd Nordic Conference on Computational Linguistics (NoDaLiDa), September 30 - October 2, 2019, Turku, Finland
Linköping Electronic Conference Proceedings 167:37, pp. 333–337
NEALT Proceedings Series 42:37, pp. 333–337
Published: 2019-10-02
ISBN: 978-91-7929-995-8
ISSN: 1650-3686 (print), 1650-3740 (online)
In this paper, we present a Bayesian approach to natural language semantics. Our main focus is on the inference task in settings where judgements require probabilistic reasoning. We treat nouns, verbs, adjectives, etc. as unary predicates, and we model them as boxes in a bounded domain. We apply Bayesian learning to satisfy constraints expressed as premises, and in this way we construct a model by specifying boxes for the predicates. The probability of the hypothesis (the conclusion) is then evaluated against the model that incorporates the premises as constraints.
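To make the abstract's idea concrete, the following is a minimal sketch, not the authors' implementation: a hypothetical Python rejection-sampling setup in which the predicates "bird" and "fly" are random boxes over a bounded domain [0, 1], sampled models are kept only if they satisfy the premise "most birds fly", and the probability of the conclusion "some birds fly" is estimated from the surviving samples. All names, thresholds, and sample sizes here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_box(dim=1):
    # Illustrative assumption: a predicate is an axis-aligned box in [0, 1]^dim.
    lo = rng.uniform(0.0, 1.0, size=dim)
    hi = np.minimum(lo + rng.uniform(0.0, 1.0, size=dim), 1.0)
    return lo, hi

def in_box(points, box):
    # Boolean mask: which individuals fall inside the box.
    lo, hi = box
    return np.all((points >= lo) & (points <= hi), axis=1)

def premise(ind, bird, fly):
    # Premise "most birds fly": at least 90% of individuals in the 'bird'
    # box also lie in the 'fly' box (the 0.9 threshold is an assumption).
    birds = in_box(ind, bird)
    if birds.sum() == 0:
        return False
    return (in_box(ind, fly) & birds).sum() / birds.sum() >= 0.9

def conclusion(ind, bird, fly):
    # Conclusion "some birds fly": at least one individual is in both boxes.
    return (in_box(ind, bird) & in_box(ind, fly)).any()

def estimate(n_samples=5000, n_individuals=200, dim=1):
    # Rejection sampling: keep sampled models satisfying the premise and
    # report the fraction of them in which the conclusion also holds.
    kept, concl = 0, 0
    for _ in range(n_samples):
        ind = rng.uniform(0.0, 1.0, size=(n_individuals, dim))
        bird, fly = sample_box(dim), sample_box(dim)
        if premise(ind, bird, fly):
            kept += 1
            concl += conclusion(ind, bird, fly)
    return concl / kept if kept else float("nan")

if __name__ == "__main__":
    print("P(conclusion | premise) ~", estimate())
```

Rejection sampling is used here purely for brevity; any probabilistic-programming backend that conditions box parameters on the premises would serve the same illustrative purpose.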
Bayesian models
probabilistic semantics
generalised quantifiers
vague predicates
compositionality
inference