LCA Model Posterior Estimation

Psychological research often relies on mathematical models to explain and predict human behavior. Such models aim to formalize cognitive processes by mapping latent psychological constructs onto model parameters and specifying how these parameters generate manifest data. In this tutorial, we go through the steps of a principled Bayesian workflow that is imperative when developing and applying cognitive models. This workflow includes the following steps: (I) prior pushforward and prior predictive checks to assess whether the model is consistent with our domain expertise; (II) computational faithfulness checks to ensure that our estimation method can accurately approximate the posterior distributions; (III) model sensitivity analysis to examine whether our inferences provide sufficient information for answering our research question; and (IV) posterior retrodictive checks to assess whether our model can capture the relevant structure of the true data-generating process.

To demonstrate how such a workflow is carried out in an amortized manner with BayesFlow, we take a complex model from the evidence accumulation model (EAM) family that is intractable for standard Bayesian methods.
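
The tutorial runs this workflow with the BayesFlow Python library. As a rough orientation, a setup along the following lines is plausible; this is a minimal sketch assuming the BayesFlow 1.x API (module and class names may differ between releases), and the prior ranges, the simplified two-alternative simulator, and all training settings are illustrative placeholders rather than the tutorial's actual choices.

```python
import numpy as np
from bayesflow.simulation import Prior, Simulator, GenerativeModel
from bayesflow.summary_networks import DeepSet      # older releases name this InvariantNetwork
from bayesflow.networks import InvertibleNetwork
from bayesflow.amortizers import AmortizedPosterior
from bayesflow.trainers import Trainer

def lca_prior():
    """Illustrative prior over (drift_1, drift_2, leak, inhibition, threshold)."""
    return np.random.uniform(low=[0.0, 0.0, 0.0, 0.0, 0.5],
                             high=[3.0, 3.0, 2.0, 2.0, 2.0])

def lca_simulator(theta, n_trials=100, dt=0.001, max_steps=3000, noise_sd=1.0):
    """Simplified two-alternative LCA stand-in; returns an (n_trials, 2) array of (rt, choice)."""
    drifts, leak, inhibition, threshold = theta[:2], theta[2], theta[3], theta[4]
    data = np.empty((n_trials, 2))
    for t in range(n_trials):
        x = np.zeros(2)
        for step in range(1, max_steps + 1):
            dx = (drifts - leak * x - inhibition * x[::-1]) * dt
            dx += noise_sd * np.sqrt(dt) * np.random.randn(2)
            x = np.maximum(x + dx, 0.0)          # evidence is truncated at zero
            if x.max() >= threshold:
                break
        data[t] = (step * dt, x.argmax())
    return data

# The generative model ties prior and simulator together; sampling from it drives
# the prior pushforward and prior predictive checks in step (I).
model = GenerativeModel(prior=Prior(prior_fun=lca_prior),
                        simulator=Simulator(simulator_fun=lca_simulator))

# The summary network pools a variable number of trials into a fixed-size vector;
# the invertible inference network approximates the posterior over the five parameters.
amortizer = AmortizedPosterior(inference_net=InvertibleNetwork(num_params=5),
                               summary_net=DeepSet())

# Online training draws fresh simulations on the fly; the trained networks are then
# reused for the faithfulness (II), sensitivity (III), and retrodictive (IV) checks.
trainer = Trainer(amortizer=amortizer, generative_model=model)
history = trainer.train_online(epochs=10, iterations_per_epoch=500, batch_size=32)
```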

Modeling frameworks
Leaky Competing Accumulator

A model of decision making that incorporates both leakage of accumulated evidence and competition between accumulators.
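
To make the entry concrete, here is a minimal NumPy sketch of the trial-level dynamics the description refers to; the parameter names and default values are illustrative rather than the tutorial's exact parameterization.

```python
import numpy as np

def simulate_lca_trial(drifts, leak, inhibition, threshold, ndt=0.3,
                       dt=0.001, noise_sd=1.0, max_time=5.0, rng=None):
    """Simulate one trial of a leaky competing accumulator.

    drifts     : input strength for each response alternative
    leak       : decay pulling each accumulator back toward zero (leakage)
    inhibition : lateral inhibition between accumulators (competition)
    threshold  : evidence level at which a response is triggered
    ndt        : non-decision time added to the accumulation time
    """
    rng = np.random.default_rng() if rng is None else rng
    drifts = np.asarray(drifts, dtype=float)
    x = np.zeros_like(drifts)                             # accumulated evidence per alternative
    sqrt_dt = np.sqrt(dt)
    for step in range(1, int(max_time / dt) + 1):
        competition = inhibition * (x.sum() - x)          # input from the other accumulators
        dx = (drifts - leak * x - competition) * dt
        dx += noise_sd * sqrt_dt * rng.standard_normal(x.shape)
        x = np.maximum(x + dx, 0.0)                       # evidence cannot become negative
        if x.max() >= threshold:
            return step * dt + ndt, int(x.argmax())       # (response time, chosen alternative)
    return np.nan, -1                                     # no alternative reached the threshold in time

# Example: three alternatives, the first receiving the strongest input
rt, choice = simulate_lca_trial(drifts=[1.2, 0.8, 0.8], leak=2.0, inhibition=1.5, threshold=1.0)
```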

Evidence Accumulation Models

Models that explain decision making through the accumulation of evidence over time.

Amortized Bayesian Inference

A method that leverages neural networks to perform rapid, approximate Bayesian inference.
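
The "rapid" part is the amortization: after training, posterior draws for any new dataset come from a single forward pass through the networks rather than a fresh sampling run. Continuing the hedged BayesFlow sketch given under the tutorial description above (exact call signatures are assumptions and may differ across releases):

```python
# Stand-in for one observed dataset, passed through the trainer's default configurator;
# with a trained amortizer, sampling is near-instant and needs no per-dataset refitting.
new_dataset = trainer.configurator(model(batch_size=1))
posterior_draws = amortizer.sample(new_dataset, n_samples=2000)  # approximate posterior over the parameters
```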

Psychology disciplines
Cognitive Psychology, Mathematical Psychology
DOI
Programming language

Python

Code repository URL

https://bayesflow.org/_examples/LCA_Model_Posterior_Estimation.html