Psychological research often relies on mathematical models to explain and predict human behavior. Such models aim to formalize cognitive processes by mapping latent psychological constructs to model parameters and specifying how these parameters generate manifest data. In this tutorial, we go through the steps of a principled Bayesian workflow that is imperative when developing and applying cognitive models. This workflow includes the following steps: (I) Prior pushforward and prior predictive checks to assess whether the model is consistent with our domain expertise; (II) Computational faithfulness checks to ensure that our estimation method can accurately approximate the posterior distributions; (III) Model sensitivity analysis to examine whether our inferences provide sufficient information for answering our research question; (IV) Posterior retrodictive checks to assess whether our model can capture the relevant structure of the true data-generating process.

To demonstrate how such a workflow is performed in an amortized manner using BayesFlow, we take a complex model from the evidence accumulation model (EAM) family whose likelihood is intractable for standard Bayesian methods.
Python
https://bayesflow.org/_examples/LCA_Model_Posterior_Estimation.html
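To make the structure of such a workflow concrete, below is a minimal, illustrative sketch in plain NumPy of the two ingredients every amortized analysis starts from: a prior over the parameters of a leaky competing accumulator (LCA) model and a trial-level forward simulator, followed by a rough prior pushforward check (step I). All names, prior ranges, and fixed settings (three accumulators, 1 ms time step, unit diffusion noise) are assumptions made for illustration and do not reproduce the linked notebook; in the actual tutorial, components of this kind are wrapped in BayesFlow's simulation utilities and used to train a neural posterior approximator.

```python
import numpy as np

rng = np.random.default_rng(2024)

def draw_prior():
    """Draw one parameter vector: three drift rates, leak, inhibition, threshold, ndt."""
    v = rng.uniform(0.0, 5.0, size=3)   # drift rates of the three accumulators
    leak = rng.uniform(0.0, 2.0)        # decay of accumulated evidence
    inhib = rng.uniform(0.0, 2.0)       # lateral inhibition between accumulators
    threshold = rng.uniform(0.5, 3.0)   # decision boundary
    ndt = rng.uniform(0.1, 0.5)         # non-decision time (s)
    return np.concatenate([v, [leak, inhib, threshold, ndt]])

def simulate_lca_trial(theta, dt=0.001, max_t=3.0, noise_sd=1.0):
    """Simulate one leaky competing accumulator trial; return (choice, rt)."""
    v, leak, inhib = theta[:3], theta[3], theta[4]
    threshold, ndt = theta[5], theta[6]
    x = np.zeros(3)
    for step in range(1, int(max_t / dt) + 1):
        drift = v - leak * x - inhib * (x.sum() - x)   # input minus leak and inhibition
        x += drift * dt + noise_sd * np.sqrt(dt) * rng.standard_normal(3)
        x = np.maximum(x, 0.0)                         # activations stay non-negative
        if (x >= threshold).any():                     # first accumulator to cross wins
            return int(np.argmax(x)), step * dt + ndt
    return -1, max_t + ndt                             # no crossing: censored trial

# Prior pushforward / prior predictive check (step I): do parameter values drawn
# from the prior imply response times that are plausible for a human choice task?
median_rts = []
for _ in range(50):
    theta = draw_prior()
    rts = [simulate_lca_trial(theta)[1] for _ in range(20)]
    median_rts.append(np.median(rts))

print(f"Prior predictive median RT: {np.median(median_rts):.2f} s "
      f"(range {min(median_rts):.2f}-{max(median_rts):.2f} s)")
```

If these prior predictive response times were implausibly fast or slow for the task at hand, the priors would be revised before any network training; the remaining workflow steps (II-IV) then rely on the trained amortized posterior.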