Publications

Copyright Notice: The documents distributed here have been provided as a means to ensure timely dissemination of scholarly and technical work on a noncommercial basis. Copyright and all rights therein are maintained by the authors or by other copyright holders, notwithstanding that they have offered their works here electronically. It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author’s copyright. These works may not be reposted without the explicit permission of the copyright holder.

Submitted

  • [PDF] Heck, D. W., & Noventa, S. (2019). A note on representing probabilistic models of knowledge space theory by multinomial processing tree models. Manuscript submitted for publication.
    [BibTeX] [Abstract] [Data and R Scripts]

    Knowledge Space Theory (KST) aims at modeling the hierarchical relations between items or skills in a learning process. For example, when studying mathematics in school, students first need to master the rules of summation before being able to learn multiplication. In KST, the knowledge states of individuals are represented by means of partially ordered latent classes. In probabilistic KST models, conditional probability parameters are introduced to model transitions from latent knowledge states to observed response patterns. Since these models account for discrete data by assuming a finite number of latent states, they can be represented by Multinomial Processing Tree (MPT) models (i.e., binary decision trees with parameters referring to the conditional probabilities of entering different states). Extending previous work on the link between MPT and KST models for procedural assessments of knowledge, we prove that standard probabilistic models of KST such as the Basic Local Independence Model (BLIM) and the Simple Learning Model (SLM) can be represented as specific instances of MPT models. Given this close link, MPT methods may be applied to address theoretical and practical issues in KST. Using a simulation study, we show that model-selection methods recently implemented for MPT models (e.g., the Bayes factor) allow KST researchers to test and account for violations of local independence, a fundamental assumption in Item Response Theory (IRT) and psychological testing in general. By highlighting the MPT-KST link and its implications for IRT, we hope to facilitate an exchange of theoretical results, statistical methods, and software across these different domains of mathematical psychology.

    @unpublished{heck2019note,
    title = {A Note on Representing Probabilistic Models of Knowledge Space Theory by Multinomial Processing Tree Models},
    abstract = {Knowledge Space Theory (KST) aims at modeling the hierarchical relations between items or skills in a learning process. For example, when studying mathematics in school, students first need to master the rules of summation before being able to learn multiplication. In KST, the knowledge states of individuals are represented by means of partially ordered latent classes. In probabilistic KST models, conditional probability parameters are introduced to model transitions from latent knowledge states to observed response patterns. Since these models account for discrete data by assuming a finite number of latent states, they can be represented by Multinomial Processing Tree (MPT) models (i.e., binary decision trees with parameters referring to the conditional probabilities of entering different states). Extending previous work on the link between MPT and KST models for procedural assessments of knowledge, we prove that standard probabilistic models of KST such as the Basic Local Independence Model (BLIM) and the Simple Learning Model (SLM) can be represented as specific instances of MPT models. Given this close link, MPT methods may be applied to address theoretical and practical issues in KST. Using a simulation study, we show that model-selection methods recently implemented for MPT models (e.g., the Bayes factor) allow KST researchers to test and account for violations of local independence, a fundamental assumption in Item Response Theory (IRT) and psychological testing in general. By highlighting the MPT-KST link and its implications for IRT, we hope to facilitate an exchange of theoretical results, statistical methods, and software across these different domains of mathematical psychology.},
    type = {Manuscript submitted for publication},
    howpublished = {Manuscript submitted for publication},
    date = {2019},
    keywords = {submitted},
    author = {Heck, Daniel W and Noventa, Stefano},
    osf = {https://osf.io/4wma7}
    }

2020

  • [PDF] Erdfelder, E., & Heck, D. W. (in press). Detecting evidential value and p-hacking with the p-curve tool: A word of caution. Zeitschrift für Psychologie.
    [BibTeX] [Abstract]

    Simonsohn, Nelson, and Simmons (2014a) proposed p-curve – the distribution of statistically significant p-values for a set of studies – as a tool to assess the evidential value of these studies. They argued that, whereas right-skewed p-curves indicate true underlying effects, left-skewed p-curves indicate selective reporting of significant results when there is no true effect (“p-hacking”). We first review previous research showing that, in contrast to the first claim, null effects may produce right-skewed p-curves under some conditions. We then question the second claim by showing that not only selective reporting but also selective non-reporting of significant results due to a significant outcome of a more popular alternative test of the same hypothesis may produce left-skewed p-curves, even if all studies reflect true effects. Hence, just as right-skewed p-curves do not necessarily imply evidential value, left-skewed p-curves do not necessarily imply p-hacking and absence of true effects in the studies involved.

    @article{erdfelder2020detecting,
    title = {Detecting Evidential Value and {{p}}-Hacking with the {{p}}-Curve Tool: {{A}} Word of Caution},
    abstract = {Simonsohn, Nelson, and Simmons (2014a) proposed p-curve – the distribution of statistically significant p-values for a set of studies – as a tool to assess the evidential value of these studies. They argued that, whereas right-skewed p-curves indicate true underlying effects, left-skewed p-curves indicate selective reporting of significant results when there is no true effect (“p-hacking”). We first review previous research showing that, in contrast to the first claim, null effects may produce right-skewed p-curves under some conditions. We then question the second claim by showing that not only selective reporting but also selective non-reporting of significant results due to a significant outcome of a more popular alternative test of the same hypothesis may produce left-skewed p-curves, even if all studies reflect true effects. Hence, just as right-skewed p-curves do not necessarily imply evidential value, left-skewed p-curves do not necessarily imply p-hacking and absence of true effects in the studies involved.},
    journaltitle = {Zeitschrift für Psychologie},
    date = {2020},
    author = {Erdfelder, Edgar and Heck, Daniel W},
    pubstate = {inpress}
    }

  • [PDF] Heck, D. W., Seiling, L., & Bröder, A. (in press). The love of large numbers revisited: A coherence model of the popularity bias. Cognition.
    [BibTeX] [Abstract] [Data and R Scripts]

    Preferences are often based on social information such as experiences and recommendations of other people. The reliance on social information is especially relevant in the case of online shopping, where buying decisions for products may often be based on online reviews by other customers. Recently, Powell, Yu, DeWolf, and Holyoak (2017, Psychological Science, 28, 1432-1442) showed that, when deciding between two products, people do not consider the number of product reviews in a statistically appropriate way as predicted by a Bayesian model but rather exhibit a bias for popular products (i.e., products with many reviews). In the present work, we propose a coherence model of the cognitive mechanism underlying this empirical phenomenon. The new model assumes that people strive for a coherent representation of the available information (i.e., the average review score and the number of reviews). To test this theoretical account, we reanalyzed the data of Powell and colleagues and ran an online study with 244 participants using a wider range of stimulus material than in the original study. Besides replicating the popularity bias, the study provided clear evidence for the predicted coherence effect, that is, decisions became more confident and faster when the available information about popularity and quality was congruent.

    @article{heck2020love,
    title = {The Love of Large Numbers Revisited: {{A}} Coherence Model of the Popularity Bias},
    abstract = {Preferences are often based on social information such as experiences and recommendations of other people. The reliance on social information is especially relevant in the case of online shopping, where buying decisions for products may often be based on online reviews by other customers. Recently, Powell, Yu, DeWolf, and Holyoak (2017, Psychological Science, 28, 1432-1442) showed that, when deciding between two products, people do not consider the number of product reviews in a statistically appropriate way as predicted by a Bayesian model but rather exhibit a bias for popular products (i.e., products with many reviews). In the present work, we propose a coherence model of the cognitive mechanism underlying this empirical phenomenon. The new model assumes that people strive for a coherent representation of the available information (i.e., the average review score and the number of reviews). To test this theoretical account, we reanalyzed the data of Powell and colleagues and ran an online study with 244 participants using a wider range of stimulus material than in the original study. Besides replicating the popularity bias, the study provided clear evidence for the predicted coherence effect, that is, decisions became more confident and faster when the available information about popularity and quality was congruent.},
    journaltitle = {Cognition},
    date = {2020},
    author = {Heck, Daniel W and Seiling, Lukas and Bröder, Arndt},
    osf = {https://osf.io/mzb7n},
    pubstate = {inpress}
    }

  • [PDF] Heck, D. W., & Erdfelder, E. (in press). Benefits of response time-extended multinomial processing tree models: A reply to Starns (2018). Psychonomic Bulletin & Review. doi:10.3758/s13423-019-01663-0
    [BibTeX] [Abstract] [Data and R Scripts]

    In his comment on Heck and Erdfelder (2016), Starns (2018) focuses on the response time-extended two-high-threshold (2HT-RT) model for yes-no recognition tasks, a specific example of the general class of response time-extended multinomial processing tree models (MPT-RTs) we proposed. He argues that the 2HT-RT model cannot accommodate the speed-accuracy trade-off, a key mechanism in speeded recognition tasks. As a remedy, he proposes a specific discrete-state model for recognition memory that assumes a race mechanism for detection and guessing. In this reply, we clarify our motivation for using the 2HT-RT model as an example and highlight the importance and benefits of MPT-RTs as a flexible class of general-purpose, simple-to-use models. By binning RTs into discrete categories, the MPT-RT approach facilitates the joint modeling of discrete responses and response times in a variety of psychological paradigms. In fact, many paradigms either lack a clear-cut accuracy criterion or show performance levels at ceiling, making corrections for incautious responding redundant. Moreover, we show that some forms of speed-accuracy trade-off can in fact not only be accommodated but also be measured by appropriately designed MPT-RTs.

    @article{heck2020benefits,
    title = {Benefits of Response Time-Extended Multinomial Processing Tree Models: {{A}} Reply to {{Starns}} (2018)},
    doi = {10.3758/s13423-019-01663-0},
    abstract = {In his comment on Heck and Erdfelder (2016), Starns (2018) focuses on the response time-extended two-high-threshold (2HT-RT) model for yes-no recognition tasks, a specific example of the general class of response time-extended multinomial processing tree models (MPT-RTs) we proposed. He argues that the 2HT-RT model cannot accommodate the speed-accuracy trade-off, a key mechanism in speeded recognition tasks. As a remedy, he proposes a specific discrete-state model for recognition memory that assumes a race mechanism for detection and guessing. In this reply, we clarify our motivation for using the 2HT-RT model as an example and highlight the importance and benefits of MPT-RTs as a flexible class of general-purpose, simple-to-use models. By binning RTs into discrete categories, the MPT-RT approach facilitates the joint modeling of discrete responses and response times in a variety of psychological paradigms. In fact, many paradigms either lack a clear-cut accuracy criterion or show performance levels at ceiling, making corrections for incautious responding redundant. Moreover, we show that some forms of speed-accuracy trade-off can in fact not only be accommodated but also be measured by appropriately designed MPT-RTs.},
    journaltitle = {Psychonomic Bulletin \& Review},
    date = {2020},
    author = {Heck, Daniel W and Erdfelder, Edgar},
    osf = {https://osf.io/qkfxz},
    pubstate = {inpress}
    }

  • [PDF] Heck, D. W., Thielmann, I., Klein, S. A., & Hilbig, B. E. (in press). On the limited generality of air pollution and anxiety as causal determinants of unethical behavior: Commentary on Lu, Lee, Gino, & Galinsky (2018). Psychological Science.
    [BibTeX] [Abstract] [Data and R Scripts]

    Lu, Lee, Gino, and Galinsky (2018; LLGG) tested the hypotheses that air pollution causes unethical behavior and that this effect is mediated by increased anxiety. Here, we provide theoretical and empirical arguments against the generality of the effects of air pollution and anxiety on unethical behavior. First, we collected and analyzed monthly longitudinal data on air pollution and crimes for 103 districts in the UK. Contrary to LLGG’s proposition, seasonal trends in air pollution were exactly opposed to monthly crime rates. Moreover, our data provide evidence against the more restrictive hypothesis that air pollution has incremental validity beyond seasonal trends. Second, based on a large-scale reanalysis of incentivized cheating behavior in standard dice-roll and coin-toss tasks, we found that trait anxiety, operationalized by the personality trait Emotionality and its facet Anxiety, are not predictive of dishonesty. Overall, this suggests that LLGG’s theory is too broad and requires further specification.

    @article{heck2020limited,
    title = {On the Limited Generality of Air Pollution and Anxiety as Causal Determinants of Unethical Behavior: {{Commentary}} on {{Lu}}, {{Lee}}, {{Gino}}, \& {{Galinsky}} (2018)},
    abstract = {Lu, Lee, Gino, and Galinsky (2018; LLGG) tested the hypotheses that air pollution causes unethical behavior and that this effect is mediated by increased anxiety. Here, we provide theoretical and empirical arguments against the generality of the effects of air pollution and anxiety on unethical behavior. First, we collected and analyzed monthly longitudinal data on air pollution and crimes for 103 districts in the UK. Contrary to LLGG’s proposition, seasonal trends in air pollution were exactly opposed to monthly crime rates. Moreover, our data provide evidence against the more restrictive hypothesis that air pollution has incremental validity beyond seasonal trends. Second, based on a large-scale reanalysis of incentivized cheating behavior in standard dice-roll and coin-toss tasks, we found that trait anxiety, operationalized by the personality trait Emotionality and its facet Anxiety, are not predictive of dishonesty. Overall, this suggests that LLGG’s theory is too broad and requires further specification.},
    journaltitle = {Psychological Science},
    date = {2020},
    author = {Heck, Daniel W and Thielmann, Isabel and Klein, Sina A and Hilbig, Benjamin E},
    osf = {https://osf.io/k76b2},
    pubstate = {inpress}
    }

  • [PDF] Heck, D. W., & Erdfelder, E. (in press). Maximizing the expected information gain of cognitive modeling via design optimization. Computational Brain & Behavior. doi:10.1007/s42113-019-00035-0
    [BibTeX] [Abstract] [https://psyarxiv.com/6cy9n] [Data and R Scripts]

    To ensure robust scientific conclusions, cognitive modelers should optimize planned experimental designs a priori in order to maximize the expected information gain for answering the substantive question of interest. Both from the perspective of the philosophy of science and within classical and Bayesian statistics, it is crucial to tailor empirical studies to the specific cognitive models under investigation before collecting any new data. In practice, methods such as design optimization, classical power analysis, and Bayesian design analysis provide indispensable tools for planning and designing informative experiments. Given that cognitive models provide precise predictions for future observations, we especially highlight the benefits of model-based Monte Carlo simulations to judge the expected information gain provided by different possible designs for cognitive modeling.

    @article{heck2020maximizing,
    title = {Maximizing the Expected Information Gain of Cognitive Modeling via Design Optimization},
    url = {https://psyarxiv.com/6cy9n},
    doi = {10.1007/s42113-019-00035-0},
    abstract = {To ensure robust scientific conclusions, cognitive modelers should optimize planned experimental designs a priori in order to maximize the expected information gain for answering the substantive question of interest. Both from the perspective of the philosophy of science and within classical and Bayesian statistics, it is crucial to tailor empirical studies to the specific cognitive models under investigation before collecting any new data. In practice, methods such as design optimization, classical power analysis, and Bayesian design analysis provide indispensable tools for planning and designing informative experiments. Given that cognitive models provide precise predictions for future observations, we especially highlight the benefits of model-based Monte Carlo simulations to judge the expected information gain provided by different possible designs for cognitive modeling.},
    journaltitle = {Computational Brain \& Behavior},
    date = {2020},
    author = {Heck, Daniel W and Erdfelder, Edgar},
    pubstate = {inpress},
    osf = {https://osf.io/xehk5}
    }

  • [PDF] Kroneisen, M., & Heck, D. W. (in press). Interindividual differences in the sensitivity for consequences, moral norms and preferences for inaction: Relating personality to the CNI model. Personality and Social Psychology Bulletin.
    [BibTeX] [Abstract] [Data and R Scripts]

    Research on moral decision-making usually focuses on two ethical principles: the principle of utilitarianism (= the morality of an action is determined by its consequences) and the principle of deontology (= the morality of an action is valued according to its adherence to moral norms, regardless of the consequences). Criticism of traditional moral dilemma research includes the reproach that consequences and norms are confounded in standard paradigms. As a remedy, a multinomial model (the CNI model) was developed to disentangle and measure sensitivity to consequences (C), sensitivity to moral norms (N), and general preference for inaction versus action (I). In two studies, we examined the link of basic personality traits to moral judgments by fitting a hierarchical Bayesian version of the CNI model. As predicted, high Honesty-Humility was selectively associated with sensitivity for norms, whereas high Emotionality was selectively associated with sensitivity for consequences. However, Conscientiousness was not associated with a preference for inaction.

    @article{kroneisen2020interindividual,
    title = {Interindividual Differences in the Sensitivity for Consequences, Moral Norms and Preferences for Inaction: {{Relating}} Personality to the {{CNI}} Model},
    abstract = {Research on moral decision-making usually focuses on two ethical principles: the principle of utilitarianism (= the morality of an action is determined by its consequences) and the principle of deontology (= the morality of an action is valued according to its adherence to moral norms, regardless of the consequences). Criticism of traditional moral dilemma research includes the reproach that consequences and norms are confounded in standard paradigms. As a remedy, a multinomial model (the CNI model) was developed to disentangle and measure sensitivity to consequences (C), sensitivity to moral norms (N), and general preference for inaction versus action (I). In two studies, we examined the link of basic personality traits to moral judgments by fitting a hierarchical Bayesian version of the CNI model. As predicted, high Honesty-Humility was selectively associated with sensitivity for norms, whereas high Emotionality was selectively associated with sensitivity for consequences. However, Conscientiousness was not associated with a preference for inaction.},
    journaltitle = {Personality and Social Psychology Bulletin},
    date = {2020},
    author = {Kroneisen, Meike and Heck, Daniel W},
    osf = {https://osf.io/b7c9z},
    pubstate = {inpress}
    }

  • [PDF] Schild, C., Heck, D. W., Ścigała, K., & Zettler, I. (in press). Revisiting REVISE: (Re)Testing unique and combined effects of REminding, VIsibility, and SElf-engagement manipulations on cheating behavior. Journal of Economic Psychology. doi:10.1016/j.joep.2019.04.001
    [BibTeX] [Abstract] [Data and R Scripts]

    Dishonest behavior poses a crucial threat to individuals and societies at large. To highlight situation factors that potentially reduce the occurrence and/or extent of dishonesty, Ayal, Gino, Barkan, and Ariely (2015) introduced the REVISE framework, consisting of three principles: REminding, VIsibility, and SElf-engagement. The evidence that the three REVISE principles actually reduce dishonesty is not always strong and sometimes even inconsistent, however. We herein thus conceptually replicate three suggested manipulations, each serving as an operationalization of one principle. In a large study with eight conditions and 5,039 participants, we link the REminding, VIsibility, and SElf-engagement manipulations to dishonesty, compare their effectiveness with each other, and test for potential interactions between them. Overall, we find that VIsibility (in terms of overtly monitoring responses) and SElf-engagement (in terms of retyping an honesty statement) reduce dishonest behavior. We find no support for the effectiveness of REminding (in terms of ethical priming) or for any interaction between the REVISE principles. We also report two preregistered manipulation-check studies and discuss policy implications of our findings.

    @article{schild2020revisiting,
    title = {Revisiting {{REVISE}}: ({{Re}}){{Testing}} Unique and Combined Effects of {{REminding}}, {{VIsibility}}, and {{SElf}}-Engagement Manipulations on Cheating Behavior},
    doi = {10.1016/j.joep.2019.04.001},
    abstract = {Dishonest behavior poses a crucial threat to individuals and societies at large. To highlight situation factors that potentially reduce the occurrence and/or extent of dishonesty, Ayal, Gino, Barkan, and Ariely (2015) introduced the REVISE framework, consisting of three principles: REminding, VIsibility, and SElf-engagement. The evidence that the three REVISE principles actually reduce dishonesty is not always strong and sometimes even inconsistent, however. We herein thus conceptually replicate three suggested manipulations, each serving as an operationalization of one principle. In a large study with eight conditions and 5,039 participants, we link the REminding, VIsibility, and SElf-engagement manipulations to dishonesty, compare their effectiveness with each other, and test for potential interactions between them. Overall, we find that VIsibility (in terms of overtly monitoring responses) and SElf-engagement (in terms of retyping an honesty statement) reduce dishonest behavior. We find no support for the effectiveness of REminding (in terms of ethical priming) or for any interaction between the REVISE principles. We also report two preregistered manipulation-check studies and discuss policy implications of our findings.},
    journaltitle = {Journal of Economic Psychology},
    date = {2020},
    author = {Schild, Christoph and Heck, Daniel W and Ścigała, Karolina and Zettler, Ingo},
    osf = {https://osf.io/m6cnu},
    pubstate = {inpress}
    }

  • [PDF] Starns, J. J., Cataldo, A. M., Rotello, C. M., Annis, J., Aschenbrenner, A., Bröder, A., Cox, G., Criss, A., Curl, R. A., Dobbins, I. G., Dunn, J., Enam, T., Evans, N. J., Farrell, S., Fraundorf, S. H., Gronlund, S. D., Heathcote, A., Heck, D. W., Hicks, J. L., Huff, M. J., Kellen, D., Key, K. N., Kilic, A., Klauer, K. C., Kraemer, K. R., Leite, F. P., Lloyd, M. E., Malejka, S., Mason, A., McAdoo, R. M., McDonough, I. M., Michael, R. B., Mickes, L., Mizrak, E., Morgan, D. P., Mueller, S. T., Osth, A., Reynolds, A., Seale-Carlisle, T. M., Singmann, H., Sloane, J. F., Smith, A. M., Tillman, G., van Ravenzwaaij, D., Weidemann, C. T., Wells, G. L., White, C. N., & Wilson, J. (in press). Assessing theoretical conclusions with blinded inference to investigate a potential inference crisis. Advances in Methods and Practices in Psychological Science.
    [BibTeX] [Abstract] [Data and R Scripts]

    Scientific advances across a range of disciplines hinge on the ability to make inferences about unobservable theoretical entities on the basis of empirical data patterns. Accurate inferences rely on both discovering valid, replicable data patterns and accurately interpreting those patterns in terms of their implications for theoretical constructs. The replication crisis in science has led to widespread efforts to improve the reliability of research findings, but comparatively little attention has been devoted to the validity of inferences based on those findings. Using an example from cognitive psychology, we demonstrate a blinded-inference paradigm for assessing the quality of theoretical inferences from data. Our results reveal substantial variability in experts’ judgments on the very same data, hinting at a possible inference crisis.

    @article{starns2019assessing,
    title = {Assessing Theoretical Conclusions with Blinded Inference to Investigate a Potential Inference Crisis},
    abstract = {Scientific advances across a range of disciplines hinge on the ability to make inferences about unobservable theoretical entities on the basis of empirical data patterns. Accurate inferences rely on both discovering valid, replicable data patterns and accurately interpreting those patterns in terms of their implications for theoretical constructs. The replication crisis in science has led to widespread efforts to improve the reliability of research findings, but comparatively little attention has been devoted to the validity of inferences based on those findings. Using an example from cognitive psychology, we demonstrate a blinded-inference paradigm for assessing the quality of theoretical inferences from data. Our results reveal substantial variability in experts’ judgments on the very same data, hinting at a possible inference crisis.},
    journaltitle = {Advances in Methods and Practices in Psychological Science},
    date = {2020},
    author = {Starns, Jeffrey J. and Cataldo, Andrea M. and Rotello, Caren M. and Annis, Jeffrey and Aschenbrenner, Andrew and Bröder, Arndt and Cox, Gregory and Criss, Amy and Curl, Ryan A. and Dobbins, Ian G. and Dunn, John and Enam, Tasnuva and Evans, Nathan J. and Farrell, Simon and Fraundorf, Scott H. and Gronlund, Scott D. and Heathcote, Andrew and Heck, Daniel W and Hicks, Jason L. and Huff, Mark J. and Kellen, David and Key, Kylie N. and Kilic, Asli and Klauer, Karl Christoph and Kraemer, Kyle R. and Leite, Fábio P. and Lloyd, Marianne E. and Malejka, Simone and Mason, Alice and McAdoo, Ryan M. and McDonough, Ian M. and Michael, Robert B. and Mickes, Laura and Mizrak, Eda and Morgan, David P. and Mueller, Shane T. and Osth, Adam and Reynolds, Angus and Seale-Carlisle, Travis M. and Singmann, Henrik and Sloane, Jennifer F. and Smith, Andrew M. and Tillman, Gabriel and van Ravenzwaaij, Don and Weidemann, Christoph T. and Wells, Gary L. and White, Corey N. and Wilson, Jack},
    options = {useprefix=true},
    pubstate = {inpress},
    osf = {https://osf.io/92ahy}
    }

  • [PDF] Ścigała, K., Schild, C., Heck, D. W., & Zettler, I. (in press). Who deals with the devil: Interdependence, personality, and corrupted collaboration. Social Psychological and Personality Science. doi:10.1177/1948550618813419
    [BibTeX] [Abstract] [Data and R Scripts]

    Corrupted collaboration, i.e., gaining personal profits through collaborative immoral acts, is a common and destructive phenomenon in societies. Despite the societal relevance of corrupted collaboration, the role of one’s own as well as one’s partner’s characteristics has hitherto remained unexplained. In the present study, we test these roles using the sequential dyadic die-rolling paradigm (N = 499 across five conditions). Our results indicate that interacting with a fully dishonest partner leads to higher cheating rates than interacting with a fully honest partner, although being paired with a fully honest partner does not eliminate dishonesty completely. Furthermore, we found that the basic personality dimension of Honesty-Humility is consistently negatively related to collaborative dishonesty irrespective of whether participants interact with fully honest or fully dishonest partners. Overall, our investigation provides a comprehensive view of the role of interaction partner’s characteristics in settings allowing for corrupted collaboration.

    @article{scigala2020who,
    title = {Who Deals with the Devil: {{Interdependence}}, Personality, and Corrupted Collaboration},
    doi = {10.1177/1948550618813419},
    abstract = {Corrupted collaboration, i.e., gaining personal profits through collaborative immoral acts, is a common and destructive phenomenon in societies. Despite the societal relevance of corrupted collaboration, the role of one’s own as well as one’s partner’s characteristics has hitherto remained unexplained. In the present study, we test these roles using the sequential dyadic die-rolling paradigm (N = 499 across five conditions). Our results indicate that interacting with a fully dishonest partner leads to higher cheating rates than interacting with a fully honest partner, although being paired with a fully honest partner does not eliminate dishonesty completely. Furthermore, we found that the basic personality dimension of Honesty-Humility is consistently negatively related to collaborative dishonesty irrespective of whether participants interact with fully honest or fully dishonest partners. Overall, our investigation provides a comprehensive view of the role of interaction partner’s characteristics in settings allowing for corrupted collaboration.},
    journaltitle = {Social Psychological and Personality Science},
    date = {2020},
    author = {Ścigała, Karolina and Schild, Christoph and Heck, Daniel W and Zettler, Ingo},
    osf = {https://osf.io/t7r3h},
    pubstate = {inpress}
    }

2019

  • [PDF] Arnold, N. R., Heck, D. W., Bröder, A., Meiser, T., & Boywitt, D. C. (2019). Testing hypotheses about binding in context memory with a hierarchical multinomial modeling approach: A preregistered study. Experimental Psychology, 66, 239-251. doi:10.1027/1618-3169/a000442
    [BibTeX] [Abstract] [Data and R Scripts]

    In experiments on multidimensional source memory, a stochastic dependency of source memory for different facets of an episode has been repeatedly demonstrated. This may suggest an integrated representation leading to mutual cuing in context retrieval. However, experiments involving a manipulated reinstatement of one source feature have often failed to affect retrieval of the other feature, suggesting unbound features or rather item-feature binding. The stochastic dependency found in former studies might be a spurious correlation due to aggregation across participants varying in memory strength. We test this artifact explanation by applying a hierarchical multinomial model. Observing stochastic dependency when accounting for interindividual differences would rule out the artifact explanation. A second goal is to elucidate the nature of feature binding: Contrasting encoding conditions with integrated feature judgments versus separate feature judgments are expected to induce different levels of stochastic dependency despite comparable overall source memory if integrated representations include feature-feature binding. The experiment replicated the finding of stochastic dependency and, thus, ruled out an artifact interpretation. However, we did not find different levels of stochastic dependency between conditions. Therefore, the current findings do not reveal decisive evidence to distinguish between the feature-feature binding and the item-context binding account.

    @article{arnold2019testing,
    title = {Testing Hypotheses about Binding in Context Memory with a Hierarchical Multinomial Modeling Approach: {{A}} Preregistered Study},
    volume = {66},
    doi = {10.1027/1618-3169/a000442},
    abstract = {In experiments on multidimensional source memory, a stochastic dependency of source memory for different facets of an episode has been repeatedly demonstrated. This may suggest an integrated representation leading to mutual cuing in context retrieval. However, experiments involving a manipulated reinstatement of one source feature have often failed to affect retrieval of the other feature, suggesting unbound features or rather item-feature binding. The stochastic dependency found in former studies might be a spurious correlation due to aggregation across participants varying in memory strength. We test this artifact explanation by applying a hierarchical multinomial model. Observing stochastic dependency when accounting for interindividual differences would rule out the artifact explanation. A second goal is to elucidate the nature of feature binding: Contrasting encoding conditions with integrated feature judgments versus separate feature judgments are expected to induce different levels of stochastic dependency despite comparable overall source memory if integrated representations include feature-feature binding. The experiment replicated the finding of stochastic dependency and, thus, ruled out an artifact interpretation. However, we did not find different levels of stochastic dependency between conditions. Therefore, the current findings do not reveal decisive evidence to distinguish between the feature-feature binding and the item-context binding account.},
    journaltitle = {Experimental Psychology},
    shortjournal = {Experimental Psychology},
    date = {2019},
    pages = {239--251},
    author = {Arnold, Nina R. and Heck, Daniel W. and Bröder, Arndt and Meiser, Thorsten and Boywitt, C. Dennis},
    osf = {https://osf.io/kw3pv}
    }

  • [PDF] Gronau, Q. F., Wagenmakers, E.-J., Heck, D. W., & Matzke, D. (2019). A simple method for comparing complex models: Bayesian model comparison for hierarchical multinomial processing tree models using Warp-III bridge sampling. Psychometrika, 84, 261–284. doi:10.1007/s11336-018-9648-3
    [BibTeX] [Abstract] [https://psyarxiv.com/yxhfm/] [Data and R Scripts]

    Multinomial processing trees (MPTs) are a popular class of cognitive models for categorical data. In typical applications, researchers compare several MPTs, each equipped with many parameters, especially when the models are implemented in a hierarchical framework. The principled Bayesian solution is to compute posterior model probabilities and Bayes factors. Both quantities, however, rely on the marginal likelihood, a high-dimensional integral that cannot be evaluated analytically. We show how Warp-III bridge sampling can be used to compute the marginal likelihood for hierarchical MPTs. We illustrate the procedure with two published data sets.

    @article{gronau2019simple,
    title = {A Simple Method for Comparing Complex Models: {{Bayesian}} Model Comparison for Hierarchical Multinomial Processing Tree Models Using Warp-{{III}} Bridge Sampling},
    volume = {84},
    url = {https://psyarxiv.com/yxhfm/},
    doi = {10.1007/s11336-018-9648-3},
    shorttitle = {A Simple Method for Comparing Complex Models},
    abstract = {Multinomial processing trees (MPTs) are a popular class of cognitive models for categorical data. In typical applications, researchers compare several MPTs, each equipped with many parameters, especially when the models are implemented in a hierarchical framework. The principled Bayesian solution is to compute posterior model probabilities and Bayes factors. Both quantities, however, rely on the marginal likelihood, a high-dimensional integral that cannot be evaluated analytically. We show how Warp-III bridge sampling can be used to compute the marginal likelihood for hierarchical MPTs. We illustrate the procedure with two published data sets.},
    journaltitle = {Psychometrika},
    date = {2019},
    pages = {261--284},
    author = {Gronau, Quentin Frederik and Wagenmakers, Eric-Jan and Heck, Daniel W. and Matzke, Dora},
    osf = {https://osf.io/rycg6}
    }

  • [PDF] Heck, D. W., Overstall, A., Gronau, Q. F., & Wagenmakers, E.-J. (2019). Quantifying uncertainty in transdimensional Markov chain Monte Carlo using discrete Markov models. Statistics and Computing, 29, 631–643. doi:10.1007/s11222-018-9828-0
    [BibTeX] [Abstract] [Data and R Scripts] [GitHub] [Preprint]

    Bayesian analysis often concerns an evaluation of models with different dimensionality as is necessary in, for example, model selection or mixture models. To facilitate this evaluation, transdimensional Markov chain Monte Carlo (MCMC) relies on sampling a discrete indexing variable to estimate the posterior model probabilities. However, little attention has been paid to the precision of these estimates. If only few switches occur between the models in the transdimensional MCMC output, precision may be low and assessment based on the assumption of independent samples misleading. Here, we propose a new method to estimate the precision based on the observed transition matrix of the model-indexing variable. Assuming a first order Markov model, the method samples from the posterior of the stationary distribution. This allows assessment of the uncertainty in the estimated posterior model probabilities, model ranks, and Bayes factors. Moreover, the method provides an estimate for the effective sample size of the MCMC output. In two model-selection examples, we show that the proposed approach provides a good assessment of the uncertainty associated with the estimated posterior model probabilities.

    @article{heck2019quantifying,
    archivePrefix = {arXiv},
    eprinttype = {arxiv},
    eprint = {1703.10364},
    title = {Quantifying Uncertainty in Transdimensional {{Markov}} Chain {{Monte Carlo}} Using Discrete {{Markov}} Models},
    volume = {29},
    doi = {10.1007/s11222-018-9828-0},
    abstract = {Bayesian analysis often concerns an evaluation of models with different dimensionality as is necessary in, for example, model selection or mixture models. To facilitate this evaluation, transdimensional Markov chain Monte Carlo (MCMC) relies on sampling a discrete indexing variable to estimate the posterior model probabilities. However, little attention has been paid to the precision of these estimates. If only few switches occur between the models in the transdimensional MCMC output, precision may be low and assessment based on the assumption of independent samples misleading. Here, we propose a new method to estimate the precision based on the observed transition matrix of the model-indexing variable. Assuming a first order Markov model, the method samples from the posterior of the stationary distribution. This allows assessment of the uncertainty in the estimated posterior model probabilities, model ranks, and Bayes factors. Moreover, the method provides an estimate for the effective sample size of the MCMC output. In two model-selection examples, we show that the proposed approach provides a good assessment of the uncertainty associated with the estimated posterior model probabilities.},
    journaltitle = {Statistics and Computing},
    date = {2019},
    pages = {631--643},
    keywords = {heckfirst,Polytope_Sampling},
    author = {Heck, Daniel W. and Overstall, Antony and Gronau, Quentin F. and Wagenmakers, Eric-Jan},
    osf = {https://osf.io/kjrkz},
    github = {https://github.com/danheck/MCMCprecision}
    }

  • [PDF] Heck, D. W. (2019). A caveat on the Savage-Dickey density ratio: The case of computing Bayes factors for regression parameters. British Journal of Mathematical and Statistical Psychology, 72, 316–333. doi:10.1111/bmsp.12150
    [BibTeX] [Abstract] [https://psyarxiv.com/7dzsj] [Data and R Scripts]

    The Savage–Dickey density ratio is a simple method for computing the Bayes factor for an equality constraint on one or more parameters of a statistical model. In regression analysis, this includes the important scenario of testing whether one or more of the covariates have an effect on the dependent variable. However, the Savage–Dickey ratio only provides the correct Bayes factor if the prior distribution of the nuisance parameters under the nested model is identical to the conditional prior under the full model given the equality constraint. This condition is violated for multiple regression models with a Jeffreys–Zellner–Siow prior, which is often used as a default prior in psychology. Besides linear regression models, the limitation of the Savage–Dickey ratio is especially relevant when analytical solutions for the Bayes factor are not available. This is the case for generalized linear models, non-linear models, or cognitive process models with regression extensions. As a remedy, the correct Bayes factor can be computed using a generalized version of the Savage–Dickey density ratio.

    @article{heck2019caveat,
    title = {A Caveat on the {{Savage}}-{{Dickey}} Density Ratio: {{The}} Case of Computing {{Bayes}} Factors for Regression Parameters},
    volume = {72},
    url = {https://psyarxiv.com/7dzsj},
    doi = {10.1111/bmsp.12150},
    abstract = {The Savage–Dickey density ratio is a simple method for computing the Bayes factor for an equality constraint on one or more parameters of a statistical model. In regression analysis, this includes the important scenario of testing whether one or more of the covariates have an effect on the dependent variable. However, the Savage–Dickey ratio only provides the correct Bayes factor if the prior distribution of the nuisance parameters under the nested model is identical to the conditional prior under the full model given the equality constraint. This condition is violated for multiple regression models with a Jeffreys–Zellner–Siow prior, which is often used as a default prior in psychology. Besides linear regression models, the limitation of the Savage–Dickey ratio is especially relevant when analytical solutions for the Bayes factor are not available. This is the case for generalized linear models, non-linear models, or cognitive process models with regression extensions. As a remedy, the correct Bayes factor can be computed using a generalized version of the Savage–Dickey density ratio.},
    journaltitle = {British Journal of Mathematical and Statistical Psychology},
    date = {2019},
    pages = {316--333},
    keywords = {heckfirst,Polytope_Sampling},
    author = {Heck, Daniel W.},
    osf = {https://osf.io/5hpuc}
    }

  • [PDF] Heck, D. W., & Davis-Stober, C. P. (2019). Multinomial models with linear inequality constraints: Overview and improvements of computational methods for Bayesian inference. Journal of Mathematical Psychology, 91, 70–87. doi:10.1016/j.jmp.2019.03.004
    [BibTeX] [Abstract] [Data and R Scripts] [GitHub] [Preprint]

    Many psychological theories can be operationalized as linear inequality constraints on the parameters of multinomial distributions (e.g., discrete choice analysis). These constraints can be described in two equivalent ways: Either as the solution set to a system of linear inequalities or as the convex hull of a set of extremal points (vertices). For both representations, we describe a general Gibbs sampler for drawing posterior samples in order to carry out Bayesian analyses. We also summarize alternative sampling methods for estimating Bayes factors for these model representations using the encompassing Bayes factor method. We introduce the R package multinomineq, which provides an easily accessible interface to a computationally efficient implementation of these techniques.

    @article{heck2019multinomial,
    archivePrefix = {arXiv},
    eprinttype = {arxiv},
    eprint = {1808.07140},
    title = {Multinomial Models with Linear Inequality Constraints: {{Overview}} and Improvements of Computational Methods for {{Bayesian}} Inference},
    volume = {91},
    doi = {10.1016/j.jmp.2019.03.004},
    shorttitle = {Multinomial Models with Linear Inequality Constraints},
    abstract = {Many psychological theories can be operationalized as linear inequality constraints on the parameters of multinomial distributions (e.g., discrete choice analysis). These constraints can be described in two equivalent ways: Either as the solution set to a system of linear inequalities or as the convex hull of a set of extremal points (vertices). For both representations, we describe a general Gibbs sampler for drawing posterior samples in order to carry out Bayesian analyses. We also summarize alternative sampling methods for estimating Bayes factors for these model representations using the encompassing Bayes factor method. We introduce the R package multinomineq, which provides an easily accessible interface to a computationally efficient implementation of these techniques.},
    journaltitle = {Journal of Mathematical Psychology},
    date = {2019},
    pages = {70--87},
    author = {Heck, Daniel W. and Davis-Stober, Clintin P.},
    osf = {https://osf.io/xv9u3},
    github = {https://github.com/danheck/multinomineq}
    }

  • [PDF] Heck, D. W. (2019). Accounting for estimation uncertainty and shrinkage in Bayesian within-subject intervals: A comment on Nathoo, Kilshaw, and Masson (2018). Journal of Mathematical Psychology, 88, 27–31. doi:10.1016/j.jmp.2018.11.002
    [BibTeX] [Abstract] [https://psyarxiv.com/whp8t] [Data and R Scripts]

    To facilitate the interpretation of systematic mean differences in within-subject designs, Nathoo, Kilshaw, and Masson (2018) proposed a Bayesian within-subject highest-density interval (HDI). However, their approach rests on independent maximum-likelihood estimates for the random effects which do not take estimation uncertainty and shrinkage into account. I propose an extension of Nathoo et al.’s method using a fully Bayesian, two-step approach. First, posterior samples are drawn for the linear mixed model. Second, the within-subject HDI is computed repeatedly based on the posterior samples, thereby accounting for estimation uncertainty and shrinkage. After marginalizing over the posterior distribution, the two-step approach results in a Bayesian within-subject HDI with a width similar to that of the classical within-subject confidence interval proposed by Loftus and Masson (1994).

    @article{heck2019accounting,
    title = {Accounting for Estimation Uncertainty and Shrinkage in {{Bayesian}} Within-Subject Intervals: {{A}} Comment on {{Nathoo}}, {{Kilshaw}}, and {{Masson}} (2018)},
    volume = {88},
    url = {https://psyarxiv.com/whp8t},
    doi = {10.1016/j.jmp.2018.11.002},
    abstract = {To facilitate the interpretation of systematic mean differences in within-subject designs, Nathoo, Kilshaw, and Masson (2018) proposed a Bayesian within-subject highest-density interval (HDI). However, their approach rests on independent maximum-likelihood estimates for the random effects which do not take estimation uncertainty and shrinkage into account. I propose an extension of Nathoo et al.’s method using a fully Bayesian, two-step approach. First, posterior samples are drawn for the linear mixed model. Second, the within-subject HDI is computed repeatedly based on the posterior samples, thereby accounting for estimation uncertainty and shrinkage. After marginalizing over the posterior distribution, the two-step approach results in a Bayesian within-subject HDI with a width similar to that of the classical within-subject confidence interval proposed by Loftus and Masson (1994).},
    journaltitle = {Journal of Mathematical Psychology},
    date = {2019},
    pages = {27--31},
    author = {Heck, Daniel W.},
    osf = {https://osf.io/mrud9}
    }

  • [PDF] Klein, S. A., Heck, D. W., Reese, G., & Hilbig, B. E. (2019). On the relationship between Openness to Experience, political orientation, and pro-environmental behavior. Personality and Individual Differences, 138, 344–348. doi:10.1016/j.paid.2018.10.017
    [BibTeX] [Abstract] [Data and R Scripts]

    Previous research consistently showed that Openness to Experience is positively linked to pro-environmental behavior. However, this does not appear to hold whenever pro-environmental behavior is mutually exclusive with cooperation. The present study aimed to replicate this null effect of Openness and to test political orientation as an explanatory variable: Openness is associated with a left-wing/liberal political orientation, which, in turn, is associated with both cooperation and pro-environmental behavior, thus creating a decision conflict whenever the latter are mutually exclusive. In an online study (N = 355), participants played the Greater Good Game, a social dilemma involving choice conflict between pro-environmental behavior and cooperation. Results both replicated prior findings and suggested that political orientation could indeed account for the null effect of Openness.

    @article{klein2019relationship,
    title = {On the Relationship between {{Openness}} to {{Experience}}, Political Orientation, and pro-Environmental Behavior},
    volume = {138},
    doi = {10.1016/j.paid.2018.10.017},
    abstract = {Previous research consistently showed that Openness to Experience is positively linked to pro-environmental behavior. However, this does not appear to hold whenever pro-environmental behavior is mutually exclusive with cooperation. The present study aimed to replicate this null effect of Openness and to test political orientation as an explanatory variable: Openness is associated with a left-wing/liberal political orientation, which, in turn, is associated with both cooperation and pro-environmental behavior, thus creating a decision conflict whenever the latter are mutually exclusive. In an online study (N = 355), participants played the Greater Good Game, a social dilemma involving choice conflict between pro-environmental behavior and cooperation. Results both replicated prior findings and suggested that political orientation could indeed account for the null effect of Openness.},
    journaltitle = {Personality and Individual Differences},
    date = {2019},
    pages = {344--348},
    author = {Klein, Sina A. and Heck, Daniel W. and Reese, Gerhard and Hilbig, Benjamin E.},
    osf = {https://osf.io/gxjc9}
    }

2018

  • [PDF] Heck, D. W., & Moshagen, M. (2018). RRreg: An R package for correlation and regression analyses of randomized response data. Journal of Statistical Software, 85(2), 1–29. doi:10.18637/jss.v085.i02
    [BibTeX] [Abstract] [GitHub]

    The randomized-response (RR) technique was developed to improve the validity of measures assessing attitudes, behaviors, and attributes threatened by social desirability bias. The RR removes any direct link between individual responses and the sensitive attribute to maximize the anonymity of respondents and, in turn, to elicit more honest responding. Since multivariate analyses are no longer feasible using standard methods, we present the R package RRreg that allows for multivariate analyses of RR data in a user-friendly way. We show how to compute bivariate correlations, how to predict an RR variable in an adapted logistic regression framework (with or without random effects), and how to use RR predictors in a modified linear regression. In addition, the package allows for power-analysis and robustness simulations. To facilitate the application of these methods, we illustrate the benefits of multivariate methods for RR variables using an empirical example.

    @article{heck2018rrreg,
    title = {{{RRreg}}: {{An R}} Package for Correlation and Regression Analyses of Randomized Response Data},
    volume = {85},
    number = {2},
    doi = {10.18637/jss.v085.i02},
    abstract = {The randomized-response (RR) technique was developed to improve the validity of measures assessing attitudes, behaviors, and attributes threatened by social desirability bias. The RR removes any direct link between individual responses and the sensitive attribute to maximize the anonymity of respondents and, in turn, to elicit more honest responding. Since multivariate analyses are no longer feasible using standard methods, we present the R package RRreg that allows for multivariate analyses of RR data in a user-friendly way. We show how to compute bivariate correlations, how to predict an RR variable in an adapted logistic regression framework (with or without random effects), and how to use RR predictors in a modified linear regression. In addition, the package allows for power-analysis and robustness simulations. To facilitate the application of these methods, we illustrate the benefits of multivariate methods for RR variables using an empirical example.},
    journaltitle = {Journal of Statistical Software},
    date = {2018},
    pages = {1--29},
    keywords = {heckfirst},
    author = {Heck, Daniel W. and Moshagen, Morten},
    github = {https://github.com/danheck/RRreg}
    }

  • [PDF] Heck, D. W., Erdfelder, E., & Kieslich, P. J. (2018). Generalized processing tree models: Jointly modeling discrete and continuous variables. Psychometrika, 83, 893–918. doi:10.1007/s11336-018-9622-0
    [BibTeX] [Abstract] [Data and R Scripts] [GitHub]

    Multinomial processing tree models assume that discrete cognitive states determine observed response frequencies. Generalized processing tree (GPT) models extend this conceptual framework to continuous variables such as response times, process-tracing measures, or neurophysiological variables. GPT models assume finite-mixture distributions, with weights determined by a processing tree structure, and continuous components modeled by parameterized distributions such as Gaussians with separate or shared parameters across states. We discuss identifiability, parameter estimation, model testing, a modeling syntax, and the improved precision of GPT estimates. Finally, a GPT version of the feature comparison model of semantic categorization is applied to computer-mouse trajectories.

    @article{heck2018generalized,
    title = {Generalized Processing Tree Models: {{Jointly}} Modeling Discrete and Continuous Variables},
    volume = {83},
    doi = {10.1007/s11336-018-9622-0},
    abstract = {Multinomial processing tree models assume that discrete cognitive states determine observed response frequencies. Generalized processing tree (GPT) models extend this conceptual framework to continuous variables such as response times, process-tracing measures, or neurophysiological variables. GPT models assume finite-mixture distributions, with weights determined by a processing tree structure, and continuous components modeled by parameterized distributions such as Gaussians with separate or shared parameters across states. We discuss identifiability, parameter estimation, model testing, a modeling syntax, and the improved precision of GPT estimates. Finally, a GPT version of the feature comparison model of semantic categorization is applied to computer-mouse trajectories.},
    journaltitle = {Psychometrika},
    date = {2018},
    pages = {893--918},
    keywords = {heckfirst},
    author = {Heck, Daniel W. and Erdfelder, Edgar and Kieslich, Pascal J.},
    osf = {https://osf.io/fyeum},
    github = {https://github.com/danheck/gpt}
    }

  • [PDF] Heck, D. W., Arnold, N. R., & Arnold, D. (2018). TreeBUGS: An R package for hierarchical multinomial-processing-tree modeling. Behavior Research Methods, 50, 264–284. doi:10.3758/s13428-017-0869-7
    [BibTeX] [Abstract] [Data and R Scripts] [GitHub]

    Multinomial processing tree (MPT) models are a class of measurement models that account for categorical data by assuming a finite number of underlying cognitive processes. Traditionally, data are aggregated across participants and analyzed under the assumption of independently and identically distributed observations. Hierarchical Bayesian extensions of MPT models explicitly account for participant heterogeneity by assuming that the individual parameters follow a continuous hierarchical distribution. We provide an accessible introduction to hierarchical MPT modeling and present the user-friendly and comprehensive R package TreeBUGS, which implements the two most important hierarchical MPT approaches for participant heterogeneity—the beta-MPT approach (Smith & Batchelder, Journal of Mathematical Psychology 54:167-183, 2010) and the latent-trait MPT approach (Klauer, Psychometrika 75:70-98, 2010). TreeBUGS reads standard MPT model files and obtains Markov-chain Monte Carlo samples that approximate the posterior distribution. The functionality and output are tailored to the specific needs of MPT modelers and provide tests for the homogeneity of items and participants, individual and group parameter estimates, fit statistics, and within- and between-subjects comparisons, as well as goodness-of-fit and summary plots. We also propose and implement novel statistical extensions to include continuous and discrete predictors (as either fixed or random effects) in the latent-trait MPT model.

    @article{heck2018treebugs,
    langid = {english},
    title = {{{TreeBUGS}}: {{An R}} Package for Hierarchical Multinomial-Processing-Tree Modeling},
    volume = {50},
    doi = {10.3758/s13428-017-0869-7},
    shorttitle = {{{TreeBUGS}}},
    abstract = {Multinomial processing tree (MPT) models are a class of measurement models that account for categorical data by assuming a finite number of underlying cognitive processes. Traditionally, data are aggregated across participants and analyzed under the assumption of independently and identically distributed observations. Hierarchical Bayesian extensions of MPT models explicitly account for participant heterogeneity by assuming that the individual parameters follow a continuous hierarchical distribution. We provide an accessible introduction to hierarchical MPT modeling and present the user-friendly and comprehensive R package TreeBUGS, which implements the two most important hierarchical MPT approaches for participant heterogeneity—the beta-MPT approach (Smith \& Batchelder, Journal of Mathematical Psychology 54:167-183, 2010) and the latent-trait MPT approach (Klauer, Psychometrika 75:70-98, 2010). TreeBUGS reads standard MPT model files and obtains Markov-chain Monte Carlo samples that approximate the posterior distribution. The functionality and output are tailored to the specific needs of MPT modelers and provide tests for the homogeneity of items and participants, individual and group parameter estimates, fit statistics, and within- and between-subjects comparisons, as well as goodness-of-fit and summary plots. We also propose and implement novel statistical extensions to include continuous and discrete predictors (as either fixed or random effects) in the latent-trait MPT model.},
    journaltitle = {Behavior Research Methods},
    shortjournal = {Behav Res},
    date = {2018},
    pages = {264--284},
    keywords = {heckfirst},
    author = {Heck, Daniel W. and Arnold, Nina R. and Arnold, Denis},
    osf = {https://osf.io/s82bw},
    github = {https://github.com/denis-arnold/TreeBUGS}
    }

  • [PDF] Heck, D. W., Thielmann, I., Moshagen, M., & Hilbig, B. E. (2018). Who lies? A large-scale reanalysis linking basic personality traits to unethical decision making. Judgment and Decision Making, 13, 356–371. Retrieved from http://journal.sjdm.org/18/18322/jdm18322.pdf
    [BibTeX] [Abstract] [Data and R Scripts]

    Previous research has established that higher levels of trait Honesty-Humility (HH) are associated with less dishonest behavior in cheating paradigms. However, only imprecise effect size estimates of this HH-cheating link are available. Moreover, evidence is inconclusive on whether other basic personality traits from the HEXACO or Big Five models are associated with unethical decision making and whether such effects have incremental validity beyond HH. We address these issues in a highly powered reanalysis of 16 studies assessing dishonest behavior in an incentivized, one-shot cheating paradigm (N = 5,002). For this purpose, we rely on a newly developed logistic regression approach for the analysis of nested data in cheating paradigms. We also test theoretically derived interactions of HH with other basic personality traits (i.e., Emotionality and Conscientiousness) and situational factors (i.e., the baseline probability of observing a favorable outcome) as well as the incremental validity of HH over demographic characteristics. The results show a medium to large effect of HH (odds ratio = 0.53), which was independent of other personality, situational, or demographic variables. Only one other trait (Big Five Agreeableness) was associated with unethical decision making, although it failed to show any incremental validity beyond HH.

    @article{heck2018who,
    title = {Who Lies? {{A}} Large-Scale Reanalysis Linking Basic Personality Traits to Unethical Decision Making},
    volume = {13},
    url = {http://journal.sjdm.org/18/18322/jdm18322.pdf},
    abstract = {Previous research has established that higher levels of trait Honesty-Humility (HH) are associated with less dishonest behavior in cheating paradigms. However, only imprecise effect size estimates of this HH-cheating link are available. Moreover, evidence is inconclusive on whether other basic personality traits from the HEXACO or Big Five models are associated with unethical decision making and whether such effects have incremental validity beyond HH. We address these issues in a highly powered reanalysis of 16 studies assessing dishonest behavior in an incentivized, one-shot cheating paradigm (N = 5,002). For this purpose, we rely on a newly developed logistic regression approach for the analysis of nested data in cheating paradigms. We also test theoretically derived interactions of HH with other basic personality traits (i.e., Emotionality and Conscientiousness) and situational factors (i.e., the baseline probability of observing a favorable outcome) as well as the incremental validity of HH over demographic characteristics. The results show a medium to large effect of HH (odds ratio = 0.53), which was independent of other personality, situational, or demographic variables. Only one other trait (Big Five Agreeableness) was associated with unethical decision making, although it failed to show any incremental validity beyond HH.},
    journaltitle = {Judgment and Decision Making},
    date = {2018},
    pages = {356--371},
    keywords = {heckfirst},
    author = {Heck, Daniel W. and Thielmann, Isabel and Moshagen, Morten and Hilbig, Benjamin E.},
    osf = {https://osf.io/56hw4}
    }

  • [PDF] Heck, D. W., Hoffmann, A., & Moshagen, M. (2018). Detecting nonadherence without loss in efficiency: A simple extension of the crosswise model. Behavior Research Methods, 50, 1895-1905. doi:10.3758/s13428-017-0957-8
    [BibTeX] [Abstract] [Data and R Scripts]

    In surveys concerning sensitive behavior or attitudes, respondents often do not answer truthfully, because of social desirability bias. To elicit more honest responding, the randomized-response (RR) technique aims at increasing perceived and actual anonymity by prompting respondents to answer with a randomly modified and thus uninformative response. In the crosswise model, as a particularly promising variant of the RR, this is achieved by adding a second, nonsensitive question and by prompting respondents to answer both questions jointly. Despite increased privacy protection and empirically higher prevalence estimates of socially undesirable behaviors, evidence also suggests that some respondents might still not adhere to the instructions, in turn leading to questionable results. Herein we propose an extension of the crosswise model (ECWM) that makes it possible to detect several types of response biases with adequate power in realistic sample sizes. Importantly, the ECWM allows for testing the validity of the model’s assumptions without any loss in statistical efficiency. Finally, we provide an empirical example supporting the usefulness of the ECWM.

    @article{heck2018detecting,
    langid = {english},
    title = {Detecting Nonadherence without Loss in Efficiency: {{A}} Simple Extension of the Crosswise Model},
    volume = {50},
    doi = {10.3758/s13428-017-0957-8},
    shorttitle = {Detecting Nonadherence without Loss in Efficiency},
    abstract = {In surveys concerning sensitive behavior or attitudes, respondents often do not answer truthfully, because of social desirability bias. To elicit more honest responding, the randomized-response (RR) technique aims at increasing perceived and actual anonymity by prompting respondents to answer with a randomly modified and thus uninformative response. In the crosswise model, as a particularly promising variant of the RR, this is achieved by adding a second, nonsensitive question and by prompting respondents to answer both questions jointly. Despite increased privacy protection and empirically higher prevalence estimates of socially undesirable behaviors, evidence also suggests that some respondents might still not adhere to the instructions, in turn leading to questionable results. Herein we propose an extension of the crosswise model (ECWM) that makes it possible to detect several types of response biases with adequate power in realistic sample sizes. Importantly, the ECWM allows for testing the validity of the model’s assumptions without any loss in statistical efficiency. Finally, we provide an empirical example supporting the usefulness of the ECWM.},
    journaltitle = {Behavior Research Methods},
    shortjournal = {Behav Res},
    date = {2018},
    pages = {1895--1905},
    keywords = {Sensitive questions,Randomized response,Measurement model,Social desirability,heckfirst,Survey design},
    author = {Heck, Daniel W. and Hoffmann, Adrian and Moshagen, Morten},
    osf = {https://osf.io/mxjgf}
    }

  • [PDF] Miller, R., Scherbaum, S., Heck, D. W., Goschke, T., & Enge, S. (2018). On the relation between the (censored) shifted Wald and the Wiener distribution as measurement models for choice response times. Applied Psychological Measurement, 42, 116-135. doi:10.1177/0146621617710465
    [BibTeX] [Abstract]

    Inferring processes or constructs from performance data is a major hallmark of cognitive psychometrics. Particularly, diffusion modeling of response times (RTs) from correct and erroneous responses using the Wiener distribution has become a popular measurement tool because it provides a set of psychologically interpretable parameters. However, an important precondition to identify all of these parameters is a sufficient number of RTs from erroneous responses. In the present article, we show by simulation that the parameters of the Wiener distribution can be recovered from tasks yielding very high or even perfect response accuracies using the shifted Wald distribution. Specifically, we argue that error RTs can be modeled as correct RTs that have undergone censoring by using techniques from parametric survival analysis. We illustrate our reasoning by fitting the Wiener and (censored) shifted Wald distribution to RTs from six participants who completed a Go/No-go task. In accordance with our simulations, diffusion modeling using the Wiener and the shifted Wald distribution yielded identical parameter estimates when the number of erroneous responses was predicted to be low. Moreover, the modeling of error RTs as censored correct RTs substantially improved the recovery of these diffusion parameters when premature trial timeout was introduced to increase the number of omission errors. Thus, the censored shifted Wald distribution provides a suitable means for diffusion modeling in situations when the Wiener distribution cannot be fitted without parametric constraints.

    @article{miller2018relation,
    title = {On the Relation between the (Censored) Shifted {{Wald}} and the {{Wiener}} Distribution as Measurement Models for Choice Response Times},
    volume = {42},
    doi = {10.1177/0146621617710465},
    abstract = {Inferring processes or constructs from performance data is a major hallmark of cognitive psychometrics. Particularly, diffusion modeling of response times (RTs) from correct and erroneous responses using the Wiener distribution has become a popular measurement tool because it provides a set of psychologically interpretable parameters. However, an important precondition to identify all of these parameters is a sufficient number of RTs from erroneous responses. In the present article, we show by simulation that the parameters of the Wiener distribution can be recovered from tasks yielding very high or even perfect response accuracies using the shifted Wald distribution. Specifically, we argue that error RTs can be modeled as correct RTs that have undergone censoring by using techniques from parametric survival analysis. We illustrate our reasoning by fitting the Wiener and (censored) shifted Wald distribution to RTs from six participants who completed a Go/No-go task. In accordance with our simulations, diffusion modeling using the Wiener and the shifted Wald distribution yielded identical parameter estimates when the number of erroneous responses was predicted to be low. Moreover, the modeling of error RTs as censored correct RTs substantially improved the recovery of these diffusion parameters when premature trial timeout was introduced to increase the number of omission errors. Thus, the censored shifted Wald distribution provides a suitable means for diffusion modeling in situations when the Wiener distribution cannot be fitted without parametric constraints.},
    journaltitle = {Applied Psychological Measurement},
    date = {2018},
    pages = {116--135},
    author = {Miller, Robert and Scherbaum, S. and Heck, Daniel W. and Goschke, Thomas and Enge, Soeren}
    }

  • [PDF] Plieninger, H., & Heck, D. W. (2018). A new model for acquiescence at the interface of psychometrics and cognitive psychology. Multivariate Behavioral Research, 53, 633-654. doi:10.1080/00273171.2018.1469966
    [BibTeX] [Abstract] [GitHub]

    When measuring psychological traits, one has to consider that respondents often show content-unrelated response behavior in answering questionnaires. To disentangle the target trait and two such response styles, extreme responding and midpoint responding, Böckenholt (2012, Psychological Methods, 17, 665–678) developed an item response model based on a latent processing tree structure. We propose a theoretically motivated extension of this model to also measure acquiescence, the tendency to agree with both regular and reversed items. Substantively, our approach builds on multinomial processing tree (MPT) models that are used in cognitive psychology to disentangle qualitatively distinct processes. Accordingly, the new model for response styles assumes a mixture distribution of affirmative responses, which are either determined by the underlying target trait or by acquiescence. In order to estimate the model parameters, we rely on Bayesian hierarchical estimation of MPT models. In simulations, we show that the model provides unbiased estimates of response styles and the target trait, and we compare the new model and Böckenholt’s model in a recovery study. An empirical example from personality psychology is used for illustrative purposes.

    @article{plieninger2018new,
    title = {A New Model for Acquiescence at the Interface of Psychometrics and Cognitive Psychology},
    volume = {53},
    doi = {10.1080/00273171.2018.1469966},
    abstract = {When measuring psychological traits, one has to consider that respondents often show content-unrelated response behavior in answering questionnaires. To disentangle the target trait and two such response styles, extreme responding and midpoint responding, Böckenholt (2012, Psychological Methods, 17, 665–678) developed an item response model based on a latent processing tree structure. We propose a theoretically motivated extension of this model to also measure acquiescence, the tendency to agree with both regular and reversed items. Substantively, our approach builds on multinomial processing tree (MPT) models that are used in cognitive psychology to disentangle qualitatively distinct processes. Accordingly, the new model for response styles assumes a mixture distribution of affirmative responses, which are either determined by the underlying target trait or by acquiescence. In order to estimate the model parameters, we rely on Bayesian hierarchical estimation of MPT models. In simulations, we show that the model provides unbiased estimates of response styles and the target trait, and we compare the new model and Böckenholt's model in a recovery study. An empirical example from personality psychology is used for illustrative purposes.},
    journaltitle = {Multivariate Behavioral Research},
    date = {2018},
    pages = {633--654},
    author = {Plieninger, Hansjörg and Heck, Daniel W.},
    github = {https://github.com/hplieninger/mpt2irt}
    }

2017

  • [PDF] Gronau, Q. F., van Erp, S., Heck, D. W., Cesario, J., Jonas, K. J., & Wagenmakers, E.-J. (2017). A Bayesian model-averaged meta-analysis of the power pose effect with informed and default priors: The case of felt power. Comprehensive Results in Social Psychology, 2, 123-138. doi:10.1080/23743603.2017.1326760
    [BibTeX] [Abstract] [Data and R Scripts]

    Earlier work found that – compared to participants who adopted constrictive body postures – participants who adopted expansive body postures reported feeling more powerful, showed an increase in testosterone and a decrease in cortisol, and displayed an increased tolerance for risk. However, these power pose effects have recently come under considerable scrutiny. Here, we present a Bayesian meta-analysis of six preregistered studies from this special issue, focusing on the effect of power posing on felt power. Our analysis improves on standard classical meta-analyses in several ways. First and foremost, we considered only preregistered studies, eliminating concerns about publication bias. Second, the Bayesian approach enables us to quantify evidence for both the alternative and the null hypothesis. Third, we use Bayesian model-averaging to account for the uncertainty with respect to the choice for a fixed-effect model or a random-effect model. Fourth, based on a literature review, we obtained an empirically informed prior distribution for the between-study heterogeneity of effect sizes. This empirically informed prior can serve as a default choice not only for the investigation of the power pose effect but for effects in the field of psychology more generally. For effect size, we considered a default and an informed prior. Our meta-analysis yields very strong evidence for an effect of power posing on felt power. However, when the analysis is restricted to participants unfamiliar with the effect, the meta-analysis yields evidence that is only moderate.

    @article{gronau2017bayesian,
    title = {A {{Bayesian}} Model-Averaged Meta-Analysis of the Power Pose Effect with Informed and Default Priors: The Case of Felt Power},
    volume = {2},
    doi = {10.1080/23743603.2017.1326760},
    shorttitle = {A {{Bayesian}} Model-Averaged Meta-Analysis of the Power Pose Effect with Informed and Default Priors},
    abstract = {Earlier work found that – compared to participants who adopted constrictive body postures – participants who adopted expansive body postures reported feeling more powerful, showed an increase in testosterone and a decrease in cortisol, and displayed an increased tolerance for risk. However, these power pose effects have recently come under considerable scrutiny. Here, we present a Bayesian meta-analysis of six preregistered studies from this special issue, focusing on the effect of power posing on felt power. Our analysis improves on standard classical meta-analyses in several ways. First and foremost, we considered only preregistered studies, eliminating concerns about publication bias. Second, the Bayesian approach enables us to quantify evidence for both the alternative and the null hypothesis. Third, we use Bayesian model-averaging to account for the uncertainty with respect to the choice for a fixed-effect model or a random-effect model. Fourth, based on a literature review, we obtained an empirically informed prior distribution for the between-study heterogeneity of effect sizes. This empirically informed prior can serve as a default choice not only for the investigation of the power pose effect but for effects in the field of psychology more generally. For effect size, we considered a default and an informed prior. Our meta-analysis yields very strong evidence for an effect of power posing on felt power. However, when the analysis is restricted to participants unfamiliar with the effect, the meta-analysis yields evidence that is only moderate.},
    journaltitle = {Comprehensive Results in Social Psychology},
    date = {2017},
    pages = {123--138},
    author = {Gronau, Quentin F. and van Erp, Sara and Heck, Daniel W. and Cesario, Joseph and Jonas, Kai J. and Wagenmakers, Eric-Jan},
    osf = {https://osf.io/k5avt}
    }

  • [PDF] Heck, D. W., Hilbig, B. E., & Moshagen, M. (2017). From information processing to decisions: Formalizing and comparing probabilistic choice models. Cognitive Psychology, 96, 26-40. doi:10.1016/j.cogpsych.2017.05.003
    [BibTeX] [Abstract] [Data and R Scripts]

    Decision strategies explain how people integrate multiple sources of information to make probabilistic inferences. In the past decade, increasingly sophisticated methods have been developed to determine which strategy explains decision behavior best. We extend these efforts to test psychologically more plausible models (i.e., strategies), including a new, probabilistic version of the take-the-best (TTB) heuristic that implements a rank order of error probabilities based on sequential processing. Within a coherent statistical framework, deterministic and probabilistic versions of TTB and other strategies can directly be compared using model selection by minimum description length or the Bayes factor. In an experiment with inferences from given information, only three of 104 participants were best described by the psychologically plausible, probabilistic version of TTB. Similar as in previous studies, most participants were classified as users of weighted-additive, a strategy that integrates all available information and approximates rational decisions.

    @article{heck2017information,
    title = {From Information Processing to Decisions: {{Formalizing}} and Comparing Probabilistic Choice Models},
    volume = {96},
    doi = {10.1016/j.cogpsych.2017.05.003},
    abstract = {Decision strategies explain how people integrate multiple sources of information to make probabilistic inferences. In the past decade, increasingly sophisticated methods have been developed to determine which strategy explains decision behavior best. We extend these efforts to test psychologically more plausible models (i.e., strategies), including a new, probabilistic version of the take-the-best (TTB) heuristic that implements a rank order of error probabilities based on sequential processing. Within a coherent statistical framework, deterministic and probabilistic versions of TTB and other strategies can directly be compared using model selection by minimum description length or the Bayes factor. In an experiment with inferences from given information, only three of 104 participants were best described by the psychologically plausible, probabilistic version of TTB. Similar as in previous studies, most participants were classified as users of weighted-additive, a strategy that integrates all available information and approximates rational decisions.},
    journaltitle = {Cognitive Psychology},
    date = {2017},
    pages = {26--40},
    keywords = {heckfirst,Polytope_Sampling,popularity_bias},
    author = {Heck, Daniel W. and Hilbig, Benjamin E. and Moshagen, Morten},
    osf = {https://osf.io/jcd2c}
    }

  • [PDF] Heck, D. W., & Erdfelder, E. (2017). Linking process and measurement models of recognition-based decisions. Psychological Review, 124, 442-471. doi:10.1037/rev0000063
    [BibTeX] [Abstract] [Data and R Scripts]

    When making inferences about pairs of objects, one of which is recognized and the other is not, the recognition heuristic states that participants choose the recognized object in a noncompensatory way without considering any further knowledge. In contrast, information-integration theories such as parallel constraint satisfaction (PCS) assume that recognition is merely one of many cues that is integrated with further knowledge in a compensatory way. To test both process models against each other without manipulating recognition or further knowledge, we include response times into the r-model, a popular multinomial processing tree model for memory-based decisions. Essentially, this response-time-extended r-model allows to test a crucial prediction of PCS, namely, that the integration of recognition-congruent knowledge leads to faster decisions compared to the consideration of recognition only—even though more information is processed. In contrast, decisions due to recognition-heuristic use are predicted to be faster than decisions affected by any further knowledge. Using the classical German-cities example, simulations show that the novel measurement model discriminates between both process models based on choices, decision times, and recognition judgments only. In a reanalysis of 29 data sets including more than 400,000 individual trials, noncompensatory choices of the recognized option were estimated to be slower than choices due to recognition-congruent knowledge. This corroborates the parallel information-integration account of memory-based decisions, according to which decisions become faster when the coherence of the available information increases.

    @article{heck2017linking,
    title = {Linking Process and Measurement Models of Recognition-Based Decisions},
    volume = {124},
    doi = {10.1037/rev0000063},
    abstract = {When making inferences about pairs of objects, one of which is recognized and the other is not, the recognition heuristic states that participants choose the recognized object in a noncompensatory way without considering any further knowledge. In contrast, information-integration theories such as parallel constraint satisfaction (PCS) assume that recognition is merely one of many cues that is integrated with further knowledge in a compensatory way. To test both process models against each other without manipulating recognition or further knowledge, we include response times into the r-model, a popular multinomial processing tree model for memory-based decisions. Essentially, this response-time-extended r-model allows to test a crucial prediction of PCS, namely, that the integration of recognition-congruent knowledge leads to faster decisions compared to the consideration of recognition only—even though more information is processed. In contrast, decisions due to recognition-heuristic use are predicted to be faster than decisions affected by any further knowledge. Using the classical German-cities example, simulations show that the novel measurement model discriminates between both process models based on choices, decision times, and recognition judgments only. In a reanalysis of 29 data sets including more than 400,000 individual trials, noncompensatory choices of the recognized option were estimated to be slower than choices due to recognition-congruent knowledge. This corroborates the parallel information-integration account of memory-based decisions, according to which decisions become faster when the coherence of the available information increases.},
    journaltitle = {Psychological Review},
    date = {2017},
    pages = {442--471},
    keywords = {heckpaper,heckfirst,popularity_bias},
    author = {Heck, Daniel W. and Erdfelder, Edgar},
    osf = {https://osf.io/4kv87}
    }

  • [PDF] Klein, S. A., Hilbig, B. E., & Heck, D. W. (2017). Which is the greater good? A social dilemma paradigm disentangling environmentalism and cooperation. Journal of Environmental Psychology, 53, 40-49. doi:10.1016/j.jenvp.2017.06.001
    [BibTeX] [Abstract] [Data and R Scripts]

    In previous research, pro-environmental behavior (PEB) was almost exclusively aligned with in-group cooperation. However, PEB and in-group cooperation can also be mutually exclusive or directly conflict. To provide first evidence on behavior in these situations, the present work develops the Greater Good Game (GGG), a social dilemma paradigm with a selfish, a cooperative, and a pro-environmental choice option. In Study 1, the GGG and a corresponding measurement model were experimentally validated using different payoff structures. Results show that in-group cooperation is the dominant behavior in a situation of mutual exclusiveness, whereas selfish behavior becomes more dominant in a situation of conflict. Study 2 examined personality influences on choices in the GGG. High Honesty-Humility was associated with less selfishness, whereas Openness was not associated with more PEB. Results corroborate the paradigm as a valid instrument for investigating the conflict between in-group cooperation and PEB and provide first insights into personality influences.

    @article{klein2017which,
    title = {Which Is the Greater Good? {{A}} Social Dilemma Paradigm Disentangling Environmentalism and Cooperation},
    volume = {53},
    doi = {10.1016/j.jenvp.2017.06.001},
    shorttitle = {Which Is the Greater Good?},
    abstract = {In previous research, pro-environmental behavior (PEB) was almost exclusively aligned with in-group cooperation. However, PEB and in-group cooperation can also be mutually exclusive or directly conflict. To provide first evidence on behavior in these situations, the present work develops the Greater Good Game (GGG), a social dilemma paradigm with a selfish, a cooperative, and a pro-environmental choice option. In Study 1, the GGG and a corresponding measurement model were experimentally validated using different payoff structures. Results show that in-group cooperation is the dominant behavior in a situation of mutual exclusiveness, whereas selfish behavior becomes more dominant in a situation of conflict. Study 2 examined personality influences on choices in the GGG. High Honesty-Humility was associated with less selfishness, whereas Openness was not associated with more PEB. Results corroborate the paradigm as a valid instrument for investigating the conflict between in-group cooperation and PEB and provide first insights into personality influences.},
    journaltitle = {Journal of Environmental Psychology},
    shortjournal = {Journal of Environmental Psychology},
    date = {2017},
    pages = {40--49},
    keywords = {HEXACO,Cognitive psychometrics,Externalities,Public goods,Actual behavior},
    author = {Klein, Sina A. and Hilbig, Benjamin E. and Heck, Daniel W.},
    osf = {https://osf.io/zw2ze}
    }

2016

  • [PDF] Heck, D. W., & Erdfelder, E. (2016). Extending multinomial processing tree models to measure the relative speed of cognitive processes. Psychonomic Bulletin & Review, 23, 1440-1465. doi:10.3758/s13423-016-1025-6
    [BibTeX] [Abstract]

    Multinomial processing tree (MPT) models account for observed categorical responses by assuming a finite number of underlying cognitive processes. We propose a general method that allows for the inclusion of response times (RTs) into any kind of MPT model to measure the relative speed of the hypothesized processes. The approach relies on the fundamental assumption that observed RT distributions emerge as mixtures of latent RT distributions that correspond to different underlying processing paths. To avoid auxiliary assumptions about the shape of these latent RT distributions, we account for RTs in a distribution-free way by splitting each observed category into several bins from fast to slow responses, separately for each individual. Given these data, latent RT distributions are parameterized by probability parameters for these RT bins, and an extended MPT model is obtained. Hence, all of the statistical results and software available for MPT models can easily be used to fit, test, and compare RT-extended MPT models. We demonstrate the proposed method by applying it to the two-high-threshold model of recognition memory.

    @article{heck2016extending,
    title = {Extending Multinomial Processing Tree Models to Measure the Relative Speed of Cognitive Processes},
    volume = {23},
    doi = {10.3758/s13423-016-1025-6},
    abstract = {Multinomial processing tree (MPT) models account for observed categorical responses by assuming a finite number of underlying cognitive processes. We propose a general method that allows for the inclusion of response times (RTs) into any kind of MPT model to measure the relative speed of the hypothesized processes. The approach relies on the fundamental assumption that observed RT distributions emerge as mixtures of latent RT distributions that correspond to different underlying processing paths. To avoid auxiliary assumptions about the shape of these latent RT distributions, we account for RTs in a distribution-free way by splitting each observed category into several bins from fast to slow responses, separately for each individual. Given these data, latent RT distributions are parameterized by probability parameters for these RT bins, and an extended MPT model is obtained. Hence, all of the statistical results and software available for MPT models can easily be used to fit, test, and compare RT-extended MPT models. We demonstrate the proposed method by applying it to the two-high-threshold model of recognition memory.},
    journaltitle = {Psychonomic Bulletin \& Review},
    date = {2016},
    pages = {1440--1465},
    keywords = {heckpaper,heckfirst},
    author = {Heck, Daniel W. and Erdfelder, Edgar}
    }

  • [PDF] Heck, D. W., & Wagenmakers, E.-J. (2016). Adjusted priors for Bayes factors involving reparameterized order constraints. Journal of Mathematical Psychology, 73, 110-116. doi:10.1016/j.jmp.2016.05.004
    [BibTeX] [Abstract] [Data and R Scripts] [Preprint]

    Many psychological theories that are instantiated as statistical models imply order constraints on the model parameters. To fit and test such restrictions, order constraints of the form theta_i < theta_j can be reparameterized with auxiliary parameters eta in [0,1] to replace the original parameters by theta_i = eta*theta_j. This approach is especially common in multinomial processing tree (MPT) modeling because the reparameterized, less complex model also belongs to the MPT class. Here, we discuss the importance of adjusting the prior distributions for the auxiliary parameters of a reparameterized model. This adjustment is important for computing the Bayes factor, a model selection criterion that measures the evidence in favor of an order constraint by trading off model fit and complexity. We show that uniform priors for the auxiliary parameters result in a Bayes factor that differs from the one that is obtained using a multivariate uniform prior on the order-constrained original parameters. As a remedy, we derive the adjusted priors for the auxiliary parameters of the reparameterized model. The practical relevance of the problem is underscored with a concrete example using the multi-trial pair-clustering model.

    @article{heck2016adjusted,
    archivePrefix = {arXiv},
    eprinttype = {arxiv},
    eprint = {1511.08775},
    title = {Adjusted Priors for {{Bayes}} Factors Involving Reparameterized Order Constraints},
    volume = {73},
    doi = {10.1016/j.jmp.2016.05.004},
    abstract = {Many psychological theories that are instantiated as statistical models imply order constraints on the model parameters. To fit and test such restrictions, order constraints of the form theta\_i {$<$} theta\_j can be reparameterized with auxiliary parameters eta in [0,1] to replace the original parameters by theta\_i = eta*theta\_j. This approach is especially common in multinomial processing tree (MPT) modeling because the reparameterized, less complex model also belongs to the MPT class. Here, we discuss the importance of adjusting the prior distributions for the auxiliary parameters of a reparameterized model. This adjustment is important for computing the Bayes factor, a model selection criterion that measures the evidence in favor of an order constraint by trading off model fit and complexity. We show that uniform priors for the auxiliary parameters result in a Bayes factor that differs from the one that is obtained using a multivariate uniform prior on the order-constrained original parameters. As a remedy, we derive the adjusted priors for the auxiliary parameters of the reparameterized model. The practical relevance of the problem is underscored with a concrete example using the multi-trial pair-clustering model.},
    journaltitle = {Journal of Mathematical Psychology},
    date = {2016},
    pages = {110--116},
    keywords = {heckfirst,Polytope_Sampling},
    author = {Heck, Daniel W. and Wagenmakers, Eric-Jan},
    osf = {https://osf.io/cz827}
    }

  • [PDF] Thielmann, I., Heck, D. W., & Hilbig, B. E. (2016). Anonymity and incentives: An investigation of techniques to reduce socially desirable responding in the Trust Game. Judgment and Decision Making, 11, 527-536. Retrieved from http://journal.sjdm.org/16/16613/jdm16613.pdf
    [BibTeX] [Abstract] [Data and R Scripts]

    Economic games offer a convenient approach for the study of prosocial behavior. As an advantage, they allow for straightforward implementation of different techniques to reduce socially desirable responding. We investigated the effectiveness of the most prominent of these techniques, namely providing behavior-contingent incentives and maximizing anonymity in three versions of the Trust Game: (i) a hypothetical version without monetary incentives and with a typical level of anonymity, (ii) an incentivized version with monetary incentives and the same (typical) level of anonymity, and (iii) an indirect questioning version without incentives but with a maximum level of anonymity, rendering responses inconclusive due to adding random noise via the Randomized Response Technique. Results from a large (N = 1,267) and heterogeneous sample showed comparable levels of trust for the hypothetical and incentivized versions using direct questioning. However, levels of trust decreased when maximizing the inconclusiveness of responses through indirect questioning. This implies that levels of trust might be particularly sensitive to changes in individuals’ anonymity but not necessarily to monetary incentives.

    @article{thielmann2016anonymity,
    title = {Anonymity and Incentives: {{An}} Investigation of Techniques to Reduce Socially Desirable Responding in the {{Trust Game}}},
    volume = {11},
    url = {http://journal.sjdm.org/16/16613/jdm16613.pdf},
    abstract = {Economic games offer a convenient approach for the study of prosocial behavior. As an advantage, they allow for straightforward implementation of different techniques to reduce socially desirable responding. We investigated the effectiveness of the most prominent of these techniques, namely providing behavior-contingent incentives and maximizing anonymity in three versions of the Trust Game: (i) a hypothetical version without monetary incentives and with a typical level of anonymity, (ii) an incentivized version with monetary incentives and the same (typical) level of anonymity, and (iii) an indirect questioning version without incentives but with a maximum level of anonymity, rendering responses inconclusive due to adding random noise via the Randomized Response Technique. Results from a large (N = 1,267) and heterogeneous sample showed comparable levels of trust for the hypothetical and incentivized versions using direct questioning. However, levels of trust decreased when maximizing the inconclusiveness of responses through indirect questioning. This implies that levels of trust might be particularly sensitive to changes in individuals’ anonymity but not necessarily to monetary incentives.},
    journaltitle = {Judgment and Decision Making},
    date = {2016},
    pages = {527-536},
    author = {Thielmann, Isabel and Heck, Daniel W and Hilbig, Benjamin E},
    osf = {https://osf.io/h7p5t}
    }

2015

  • [PDF] Erdfelder, E., Castela, M., Michalkiewicz, M., & Heck, D. W. (2015). The advantages of model fitting compared to model simulation in research on preference construction. Frontiers in Psychology, 6, 140. doi:10.3389/fpsyg.2015.00140
    [BibTeX]
    @article{erdfelder2015advantages,
    title = {The Advantages of Model Fitting Compared to Model Simulation in Research on Preference Construction},
    volume = {6},
    doi = {10.3389/fpsyg.2015.00140},
    journaltitle = {Frontiers in Psychology},
    date = {2015},
    pages = {140},
    author = {Erdfelder, Edgar and Castela, Marta and Michalkiewicz, Martha and Heck, Daniel W}
    }

  • [PDF] Heck, D. W., Wagenmakers, E.-J., & Morey, R. D. (2015). Testing order constraints: Qualitative differences between Bayes factors and normalized maximum likelihood. Statistics & Probability Letters, 105, 157-162. doi:10.1016/j.spl.2015.06.014
    [BibTeX] [Abstract] [Preprint]

    We compared Bayes factors to normalized maximum likelihood for the simple case of selecting between an order-constrained versus a full binomial model. This comparison revealed two qualitative differences in testing order constraints regarding data dependence and model preference.

    @article{heck2015testing,
    archivePrefix = {arXiv},
    eprinttype = {arxiv},
    eprint = {1411.2778},
    title = {Testing Order Constraints: {{Qualitative}} Differences between {{Bayes}} Factors and Normalized Maximum Likelihood},
    volume = {105},
    doi = {10.1016/j.spl.2015.06.014},
    shorttitle = {Testing Order Constraints},
    abstract = {We compared Bayes factors to normalized maximum likelihood for the simple case of selecting between an order-constrained versus a full binomial model. This comparison revealed two qualitative differences in testing order constraints regarding data dependence and model preference.},
    journaltitle = {Statistics \& Probability Letters},
    shortjournal = {Statistics \& Probability Letters},
    date = {2015},
    pages = {157-162},
    keywords = {selection,model,model selection,Model selection,Minimum description length,Inequality constraint,Model complexity,Polytope_Sampling},
    author = {Heck, Daniel W and Wagenmakers, Eric-Jan and Morey, Richard D.}
    }
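For the binomial setting discussed in this abstract, a Bayes factor for an order constraint can be computed with the standard encompassing-prior approach: the ratio of the posterior to the prior probability that the constraint holds under the unconstrained model. A minimal sketch with made-up counts (not data from the paper):

```python
import random

random.seed(2)
M = 100_000

# Hypothetical success/trial counts in two conditions (illustrative only).
k1, n1 = 3, 20
k2, n2 = 14, 20

# With independent uniform priors, the posteriors are Beta(k+1, n-k+1).
# Estimate the posterior probability of the order constraint theta1 < theta2.
post = sum(random.betavariate(k1 + 1, n1 - k1 + 1)
           < random.betavariate(k2 + 1, n2 - k2 + 1)
           for _ in range(M)) / M
prior = 0.5  # P(theta1 < theta2) under independent uniform priors, by symmetry

# Bayes factor of the order-constrained model against the full binomial model.
bf_order_vs_full = post / prior
print(bf_order_vs_full)
```

Because the constrained model occupies half the parameter space, this Bayes factor can never exceed 2 here, which is one concrete way the data dependence of such tests differs from penalty-based criteria.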

2014

  • [PDF] Heck, D. W., Moshagen, M., & Erdfelder, E. (2014). Model selection by minimum description length: Lower-bound sample sizes for the Fisher information approximation. Journal of Mathematical Psychology, 60, 29–34. doi:10.1016/j.jmp.2014.06.002
    [BibTeX] [Abstract] [GitHub] [Preprint]

    The Fisher information approximation (FIA) is an implementation of the minimum description length principle for model selection. Unlike information criteria such as AIC or BIC, it has the advantage of taking the functional form of a model into account. Unfortunately, FIA can be misleading in finite samples, resulting in an inversion of the correct rank order of complexity terms for competing models in the worst case. As a remedy, we propose a lower-bound N' for the sample size that suffices to preclude such errors. We illustrate the approach using three examples from the family of multinomial processing tree models.

    @article{heck2014model,
    archivePrefix = {arXiv},
    eprinttype = {arxiv},
    eprint = {1808.00212},
    title = {Model Selection by Minimum Description Length: {{Lower}}-Bound Sample Sizes for the {{Fisher}} Information Approximation},
    volume = {60},
    doi = {10.1016/j.jmp.2014.06.002},
    abstract = {The Fisher information approximation (FIA) is an implementation of the minimum description length principle for model selection. Unlike information criteria such as AIC or BIC, it has the advantage of taking the functional form of a model into account. Unfortunately, FIA can be misleading in finite samples, resulting in an inversion of the correct rank order of complexity terms for competing models in the worst case. As a remedy, we propose a lower-bound N' for the sample size that suffices to preclude such errors. We illustrate the approach using three examples from the family of multinomial processing tree models.},
    journaltitle = {Journal of Mathematical Psychology},
    date = {2014},
    pages = {29--34},
    keywords = {heckfirst},
    author = {Heck, Daniel W and Moshagen, Morten and Erdfelder, Edgar},
    github = {https://github.com/danheck/FIAminimumN}
    }
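The rank-order inversion described in this abstract can be sketched numerically. The FIA complexity term is (d/2)·ln(N/2π) plus the log of the Fisher-information volume, so for small N a model with more parameters but a smaller volume can receive the smaller penalty. The volume values below are invented for illustration, and the crossing point follows simply from setting the two penalties equal (a sketch of the idea, not the paper's exact bound):

```python
import math

def fia_penalty(d, n, log_vol):
    """FIA complexity term: (d/2) * ln(N / (2*pi)) + ln of the
    integrated square-root Fisher-information determinant."""
    return d / 2 * math.log(n / (2 * math.pi)) + log_vol

# Hypothetical pair of models: model A has more parameters (d = 2) but a
# smaller information volume than model B (d = 1); both volumes are made up.
log_vol_A, log_vol_B = math.log(2), math.log(10)
pen_A = lambda n: fia_penalty(2, n, log_vol_A)
pen_B = lambda n: fia_penalty(1, n, log_vol_B)

# Setting pen_A(N) = pen_B(N) gives the crossing point
# N' = 2*pi * exp(2 * (log_vol_B - log_vol_A) / (d_A - d_B)).
n_prime = 2 * math.pi * math.exp(2 * (log_vol_B - log_vol_A) / (2 - 1))
print(n_prime)  # below this N the complexity rank order is inverted
```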

  • [PDF] Platzer, C., Bröder, A., & Heck, D. W. (2014). Deciding with the eye: How the visually manipulated accessibility of information in memory influences decision behavior. Memory & Cognition, 42, 595-608. doi:10.3758/s13421-013-0380-z
    [BibTeX] [Abstract]

    Decision situations are typically characterized by uncertainty: Individuals do not know the values of different options on a criterion dimension. For example, consumers do not know which is the healthiest of several products. To make a decision, individuals can use information about cues that are probabilistically related to the criterion dimension, such as sugar content or the concentration of natural vitamins. In two experiments, we investigated how the accessibility of cue information in memory affects which decision strategy individuals rely on. The accessibility of cue information was manipulated by means of a newly developed paradigm, the spatial-memory-cueing paradigm, which is based on a combination of the looking-at-nothing phenomenon and the spatial-cueing paradigm. The results indicated that people use different decision strategies, depending on the validity of easily accessible information. If the easily accessible information is valid, people stop information search and decide according to a simple take-the-best heuristic. If, however, information that comes to mind easily has a low predictive validity, people are more likely to integrate all available cue information in a compensatory manner.

    @article{platzer2014deciding,
    title = {Deciding with the Eye: {{How}} the Visually Manipulated Accessibility of Information in Memory Influences Decision Behavior},
    volume = {42},
    doi = {10.3758/s13421-013-0380-z},
    abstract = {Decision situations are typically characterized by uncertainty: Individuals do not know the values of different options on a criterion dimension. For example, consumers do not know which is the healthiest of several products. To make a decision, individuals can use information about cues that are probabilistically related to the criterion dimension, such as sugar content or the concentration of natural vitamins. In two experiments, we investigated how the accessibility of cue information in memory affects which decision strategy individuals rely on. The accessibility of cue information was manipulated by means of a newly developed paradigm, the spatial-memory-cueing paradigm, which is based on a combination of the looking-at-nothing phenomenon and the spatial-cueing paradigm. The results indicated that people use different decision strategies, depending on the validity of easily accessible information. If the easily accessible information is valid, people stop information search and decide according to a simple take-the-best heuristic. If, however, information that comes to mind easily has a low predictive validity, people are more likely to integrate all available cue information in a compensatory manner.},
    journaltitle = {Memory \& Cognition},
    date = {2014},
    pages = {595-608},
    keywords = {Decision Making,memory,Spatial attention,Accessibility,Visual salience},
    author = {Platzer, Christine and Bröder, Arndt and Heck, Daniel W}
    }

Conference Presentations and Invited Talks

2019

  • Heck, D. W. (2019). Processing tree models for discrete and continuous variables. Cognition and Perception (Rolf Ulrich). Tübingen, Germany.
    [BibTeX]
    @inproceedings{heck2019processing,
    location = {{Tübingen, Germany}},
    title = {Processing Tree Models for Discrete and Continuous Variables},
    publisher = {{Cognition and Perception (Rolf Ulrich)}},
    date = {2019},
    keywords = {heckinvited},
    author = {Heck, Daniel W}
    }

  • Heck, D. W. (2019). Multinomial models with convex linear inequality constraints. Department of Psychology (Herbert Hoijtink). Utrecht, Netherlands.
    [BibTeX] [Abstract]

    Many theories in psychology make predictions about the relative size of probabilities underlying response frequencies for different stimulus material, experimental conditions, or preexisting groups. In such scenarios, multinomial models with inequality constraints are ideally suited for testing informative hypotheses and theoretical orderings on choice probabilities (e.g., whether choice probabilities monotonically increase across conditions). Even though different research groups have developed custom-tailored methods for specific applications and theories, no standardized methods and software are available for the general class of inequality-constrained multinomial models. To facilitate the application of multinomial models by applied and substantive researchers, the user-friendly R package “multinomineq” (Heck & Davis-Stober, 2018) implements and extends computational methods to fit and test multinomial models with linear inequality constraints. Besides model fitting via Markov chain Monte Carlo sampling, the package facilitates model testing with posterior-predictive p-values and encompassing Bayes factors.

    @inproceedings{heck2019multinomial-3,
    location = {{Utrecht, Netherlands}},
    title = {Multinomial Models with Convex Linear Inequality Constraints},
    abstract = {Many theories in psychology make predictions about the relative size of probabilities underlying response frequencies for different stimulus material, experimental conditions, or preexisting groups. In such scenarios, multinomial models with inequality constraints are ideally suited for testing informative hypotheses and theoretical orderings on choice probabilities (e.g., whether choice probabilities monotonically increase across conditions). Even though different research groups have developed custom-tailored methods for specific applications and theories, no standardized methods and software are available for the general class of inequality-constrained multinomial models. To facilitate the application of multinomial models by applied and substantive researchers, the user-friendly R package “multinomineq” (Heck \& Davis-Stober, 2018) implements and extends computational methods to fit and test multinomial models with linear inequality constraints. Besides model fitting via Markov chain Monte Carlo sampling, the package facilitates model testing with posterior-predictive p-values and encompassing Bayes factors.},
    publisher = {{Department of Psychology (Herbert Hoijtink)}},
    date = {2019},
    keywords = {heckinvited},
    author = {Heck, Daniel W}
    }

  • Heck, D. W. (2019). Cognitive psychometrics with Bayesian hierarchical multinomial processing tree models. Meeting of the Working Group Structural Equation Modeling. Tübingen, Germany.
    [BibTeX]
    @inproceedings{heck2019cognitive,
    location = {{Tübingen, Germany}},
    title = {Cognitive Psychometrics with {{Bayesian}} Hierarchical Multinomial Processing Tree Models},
    booktitle = {Meeting of the {{Working Group Structural Equation Modeling}}},
    date = {2019},
    keywords = {hecktalk},
    author = {Heck, Daniel W}
    }

  • Heck, D. W., Davis-Stober, C. P., & Cavagnaro, D. R. (2019). Testing informative hypotheses about latent classes of strategy users based on probabilistic classifications. 52nd Annual Meeting of the Society for Mathematical Psychology. Montreal, Canada.
    [BibTeX]
    @inproceedings{heck2019testing,
    location = {{Montreal, Canada}},
    title = {Testing Informative Hypotheses about Latent Classes of Strategy Users Based on Probabilistic Classifications},
    booktitle = {52nd {{Annual Meeting}} of the {{Society}} for {{Mathematical Psychology}}},
    date = {2019},
    keywords = {hecktalk},
    author = {Heck, Daniel W and Davis-Stober, Clintin P. and Cavagnaro, Daniel R.}
    }

  • Heck, D. W., & Davis-Stober, C. P. (2019). Bayesian inference for multinomial models with linear inequality constraints. Meeting of the European Mathematical Psychology Group. Heidelberg, Germany.
    [BibTeX]
    @inproceedings{heck2019bayesian,
    location = {{Heidelberg, Germany}},
    title = {Bayesian Inference for Multinomial Models with Linear Inequality Constraints},
    booktitle = {Meeting of the {{European Mathematical Psychology Group}}},
    date = {2019},
    keywords = {hecktalk},
    author = {Heck, Daniel W and Davis-Stober, Clintin P}
    }

  • Heck, D. W., Noventa, S., & Erdfelder, E. (2019). Representing probabilistic models of knowledge space theory by multinomial processing tree models. 52nd Annual Meeting of the Society for Mathematical Psychology. Montreal, Canada.
    [BibTeX]
    @inproceedings{heck2019representing,
    location = {{Montreal, Canada}},
    title = {Representing Probabilistic Models of Knowledge Space Theory by Multinomial Processing Tree Models},
    booktitle = {52nd {{Annual Meeting}} of the {{Society}} for {{Mathematical Psychology}}},
    date = {2019},
    keywords = {hecktalk},
    author = {Heck, Daniel W and Noventa, Stefano and Erdfelder, Edgar}
    }

  • Heck, D. W. (2019). Multinomial models with convex linear inequality constraints. Stochastics in Mannheim (Leif Döring). Mannheim, Germany.
    [BibTeX] [Abstract]

    Many theories in psychology make predictions about the relative size of probabilities underlying response frequencies for different stimulus material, experimental conditions, or preexisting groups. In such scenarios, multinomial models with inequality constraints are ideally suited for testing informative hypotheses and theoretical orderings on choice probabilities (e.g., whether choice probabilities monotonically increase across conditions). Even though different research groups have developed custom-tailored methods for specific applications and theories, no standardized methods and software are available for the general class of inequality-constrained multinomial models. To facilitate the application of multinomial models by applied and substantive researchers, the user-friendly R package “multinomineq” (Heck & Davis-Stober, 2018) implements and extends computational methods to fit and test multinomial models with linear inequality constraints. Besides model fitting via Markov chain Monte Carlo sampling, the package facilitates model testing with posterior-predictive p-values and encompassing Bayes factors.

    @inproceedings{heck2019multinomial-1,
    location = {{Mannheim, Germany}},
    title = {Multinomial Models with Convex Linear Inequality Constraints},
    abstract = {Many theories in psychology make predictions about the relative size of probabilities underlying response frequencies for different stimulus material, experimental conditions, or preexisting groups. In such scenarios, multinomial models with inequality constraints are ideally suited for testing informative hypotheses and theoretical orderings on choice probabilities (e.g., whether choice probabilities monotonically increase across conditions). Even though different research groups have developed custom-tailored methods for specific applications and theories, no standardized methods and software are available for the general class of inequality-constrained multinomial models. To facilitate the application of multinomial models by applied and substantive researchers, the user-friendly R package “multinomineq” (Heck \& Davis-Stober, 2018) implements and extends computational methods to fit and test multinomial models with linear inequality constraints. Besides model fitting via Markov chain Monte Carlo sampling, the package facilitates model testing with posterior-predictive p-values and encompassing Bayes factors.},
    publisher = {{Stochastics in Mannheim (Leif Döring)}},
    date = {2019},
    keywords = {heckinvited},
    author = {Heck, Daniel W}
    }

  • Heck, D. W. (2019). Bayesian inference for multinomial models with convex linear inequality constraints. Department of Psychology (Eric-Jan Wagenmakers). Amsterdam, Netherlands.
    [BibTeX] [Abstract]

    Many theories in psychology make predictions about the relative size of probabilities underlying response frequencies for different stimulus material, experimental conditions, or preexisting groups. In such scenarios, multinomial models with inequality constraints are ideally suited for testing informative hypotheses and theoretical orderings on choice probabilities (e.g., whether choice probabilities monotonically increase across conditions). Even though different research groups have developed custom-tailored methods for specific applications and theories, no standardized methods and software are available for the general class of inequality-constrained multinomial models. To facilitate the application of multinomial models by applied and substantive researchers, the user-friendly R package “multinomineq” (Heck & Davis-Stober, 2018) implements and extends computational methods to fit and test multinomial models with linear inequality constraints. Besides model fitting via Markov chain Monte Carlo sampling, the package facilitates model testing with posterior-predictive p-values and encompassing Bayes factors.

    @inproceedings{heck2019bayesian-1,
    location = {{Amsterdam, Netherlands}},
    title = {Bayesian Inference for Multinomial Models with Convex Linear Inequality Constraints},
    abstract = {Many theories in psychology make predictions about the relative size of probabilities underlying response frequencies for different stimulus material, experimental conditions, or preexisting groups. In such scenarios, multinomial models with inequality constraints are ideally suited for testing informative hypotheses and theoretical orderings on choice probabilities (e.g., whether choice probabilities monotonically increase across conditions). Even though different research groups have developed custom-tailored methods for specific applications and theories, no standardized methods and software are available for the general class of inequality-constrained multinomial models. To facilitate the application of multinomial models by applied and substantive researchers, the user-friendly R package “multinomineq” (Heck \& Davis-Stober, 2018) implements and extends computational methods to fit and test multinomial models with linear inequality constraints. Besides model fitting via Markov chain Monte Carlo sampling, the package facilitates model testing with posterior-predictive p-values and encompassing Bayes factors.},
    publisher = {{Department of Psychology (Eric-Jan Wagenmakers)}},
    date = {2019},
    keywords = {heckinvited},
    author = {Heck, Daniel W}
    }

2018

  • Heck, D. W. (2018). TreeBUGS: Hierarchical multinomial processing tree models in R. Psychoco 2018: International Workshop on Psychometric Computing. Tübingen, Germany.
    [BibTeX]
    @inproceedings{heck2018treebugs-3,
    location = {{Tübingen, Germany}},
    title = {{{TreeBUGS}}: {{Hierarchical}} Multinomial Processing Tree Models in {{R}}},
    publisher = {{Psychoco 2018: International Workshop on Psychometric Computing}},
    date = {2018},
    keywords = {heckinvited},
    author = {Heck, Daniel W}
    }

  • Heck, D. W., Erdfelder, E., & Kieslich, P. J. (2018). Jointly modeling mouse-trajectories and accuracies with generalized processing trees. 60. Tagung experimentell arbeitender Psychologen. Marburg, Germany.
    [BibTeX] [Abstract]

    Jointly Modeling Mouse-Trajectories and Accuracies with Generalized Processing Trees

    @inproceedings{heck2018jointly,
    location = {{Marburg, Germany}},
    title = {Jointly Modeling Mouse-Trajectories and Accuracies with Generalized Processing Trees},
    abstract = {Jointly Modeling Mouse-Trajectories and Accuracies with Generalized Processing Trees},
    booktitle = {60. {{Tagung}} Experimentell Arbeitender {{Psychologen}}},
    date = {2018},
    keywords = {hecktalk},
    author = {Heck, Daniel W and Erdfelder, E and Kieslich, Pascal J}
    }

  • Heck, D. W. (2018). A caveat on using the Savage-Dickey density ratio in regression models. Department of Psychology (Eric-Jan Wagenmakers). Amsterdam, Netherlands.
    [BibTeX] [Abstract]

    In regression analysis, researchers are usually interested in testing whether one or more covariates have an effect on the dependent variable. To compute the Bayes factor for such an effect, the Savage-Dickey density ratio (SDDR) is often used. However, the SDDR only provides the correct Bayes factor if the prior distribution under the nested model is identical to the conditional prior under the full model. This assumption does not hold for regression models with the Jeffreys-Zellner-Siow (JZS) prior on multiple predictors. Beyond standard linear regression, this limitation of the SDDR is especially relevant when analytical solutions for the Bayes factor are not available (e.g., as in generalized linear models, nonlinear models, or cognitive process models with regression extensions). As a remedy, a generalization of the SDDR allows computing the correct Bayes factor.

    @inproceedings{heck2018caveat-1,
    location = {{Amsterdam, Netherlands}},
    title = {A Caveat on Using the {{Savage}}-{{Dickey}} Density Ratio in Regression Models},
    abstract = {In regression analysis, researchers are usually interested in testing whether one or more covariates have an effect on the dependent variable. To compute the Bayes factor for such an effect, the Savage-Dickey density ratio (SDDR) is often used. However, the SDDR only provides the correct Bayes factor if the prior distribution under the nested model is identical to the conditional prior under the full model. This assumption does not hold for regression models with the Jeffreys-Zellner-Siow (JZS) prior on multiple predictors. Beyond standard linear regression, this limitation of the SDDR is especially relevant when analytical solutions for the Bayes factor are not available (e.g., as in generalized linear models, nonlinear models, or cognitive process models with regression extensions). As a remedy, a generalization of the SDDR allows computing the correct Bayes factor.},
    publisher = {{Department of Psychology (Eric-Jan Wagenmakers)}},
    date = {2018},
    keywords = {heckinvited},
    author = {Heck, Daniel W}
    }
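The SDDR logic in the abstract above can be sketched in the one-parameter conjugate-normal case, where the validity condition holds (the point-null prior equals the conditional prior under the full model) and the SDDR reproduces the marginal-likelihood Bayes factor exactly. All numbers are illustrative:

```python
import math

def normpdf(x, mean, var):
    """Normal density parameterized by mean and variance."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# H0: mu = 0 versus H1: mu ~ N(0, tau^2), known error variance sigma^2.
sigma2, tau2 = 1.0, 1.0
n, ybar = 10, 0.3  # hypothetical sample size and sample mean

# Conjugate posterior of mu under H1.
post_var = 1 / (n / sigma2 + 1 / tau2)
post_mean = post_var * n * ybar / sigma2

# Savage-Dickey: ratio of posterior to prior density at the test value mu = 0.
bf01_sddr = normpdf(0, post_mean, post_var) / normpdf(0, 0, tau2)

# Marginal-likelihood Bayes factor, computed directly for comparison.
bf01_ml = normpdf(ybar, 0, sigma2 / n) / normpdf(ybar, 0, sigma2 / n + tau2)

print(bf01_sddr, bf01_ml)
```

With a multivariate JZS prior on several predictors, the conditional prior on one coefficient given the others differs from the nested model's prior, so this equality breaks down; that is the caveat of the talk.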

  • Heck, D. W. (2018). Bayesian hierarchical multinomial processing tree models: A general framework for cognitive psychometrics. 51. Kongress der Deutschen Gesellschaft für Psychologie. Frankfurt, Germany.
    [BibTeX]
    @inproceedings{heck2018bayesian,
    location = {{Frankfurt, Germany}},
    title = {Bayesian Hierarchical Multinomial Processing Tree Models: {{A}} General Framework for Cognitive Psychometrics},
    booktitle = {51. {{Kongress}} Der {{Deutschen Gesellschaft}} Für {{Psychologie}}},
    date = {2018},
    keywords = {hecktalk},
    author = {Heck, Daniel W}
    }

  • Heck, D. W. (2018). Computing Bayes factors for cognitive models: A caveat on the Savage-Dickey density ratio. Psychonomic Society 59th Annual Meeting. New Orleans, LA.
    [BibTeX]
    @inproceedings{heck2018psychonomics,
    location = {{New Orleans, LA}},
    title = {Computing {{Bayes}} Factors for Cognitive Models: {{A}} Caveat on the {{Savage}}-{{Dickey}} Density Ratio},
    booktitle = {Psychonomic {{Society}} 59th {{Annual Meeting}}},
    date = {2018},
    keywords = {hecktalk},
    author = {Heck, Daniel W}
    }

  • Heck, D. W. (2018). Towards a measurement model for advice taking. SMiP Winter Retreat. St. Martin, Germany.
    [BibTeX]
    @inproceedings{heck2018measurement,
    location = {{St. Martin, Germany}},
    title = {Towards a Measurement Model for Advice Taking},
    booktitle = {{{SMiP Winter Retreat}}},
    date = {2018},
    keywords = {hecktalk},
    author = {Heck, Daniel W}
    }

  • Heck, D. W. (2018). Multinomial models with convex linear inequality-constraints. SMiP Summer Retreat. Wiesneck, Germany.
    [BibTeX]
    @inproceedings{heck2018multinomial-1,
    location = {{Wiesneck, Germany}},
    title = {Multinomial Models with Convex Linear Inequality-Constraints},
    booktitle = {{{SMiP Summer Retreat}}},
    date = {2018},
    keywords = {hecktalk},
    author = {Heck, Daniel W}
    }

  • Heck, D. W., Seiling, L., & Bröder, A. (2018). The love of large numbers revisited: A coherence model of the popularity bias. Meeting of the Society for Judgment and Decision Making. New Orleans, LA.
    [BibTeX]
    @inproceedings{heck2018love-1,
    location = {{New Orleans, LA}},
    title = {The Love of Large Numbers Revisited: {{A}} Coherence Model of the Popularity Bias},
    booktitle = {Meeting of the {{Society}} for {{Judgment}} and {{Decision Making}}},
    date = {2018},
    keywords = {heckposter},
    author = {Heck, Daniel W and Seiling, Lukas and Bröder, Arndt}
    }

2017

  • Heck, D. W., & Erdfelder, E. (2017). Jointly modeling discrete and continuous variables: A generalized processing tree framework. 59. Tagung experimentell arbeitender Psychologen. Pabst. Dresden, Germany.
    [BibTeX]
    @inproceedings{heck2017jointly,
    location = {{Dresden, Germany}},
    title = {Jointly Modeling Discrete and Continuous Variables: {{A}} Generalized Processing Tree Framework},
    booktitle = {59. {{Tagung}} Experimentell Arbeitender {{Psychologen}}},
    publisher = {{Pabst}},
    date = {2017},
    keywords = {hecktalk},
    author = {Heck, Daniel W and Erdfelder, E.}
    }

  • Heck, D. W., Erdfelder, E., & Kieslich, P. J. (2017). Modeling mouse-tracking trajectories with generalized processing tree models. 50th Annual Meeting of the Society for Mathematical Psychology. Warwick, UK.
    [BibTeX] [Abstract]

    Multinomial processing tree models assume a finite number of cognitive states that determine frequencies of discrete responses. Generalized processing tree (GPT) models extend this conceptual framework to continuous variables such as response-times, process-tracing measures, or neurophysiological variables. Essentially, GPT models assume a finite mixture distribution, where the weights are determined by a processing-tree structure, whereas continuous components are modeled by parameterized distributions such as Gaussians with separate or shared means across states. Using a simple modeling syntax, GPT models can easily be adapted to different experimental designs. We develop and test a GPT model for a mouse-tracking paradigm for a semantic categorization task, which is based on the feature comparison model (Smith, Shoben, & Rips, 1974). The model jointly accounts for response frequencies of correct responses and the maximum-deviation of mouse trajectories relative to a direct path.

    @inproceedings{heck2017modeling,
    location = {{Warwick, UK}},
    title = {Modeling Mouse-Tracking Trajectories with Generalized Processing Tree Models},
    abstract = {Multinomial processing tree models assume a finite number of cognitive states that determine frequencies of discrete responses. Generalized processing tree (GPT) models extend this conceptual framework to continuous variables such as response-times, process-tracing measures, or neurophysiological variables. Essentially, GPT models assume a finite mixture distribution, where the weights are determined by a processing-tree structure, whereas continuous components are modeled by parameterized distributions such as Gaussians with separate or shared means across states. Using a simple modeling syntax, GPT models can easily be adapted to different experimental designs. We develop and test a GPT model for a mouse-tracking paradigm for a semantic categorization task, which is based on the feature comparison model (Smith, Shoben, \& Rips, 1974). The model jointly accounts for response frequencies of correct responses and the maximum-deviation of mouse trajectories relative to a direct path.},
    booktitle = {50th {{Annual Meeting}} of the {{Society}} for {{Mathematical Psychology}}},
    date = {2017},
    keywords = {hecktalk},
    author = {Heck, Daniel W and Erdfelder, E and Kieslich, Pascal J}
    }
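The finite-mixture structure described in this abstract can be sketched generically. The joint density of a discrete response and a continuous measure is a Gaussian mixture whose weights come from the branch probabilities of a processing tree; the two-state tree below (detection with probability d, otherwise guessing with success probability g) is a hypothetical illustration, not the authors' feature-comparison model:

```python
import math

def normpdf(x, mean, sd):
    """Normal density parameterized by mean and standard deviation."""
    return math.exp(-((x - mean) / sd) ** 2 / 2) / (sd * math.sqrt(2 * math.pi))

def gpt_density(response, t, d, g, mu_detect, mu_guess, sd):
    """Joint density of a response and its continuous measure t under a
    two-state processing tree: detection (prob d) always yields a correct
    response with component N(mu_detect, sd); otherwise the guessing state
    (component N(mu_guess, sd)) is correct with probability g."""
    if response == "correct":
        return (d * normpdf(t, mu_detect, sd)
                + (1 - d) * g * normpdf(t, mu_guess, sd))
    return (1 - d) * (1 - g) * normpdf(t, mu_guess, sd)

# Example evaluation at illustrative parameter values.
print(gpt_density("correct", 0.8, d=0.6, g=0.5,
                  mu_detect=0.8, mu_guess=1.6, sd=0.4))
```

Because the branch probabilities sum to one, the joint density integrates to one over responses and t, which is what makes maximum-likelihood fitting of such models straightforward.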

  • Heck, D. W. (2017). Quantifying uncertainty in transdimensional Markov chain Monte Carlo. Stochastics in Mannheim (Leif Döring). Mannheim, Germany.
    [BibTeX]
    @inproceedings{heck2017quantifying,
    location = {{Mannheim, Germany}},
    title = {Quantifying Uncertainty in Transdimensional {{Markov}} Chain {{Monte Carlo}}},
    publisher = {{Stochastics in Mannheim (Leif Döring)}},
    date = {2017},
    keywords = {heckinvited},
    author = {Heck, Daniel W}
    }

  • Heck, D. W. (2017). Extending multinomial processing tree models to account for response times and other continuous variables. Social Psychology and Methodology (Christoph Klauer). Freiburg, Germany.
    [BibTeX]
    @inproceedings{heck2017extending,
    location = {{Freiburg, Germany}},
    title = {Extending Multinomial Processing Tree Models to Account for Response Times and Other Continuous Variables},
    publisher = {{Social Psychology and Methodology (Christoph Klauer)}},
    date = {2017},
    keywords = {heckinvited},
    author = {Heck, Daniel W}
    }

  • Heck, D. W., & Erdfelder, E. (2017). Discrete-state modeling of discrete and continuous variables: A generalized processing tree framework. 13. Tagung der Fachgruppe Methoden & Evaluation. Tübingen, Germany.
    [BibTeX]
    @inproceedings{heck2017discretestate,
    location = {{Tübingen, Germany}},
    title = {Discrete-State Modeling of Discrete and Continuous Variables: {{A}} Generalized Processing Tree Framework},
    booktitle = {13. {{Tagung}} Der {{Fachgruppe Methoden}} \& {{Evaluation}}},
    date = {2017},
    keywords = {hecktalk},
    author = {Heck, Daniel W and Erdfelder, E.}
    }

  • Heck, D. W., Hilbig, B. E., & Moshagen, M. (2017). Formalizing and comparing psychologically plausible models of multiattribute decisions. Meeting of the Society for Judgment and Decision Making. Vancouver, BC.
    [BibTeX]
    @inproceedings{heck2017formalizing,
    location = {{Vancouver, BC}},
    title = {Formalizing and Comparing Psychologically Plausible Models of Multiattribute Decisions},
    booktitle = {Meeting of the {{Society}} of {{Judgment}} and {{Decision Making}}},
    date = {2017},
    keywords = {heckposter},
    author = {Heck, Daniel W. and Hilbig, Benjamin E. and Moshagen, Morten}
    }

  • Heck, D. W., Arnold, N. R., & Arnold, D. (2017). TreeBUGS: A user-friendly software for hierarchical multinomial processing tree modeling. Meeting of the Society of Computers in Psychology. Vancouver, BC.
    [BibTeX]
    @inproceedings{heck2017treebugs-1,
    location = {{Vancouver, BC}},
    title = {{{TreeBUGS}}: {{A}} User-Friendly Software for Hierarchical Multinomial Processing Tree Modeling},
    booktitle = {Meeting of the {{Society}} of {{Computers}} in {{Psychology}}},
    date = {2017},
    keywords = {hecktalk},
    author = {Heck, Daniel W. and Arnold, Nina R. and Arnold, Denis}
    }

  • Heck, D. W., & Erdfelder, E. (2017). A generalized processing tree framework for discrete-state modeling of discrete and continuous variables. Psychonomic Society 58th Annual Meeting. Vancouver, BC.
    [BibTeX]
    @inproceedings{heck2017generalized-1,
    location = {{Vancouver, BC}},
    title = {A Generalized Processing Tree Framework for Discrete-State Modeling of Discrete and Continuous Variables},
    booktitle = {Psychonomic {{Society}} 58th {{Annual Meeting}}},
    date = {2017},
    keywords = {heckposter},
    author = {Heck, Daniel W. and Erdfelder, Edgar},
    annotation = {Poster Session 3015, Friday, November 10, 2017, 4:00 PM--7:30 PM, Poster Number: 3222}
    }

  • Heck, D. W. (2017). Extending multinomial processing tree models to response times: The case of the recognition heuristic. Adaptive Rationality (Thorsten Pachur). Max Planck Institute, Berlin, Germany.
    [BibTeX]
    @inproceedings{heck2017extending-1,
    location = {{Max Planck Institute, Berlin, Germany}},
    title = {Extending Multinomial Processing Tree Models to Response Times: {{The}} Case of the Recognition Heuristic},
    publisher = {{Adaptive Rationality (Thorsten Pachur)}},
    date = {2017},
    keywords = {heckinvited},
    author = {Heck, Daniel W.}
    }

2016

  • Heck, D. W. (2016). Die Rekognitions-Heuristik als Spezialfall allgemeiner Informationsintegrations-Theorien: Erkenntnisse durch Antwortzeitmodellierung mit MPT Modellen [The recognition heuristic as a special case of general information-integration theories: Insights from response-time modeling with MPT models]. Department of General Psychology II (Klaus Rothermund). Jena, Germany.
    [BibTeX]
    @inproceedings{heck2016rekognitionsheuristik,
    location = {{Jena, Germany}},
    title = {Die {{Rekognitions}}-{{Heuristik}} Als {{Spezialfall}} Allgemeiner {{Informationsintegrations}}-{{Theorien}}: {{Erkenntnisse}} Durch {{Antwortzeitmodellierung}} Mit {{MPT Modellen}}},
    publisher = {{Department of General Psychology II (Klaus Rothermund)}},
    date = {2016},
    keywords = {heckinvited},
    author = {Heck, Daniel W.}
    }

  • Heck, D. W. (2016). RRreg: Ein R Package für Multivariate Analysen der Randomized Response Technik [RRreg: An R package for multivariate analyses of the randomized response technique]. Lehrstuhl für Diagnostik und Differentielle Psychologie (Jochen Musch). Düsseldorf, Germany.
    [BibTeX]
    @inproceedings{heck2016rrreg-1,
    location = {{Düsseldorf, Germany}},
    title = {{{RRreg}}: {{Ein R Package}} Für {{Multivariate Analysen}} Der {{Randomized Response Technik}}},
    publisher = {{Lehrstuhl für Diagnostik und Differentielle Psychologie (Jochen Musch)}},
    date = {2016},
    keywords = {heckinvited},
    author = {Heck, Daniel W.}
    }

  • Heck, D. W., & Erdfelder, E. (2016). Testing between information integration and heuristic accounts of recognition-based decisions. 58. Tagung experimentell arbeitender Psychologen. Pabst. Heidelberg, Germany.
    [BibTeX]
    @inproceedings{heck2016testing,
    location = {{Heidelberg, Germany}},
    title = {Testing between Information Integration and Heuristic Accounts of Recognition-Based Decisions},
    booktitle = {58. {{Tagung}} Experimentell Arbeitender {{Psychologen}}},
    publisher = {{Pabst}},
    date = {2016},
    keywords = {hecktalk},
    author = {Heck, Daniel W. and Erdfelder, Edgar}
    }

  • Heck, D. W., & Erdfelder, E. (2016). Testing between serial and parallel theories of recognition-based heuristic decisions. 2nd International Meeting of the Psychonomic Society. Granada, Spain.
    [BibTeX]
    @inproceedings{heck2016testing-1,
    location = {{Granada, Spain}},
    title = {Testing between Serial and Parallel Theories of Recognition-Based Heuristic Decisions},
    booktitle = {2nd {{International Meeting}} of the {{Psychonomic Society}}},
    date = {2016},
    keywords = {heckposter},
    author = {Heck, Daniel W. and Erdfelder, Edgar}
    }

  • Heck, D. W., & Erdfelder, E. (2016). Generalized processing tree models: Modeling discrete and continuous variables simultaneously. 47th European Mathematical Psychology Group Meeting. Copenhagen, Denmark.
    [BibTeX]
    @inproceedings{heck2016generalized,
    location = {{Copenhagen, Denmark}},
    title = {Generalized Processing Tree Models: {{Modeling}} Discrete and Continuous Variables Simultaneously},
    booktitle = {47th {{European Mathematical Psychology Group Meeting}}},
    date = {2016},
    keywords = {hecktalk},
    author = {Heck, Daniel W. and Erdfelder, Edgar}
    }

  • Heck, D. W., & Erdfelder, E. (2016). Model-based evidence on response-time predictions of the recognition heuristic versus compensatory accounts of recognition use. 50. Kongress der Deutschen Gesellschaft für Psychologie. Leipzig, Germany.
    [BibTeX]
    @inproceedings{heck2016modelbased,
    location = {{Leipzig, Germany}},
    title = {Model-Based Evidence on Response-Time Predictions of the Recognition Heuristic versus Compensatory Accounts of Recognition Use},
    booktitle = {50. {{Kongress}} Der {{Deutschen Gesellschaft}} Für {{Psychologie}}},
    date = {2016},
    keywords = {hecktalk},
    author = {Heck, Daniel W. and Erdfelder, Edgar}
    }

  • Heck, D. W. (2016). A parallel-constraint satisfaction account of recognition-based decisions. Coherence-Based Approaches to Decision Making, Cognition, and Communication. Berlin, Germany.
    [BibTeX]
    @inproceedings{heck2016parallelconstraint,
    location = {{Berlin, Germany}},
    title = {A Parallel-Constraint Satisfaction Account of Recognition-Based Decisions},
    booktitle = {Coherence-{{Based Approaches}} to {{Decision Making}}, {{Cognition}}, and {{Communication}}},
    date = {2016},
    keywords = {hecktalk},
    author = {Heck, Daniel W.}
    }

2015

  • Heck, D. W., & Erdfelder, E. (2015). Measuring the relative speed of the recognition heuristic. International Summer School on "Theories and Methods in Judgment and Decision Making Research". Nürnberg, Germany.
    [BibTeX]
    @inproceedings{heck2015measuring,
    location = {{Nürnberg, Germany}},
    title = {Measuring the Relative Speed of the Recognition Heuristic},
    booktitle = {International {{Summer School}} on "{{Theories}} and {{Methods}} in {{Judgment}} and {{Decision Making Research}}"},
    date = {2015},
    keywords = {heckposter},
    author = {Heck, Daniel W. and Erdfelder, Edgar}
    }

  • Heck, D. W., & Erdfelder, E. (2015). Comparing the relative processing speed of the recognition heuristic and information integration: Extending the r-model to response times. 46th European Mathematical Psychology Group Meeting. Padua, Italy.
    [BibTeX]
    @inproceedings{heck2015comparing,
    location = {{Padua, Italy}},
    title = {Comparing the Relative Processing Speed of the Recognition Heuristic and Information Integration: {{Extending}} the r-Model to Response Times},
    booktitle = {46th {{European Mathematical Psychology Group Meeting}}},
    date = {2015},
    keywords = {hecktalk},
    author = {Heck, Daniel W. and Erdfelder, Edgar}
    }

  • Heck, D. W., & Erdfelder, E. (2015). Modeling response times within the multinomial processing tree framework. 12. Tagung der Fachgruppe Methoden & Evaluation. Jena, Germany.
    [BibTeX]
    @inproceedings{heck2015modeling,
    location = {{Jena, Germany}},
    title = {Modeling Response Times within the Multinomial Processing Tree Framework},
    booktitle = {12. {{Tagung}} Der {{Fachgruppe Methoden}} \& {{Evaluation}}},
    date = {2015},
    keywords = {hecktalk},
    author = {Heck, Daniel W. and Erdfelder, Edgar}
    }

  • Heck, D. W., & Erdfelder, E. (2015). Response time modeling for finite-state models of recognition. 57. Tagung experimentell arbeitender Psychologen. Pabst. Hildesheim, Germany.
    [BibTeX]
    @inproceedings{heck2015response,
    location = {{Hildesheim, Germany}},
    title = {Response Time Modeling for Finite-State Models of Recognition},
    booktitle = {57. {{Tagung}} Experimentell Arbeitender {{Psychologen}}},
    publisher = {{Pabst}},
    date = {2015},
    keywords = {hecktalk},
    author = {Heck, Daniel W. and Erdfelder, Edgar}
    }

2014

  • Heck, D. W., Moshagen, M., & Erdfelder, E. (2014). Modellselektion anhand Minimum Description Length: Wie groß muss die Stichprobengröße bei Anwendung der Fisher Information Approximation mindestens sein? [Model selection by minimum description length: What minimum sample size is required when applying the Fisher information approximation?]. 49. Kongress der Deutschen Gesellschaft für Psychologie. Bochum, Germany.
    [BibTeX]
    @inproceedings{heck2014modellselektion,
    location = {{Bochum, Germany}},
    title = {Modellselektion Anhand {{Minimum Description Length}}: {{Wie}} Groß Muss Die {{Stichprobengröße}} Bei {{Anwendung}} Der {{Fisher Information Approximation}} Mindestens Sein?},
    booktitle = {49. {{Kongress}} Der {{Deutschen Gesellschaft}} Für {{Psychologie}}},
    date = {2014},
    keywords = {hecktalk},
    author = {Heck, Daniel W. and Moshagen, Morten and Erdfelder, Edgar}
    }

  • Heck, D. W., & Erdfelder, E. (2014). Response time modeling for finite-state models of recognition. Third European Summer School on Computational Modeling of Cognition with Applications to Society. Laufen, Germany.
    [BibTeX]
    @inproceedings{heck2014response,
    location = {{Laufen, Germany}},
    title = {Response Time Modeling for Finite-State Models of Recognition},
    booktitle = {Third {{European Summer School}} on {{Computational Modeling}} of {{Cognition}} with {{Applications}} to {{Society}}},
    date = {2014},
    keywords = {heckposter},
    author = {Heck, Daniel W. and Erdfelder, Edgar}
    }

2013

  • Heck, D. W., & Moshagen, M. (2013). Model selection by minimum description length: Performance of the Fisher information approximation. 46th Annual Meeting of the Society for Mathematical Psychology. Potsdam, Germany.
    [BibTeX]
    @inproceedings{heck2013model,
    location = {{Potsdam, Germany}},
    title = {Model Selection by Minimum Description Length: {{Performance}} of the {{Fisher}} Information Approximation},
    booktitle = {46th {{Annual Meeting}} of the {{Society}} for {{Mathematical Psychology}}},
    date = {2013},
    keywords = {heckposter},
    author = {Heck, Daniel W. and Moshagen, Morten}
    }

  • Heck, D. W., & Moshagen, M. (2013). Model selection of multinomial processing tree models – A Monte Carlo simulation. 55. Tagung experimentell arbeitender Psychologen. Pabst. Vienna, Austria.
    [BibTeX]
    @inproceedings{heck2013model-1,
    location = {{Vienna, Austria}},
    title = {Model Selection of Multinomial Processing Tree Models – {{A Monte Carlo}} Simulation},
    booktitle = {55. {{Tagung}} Experimentell Arbeitender {{Psychologen}}},
    publisher = {{Pabst}},
    date = {2013},
    keywords = {heckposter},
    author = {Heck, Daniel W. and Moshagen, Morten}
    }