Publications

Copyright Notice: The documents distributed here have been provided as a means to ensure timely dissemination of scholarly and technical work on a noncommercial basis. Copyright and all rights therein are maintained by the authors or by other copyright holders, notwithstanding that they have offered their works here electronically. It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author’s copyright. These works may not be reposted without the explicit permission of the copyright holder.

Submitted

  • [PDF] Heck, D. W., & Davis-Stober, C. P. (2018). Multinomial models with linear inequality constraints: Overview and improvements of computational methods for Bayesian inference. Manuscript submitted for publication.
    [BibTeX] [Abstract] [Data and R Scripts] [Preprint]

    Many psychological theories can be operationalized as linear inequality constraints on the parameters of multinomial distributions (e.g., discrete choice analysis). These constraints can be described in two equivalent ways: 1) as the solution set to a system of linear inequalities and 2) as the convex hull of a set of extremal points (vertices). For both representations, we describe a general Gibbs sampler for drawing posterior samples in order to carry out Bayesian analyses. We also summarize alternative sampling methods for estimating Bayes factors for these model representations using the encompassing Bayes factor method. We introduce the R package multinomineq, which provides an easily accessible interface to a computationally efficient C++ implementation of these techniques.

    @unpublished{heck2018multinomial,
    archivePrefix = {arXiv},
    eprinttype = {arxiv},
    eprint = {1808.07140},
    title = {Multinomial Models with Linear Inequality Constraints: {{Overview}} and Improvements of Computational Methods for {{Bayesian}} Inference},
    shorttitle = {Multinomial Models with Linear Inequality Constraints},
    abstract = {Many psychological theories can be operationalized as linear inequality constraints on the parameters of multinomial distributions (e.g., discrete choice analysis). These constraints can be described in two equivalent ways: 1) as the solution set to a system of linear inequalities and 2) as the convex hull of a set of extremal points (vertices). For both representations, we describe a general Gibbs sampler for drawing posterior samples in order to carry out Bayesian analyses. We also summarize alternative sampling methods for estimating Bayes factors for these model representations using the encompassing Bayes factor method. We introduce the R package multinomineq, which provides an easily-accessible interface to a computationally efficient C++ implementation of these techniques.},
    type = {Manuscript submitted for publication},
    howpublished = {Manuscript submitted for publication},
    date = {2018},
    author = {Heck, Daniel W and Davis-Stober, Clintin P},
    osf = {https://osf.io/xv9u3}
    }
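
    Illustration (not from the paper): the encompassing Bayes factor method summarized above can be approximated for a simple inequality constraint p1 < p2 on two binomial parameters by comparing the proportions of prior and posterior samples that satisfy the constraint. A minimal R sketch, assuming uniform priors and conjugate Beta updating (all names are illustrative; the package provides optimized C++ routines):

    set.seed(1)
    k <- c(12, 25)  # observed successes in two conditions
    n <- c(30, 30)  # number of trials per condition
    M <- 1e6        # Monte Carlo samples

    # Encompassing model: independent uniform priors on both probabilities
    prior1 <- runif(M); prior2 <- runif(M)
    post1 <- rbeta(M, 1 + k[1], 1 + n[1] - k[1])
    post2 <- rbeta(M, 1 + k[2], 1 + n[2] - k[2])

    # Proportions of samples satisfying the inequality constraint p1 < p2
    c_prior <- mean(prior1 < prior2)  # prior probability of the constraint (0.5 here)
    c_post  <- mean(post1 < post2)    # posterior probability of the constraint

    bf_ce <- c_post / c_prior  # Bayes factor: constrained vs. encompassing model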

  • [PDF] Heck, D. W. (2018). Accounting for estimation uncertainty and shrinkage in Bayesian within-subject intervals: A comment on Nathoo, Kilshaw, and Masson (2018). Manuscript under revision. Retrieved from https://psyarxiv.com/whp8t
    [BibTeX] [Abstract] [Data and R Scripts]

    To facilitate the interpretation of systematic mean differences in within-subject designs, Nathoo, Kilshaw, and Masson (2018, Journal of Mathematical Psychology, 86, 1-9) proposed a Bayesian within-subject highest-density interval (HDI). However, their approach rests on independent maximum-likelihood estimates for the random effects, which do not take estimation uncertainty and shrinkage into account. I propose an extension of Nathoo et al.’s method using a fully Bayesian, two-step approach. First, posterior samples are drawn for the linear mixed model. Second, the within-subject HDI is computed repeatedly based on the posterior samples, thereby accounting for estimation uncertainty and shrinkage. After marginalizing over the posterior distribution, the two-step approach results in a Bayesian within-subject HDI with a width similar to that of the classical within-subject confidence interval proposed by Loftus and Masson (1994, Psychonomic Bulletin & Review, 1, 476-490).

    @unpublished{heck2018accounting,
    title = {Accounting for Estimation Uncertainty and Shrinkage in {{Bayesian}} Within-Subject Intervals: {{A}} Comment on {{Nathoo}}, {{Kilshaw}}, and {{Masson}} (2018)},
    url = {https://psyarxiv.com/whp8t},
    abstract = {To facilitate the interpretation of systematic mean differences in within-subject designs, Nathoo, Kilshaw, and Masson (2018, Journal of Mathematical Psychology, 86, 1-9) proposed a Bayesian within-subject highest-density interval (HDI). However, their approach rests on independent maximum-likelihood estimates for the random effects which do not take estimation uncertainty and shrinkage into account. I propose an extension of Nathoo et al.'s method using a fully Bayesian, two-step approach. First, posterior samples are drawn for the linear mixed model. Second, the within-subject HDI is computed repeatedly based on the posterior samples, thereby accounting for estimation uncertainty and shrinkage. After marginalizing over the posterior distribution, the two-step approach results in a Bayesian within-subject HDI with a width similar to that of the classical within-subject confidence interval proposed by Loftus and Masson (1994, Psychonomic Bulletin \& Review, 1, 476-490).},
    type = {Manuscript under revision},
    howpublished = {Manuscript under revision},
    date = {2018},
    author = {Heck, Daniel W},
    osf = {https://osf.io/mrud9}
    }
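
    For reference, the classical Loftus-Masson (1994) within-subject confidence interval mentioned above can be computed as in the following R sketch (function name and input format are illustrative); the paper's two-step approach instead recomputes the interval across posterior draws from the linear mixed model:

    # Classical within-subject CI (Loftus & Masson, 1994) from a subjects x
    # conditions matrix of means.
    loftus_masson_ci <- function(y, level = .95) {
      n <- nrow(y); J <- ncol(y)
      # Normalize: remove subject effects by centering each subject at the grand mean
      y_w <- y - rowMeans(y) + mean(y)
      # Subject-by-condition interaction mean square of the normalized data
      ms_sxc <- sum(sweep(y_w, 2, colMeans(y))^2) / ((n - 1) * (J - 1))
      half <- qt(1 - (1 - level) / 2, df = (n - 1) * (J - 1)) * sqrt(ms_sxc / n)
      cbind(mean = colMeans(y), lower = colMeans(y) - half, upper = colMeans(y) + half)
    }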

Peer-Reviewed Articles

2019

  • [PDF] Heck, D. W., Erdfelder, E., & Kieslich, P. J. (in press). Generalized processing tree models: Jointly modeling discrete and continuous variables. Psychometrika. doi:10.1007/s11336-018-9622-0
    [BibTeX] [Abstract] [Data and R Scripts]

    Multinomial processing tree models assume that discrete cognitive states determine observed response frequencies. Generalized processing tree (GPT) models extend this conceptual framework to continuous variables such as response times, process-tracing measures, or neurophysiological variables. GPT models assume finite-mixture distributions, with weights determined by a processing tree structure, and continuous components modeled by parameterized distributions such as Gaussians with separate or shared parameters across states. We discuss identifiability, parameter estimation, model testing, a modeling syntax, and the improved precision of GPT estimates. Finally, a GPT version of the feature comparison model of semantic categorization is applied to computer-mouse trajectories.

    @article{heck2018generalized,
    title = {Generalized Processing Tree Models: {{Jointly}} Modeling Discrete and Continuous Variables},
    doi = {10.1007/s11336-018-9622-0},
    abstract = {Multinomial processing tree models assume that discrete cognitive states determine observed response frequencies. Generalized processing tree (GPT) models extend this conceptual framework to continuous variables such as response times, process-tracing measures, or neurophysiological variables. GPT models assume finite-mixture distributions, with weights determined by a processing tree structure, and continuous components modeled by parameterized distributions such as Gaussians with separate or shared parameters across states. We discuss identifiability, parameter estimation, model testing, a modeling syntax, and the improved precision of GPT estimates. Finally, a GPT version of the feature comparison model of semantic categorization is applied to computer-mouse trajectories.},
    journaltitle = {Psychometrika},
    date = {2019},
    keywords = {heckfirst},
    author = {Heck, Daniel W and Erdfelder, Edgar and Kieslich, Pascal J},
    pubstate = {inpress},
    osf = {https://osf.io/fyeum}
    }
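
    A toy illustration of the GPT idea described above, not the paper's implementation: a single tree parameter theta determines the mixture weights of two latent Gaussian components with a shared standard deviation. A minimal R sketch with illustrative names and values:

    # Toy GPT likelihood: tree parameter theta mixes two latent Gaussian states.
    gpt_loglik <- function(par, y) {
      theta <- plogis(par[1])   # mixture weight on (0, 1)
      mu <- par[2:3]            # state-specific means
      sigma <- exp(par[4])      # shared SD, log-parameterized
      sum(log(theta * dnorm(y, mu[1], sigma) +
              (1 - theta) * dnorm(y, mu[2], sigma)))
    }
    y <- c(rnorm(70, 0.6, 0.2), rnorm(30, 1.1, 0.2))  # simulated continuous data
    fit <- optim(c(0, 0.5, 1.2, log(0.3)), gpt_loglik, y = y,
                 control = list(fnscale = -1))         # maximize the log-likelihood
    plogis(fit$par[1])                                 # estimated mixture weight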

  • [PDF] Heck, D. W., Overstall, A., Gronau, Q. F., & Wagenmakers, E. (in press). Quantifying uncertainty in transdimensional Markov chain Monte Carlo using discrete Markov models. Statistics and Computing. doi:10.1007/s11222-018-9828-0
    [BibTeX] [Abstract] [Data and R Scripts] [Preprint]

    Bayesian analysis often concerns an evaluation of models with different dimensionality as is necessary in, for example, model selection or mixture models. To facilitate this evaluation, transdimensional Markov chain Monte Carlo (MCMC) relies on sampling a discrete indexing variable to estimate the posterior model probabilities. However, little attention has been paid to the precision of these estimates. If only a few switches occur between the models in the transdimensional MCMC output, precision may be low and assessment based on the assumption of independent samples misleading. Here, we propose a new method to estimate the precision based on the observed transition matrix of the model-indexing variable. Assuming a first-order Markov model, the method samples from the posterior of the stationary distribution. This allows assessment of the uncertainty in the estimated posterior model probabilities, model ranks, and Bayes factors. Moreover, the method provides an estimate for the effective sample size of the MCMC output. In two model-selection examples, we show that the proposed approach provides a good assessment of the uncertainty associated with the estimated posterior model probabilities.

    @article{heck2018quantifying,
    archivePrefix = {arXiv},
    eprinttype = {arxiv},
    eprint = {1703.10364},
    title = {Quantifying Uncertainty in Transdimensional {{Markov}} Chain {{Monte Carlo}} Using Discrete {{Markov}} Models},
    doi = {10.1007/s11222-018-9828-0},
    abstract = {Bayesian analysis often concerns an evaluation of models with different dimensionality as is necessary in, for example, model selection or mixture models. To facilitate this evaluation, transdimensional Markov chain Monte Carlo (MCMC) relies on sampling a discrete indexing variable to estimate the posterior model probabilities. However, little attention has been paid to the precision of these estimates. If only few switches occur between the models in the transdimensional MCMC output, precision may be low and assessment based on the assumption of independent samples misleading. Here, we propose a new method to estimate the precision based on the observed transition matrix of the model-indexing variable. Assuming a first order Markov model, the method samples from the posterior of the stationary distribution. This allows assessment of the uncertainty in the estimated posterior model probabilities, model ranks, and Bayes factors. Moreover, the method provides an estimate for the effective sample size of the MCMC output. In two model-selection examples, we show that the proposed approach provides a good assessment of the uncertainty associated with the estimated posterior model probabilities.},
    journaltitle = {Statistics and Computing},
    date = {2019},
    keywords = {heckfirst},
    author = {Heck, Daniel W and Overstall, Antony and Gronau, Quentin F and Wagenmakers, Eric-Jan},
    pubstate = {inpress},
    osf = {https://osf.io/kjrkz}
    }
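
    The core of the proposed method can be sketched in a few lines of R (illustrative names; the authors provide an optimized implementation): tabulate the transitions of the model-indexing variable, draw each row of the transition matrix from its Dirichlet posterior, and compute the stationary distribution of every draw:

    # Posterior draws of the model probabilities from a chain z of sampled model
    # indices (coded 1, ..., n_models), assuming a first-order Markov chain.
    stationary_draws <- function(z, n_models = max(z), draws = 1000, a = 1 / n_models) {
      # Transition counts of the model-indexing variable
      N <- table(factor(head(z, -1), levels = 1:n_models),
                 factor(tail(z, -1), levels = 1:n_models))
      out <- matrix(NA, draws, n_models)
      for (i in seq_len(draws)) {
        # Draw each row of the transition matrix from its Dirichlet posterior
        P <- t(apply(N + a, 1, function(x) { g <- rgamma(n_models, x); g / sum(g) }))
        # Stationary distribution: eigenvector of t(P) for the eigenvalue 1
        v <- Re(eigen(t(P))$vectors[, 1])
        out[i, ] <- v / sum(v)
      }
      out  # rows: posterior draws of the posterior model probabilities
    }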

  • [PDF] Heck, D. W. (in press). A caveat on the Savage-Dickey density ratio: The case of computing Bayes factors for regression parameters. British Journal of Mathematical and Statistical Psychology. doi:10.31234/osf.io/7dzsj
    [BibTeX] [Abstract] [Data and R Scripts]

    The Savage-Dickey density ratio is a simple method for computing the Bayes factor for an equality constraint on one or more parameters of a statistical model. In regression analysis, this includes the important scenario of testing whether one or more of the covariates have an effect on the dependent variable. However, the Savage-Dickey ratio only provides the correct Bayes factor if the prior distribution of the nuisance parameters under the nested model is identical to the conditional prior under the full model given the equality constraint. This condition is violated for multiple regression models with a Jeffreys-Zellner-Siow (JZS) prior, which is often used as a default prior in psychology. Besides linear regression models, the limitation of the Savage-Dickey ratio is especially relevant when analytical solutions for the Bayes factor are not available. This is the case for generalized linear models, nonlinear models, or cognitive process models with regression extensions. As a remedy, the correct Bayes factor can be computed using a generalized version of the Savage-Dickey density ratio.

    @article{heck2018caveat,
    title = {A Caveat on the {{Savage}}-{{Dickey}} Density Ratio: {{The}} Case of Computing {{Bayes}} Factors for Regression Parameters},
    doi = {10.31234/osf.io/7dzsj},
    abstract = {The Savage-Dickey density ratio is a simple method for computing the Bayes factor for an equality constraint on one or more parameters of a statistical model. In regression analysis, this includes the important scenario of testing whether one or more of the covariates have an effect on the dependent variable. However, the Savage-Dickey ratio only provides the correct Bayes factor if the prior distribution of the nuisance parameters under the nested model is identical to the conditional prior under the full model given the equality constraint. This condition is violated for multiple regression models with a Jeffreys-Zellner-Siow (JZS) prior, which is often used as a default prior in psychology. Besides linear regression models, the limitation of the Savage-Dickey ratio is especially relevant when analytical solutions for the Bayes factor are not available. This is the case for generalized linear models, nonlinear models, or cognitive process models with regression extensions. As a remedy, the correct Bayes factor can be computed using a generalized version of the Savage-Dickey density ratio.},
    journaltitle = {British Journal of Mathematical and Statistical Psychology},
    date = {2019},
    author = {Heck, Daniel W},
    pubstate = {inpress},
    osf = {https://osf.io/5hpuc}
    }
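
    For context, the standard Savage-Dickey ratio that the paper qualifies can be estimated from posterior samples as sketched below; the kernel density estimator and the Cauchy prior scale are illustrative assumptions, and the estimate is valid only when the prior-matching condition discussed above holds:

    # BF01 for H0: beta = 0 via the Savage-Dickey ratio: posterior density at zero
    # divided by prior density at zero.
    savage_dickey_bf01 <- function(post_beta, prior_dens_at_0) {
      post_dens_at_0 <- approxfun(density(post_beta))(0)  # kernel estimate at beta = 0
      post_dens_at_0 / prior_dens_at_0
    }
    # Example with posterior samples post_beta of a standardized slope and a
    # Cauchy(0, sqrt(2)/2) prior:
    # savage_dickey_bf01(post_beta, dcauchy(0, scale = sqrt(2) / 2))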

  • [PDF] Klein, S. A., Heck, D. W., Reese, G., & Hilbig, B. E. (in press). On the relationship between Openness to Experience, political orientation, and pro-environmental behavior. Personality and Individual Differences.
    [BibTeX] [Abstract] [Data and R Scripts]

    Previous research consistently showed that Openness to Experience is positively linked to pro-environmental behavior. However, this does not appear to hold whenever pro-environmental behavior is mutually exclusive with cooperation. The present study aimed to replicate this null effect of Openness and to test political orientation as an explanatory variable: Openness is associated with a left-wing/liberal political orientation, which, in turn, is associated with both cooperation and pro-environmental behavior, thus creating a decision conflict whenever the latter are mutually exclusive. In an online study (N = 355), participants played the Greater Good Game, a social dilemma involving choice conflict between pro-environmental behavior and cooperation. Results both replicated prior findings and suggested that political orientation could indeed account for the null effect of Openness.

    @article{klein2018relationship,
    title = {On the Relationship between {{Openness}} to {{Experience}}, Political Orientation, and pro-Environmental Behavior},
    abstract = {Previous research consistently showed that Openness to Experience is positively linked to pro-environmental behavior. However, this does not appear to hold whenever pro-environmental behavior is mutually exclusive with cooperation. The present study aimed to replicate this null effect of Openness and to test political orientation as explanatory variable: Openness is associated with a left-wing/liberal political orientation, which, in turn, is associated with both cooperation and pro-environmental behavior, thus creating a decision conflict whenever the latter are mutually exclusive. In an online study (N = 355) participants played the Greater Good Game, a social dilemma involving choice conflict between pro-environmental behavior and cooperation. Results both replicated prior findings and suggested that political orientation could indeed account for the null effect of Openness.},
    journaltitle = {Personality and Individual Differences},
    date = {2019},
    author = {Klein, Sina A and Heck, Daniel W and Reese, Gerhard and Hilbig, Benjamin E},
    pubstate = {inpress},
    osf = {https://osf.io/gxjc9}
    }

  • [PDF] Plieninger, H., & Heck, D. W. (in press). A new model for acquiescence at the interface of psychometrics and cognitive psychology. Multivariate Behavioral Research. doi:10.1080/00273171.2018.1469966
    [BibTeX] [Abstract] [GitHub]

    When measuring psychological traits, one has to consider that respondents often show content-unrelated response behavior in answering questionnaires. To disentangle the target trait and two such response styles, extreme responding and midpoint responding, Böckenholt (2012, Psychological Methods, 17, 665–678) developed an item response model based on a latent processing tree structure. We propose a theoretically motivated extension of this model to also measure acquiescence, the tendency to agree with both regular and reversed items. Substantively, our approach builds on multinomial processing tree (MPT) models that are used in cognitive psychology to disentangle qualitatively distinct processes. Accordingly, the new model for response styles assumes a mixture distribution of affirmative responses, which are either determined by the underlying target trait or by acquiescence. In order to estimate the model parameters, we rely on Bayesian hierarchical estimation of MPT models. In simulations, we show that the model provides unbiased estimates of response styles and the target trait, and we compare the new model and Böckenholt’s model in a recovery study. An empirical example from personality psychology is used for illustrative purposes.

    @article{plieninger2018new,
    title = {A New Model for Acquiescence at the Interface of Psychometrics and Cognitive Psychology},
    doi = {10.1080/00273171.2018.1469966},
    abstract = {When measuring psychological traits, one has to consider that respondents often show content-unrelated response behavior in answering questionnaires. To disentangle the target trait and two such response styles, extreme responding and midpoint responding, Böckenholt (2012, Psychological Methods, 17, 665–678) developed an item response model based on a latent processing tree structure. We propose a theoretically motivated extension of this model to also measure acquiescence, the tendency to agree with both regular and reversed items. Substantively, our approach builds on multinomial processing tree (MPT) models that are used in cognitive psychology to disentangle qualitatively distinct processes. Accordingly, the new model for response styles assumes a mixture distribution of affirmative responses, which are either determined by the underlying target trait or by acquiescence. In order to estimate the model parameters, we rely on Bayesian hierarchical estimation of MPT models. In simulations, we show that the model provides unbiased estimates of response styles and the target trait, and we compare the new model and Böckenholt's model in a recovery study. An empirical example from personality psychology is used for illustrative purposes.},
    journaltitle = {Multivariate Behavioral Research},
    date = {2019},
    author = {Plieninger, Hansjörg and Heck, Daniel W},
    pubstate = {inpress},
    github = {https://github.com/hplieninger/mpt2irt}
    }

  • Ścigała, K., Schild, C., Heck, D. W., & Zettler, I. (in press). Who deals with the devil: Interdependence, personality, and corrupted collaboration. Social Psychological and Personality Science.
    [BibTeX] [Abstract] [Data and R Scripts]

    Corrupted collaboration, i.e., gaining personal profits through collaborative immoral acts, is a common and destructive phenomenon in societies. Despite the societal relevance of corrupted collaboration, the role of one’s own as well as one’s partner’s characteristics has hitherto remained largely unexplored. In the present study, we test these roles using the sequential dyadic die-rolling paradigm (N = 499 across five conditions). Our results indicate that interacting with a fully dishonest partner leads to higher cheating rates than interacting with a fully honest partner, although being paired with a fully honest partner does not eliminate dishonesty completely. Furthermore, we found that the basic personality dimension of Honesty-Humility is consistently negatively related to collaborative dishonesty irrespective of whether participants interact with fully honest or fully dishonest partners. Overall, our investigation provides a comprehensive view of the role of interaction partner’s characteristics in settings allowing for corrupted collaboration.

    @article{scigala2019who,
    title = {Who Deals with the Devil: {{Interdependence}}, Personality, and Corrupted Collaboration},
    abstract = {Corrupted collaboration, i.e., gaining personal profits through collaborative immoral acts, is a common and destructive phenomenon in societies. Despite the societal relevance of corrupted collaboration, the role of one's own as well as one's partner's characteristics has hitherto remained unexplained. In the present study, we test these roles using the sequential dyadic die-rolling paradigm (N = 499 across five conditions). Our results indicate that interacting with a fully dishonest partner leads to higher cheating rates than interacting with a fully honest partner, although being paired with a fully honest partner does not eliminate dishonesty completely. Furthermore, we found that the basic personality dimension of Honesty-Humility is consistently negatively related to collaborative dishonesty irrespective of whether participants interact with fully honest or fully dishonest partners. Overall, our investigation provides a comprehensive view of the role of interaction partner’s characteristics in settings allowing for corrupted collaboration.},
    journaltitle = {Social Psychological and Personality Science},
    date = {2019},
    author = {Ścigała, Karolina and Schild, Christoph and Heck, Daniel W and Zettler, Ingo},
    osf = {https://osf.io/t7r3h},
    pubstate = {inpress}
    }

2018

  • [PDF] Heck, D. W., & Moshagen, M. (2018). RRreg: An R package for correlation and regression analyses of randomized response data. Journal of Statistical Software, 85(2), 1-29. doi:10.18637/jss.v085.i02
    [BibTeX] [Abstract] [GitHub]

    The randomized-response (RR) technique was developed to improve the validity of measures assessing attitudes, behaviors, and attributes threatened by social desirability bias. The RR removes any direct link between individual responses and the sensitive attribute to maximize the anonymity of respondents and, in turn, to elicit more honest responding. Since multivariate analyses are no longer feasible using standard methods, we present the R package RRreg that allows for multivariate analyses of RR data in a user-friendly way. We show how to compute bivariate correlations, how to predict an RR variable in an adapted logistic regression framework (with or without random effects), and how to use RR predictors in a modified linear regression. In addition, the package allows for power analyses and robustness simulations. To facilitate the application of these methods, we illustrate the benefits of multivariate methods for RR variables using an empirical example.

    @article{heck2018rrreg,
    title = {{{RRreg}}: {{An R}} Package for Correlation and Regression Analyses of Randomized Response Data},
    volume = {85},
    number = {2},
    doi = {10.18637/jss.v085.i02},
    abstract = {The randomized-response (RR) technique was developed to improve the validity of measures assessing attitudes, behaviors, and attributes threatened by social desirability bias. The RR removes any direct link between individual responses and the sensitive attribute to maximize the anonymity of respondents and, in turn, to elicit more honest responding. Since multivariate analyses are no longer feasible using standard methods, we present the R package RRreg that allows for multivariate analyses of RR data in a user-friendly way. We show how to compute bivariate correlations, how to predict an RR variable in an adapted logistic regression framework (with or without random effects), and how to use RR predictors in a modified linear regression. In addition, the package allows for power-analysis and robustness simulations. To facilitate the application of these methods, we illustrate the benefits of multivariate methods for RR variables using an empirical example.},
    journaltitle = {Journal of Statistical Software},
    date = {2018},
    pages = {1-29},
    keywords = {heckfirst},
    author = {Heck, Daniel W and Moshagen, Morten},
    github = {https://github.com/danheck/RRreg}
    }
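
    A typical RRreg workflow might look as follows; the function names match the package described above, but argument details are recalled from memory and should be checked against the package documentation (?RRgen, ?RRlog):

    library(RRreg)
    set.seed(123)
    # Simulate Warner-model randomized responses with a true prevalence of 30%
    d <- RRgen(n = 500, pi.true = .30, model = "Warner", p = .35)
    d$cov <- d$true + rnorm(500)  # a covariate related to the sensitive attribute
    # Adapted logistic regression with the RR variable as outcome
    fit <- RRlog(response ~ cov, data = d, model = "Warner", p = .35)
    summary(fit)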

  • [PDF] Heck, D. W., Arnold, N. R., & Arnold, D. (2018). TreeBUGS: An R package for hierarchical multinomial-processing-tree modeling. Behavior Research Methods, 50, 264-284. doi:10.3758/s13428-017-0869-7
    [BibTeX] [Abstract] [Data and R Scripts]

    Multinomial processing tree (MPT) models are a class of measurement models that account for categorical data by assuming a finite number of underlying cognitive processes. Traditionally, data are aggregated across participants and analyzed under the assumption of independently and identically distributed observations. Hierarchical Bayesian extensions of MPT models explicitly account for participant heterogeneity by assuming that the individual parameters follow a continuous hierarchical distribution. We provide an accessible introduction to hierarchical MPT modeling and present the user-friendly and comprehensive R package TreeBUGS, which implements the two most important hierarchical MPT approaches for participant heterogeneity—the beta-MPT approach (Smith & Batchelder, Journal of Mathematical Psychology 54:167-183, 2010) and the latent-trait MPT approach (Klauer, Psychometrika 75:70-98, 2010). TreeBUGS reads standard MPT model files and obtains Markov-chain Monte Carlo samples that approximate the posterior distribution. The functionality and output are tailored to the specific needs of MPT modelers and provide tests for the homogeneity of items and participants, individual and group parameter estimates, fit statistics, and within- and between-subjects comparisons, as well as goodness-of-fit and summary plots. We also propose and implement novel statistical extensions to include continuous and discrete predictors (as either fixed or random effects) in the latent-trait MPT model.

    @article{heck2018treebugs,
    langid = {english},
    title = {{{TreeBUGS}}: {{An R}} Package for Hierarchical Multinomial-Processing-Tree Modeling},
    volume = {50},
    doi = {10.3758/s13428-017-0869-7},
    shorttitle = {{{TreeBUGS}}},
    abstract = {Multinomial processing tree (MPT) models are a class of measurement models that account for categorical data by assuming a finite number of underlying cognitive processes. Traditionally, data are aggregated across participants and analyzed under the assumption of independently and identically distributed observations. Hierarchical Bayesian extensions of MPT models explicitly account for participant heterogeneity by assuming that the individual parameters follow a continuous hierarchical distribution. We provide an accessible introduction to hierarchical MPT modeling and present the user-friendly and comprehensive R package TreeBUGS, which implements the two most important hierarchical MPT approaches for participant heterogeneity—the beta-MPT approach (Smith \& Batchelder, Journal of Mathematical Psychology 54:167-183, 2010) and the latent-trait MPT approach (Klauer, Psychometrika 75:70-98, 2010). TreeBUGS reads standard MPT model files and obtains Markov-chain Monte Carlo samples that approximate the posterior distribution. The functionality and output are tailored to the specific needs of MPT modelers and provide tests for the homogeneity of items and participants, individual and group parameter estimates, fit statistics, and within- and between-subjects comparisons, as well as goodness-of-fit and summary plots. We also propose and implement novel statistical extensions to include continuous and discrete predictors (as either fixed or random effects) in the latent-trait MPT model.},
    journaltitle = {Behavior Research Methods},
    shortjournal = {Behav Res},
    date = {2018},
    pages = {264-284},
    keywords = {heckfirst},
    author = {Heck, Daniel W and Arnold, Nina R. and Arnold, Denis},
    osf = {https://osf.io/s82bw}
    }
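
    An illustrative TreeBUGS call for the latent-trait approach described above; the file names are placeholders, the MCMC settings arbitrary, and the interface should be checked against the package documentation:

    library(TreeBUGS)
    # Fit a latent-trait MPT (Klauer, 2010); "2htm.eqn" and "data.csv" are
    # placeholder file names for the model equations and response frequencies.
    fit <- traitMPT(eqnfile = "2htm.eqn", data = "data.csv",
                    n.chains = 4, n.iter = 20000, n.burnin = 5000)
    summary(fit)
    plotFit(fit)  # posterior-predictive plot of observed vs. predicted frequencies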

  • [PDF] Heck, D. W., Thielmann, I., Moshagen, M., & Hilbig, B. E. (2018). Who lies? A large-scale reanalysis linking basic personality traits to unethical decision making. Judgment and Decision Making, 13, 356–371. Retrieved from http://journal.sjdm.org/18/18322/jdm18322.pdf
    [BibTeX] [Abstract] [Data and R Scripts]

    Previous research has established that higher levels of trait Honesty-Humility (HH) are associated with less dishonest behavior in cheating paradigms. However, only imprecise effect size estimates of this HH-cheating link are available. Moreover, evidence is inconclusive on whether other basic personality traits from the HEXACO or Big Five models are associated with unethical decision making and whether such effects have incremental validity beyond HH. We address these issues in a highly powered reanalysis of 16 studies assessing dishonest behavior in an incentivized, one-shot cheating paradigm (N = 5,002). For this purpose, we rely on a newly developed logistic regression approach for the analysis of nested data in cheating paradigms. We also test theoretically derived interactions of HH with other basic personality traits (i.e., Emotionality and Conscientiousness) and situational factors (i.e., the baseline probability of observing a favorable outcome) as well as the incremental validity of HH over demographic characteristics. The results show a medium to large effect of HH (odds ratio = 0.53), which was independent of other personality, situational, or demographic variables. Only one other trait (Big Five Agreeableness) was associated with unethical decision making, although it failed to show any incremental validity beyond HH.

    @article{heck2018who,
    title = {Who Lies? {{A}} Large-Scale Reanalysis Linking Basic Personality Traits to Unethical Decision Making},
    volume = {13},
    url = {http://journal.sjdm.org/18/18322/jdm18322.pdf},
    abstract = {Previous research has established that higher levels of trait Honesty-Humility (HH) are associated with less dishonest behavior in cheating paradigms. However, only imprecise effect size estimates of this HH-cheating link are available. Moreover, evidence is inconclusive on whether other basic personality traits from the HEXACO or Big Five models are associated with unethical decision making and whether such effects have incremental validity beyond HH. We address these issues in a highly powered reanalysis of 16 studies assessing dishonest behavior in an incentivized, one-shot cheating paradigm (N = 5,002). For this purpose, we rely on a newly developed logistic regression approach for the analysis of nested data in cheating paradigms. We also test theoretically derived interactions of HH with other basic personality traits (i.e., Emotionality and Conscientiousness) and situational factors (i.e., the baseline probability of observing a favorable outcome) as well as the incremental validity of HH over demographic characteristics. The results show a medium to large effect of HH (odds ratio = 0.53), which was independent of other personality, situational, or demographic variables. Only one other trait (Big Five Agreeableness) was associated with unethical decision making, although it failed to show any incremental validity beyond HH.},
    journaltitle = {Judgment and Decision Making},
    date = {2018},
    pages = {356--371},
    author = {Heck, Daniel W and Thielmann, Isabel and Moshagen, Morten and Hilbig, Benjamin E},
    osf = {https://osf.io/56hw4}
    }
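
    The logistic regression approach mentioned above builds on the identity that the probability of reporting a favorable outcome equals the baseline probability b plus (1 - b) times the probability of lying. A minimal R sketch of such a likelihood (an illustrative form, not the authors' exact specification):

    # Log-likelihood of a cheating-paradigm logistic regression (illustrative):
    # P(favorable report) = b + (1 - b) * plogis(X %*% beta), with b the known
    # baseline probability of truthfully observing a favorable outcome.
    ll_cheat <- function(beta, y, X, b) {
      p_lie <- plogis(X %*% beta)   # modeled probability of lying
      p_fav <- b + (1 - b) * p_lie  # probability of reporting a favorable outcome
      sum(dbinom(y, size = 1, prob = p_fav, log = TRUE))
    }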

  • [PDF] Heck, D. W., Hoffmann, A., & Moshagen, M. (2018). Detecting nonadherence without loss in efficiency: A simple extension of the crosswise model. Behavior Research Methods, 50, 1895-1905. doi:10.3758/s13428-017-0957-8
    [BibTeX] [Abstract] [Data and R Scripts]

    In surveys concerning sensitive behavior or attitudes, respondents often do not answer truthfully, because of social desirability bias. To elicit more honest responding, the randomized-response (RR) technique aims at increasing perceived and actual anonymity by prompting respondents to answer with a randomly modified and thus uninformative response. In the crosswise model, as a particularly promising variant of the RR, this is achieved by adding a second, nonsensitive question and by prompting respondents to answer both questions jointly. Despite increased privacy protection and empirically higher prevalence estimates of socially undesirable behaviors, evidence also suggests that some respondents might still not adhere to the instructions, in turn leading to questionable results. Herein we propose an extension of the crosswise model (ECWM) that makes it possible to detect several types of response biases with adequate power in realistic sample sizes. Importantly, the ECWM allows for testing the validity of the model’s assumptions without any loss in statistical efficiency. Finally, we provide an empirical example supporting the usefulness of the ECWM.

    @article{heck2018detecting,
    langid = {english},
    title = {Detecting Nonadherence without Loss in Efficiency: {{A}} Simple Extension of the Crosswise Model},
    volume = {50},
    doi = {10.3758/s13428-017-0957-8},
    shorttitle = {Detecting Nonadherence without Loss in Efficiency},
    abstract = {In surveys concerning sensitive behavior or attitudes, respondents often do not answer truthfully, because of social desirability bias. To elicit more honest responding, the randomized-response (RR) technique aims at increasing perceived and actual anonymity by prompting respondents to answer with a randomly modified and thus uninformative response. In the crosswise model, as a particularly promising variant of the RR, this is achieved by adding a second, nonsensitive question and by prompting respondents to answer both questions jointly. Despite increased privacy protection and empirically higher prevalence estimates of socially undesirable behaviors, evidence also suggests that some respondents might still not adhere to the instructions, in turn leading to questionable results. Herein we propose an extension of the crosswise model (ECWM) that makes it possible to detect several types of response biases with adequate power in realistic sample sizes. Importantly, the ECWM allows for testing the validity of the model’s assumptions without any loss in statistical efficiency. Finally, we provide an empirical example supporting the usefulness of the ECWM.},
    journaltitle = {Behavior Research Methods},
    shortjournal = {Behav Res},
    date = {2018},
    pages = {1895-1905},
    keywords = {Sensitive questions,Randomized response,Measurement model,Social desirability,Survey design},
    author = {Heck, Daniel W. and Hoffmann, Adrian and Moshagen, Morten},
    osf = {https://osf.io/mxjgf}
    }
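
    In the basic crosswise model underlying the ECWM, the probability of a "both yes or both no" response is π = λp + (1 − λ)(1 − p), where λ is the prevalence of the sensitive attribute and p the known prevalence of the nonsensitive question. Solving for λ yields the moment estimator sketched below in R (the ECWM's nonadherence checks are omitted):

    # Moment estimator and delta-method standard error for the crosswise model.
    crosswise_est <- function(n_same, n_total, p) {
      stopifnot(p != .5)                        # p = .5 leaves lambda unidentified
      pi_hat <- n_same / n_total                # observed proportion of "same" answers
      lambda <- (pi_hat + p - 1) / (2 * p - 1)  # estimated prevalence
      se <- sqrt(pi_hat * (1 - pi_hat) / n_total) / abs(2 * p - 1)
      c(lambda = lambda, se = se)
    }
    crosswise_est(n_same = 210, n_total = 500, p = .25)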

  • [PDF] Miller, R., Scherbaum, S., Heck, D. W., Goschke, T., & Enge, S. (2018). On the relation between the (censored) shifted Wald and the Wiener distribution as measurement models for choice response times. Applied Psychological Measurement, 42, 116-135. doi:10.1177/0146621617710465
    [BibTeX] [Abstract]

    Inferring processes or constructs from performance data is a major hallmark of cognitive psychometrics. Particularly, diffusion modeling of response times (RTs) from correct and erroneous responses using the Wiener distribution has become a popular measurement tool because it provides a set of psychologically interpretable parameters. However, an important precondition to identify all of these parameters is a sufficient number of RTs from erroneous responses. In the present article, we show by simulation that the parameters of the Wiener distribution can be recovered from tasks yielding very high or even perfect response accuracies using the shifted Wald distribution. Specifically, we argue that error RTs can be modeled as correct RTs that have undergone censoring by using techniques from parametric survival analysis. We illustrate our reasoning by fitting the Wiener and (censored) shifted Wald distribution to RTs from six participants who completed a Go/No-go task. In accordance with our simulations, diffusion modeling using the Wiener and the shifted Wald distribution yielded identical parameter estimates when the number of erroneous responses was predicted to be low. Moreover, the modeling of error RTs as censored correct RTs substantially improved the recovery of these diffusion parameters when premature trial timeout was introduced to increase the number of omission errors. Thus, the censored shifted Wald distribution provides a suitable means for diffusion modeling in situations when the Wiener distribution cannot be fitted without parametric constraints.

    @article{miller2018relation,
    title = {On the Relation between the (Censored) Shifted {{Wald}} and the {{Wiener}} Distribution as Measurement Models for Choice Response Times},
    volume = {42},
    doi = {10.1177/0146621617710465},
    abstract = {Inferring processes or constructs from performance data is a major hallmark of cognitive psychometrics. Particularly, diffusion modeling of response times (RTs) from correct and erroneous responses using the Wiener distribution has become a popular measurement tool because it provides a set of psychologically interpretable parameters. However, an important precondition to identify all of these parameters is a sufficient number of RTs from erroneous responses. In the present article, we show by simulation that the parameters of the Wiener distribution can be recovered from tasks yielding very high or even perfect response accuracies using the shifted Wald distribution. Specifically, we argue that error RTs can be modeled as correct RTs that have undergone censoring by using techniques from parametric survival analysis. We illustrate our reasoning by fitting the Wiener and (censored) shifted Wald distribution to RTs from six participants who completed a Go/No-go task. In accordance with our simulations, diffusion modeling using the Wiener and the shifted Wald distribution yielded identical parameter estimates when the number of erroneous responses was predicted to be low. Moreover, the modeling of error RTs as censored correct RTs substantially improved the recovery of these diffusion parameters when premature trial timeout was introduced to increase the number of omission errors. Thus, the censored shifted Wald distribution provides a suitable means for diffusion modeling in situations when the Wiener distribution cannot be fitted without parametric constraints.},
    journaltitle = {Applied Psychological Measurement},
    date = {2018},
    pages = {116-135},
    author = {Miller, Robert and Scherbaum, S and Heck, Daniel W and Goschke, Thomas and Enge, Soeren}
    }
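
    A bare-bones R sketch of the censoring idea described above, not the authors' implementation: the shifted Wald density (drift gamma, threshold alpha, shift theta; this parameterization is an assumption) and a log-likelihood in which censored trials (e.g., error RTs) contribute the survival function:

    # Shifted Wald (inverse Gaussian) density with drift gamma, threshold alpha,
    # and shift theta.
    dswald <- function(t, gamma, alpha, theta) {
      x <- t - theta
      ifelse(x > 0,
             alpha / sqrt(2 * pi * x^3) * exp(-(alpha - gamma * x)^2 / (2 * x)),
             0)
    }
    # Censored log-likelihood: censored trials enter via the survival function,
    # computed here by numerical integration for brevity.
    ll_censored_swald <- function(par, t, censored) {
      gamma <- par[1]; alpha <- par[2]; theta <- par[3]
      surv <- sapply(t, function(ti)
        1 - integrate(dswald, theta, ti, gamma = gamma,
                      alpha = alpha, theta = theta)$value)
      sum(log(ifelse(censored, surv, dswald(t, gamma, alpha, theta))))
    }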

2017

  • [PDF] Gronau, Q. F., van Erp, S., Heck, D. W., Cesario, J., Jonas, K. J., & Wagenmakers, E. (2017). A Bayesian model-averaged meta-analysis of the power pose effect with informed and default priors: The case of felt power. Comprehensive Results in Social Psychology, 2, 123-138. doi:10.1080/23743603.2017.1326760
    [BibTeX] [Abstract] [Data and R Scripts]

    Earlier work found that – compared to participants who adopted constrictive body postures – participants who adopted expansive body postures reported feeling more powerful, showed an increase in testosterone and a decrease in cortisol, and displayed an increased tolerance for risk. However, these power pose effects have recently come under considerable scrutiny. Here, we present a Bayesian meta-analysis of six preregistered studies from this special issue, focusing on the effect of power posing on felt power. Our analysis improves on standard classical meta-analyses in several ways. First and foremost, we considered only preregistered studies, eliminating concerns about publication bias. Second, the Bayesian approach enables us to quantify evidence for both the alternative and the null hypothesis. Third, we use Bayesian model-averaging to account for the uncertainty with respect to the choice for a fixed-effect model or a random-effect model. Fourth, based on a literature review, we obtained an empirically informed prior distribution for the between-study heterogeneity of effect sizes. This empirically informed prior can serve as a default choice not only for the investigation of the power pose effect but for effects in the field of psychology more generally. For effect size, we considered a default and an informed prior. Our meta-analysis yields very strong evidence for an effect of power posing on felt power. However, when the analysis is restricted to participants unfamiliar with the effect, the meta-analysis yields evidence that is only moderate.

    @article{gronau2017bayesian,
    title = {A {{Bayesian}} Model-Averaged Meta-Analysis of the Power Pose Effect with Informed and Default Priors: The Case of Felt Power},
    volume = {2},
    doi = {10.1080/23743603.2017.1326760},
    shorttitle = {A {{Bayesian}} Model-Averaged Meta-Analysis of the Power Pose Effect with Informed and Default Priors},
    abstract = {Earlier work found that – compared to participants who adopted constrictive body postures – participants who adopted expansive body postures reported feeling more powerful, showed an increase in testosterone and a decrease in cortisol, and displayed an increased tolerance for risk. However, these power pose effects have recently come under considerable scrutiny. Here, we present a Bayesian meta-analysis of six preregistered studies from this special issue, focusing on the effect of power posing on felt power. Our analysis improves on standard classical meta-analyses in several ways. First and foremost, we considered only preregistered studies, eliminating concerns about publication bias. Second, the Bayesian approach enables us to quantify evidence for both the alternative and the null hypothesis. Third, we use Bayesian model-averaging to account for the uncertainty with respect to the choice for a fixed-effect model or a random-effect model. Fourth, based on a literature review, we obtained an empirically informed prior distribution for the between-study heterogeneity of effect sizes. This empirically informed prior can serve as a default choice not only for the investigation of the power pose effect but for effects in the field of psychology more generally. For effect size, we considered a default and an informed prior. Our meta-analysis yields very strong evidence for an effect of power posing on felt power. However, when the analysis is restricted to participants unfamiliar with the effect, the meta-analysis yields evidence that is only moderate.},
    journaltitle = {Comprehensive Results in Social Psychology},
    date = {2017},
    pages = {123-138},
    author = {Gronau, Quentin F. and van Erp, Sara and Heck, Daniel W and Cesario, Joseph and Jonas, Kai J. and Wagenmakers, Eric-Jan},
    osf = {https://osf.io/k5avt}
    }

  • [PDF] Heck, D. W., Hilbig, B. E., & Moshagen, M. (2017). From information processing to decisions: Formalizing and comparing probabilistic choice models. Cognitive Psychology, 96, 26-40. doi:10.1016/j.cogpsych.2017.05.003
    [BibTeX] [Abstract] [Data and R Scripts]

    Decision strategies explain how people integrate multiple sources of information to make probabilistic inferences. In the past decade, increasingly sophisticated methods have been developed to determine which strategy explains decision behavior best. We extend these efforts to test psychologically more plausible models (i.e., strategies), including a new, probabilistic version of the take-the-best (TTB) heuristic that implements a rank order of error probabilities based on sequential processing. Within a coherent statistical framework, deterministic and probabilistic versions of TTB and other strategies can directly be compared using model selection by minimum description length or the Bayes factor. In an experiment with inferences from given information, only three of 104 participants were best described by the psychologically plausible, probabilistic version of TTB. As in previous studies, most participants were classified as users of weighted-additive, a strategy that integrates all available information and approximates rational decisions.

    @article{heck2017information,
    title = {From Information Processing to Decisions: {{Formalizing}} and Comparing Probabilistic Choice Models},
    volume = {96},
    doi = {10.1016/j.cogpsych.2017.05.003},
    abstract = {Decision strategies explain how people integrate multiple sources of information to make probabilistic inferences. In the past decade, increasingly sophisticated methods have been developed to determine which strategy explains decision behavior best. We extend these efforts to test psychologically more plausible models (i.e., strategies), including a new, probabilistic version of the take-the-best (TTB) heuristic that implements a rank order of error probabilities based on sequential processing. Within a coherent statistical framework, deterministic and probabilistic versions of TTB and other strategies can directly be compared using model selection by minimum description length or the Bayes factor. In an experiment with inferences from given information, only three of 104 participants were best described by the psychologically plausible, probabilistic version of TTB. Similar as in previous studies, most participants were classified as users of weighted-additive, a strategy that integrates all available information and approximates rational decisions.},
    journaltitle = {Cognitive Psychology},
    date = {2017},
    pages = {26-40},
    keywords = {heckfirst},
    author = {Heck, Daniel W and Hilbig, Benjamin E and Moshagen, Morten},
    osf = {https://osf.io/jcd2c}
    }

  • [PDF] Heck, D. W., & Erdfelder, E. (2017). Linking process and measurement models of recognition-based decisions. Psychological Review, 124, 442-471. doi:10.1037/rev0000063
    [BibTeX] [Abstract] [Data and R Scripts]

    When making inferences about pairs of objects, one of which is recognized and the other is not, the recognition heuristic states that participants choose the recognized object in a noncompensatory way without considering any further knowledge. In contrast, information-integration theories such as parallel constraint satisfaction (PCS) assume that recognition is merely one of many cues that is integrated with further knowledge in a compensatory way. To test both process models against each other without manipulating recognition or further knowledge, we include response times into the r-model, a popular multinomial processing tree model for memory-based decisions. Essentially, this response-time-extended r-model makes it possible to test a crucial prediction of PCS, namely, that the integration of recognition-congruent knowledge leads to faster decisions compared to the consideration of recognition only—even though more information is processed. In contrast, decisions due to recognition-heuristic use are predicted to be faster than decisions affected by any further knowledge. Using the classical German-cities example, simulations show that the novel measurement model discriminates between both process models based on choices, decision times, and recognition judgments only. In a reanalysis of 29 data sets including more than 400,000 individual trials, noncompensatory choices of the recognized option were estimated to be slower than choices due to recognition-congruent knowledge. This corroborates the parallel information-integration account of memory-based decisions, according to which decisions become faster when the coherence of the available information increases.

    @article{heck2017linking,
    title = {Linking Process and Measurement Models of Recognition-Based Decisions},
    volume = {124},
    doi = {10.1037/rev0000063},
    abstract = {When making inferences about pairs of objects, one of which is recognized and the other is not, the recognition heuristic states that participants choose the recognized object in a noncompensatory way without considering any further knowledge. In contrast, information-integration theories such as parallel constraint satisfaction (PCS) assume that recognition is merely one of many cues that is integrated with further knowledge in a compensatory way. To test both process models against each other without manipulating recognition or further knowledge, we include response times into the r-model, a popular multinomial processing tree model for memory-based decisions. Essentially, this response-time-extended r-model allows to test a crucial prediction of PCS, namely, that the integration of recognition-congruent knowledge leads to faster decisions compared to the consideration of recognition only—even though more information is processed. In contrast, decisions due to recognition-heuristic use are predicted to be faster than decisions affected by any further knowledge. Using the classical German-cities example, simulations show that the novel measurement model discriminates between both process models based on choices, decision times, and recognition judgments only. In a reanalysis of 29 data sets including more than 400,000 individual trials, noncompensatory choices of the recognized option were estimated to be slower than choices due to recognition-congruent knowledge. This corroborates the parallel information-integration account of memory-based decisions, according to which decisions become faster when the coherence of the available information increases. (PsycINFO Database Record (c) 2017 APA, all rights reserved)},
    journaltitle = {Psychological Review},
    date = {2017},
    pages = {442-471},
    keywords = {heckpaper,heckfirst},
    author = {Heck, Daniel W and Erdfelder, Edgar},
    osf = {https://osf.io/4kv87}
    }

  • [PDF] Klein, S. A., Hilbig, B. E., & Heck, D. W. (2017). Which is the greater good? A social dilemma paradigm disentangling environmentalism and cooperation. Journal of Environmental Psychology, 53, 40-49. doi:10.1016/j.jenvp.2017.06.001
    [BibTeX] [Abstract] [Data and R Scripts]

    In previous research, pro-environmental behavior (PEB) was almost exclusively aligned with in-group cooperation. However, PEB and in-group cooperation can also be mutually exclusive or directly conflict. To provide first evidence on behavior in these situations, the present work develops the Greater Good Game (GGG), a social dilemma paradigm with a selfish, a cooperative, and a pro-environmental choice option. In Study 1, the GGG and a corresponding measurement model were experimentally validated using different payoff structures. Results show that in-group cooperation is the dominant behavior in a situation of mutual exclusiveness, whereas selfish behavior becomes more dominant in a situation of conflict. Study 2 examined personality influences on choices in the GGG. High Honesty-Humility was associated with less selfishness, whereas Openness was not associated with more PEB. Results corroborate the paradigm as a valid instrument for investigating the conflict between in-group cooperation and PEB and provide first insights into personality influences.

    @article{klein2017which,
    title = {Which Is the Greater Good? {{A}} Social Dilemma Paradigm Disentangling Environmentalism and Cooperation},
    volume = {53},
    doi = {10.1016/j.jenvp.2017.06.001},
    shorttitle = {Which Is the Greater Good?},
    abstract = {In previous research, pro-environmental behavior (PEB) was almost exclusively aligned with in-group cooperation. However, PEB and in-group cooperation can also be mutually exclusive or directly conflict. To provide first evidence on behavior in these situations, the present work develops the Greater Good Game (GGG), a social dilemma paradigm with a selfish, a cooperative, and a pro-environmental choice option. In Study 1, the GGG and a corresponding measurement model were experimentally validated using different payoff structures. Results show that in-group cooperation is the dominant behavior in a situation of mutual exclusiveness, whereas selfish behavior becomes more dominant in a situation of conflict. Study 2 examined personality influences on choices in the GGG. High Honesty-Humility was associated with less selfishness, whereas Openness was not associated with more PEB. Results corroborate the paradigm as a valid instrument for investigating the conflict between in-group cooperation and PEB and provide first insights into personality influences.},
    journaltitle = {Journal of Environmental Psychology},
    shortjournal = {Journal of Environmental Psychology},
    date = {2017},
    pages = {40-49},
    keywords = {HEXACO,Cognitive psychometrics,Externalities,Public goods,Actual behavior},
    author = {Klein, Sina A. and Hilbig, Benjamin E. and Heck, Daniel W},
    osf = {https://osf.io/zw2ze}
    }

2016

  • [PDF] Heck, D. W., & Erdfelder, E. (2016). Extending multinomial processing tree models to measure the relative speed of cognitive processes. Psychonomic Bulletin & Review, 23, 1440-1465. doi:10.3758/s13423-016-1025-6
    [BibTeX] [Abstract]

    Multinomial processing tree (MPT) models account for observed categorical responses by assuming a finite number of underlying cognitive processes. We propose a general method that allows for the inclusion of response times (RTs) into any kind of MPT model to measure the relative speed of the hypothesized processes. The approach relies on the fundamental assumption that observed RT distributions emerge as mixtures of latent RT distributions that correspond to different underlying processing paths. To avoid auxiliary assumptions about the shape of these latent RT distributions, we account for RTs in a distribution-free way by splitting each observed category into several bins from fast to slow responses, separately for each individual. Given these data, latent RT distributions are parameterized by probability parameters for these RT bins, and an extended MPT model is obtained. Hence, all of the statistical results and software available for MPT models can easily be used to fit, test, and compare RT-extended MPT models. We demonstrate the proposed method by applying it to the two-high-threshold model of recognition memory.

    @article{heck2016extending,
    title = {Extending Multinomial Processing Tree Models to Measure the Relative Speed of Cognitive Processes},
    volume = {23},
    doi = {10.3758/s13423-016-1025-6},
    abstract = {Multinomial processing tree (MPT) models account for observed categorical responses by assuming a finite number of underlying cognitive processes. We propose a general method that allows for the inclusion of response times (RTs) into any kind of MPT model to measure the relative speed of the hypothesized processes. The approach relies on the fundamental assumption that observed RT distributions emerge as mixtures of latent RT distributions that correspond to different underlying processing paths. To avoid auxiliary assumptions about the shape of these latent RT distributions, we account for RTs in a distribution-free way by splitting each observed category into several bins from fast to slow responses, separately for each individual. Given these data, latent RT distributions are parameterized by probability parameters for these RT bins, and an extended MPT model is obtained. Hence, all of the statistical results and software available for MPT models can easily be used to fit, test, and compare RT-extended MPT models. We demonstrate the proposed method by applying it to the two-high-threshold model of recognition memory.},
    journaltitle = {Psychonomic Bulletin \& Review},
    date = {2016},
    pages = {1440-1465},
    keywords = {heckpaper,heckfirst},
    author = {Heck, Daniel W and Erdfelder, Edgar}
    }
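
    A minimal sketch of the binning step described in the abstract, in Python (the data, the number of bins, and the use of per-person RT quantiles as bin boundaries are illustrative assumptions, not prescriptions from the paper):

    import numpy as np

    def expand_categories(categories, rts, n_bins=2):
        """Map one participant's (category, RT) pairs to category-by-bin counts."""
        rts = np.asarray(rts)
        # Bin boundaries from this participant's own RT quantiles.
        probs = np.linspace(0, 1, n_bins + 1)[1:-1]
        boundaries = np.quantile(rts, probs)
        bins = np.digitize(rts, boundaries)  # 0 = fastest bin
        counts = {(c, b): 0 for c in set(categories) for b in range(n_bins)}
        for c, b in zip(categories, bins):
            counts[(c, int(b))] += 1
        return counts

    # Hypothetical recognition-memory responses with RTs in seconds:
    cats = ["hit", "hit", "miss", "hit", "miss", "hit"]
    rts = [0.43, 0.61, 0.95, 0.52, 1.20, 0.70]
    print(expand_categories(cats, rts))  # expanded frequencies for standard MPT software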

  • [PDF] Heck, D. W., & Wagenmakers, E.-J. (2016). Adjusted priors for Bayes factors involving reparameterized order constraints. Journal of Mathematical Psychology, 73, 110-116. doi:10.1016/j.jmp.2016.05.004
    [BibTeX] [Abstract] [Data and R Scripts] [Preprint]

    Many psychological theories that are instantiated as statistical models imply order constraints on the model parameters. To fit and test such restrictions, order constraints of the form theta_i < theta_j can be reparameterized with auxiliary parameters eta in [0,1] to replace the original parameters by theta_i = eta*theta_j. This approach is especially common in multinomial processing tree (MPT) modeling because the reparameterized, less complex model also belongs to the MPT class. Here, we discuss the importance of adjusting the prior distributions for the auxiliary parameters of a reparameterized model. This adjustment is important for computing the Bayes factor, a model selection criterion that measures the evidence in favor of an order constraint by trading off model fit and complexity. We show that uniform priors for the auxiliary parameters result in a Bayes factor that differs from the one that is obtained using a multivariate uniform prior on the order-constrained original parameters. As a remedy, we derive the adjusted priors for the auxiliary parameters of the reparameterized model. The practical relevance of the problem is underscored with a concrete example using the multi-trial pair-clustering model.

    @article{heck2016adjusted,
    archivePrefix = {arXiv},
    eprinttype = {arxiv},
    eprint = {1511.08775},
    title = {Adjusted Priors for {{Bayes}} Factors Involving Reparameterized Order Constraints},
    volume = {73},
    doi = {10.1016/j.jmp.2016.05.004},
    abstract = {Many psychological theories that are instantiated as statistical models imply order constraints on the model parameters. To fit and test such restrictions, order constraints of the form theta\_i $<$ theta\_j can be reparameterized with auxiliary parameters eta in [0,1] to replace the original parameters by theta\_i = eta*theta\_j. This approach is especially common in multinomial processing tree (MPT) modeling because the reparameterized, less complex model also belongs to the MPT class. Here, we discuss the importance of adjusting the prior distributions for the auxiliary parameters of a reparameterized model. This adjustment is important for computing the Bayes factor, a model selection criterion that measures the evidence in favor of an order constraint by trading off model fit and complexity. We show that uniform priors for the auxiliary parameters result in a Bayes factor that differs from the one that is obtained using a multivariate uniform prior on the order-constrained original parameters. As a remedy, we derive the adjusted priors for the auxiliary parameters of the reparameterized model. The practical relevance of the problem is underscored with a concrete example using the multi-trial pair-clustering model.},
    journaltitle = {Journal of Mathematical Psychology},
    date = {2016},
    pages = {110-116},
    keywords = {heckfirst},
    author = {Heck, Daniel W and Wagenmakers, Eric-Jan},
    osf = {https://osf.io/cz827}
    }
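
    The prior adjustment derived in this paper can be sketched for the simplest two-parameter case by a change of variables (notation mine; the paper treats the general MPT setting). A uniform prior on the constrained region $\{(\theta_i, \theta_j)\colon 0 \le \theta_i \le \theta_j \le 1\}$ has density $p(\theta_i, \theta_j) = 2$. The substitution $\theta_i = \eta\,\theta_j$ with $\eta \in [0, 1]$ has Jacobian determinant $\theta_j$, so

        $$p(\eta, \theta_j) = 2\,\theta_j, \qquad \eta, \theta_j \in [0, 1],$$

    i.e., $\eta \sim \mathrm{Uniform}(0, 1)$ and $\theta_j \sim \mathrm{Beta}(2, 1)$ independently. Assigning uniform priors to both auxiliary parameters instead implies a different prior on the original parameters, which is why the resulting Bayes factors disagree.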

  • [PDF] Thielmann, I., Heck, D. W., & Hilbig, B. E. (2016). Anonymity and incentives: An investigation of techniques to reduce socially desirable responding in the Trust Game. Judgment and Decision Making, 11, 527-536. Retrieved from http://journal.sjdm.org/18/18322/jdm18322.html
    [BibTeX] [Abstract] [Data and R Scripts]

    Economic games offer a convenient approach for the study of prosocial behavior. As an advantage, they allow for straightforward implementation of different techniques to reduce socially desirable responding. We investigated the effectiveness of the most prominent of these techniques, namely providing behavior-contingent incentives and maximizing anonymity in three versions of the Trust Game: (i) a hypothetical version without monetary incentives and with a typical level of anonymity, (ii) an incentivized version with monetary incentives and the same (typical) level of anonymity, and (iii) an indirect questioning version without incentives but with a maximum level of anonymity, rendering responses inconclusive due to adding random noise via the Randomized Response Technique. Results from a large (N = 1,267) and heterogeneous sample showed comparable levels of trust for the hypothetical and incentivized versions using direct questioning. However, levels of trust decreased when maximizing the inconclusiveness of responses through indirect questioning. This implies that levels of trust might be particularly sensitive to changes in individuals’ anonymity but not necessarily to monetary incentives.

    @article{thielmann2016anonymity,
    title = {Anonymity and Incentives: {{An}} Investigation of Techniques to Reduce Socially Desirable Responding in the {{Trust Game}}},
    volume = {11},
    url = {http://journal.sjdm.org/18/18322/jdm18322.html},
    abstract = {Economic games offer a convenient approach for the study of prosocial behavior. As an advantage, they allow for straightforward implementation of different techniques to reduce socially desirable responding. We investigated the effectiveness of the most prominent of these techniques, namely providing behavior-contingent incentives and maximizing anonymity in three versions of the Trust Game: (i) a hypothetical version without monetary incentives and with a typical level of anonymity, (ii) an incentivized version with monetary incentives and the same (typical) level of anonymity, and (iii) an indirect questioning version without incentives but with a maximum level of anonymity, rendering responses inconclusive due to adding random noise via the Randomized Response Technique. Results from a large (N = 1,267) and heterogeneous sample showed comparable levels of trust for the hypothetical and incentivized versions using direct questioning. However, levels of trust decreased when maximizing the inconclusiveness of responses through indirect questioning. This implies that levels of trust might be particularly sensitive to changes in individuals’ anonymity but not necessarily to monetary incentives.},
    journaltitle = {Judgment and Decision Making},
    date = {2016},
    pages = {527-536},
    author = {Thielmann, Isabel and Heck, Daniel W and Hilbig, Benjamin E},
    osf = {https://osf.io/h7p5t}
    }
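
    The forced-response logic behind the Randomized Response Technique mentioned above can be sketched in a few lines of Python (the randomization probabilities, the binary yes/no framing, and the simulated prevalence are illustrative assumptions, not the paper's design):

    import numpy as np

    rng = np.random.default_rng(1)
    p_truth, p_forced_yes = 0.7, 0.5  # assumed randomization design
    pi_true = 0.6                     # simulated true prevalence of "yes"

    n = 1267
    truthful = rng.random(n) < p_truth
    answers = np.where(truthful,
                       rng.random(n) < pi_true,       # honest answer
                       rng.random(n) < p_forced_yes)  # random answer
    # Any single answer is inconclusive, but the aggregate is informative:
    # E[answer] = p_truth * pi_true + (1 - p_truth) * p_forced_yes.
    lam = answers.mean()
    pi_hat = (lam - (1 - p_truth) * p_forced_yes) / p_truth
    print(f"estimated prevalence: {pi_hat:.3f}")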

2015

  • [PDF] Erdfelder, E., Castela, M., Michalkiewicz, M., & Heck, D. W. (2015). The advantages of model fitting compared to model simulation in research on preference construction. Frontiers in Psychology, 6, 140. doi:10.3389/fpsyg.2015.00140
    [BibTeX]
    @article{erdfelder2015advantages,
    title = {The Advantages of Model Fitting Compared to Model Simulation in Research on Preference Construction},
    volume = {6},
    doi = {10.3389/fpsyg.2015.00140},
    journaltitle = {Frontiers in Psychology},
    date = {2015},
    pages = {140},
    author = {Erdfelder, Edgar and Castela, Marta and Michalkiewicz, Martha and Heck, Daniel W}
    }

  • [PDF] Heck, D. W., Wagenmakers, E.-J., & Morey, R. D. (2015). Testing order constraints: Qualitative differences between Bayes factors and normalized maximum likelihood. Statistics & Probability Letters, 105, 157-162. doi:10.1016/j.spl.2015.06.014
    [BibTeX] [Abstract] [Preprint]

    We compared Bayes factors to normalized maximum likelihood for the simple case of selecting between an order-constrained versus a full binomial model. This comparison revealed two qualitative differences in testing order constraints regarding data dependence and model preference.

    @article{heck2015testing,
    archivePrefix = {arXiv},
    eprinttype = {arxiv},
    eprint = {1411.2778},
    title = {Testing Order Constraints: {{Qualitative}} Differences between {{Bayes}} Factors and Normalized Maximum Likelihood},
    volume = {105},
    doi = {10.1016/j.spl.2015.06.014},
    shorttitle = {Testing Order Constraints},
    abstract = {We compared Bayes factors to normalized maximum likelihood for the simple case of selecting between an order-constrained versus a full binomial model. This comparison revealed two qualitative differences in testing order constraints regarding data dependence and model preference.},
    journaltitle = {Statistics \& Probability Letters},
    shortjournal = {Statistics \& Probability Letters},
    date = {2015},
    pages = {157-162},
    keywords = {Model selection,Minimum description length,Inequality constraint,Model complexity,heckfirst},
    author = {Heck, Daniel W and Wagenmakers, Eric-Jan and Morey, Richard D.}
    }
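
    For a binomial example, both criteria compared in this paper can be computed exactly, as in the following Python sketch (the constraint theta <= .5, the uniform prior, and the data are illustrative assumptions):

    from scipy.stats import beta, binom

    n, k = 20, 6  # hypothetical data: k successes in n trials

    # Bayes factor of the order-constrained vs. the full (encompassing) model
    # with a uniform prior: ratio of posterior to prior mass of the constraint.
    posterior = beta(1 + k, 1 + n - k)
    bf = posterior.cdf(0.5) / 0.5
    print(f"Bayes factor (constrained vs. full): {bf:.3f}")

    def nml(k, n, restrict):
        """Normalized maximum likelihood of the (constrained) binomial model."""
        def max_lik(j):
            theta = min(j / n, 0.5) if restrict else j / n
            return binom.pmf(j, n, theta)
        return max_lik(k) / sum(max_lik(j) for j in range(n + 1))

    ratio = nml(k, n, True) / nml(k, n, False)
    print(f"NML ratio (constrained vs. full): {ratio:.3f}")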

2014

  • [PDF] Heck, D. W., Moshagen, M., & Erdfelder, E. (2014). Model selection by minimum description length: Lower-bound sample sizes for the Fisher information approximation. Journal of Mathematical Psychology, 60, 29-34. doi:10.1016/j.jmp.2014.06.002
    [BibTeX] [Abstract] [GitHub] [Preprint]

    The Fisher information approximation (FIA) is an implementation of the minimum description length principle for model selection. Unlike information criteria such as AIC or BIC, it has the advantage of taking the functional form of a model into account. Unfortunately, FIA can be misleading in finite samples, resulting in an inversion of the correct rank order of complexity terms for competing models in the worst case. As a remedy, we propose a lower-bound N' for the sample size that suffices to preclude such errors. We illustrate the approach using three examples from the family of multinomial processing tree models.

    @article{heck2014model,
    archivePrefix = {arXiv},
    eprinttype = {arxiv},
    eprint = {1808.00212},
    title = {Model Selection by Minimum Description Length: {{Lower}}-Bound Sample Sizes for the {{Fisher}} Information Approximation},
    volume = {60},
    doi = {10.1016/j.jmp.2014.06.002},
    abstract = {The Fisher information approximation (FIA) is an implementation of the minimum description length principle for model selection. Unlike information criteria such as AIC or BIC, it has the advantage of taking the functional form of a model into account. Unfortunately, FIA can be misleading in finite samples, resulting in an inversion of the correct rank order of complexity terms for competing models in the worst case. As a remedy, we propose a lower-bound N' for the sample size that suffices to preclude such errors. We illustrate the approach using three examples from the family of multinomial processing tree models.},
    journaltitle = {Journal of Mathematical Psychology},
    date = {2014},
    pages = {29--34},
    keywords = {heckfirst},
    author = {Heck, Daniel W and Moshagen, Morten and Erdfelder, Edgar},
    github = {https://github.com/danheck/FIAminimumN}
    }
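
    For reference, the FIA penalty under discussion has the standard form (in my notation)

        $$\mathrm{FIA} = -\ln p\bigl(x \mid \hat\theta\bigr) + \frac{k}{2} \ln \frac{N}{2\pi} + \ln \int_\Theta \sqrt{\det I(\theta)}\, d\theta,$$

    where $k$ is the number of free parameters, $N$ the sample size, and $I(\theta)$ the Fisher information matrix of a single observation. One simple way to see the finite-sample problem: for $N < 2\pi$ the dimensionality term is negative and therefore rewards extra parameters, so the rank order of two models' complexity terms can be the reverse of the asymptotic order unless $N$ is sufficiently large.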

  • [PDF] Platzer, C., Bröder, A., & Heck, D. W. (2014). Deciding with the eye: How the visually manipulated accessibility of information in memory influences decision behavior. Memory & Cognition, 42, 595-608. doi:10.3758/s13421-013-0380-z
    [BibTeX] [Abstract]

    Decision situations are typically characterized by uncertainty: Individuals do not know the values of different options on a criterion dimension. For example, consumers do not know which is the healthiest of several products. To make a decision, individuals can use information about cues that are probabilistically related to the criterion dimension, such as sugar content or the concentration of natural vitamins. In two experiments, we investigated how the accessibility of cue information in memory affects which decision strategy individuals rely on. The accessibility of cue information was manipulated by means of a newly developed paradigm, the spatial-memory-cueing paradigm, which is based on a combination of the looking-at-nothing phenomenon and the spatial-cueing paradigm. The results indicated that people use different decision strategies, depending on the validity of easily accessible information. If the easily accessible information is valid, people stop information search and decide according to a simple take-the-best heuristic. If, however, information that comes to mind easily has a low predictive validity, people are more likely to integrate all available cue information in a compensatory manner.

    @article{platzer2014deciding,
    title = {Deciding with the Eye: {{How}} the Visually Manipulated Accessibility of Information in Memory Influences Decision Behavior},
    volume = {42},
    doi = {10.3758/s13421-013-0380-z},
    abstract = {Decision situations are typically characterized by uncertainty: Individuals do not know the values of different options on a criterion dimension. For example, consumers do not know which is the healthiest of several products. To make a decision, individuals can use information about cues that are probabilistically related to the criterion dimension, such as sugar content or the concentration of natural vitamins. In two experiments, we investigated how the accessibility of cue information in memory affects which decision strategy individuals rely on. The accessibility of cue information was manipulated by means of a newly developed paradigm, the spatial-memory-cueing paradigm, which is based on a combination of the looking-at-nothing phenomenon and the spatial-cueing paradigm. The results indicated that people use different decision strategies, depending on the validity of easily accessible information. If the easily accessible information is valid, people stop information search and decide according to a simple take-the-best heuristic. If, however, information that comes to mind easily has a low predictive validity, people are more likely to integrate all available cue information in a compensatory manner.},
    journaltitle = {Memory \& Cognition},
    date = {2014},
    pages = {595-608},
    keywords = {Decision Making,memory,Spatial attention,Accessibility,Visual salience},
    author = {Platzer, Christine and Bröder, Arndt and Heck, Daniel W}
    }

Invited Talks

Conference Presentations

Conference Posters