Publications

Copyright Notice: The documents distributed here have been provided as a means to ensure timely dissemination of scholarly and technical work on a noncommercial basis. Copyright and all rights therein are maintained by the authors or by other copyright holders, notwithstanding that they have offered their works here electronically. It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author’s copyright. These works may not be reposted without the explicit permission of the copyright holder.

Preprints

  • [PDF] Mayer, M., Heck, D. W., & Mocnik, F.-B. (2022). Using OpenStreetMap as a data source in psychology and the social sciences. PsyArXiv. https://psyarxiv.com/h3npa/
    [Abstract] [BibTeX] [Data & R Scripts]

    Big data are not yet commonly used in psychological research as they are often difficult to access and process. One source of behavioral data containing both spatial and thematic information is OpenStreetMap, a collaborative online project aiming to develop a comprehensive world map. Besides spatial and thematic information about buildings, streets, and other geographical features, the collected data also contains information about the contribution process itself. Even though such data can be potentially useful for studying individual judgments and group processes within a natural context, behavioral data generated in OpenStreetMap have not yet been easily accessible for scholars in psychology and the social sciences. To overcome this obstacle, we developed a software package which makes OpenStreetMap data more accessible and allows researchers to extract data sets from the OpenStreetMap database as CSV or JSON files. Furthermore, we show how to select relevant map sections in which contributor activity is high and how to model and predict the behavior of contributors in OpenStreetMap. Moreover, we discuss opportunities and possible limitations of using behavioral data from OpenStreetMap as a data source.

    @report{mayer2022using,
    title = {Using {{OpenStreetMap}} as a Data Source in Psychology and the Social Sciences},
    author = {Mayer, Maren and Heck, Daniel W and Mocnik, Franz-Benjamin},
    date = {2022},
    location = {PsyArXiv},
    url = {https://psyarxiv.com/h3npa/},
    abstract = {Big data are not yet commonly used in psychological research as they are often difficult to access and process. One source of behavioral data containing both spatial and thematic information is OpenStreetMap, a collaborative online project aiming to develop a comprehensive world map. Besides spatial and thematic information about buildings, streets, and other geographical features, the collected data also contains information about the contribution process itself. Even though such data can be potentially useful for studying individual judgments and group processes within a natural context, behavioral data generated in OpenStreetMap have not yet been easily accessible for scholars in psychology and the social sciences. To overcome this obstacle, we developed a software package which makes OpenStreetMap data more accessible and allows researchers to extract data sets from the OpenStreetMap database as CSV or JSON files. Furthermore, we show how to select relevant map sections in which contributor activity is high and how to model and predict the behavior of contributors in OpenStreetMap. Moreover, we discuss opportunities and possible limitations of using behavioral data from OpenStreetMap as a data source.},
    osf = {https://osf.io/3jzg9},
    keywords = {submitted}
    }
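
    The software package developed in the preprint is not named in this entry. Purely as an illustration of the general workflow, the following R sketch uses the independent osmdata package (not the package from the preprint) to download OpenStreetMap features for an arbitrary region and export them as CSV; the bounding box and feature tags are invented examples:

    # Illustration with the osmdata package; bounding box and tags are arbitrary
    library(osmdata)

    cafes <- opq(bbox = "Marburg, Germany") |>
      add_osm_feature(key = "amenity", value = "cafe") |>
      osmdata_sf()

    # Drop the geometry, keep the thematic attributes, and write a flat CSV file
    df <- sf::st_drop_geometry(cafes$osm_points)
    write.csv(df, "marburg_cafes.csv", row.names = FALSE)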

  • [PDF] Siepe, B. S., & Heck, D. W. (2023). Multiverse analysis for dynamic network models: Investigating the influence of plausible alternative modeling choices. OSF Preprints. https://osf.io/etm3u/
    [Abstract] [BibTeX] [Data & R Scripts]

    The analysis of time series data has become very popular in psychology. Specifying complex time series models involves many researchers’ degrees of freedom, meaning that a wide range of plausible analysis strategies are possible. However, researchers typically perform and report only a single, preferred analysis while ignoring alternative assumptions and specifications that may lead to different conclusions. As a remedy, we propose multiverse analysis to investigate the robustness of time series network analysis to arbitrary, auxiliary modeling choices. We focus on group iterative multiple model estimation (GIMME), a highly data-driven modeling approach, and re-analyze two data sets (combined n=199) that were originally analyzed with GIMME. For each data set, we vary seven model parameters in a factorial design, resulting in 3,888 fitted models. We report the robustness of results at the group, subgroup, and individual levels and provide a web application to interactively explore our results. Group-level and, to a lesser extent, subgroup-level results were mostly stable across the multiverse with some differences between the two data sets. Individual-level estimates were more heterogeneous. Some modeling decisions (e.g., number of fit indices required for convergence) influenced results and conclusions more strongly. Overall, the robustness of GIMME to alternative modeling choices depends on the level of analysis. At the individual level, results may differ strongly even when changing the algorithm only slightly, which is highly relevant for applications such as clinical treatment selection and intervention. Multiverse analysis therefore is a valuable tool for checking the robustness of results from time series models.

    @report{siepe2023multiverse,
    title = {Multiverse Analysis for Dynamic Network Models: {{Investigating}} the Influence of Plausible Alternative Modeling Choices},
    author = {Siepe, Björn S. and Heck, Daniel W},
    date = {2023},
    location = {OSFpreprints},
    url = {https://osf.io/etm3u/},
    abstract = {The analysis of time series data has become very popular in psychology. Specifying complex time series models involves many researchers' degrees of freedom, meaning that a wide range of plausible analysis strategies are possible. However, researchers typically perform and report only a single, preferred analysis while ignoring alternative assumptions and specifications that may lead to different conclusions. As a remedy, we propose multiverse analysis to investigate the robustness of time series network analysis to arbitrary, auxiliary modeling choices. We focus on group iterative multiple model estimation (GIMME), a highly data-driven modeling approach, and re-analyze two data sets (combined n=199) that were originally analyzed with GIMME. For each data set, we vary seven model parameters in a factorial design, resulting in 3,888 fitted models. We report the robustness of results at the group, subgroup, and individual levels and provide a web application to interactively explore our results. Group-level and, to a lesser extent, subgroup-level results were mostly stable across the multiverse with some differences between the two data sets. Individual-level estimates were more heterogeneous. Some modeling decisions (e.g., number of fit indices required for convergence) influenced results and conclusions more strongly. Overall, the robustness of GIMME to alternative modeling choices depends on the level of analysis. At the individual level, results may differ strongly even when changing the algorithm only slightly, which is highly relevant for applications such as clinical treatment selection and intervention. Multiverse analysis therefore is a valuable tool for checking the robustness of results from time series models.},
    osf = {https://osf.io/xvrz5},
    keywords = {Bayesian estimation,dynamic network,idiographic,network analysis,Quantitative Methods,Quantitative Psychology,Social and Behavioral Sciences,Statistical Methods,submitted,Time series analysis}
    }
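
    A factorial multiverse like the one described above can be enumerated with a plain specification grid before any model is fitted. The R sketch below is hypothetical: the parameter names are invented for illustration and are not the seven GIMME settings varied in the preprint.

    # Hypothetical factorial grid of modeling choices (names are placeholders)
    specs <- expand.grid(
      alpha_level   = c(.01, .05),
      n_fit_indices = 1:3,
      ar_start      = c(TRUE, FALSE),
      subgrouping   = c(TRUE, FALSE)
    )
    nrow(specs)  # 24 specifications in this toy grid

    # One fitted model per row, e.g.:
    # results <- lapply(seq_len(nrow(specs)),
    #                   function(i) fit_gimme(data, specs[i, ]))  # fit_gimme() is a placeholder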

  • [PDF] Siepe, B. S., Bartoš, F., Morris, T., Boulesteix, A.-L., Heck, D. W., & Pawel, S. (2023). Simulation studies for methodological research in psychology: A standardized template for planning, preregistration, and reporting. PsyArXiv. https://osf.io/preprints/psyarxiv/ufgy6/
    [Abstract] [BibTeX] [Data & R Scripts] [GitHub]

    Simulation studies are widely used for evaluating the performance of statistical methods in psychology. However, the quality of simulation studies can vary widely in terms of their design, execution, and reporting. In order to assess the quality of typical simulation studies in psychology, we reviewed 321 articles published in Psychological Methods, Behavior Research Methods, and Multivariate Behavioral Research in 2021 and 2022, among which 100/321 = 31.2% report a simulation study. We find that many articles do not provide complete and transparent information about key aspects of the study, such as justifications for the number of simulation repetitions, Monte Carlo uncertainty estimates, or code and data to reproduce the simulation studies. To address this problem, we provide a summary of the ADEMP (Aims, Data-generating mechanism, Estimands and other targets, Methods, Performance measures) design and reporting framework from Morris, White, and Crowther (2019) adapted to simulation studies in psychology. Based on this framework, we provide ADEMP-PreReg, a step-by-step template for researchers to use when designing, potentially preregistering, and reporting their simulation studies. We give formulae for estimating common performance measures, their Monte Carlo standard errors, and for calculating the number of simulation repetitions to achieve a desired Monte Carlo standard error. Finally, we give a detailed tutorial on how to apply the ADEMP framework in practice using an example simulation study on the evaluation of methods for the analysis of pre–post measurement experiments.

    @report{siepe2023simulation,
    title = {Simulation Studies for Methodological Research in Psychology: {{A}} Standardized Template for Planning, Preregistration, and Reporting},
    author = {Siepe, Björn S. and Bartoš, František and Morris, Tim and Boulesteix, Anne-Laure and Heck, Daniel W and Pawel, Samuel},
    date = {2023},
    location = {PsyArXiv},
    url = {https://osf.io/preprints/psyarxiv/ufgy6/},
    abstract = {Simulation studies are widely used for evaluating the performance of statistical methods in psychology. However, the quality of simulation studies can vary widely in terms of their design, execution, and reporting. In order to assess the quality of typical simulation studies in psychology, we reviewed 321 articles published in Psychological Methods, Behavior Research Methods, and Multivariate Behavioral Research in 2021 and 2022, among which 100/321 = 31.2\% report a simulation study. We find that many articles do not provide complete and transparent information about key aspects of the study, such as justifications for the number of simulation repetitions, Monte Carlo uncertainty estimates, or code and data to reproduce the simulation studies. To address this problem, we provide a summary of the ADEMP (Aims, Data-generating mechanism, Estimands and other targets, Methods, Performance measures) design and reporting framework from Morris, White, and Crowther (2019) adapted to simulation studies in psychology. Based on this framework, we provide ADEMP-PreReg, a step-by-step template for researchers to use when designing, potentially preregistering, and reporting their simulation studies. We give formulae for estimating common performance measures, their Monte Carlo standard errors, and for calculating the number of simulation repetitions to achieve a desired Monte Carlo standard error. Finally, we give a detailed tutorial on how to apply the ADEMP framework in practice using an example simulation study on the evaluation of methods for the analysis of pre–post measurement experiments.},
    github = {https://github.com/bsiepe/ADEMP-PreReg},
    langid = {american},
    osf = {https://osf.io/dfgvu},
    keywords = {Meta-Research,Meta-science,Monte Carlo Experiments,Preregistration,Psychological Methods,Quantitative Methods,Quantitative Psychology,Reporting,Simulation Study,Social and Behavioral Sciences,Statistical Methods,submitted}
    }
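
    As a minimal illustration of the formulae mentioned in the abstract, the R sketch below computes the Monte Carlo standard error (MCSE) of bias and the number of repetitions needed for a target MCSE, following Morris, White, and Crowther (2019); the estimates are toy data, and the preprint covers many more performance measures.

    # Toy estimates from n_sim = 1000 simulation repetitions, true value = 0
    est <- rnorm(1000, mean = 0.02, sd = 0.5)

    bias      <- mean(est) - 0
    mcse_bias <- sd(est) / sqrt(length(est))  # MCSE(bias) = SD / sqrt(n_sim)

    # Repetitions needed so that MCSE(bias) stays below a target value:
    target <- 0.005
    ceiling(var(est) / target^2)

    # For coverage with anticipated probability p: MCSE = sqrt(p (1 - p) / n_sim)
    p <- 0.95
    ceiling(p * (1 - p) / target^2)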

2024

  • [PDF] Berkhout, S. W., Haaf, J. M., Gronau, Q. F., Heck, D. W., & Wagenmakers, E.-J. (2024). A tutorial on Bayesian model-averaged meta-analysis in JASP. Behavior Research Methods, 56, 1260–1282. https://doi.org/10.3758/s13428-023-02093-6
    [Abstract] [BibTeX] [Preprint] [Data & R Scripts]

    Researchers conduct a meta-analysis in order to synthesize information across different studies. Compared to standard meta-analytic methods, Bayesian model-averaged meta-analysis offers several practical advantages including the ability to quantify evidence in favor of the absence of an effect, the ability to monitor evidence as individual studies accumulate indefinitely, and the ability to draw inferences based on multiple models simultaneously. This tutorial introduces the concepts and logic underlying Bayesian model-averaged meta-analysis and illustrates its application using the open-source software JASP. As a running example, we perform a Bayesian meta-analysis on language development in children. We show how to conduct a Bayesian model-averaged meta-analysis and how to interpret the results.

    @article{berkhout2024tutorial,
    title = {A Tutorial on {{Bayesian}} Model-Averaged Meta-Analysis in {{JASP}}},
    author = {Berkhout, Sophie W. and Haaf, Julia M. and Gronau, Quentin Frederik and Heck, Daniel W and Wagenmakers, Eric-Jan},
    date = {2024},
    journaltitle = {Behavior Research Methods},
    volume = {56},
    pages = {1260--1282},
    doi = {10.3758/s13428-023-02093-6},
    url = {https://psyarxiv.com/ne8dw/},
    abstract = {Researchers conduct a meta-analysis in order to synthesize information across different studies. Compared to standard meta-analytic methods, Bayesian model-averaged meta-analysis offers several practical advantages including the ability to quantify evidence in favor of the absence of an effect, the ability to monitor evidence as individual studies accumulate indefinitely, and the ability to draw inferences based on multiple models simultaneously. This tutorial introduces the concepts and logic underlying Bayesian model-averaged meta-analysis and illustrates its application using the open-source software JASP. As a running example, we perform a Bayesian meta-analysis on language development in children. We show how to conduct a Bayesian model-averaged meta-analysis and how to interpret the results.},
    osf = {https://osf.io/84gbu},
    keywords = {Bayes factor,evidence synthesis,JASP,meta-analysis,model-averaging,Quantitative Methods,Social and Behavioral Sciences,Statistical Methods}
    }
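
    A comparable analysis can also be run directly in R. The sketch below uses the metaBMA package with its bundled example data and default priors; this is an illustration under assumed defaults, not the JASP workflow from the tutorial.

    # Bayesian model-averaged meta-analysis with the metaBMA package
    library(metaBMA)
    data(towels)  # example data set shipped with the package

    fit <- meta_bma(y = logOR, SE = SE, labels = study, data = towels)
    fit  # posterior model probabilities and model-averaged effect estimate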

  • [PDF] Erdfelder, E., Nagel, J., Heck, D. W., & Petras, N. (2024). Uncovering null effects in null fields: The case of homeopathy. Journal of Clinical Epidemiology, 166, 111216. https://doi.org/10.1016/j.jclinepi.2023.11.006
    [Abstract] [BibTeX] [Preprint] [Data & R Scripts] [GitHub]

    Objective: Sigurdson, Sainani, and Ioannidis (this journal) discussed homeopathy as a prototypical example of a “null field” where true effects are nonexistent and positive effect sizes reflect bias only. Based on a sample of published randomized placebo-controlled trials, they observed a surprisingly large effect in favor of homeopathy (Hedges’ g = 0.36). In this comment, we propose selective publication of significant results as a parsimonious explanation of the overall bias evident in this field. Study Design: We re-analyzed the data of Sigurdson and collaborators using a meta-analytic mixture model that accounts for selective publishing with two parameters only, (1) the true homeopathy effect and (2) the proportion of results published only when statistically significant in the predicted direction. Results: The mixture model fitted the data. As expected, the estimate of the true homeopathy effect reduces to almost zero (d̂ = 0.05, 95% CI: [-0.05, 0.16]) when taking selective publishing into account. Conclusion: Inclusion of effect size measures adjusting for selective publication practices should become routine practice in meta-analyses. Null fields not only provide useful benchmarks for the overall bias evident in a field. They are also important for testing explanations of this bias and validating adjusted effect size measures.

    @article{erdfelder2024uncovering,
    title = {Uncovering Null Effects in Null Fields: {{The}} Case of Homeopathy},
    author = {Erdfelder, Edgar and Nagel, Juliane and Heck, Daniel W and Petras, Nils},
    date = {2024},
    journaltitle = {Journal of Clinical Epidemiology},
    volume = {166},
    pages = {111216},
    doi = {10.1016/j.jclinepi.2023.11.006},
    url = {https://psyarxiv.com/x6buj/},
    abstract = {Objective: Sigurdson, Sainani, and Ioannidis (this journal) discussed homeopathy as a prototypical example of a “null field” where true effects are nonexistent and positive effect sizes reflect bias only. Based on a sample of published randomized placebo-controlled trials, they observed a surprisingly large effect in favor of homeopathy (Hedges’ g = 0.36). In this comment, we propose selective publication of significant results as a parsimonious explanation of the overall bias evident in this field. Study Design: We re-analyzed the data of Sigurdson and collaborators using a meta-analytic mixture model that accounts for selective publishing with two parameters only, (1) the true homeopathy effect and (2) the proportion of results published only when statistically significant in the predicted direction. Results: The mixture model fitted the data. As expected, the estimate of the true homeopathy effect reduces to almost zero (d̂ = 0.05, 95\% CI: [-0.05, 0.16]) when taking selective publishing into account. Conclusion: Inclusion of effect size measures adjusting for selective publication practices should become routine practice in meta-analyses. Null fields not only provide useful benchmarks for the overall bias evident in a field. They are also important for testing explanations of this bias and validating adjusted effect size measures.},
    github = {https://github.com/NilsPetras/metamix},
    osf = {https://osf.io/wuq2h}
    }
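
    The selection mechanism posited by the mixture model is easy to demonstrate by simulation. The R sketch below is a toy version of a null field (it does not implement the metamix model itself): the true effect is zero, yet averaging only the significant results yields a substantial positive effect.

    # Toy null field: true effect d = 0, but only significant positive results
    # in the predicted direction are published
    set.seed(1)
    k  <- 10000                      # number of studies
    se <- sqrt(2 / 30)               # standard error with n = 30 per group
    d  <- rnorm(k, mean = 0, sd = se)
    published <- d / se > qnorm(.975)
    mean(d[published])               # average published effect is far above 0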

  • [PDF] Kloft, M., Snijder, J.-P., & Heck, D. W. (in press). Measuring the variability of personality traits with interval responses: Psychometric properties of the dual-range slider response format. Behavior Research Methods. https://psyarxiv.com/pa4m3/
    [Abstract] [BibTeX] [Data & R Scripts]

    Measuring the variability in persons’ behaviors and experiences using ecological momentary assessment is time-consuming and costly. We investigate whether interval responses provided through a dual-range slider (DRS) response format can be used as a simple and efficient alternative: Respondents indicate variability in their behavior in a retrospective rating by choosing a lower and an upper bound on a continuous, bounded scale. We investigate the psychometric properties of this response format as a prerequisite for further validation. First, we assess the test-retest reliability of factor-score estimates for the width of DRS intervals. Second, we test whether factor-score estimates of the visual analog scale (VAS) and the location of DRS intervals show convergent validity. Third, we investigate whether factor-score estimates for the DRS are uncorrelated between different personality scales. We present a longitudinal multitrait-multimethod study using two personality scales (Extraversion, Conscientiousness) and two response formats (VAS, DRS) at two measurement occasions (six to eight weeks apart) for which we estimate factor-score correlations in a joint item response theory model. The test-retest reliability of the width of DRS intervals was high (ρ ≥ .73). Also, convergent validity between location scores of VAS and DRS was high (ρ ≥ .88). Conversely, discriminant validity of the width of DRS intervals between Extraversion and Conscientiousness was poor (ρ ≥ .94). In conclusion, the DRS seems to be a reliable response format that could be used to measure the central tendency of a trait equivalently to the VAS. However, it might not be well suited for measuring intra-individual variability in personality traits.

    @article{kloft2024measuring,
    title = {Measuring the Variability of Personality Traits with Interval Responses: {{Psychometric}} Properties of the Dual-Range Slider Response Format},
    author = {Kloft, Matthias and Snijder, Jean-Paul and Heck, Daniel W},
    date = {2024},
    journaltitle = {Behavior Research Methods},
    url = {https://psyarxiv.com/pa4m3/},
    abstract = {Measuring the variability in persons’ behaviors and experiences using ecological momentary assessment is time-consuming and costly. We investigate whether interval responses provided through a dual-range slider (DRS) response format can be used as a simple and efficient alternative: Respondents indicate variability in their behavior in a retrospective rating by choosing a lower and an upper bound on a continuous, bounded scale. We investigate the psychometric properties of this response format as a prerequisite for further validation. First, we assess the test-retest reliability of factor-score estimates for the width of DRS intervals. Second, we test whether factor-score estimates of the visual analog scale (VAS) and the location of DRS intervals show convergent validity. Third, we investigate whether factor-score estimates for the DRS are uncorrelated between different personality scales. We present a longitudinal multitrait-multimethod study using two personality scales (Extraversion, Conscientiousness) and two response formats (VAS, DRS) at two measurement occasions (six to eight weeks apart) for which we estimate factor-score correlations in a joint item response theory model. The test-retest reliability of the width of DRS intervals was high (ρ ≥ .73). Also, convergent validity between location scores of VAS and DRS was high (ρ ≥ .88). Conversely, discriminant validity of the width of DRS intervals between Extraversion and Conscientiousness was poor (ρ ≥ .94). In conclusion, the DRS seems to be a reliable response format that could be used to measure the central tendency of a trait equivalently to the VAS. However, it might not be well suited for measuring intra-individual variability in personality traits.},
    osf = {https://osf.io/gfzew},
    pubstate = {inpress}
    }

  • [PDF] Mayer, M., & Heck, D. W. (2024). Sequential collaboration: The accuracy of dependent, incremental judgments. Decision, 11, 212–237. https://doi.org/10.1037/dec0000193
    [Abstract] [BibTeX] [Preprint] [Data & R Scripts]

    Online collaborative projects in which users contribute to extensive knowledge bases such as Wikipedia or OpenStreetMap have become increasingly popular while yielding highly accurate information. Collaboration in such projects is organized sequentially with one contributor creating an entry and the following contributors deciding whether to adjust or to maintain the presented information. We refer to this process as sequential collaboration since individual judgments directly depend on the previous judgment. As sequential collaboration has not yet been examined systematically, we investigate whether dependent, sequential judgments become increasingly more accurate. Moreover, we test whether final sequential judgments are more accurate than the unweighted average of independent judgments from equally large groups. We conducted three studies with groups of four to six contributors who either answered general knowledge questions (Experiments 1 and 2) or located cities on maps (Experiment 3). As expected, individual judgments became more accurate across the course of sequential chains and final estimates were similarly accurate as unweighted averaging of independent judgments. These results show that sequential collaboration profits from dependent, incremental judgments, thereby shedding light on the contribution process underlying large-scale online collaborative projects.

    @article{mayer2024sequential,
    title = {Sequential Collaboration: {{The}} Accuracy of Dependent, Incremental Judgments},
    author = {Mayer, Maren and Heck, Daniel W},
    date = {2024},
    journaltitle = {Decision},
    volume = {11},
    pages = {212--237},
    doi = {10.1037/dec0000193},
    url = {https://psyarxiv.com/w4xdk/},
    abstract = {Online collaborative projects in which users contribute to extensive knowledge bases such as Wikipedia or OpenStreetMap have become increasingly popular while yielding highly accurate information. Collaboration in such projects is organized sequentially with one contributor creating an entry and the following contributors deciding whether to adjust or to maintain the presented information. We refer to this process as sequential collaboration since individual judgments directly depend on the previous judgment. As sequential collaboration has not yet been examined systematically, we investigate whether dependent, sequential judgments become increasingly more accurate. Moreover, we test whether final sequential judgments are more accurate than the unweighted average of independent judgments from equally large groups. We conducted three studies with groups of four to six contributors who either answered general knowledge questions (Experiments 1 and 2) or located cities on maps (Experiment 3). As expected, individual judgments became more accurate across the course of sequential chains and final estimates were similarly accurate as unweighted averaging of independent judgments. These results show that sequential collaboration profits from dependent, incremental judgments, thereby shedding light on the contribution process underlying large-scale online collaborative projects.},
    osf = {https://osf.io/96nsk},
    keywords = {Cognitive Psychology,group decision making,judgment and decision making,Judgment and Decision Making,mass collaboration,Social and Behavioral Sciences,teamwork,wisdom of crowds}
    }
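
    A toy simulation of such a chain is sketched below; the adjustment rule (averaging one's own judgment with the current entry) is an assumption made for illustration, not the model tested in the paper.

    # Sequential chain: each contributor keeps or adjusts the current entry
    set.seed(42)
    truth <- 100
    chain <- function(n = 6, p_adjust = 0.5) {
      estimate <- rnorm(1, truth, 20)           # first contributor's entry
      for (i in 2:n) {
        own <- rnorm(1, truth, 20)              # own independent judgment
        if (runif(1) < p_adjust)
          estimate <- mean(c(estimate, own))    # adjust: compromise with entry
      }
      estimate
    }
    final <- replicate(5000, chain())
    indep <- replicate(5000, mean(rnorm(6, truth, 20)))  # unweighted averaging
    c(seq_err = mean(abs(final - truth)), avg_err = mean(abs(indep - truth)))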

  • [PDF] Schmidt, O., & Heck, D. W. (2024). The relevance of syntactic complexity for truth judgments: A registered report. Consciousness and Cognition, 117, 103623. https://doi.org/10.1016/j.concog.2023.103623
    [Abstract] [BibTeX] [Data & R Scripts]

    Fluency theories predict higher truth judgments for easily processed statements. We investigated two factors relevant for processing fluency: repetition and syntactic complexity. In three online experiments, we manipulated syntactic complexity by creating simple and complex versions of trivia statements. Experiments 1 and 2 replicated the repetition-based truth effect. However, syntactic complexity did not affect truth judgments although complex statements were processed slower than simple statements. This null effect is surprising given that both studies had high statistical power and varied in the relative salience of syntactic complexity. Experiment 3 provides a preregistered test of the discounting explanation by using improved trivia statements of equal length and by manipulating the salience of complexity in a randomized design. As predicted by fluency theories, simple statements were more likely judged as true than complex ones, while this effect was small and not moderated by the salience of complexity.

    @article{schmidt2024relevance,
    title = {The Relevance of Syntactic Complexity for Truth Judgments: {{A}} Registered Report},
    author = {Schmidt, Oliver and Heck, Daniel W},
    date = {2024},
    journaltitle = {Consciousness and Cognition},
    volume = {117},
    pages = {103623},
    doi = {10.1016/j.concog.2023.103623},
    abstract = {Fluency theories predict higher truth judgments for easily processed statements. We investigated two factors relevant for processing fluency: repetition and syntactic complexity. In three online experiments, we manipulated syntactic complexity by creating simple and complex versions of trivia statements. Experiments 1 and 2 replicated the repetition-based truth effect. However, syntactic complexity did not affect truth judgments although complex statements were processed slower than simple statements. This null effect is surprising given that both studies had high statistical power and varied in the relative salience of syntactic complexity. Experiment 3 provides a preregistered test of the discounting explanation by using improved trivia statements of equal length and by manipulating the salience of complexity in a randomized design. As predicted by fluency theories, simple statements were more likely judged as true than complex ones, while this effect was small and not moderated by the salience of complexity.},
    osf = {https://osf.io/vp2nu}
    }

  • [PDF] Schnuerch, M., Heck, D. W., & Erdfelder, E. (2024). Waldian t tests: Sequential Bayesian t tests with controlled error probabilities. Psychological Methods, 29, 99–116. https://doi.org/10.1037/met0000492
    [Abstract] [BibTeX] [Preprint] [Data & R Scripts]

    Bayesian t tests have become increasingly popular alternatives to null-hypothesis significance testing (NHST) in psychological research. In contrast to NHST, they allow for the quantification of evidence in favor of the null hypothesis and for optional stopping. A major drawback of Bayesian t tests, however, is that error probabilities of statistical decisions remain uncontrolled. Previous approaches in the literature to remedy this problem require time-consuming simulations to calibrate decision thresholds. In this article, we propose a sequential probability ratio test that combines Bayesian t tests with simple decision criteria developed by Abraham Wald in 1947. We discuss this sequential procedure, which we call Waldian t test, in the context of three recently proposed specifications of Bayesian t tests. Waldian t tests preserve the key idea of Bayesian t tests by assuming a distribution for the effect size under the alternative hypothesis. At the same time, they control expected frequentist error probabilities, with the nominal Type I and Type II error probabilities serving as upper bounds to the actual expected error rates under the specified statistical models. Thus, Waldian t tests are fully justified from both a Bayesian and a frequentist point of view. We highlight the relationship between Bayesian and frequentist error probabilities and critically discuss the implications of conventional stopping criteria for sequential Bayesian t tests. Finally, we provide a user-friendly web application that implements the proposed procedure for interested researchers.

    @article{schnuerch2024waldian,
    title = {Waldian t Tests: {{Sequential Bayesian}} t Tests with Controlled Error Probabilities},
    author = {Schnuerch, Martin and Heck, Daniel W and Erdfelder, Edgar},
    date = {2024},
    journaltitle = {Psychological Methods},
    volume = {29},
    pages = {99--116},
    doi = {10.1037/met0000492},
    url = {https://psyarxiv.com/x4ybm/},
    abstract = {Bayesian t tests have become increasingly popular alternatives to null-hypothesis significance testing (NHST) in psychological research. In contrast to NHST, they allow for the quantification of evidence in favor of the null hypothesis and for optional stopping. A major drawback of Bayesian t tests, however, is that error probabilities of statistical decisions remain uncontrolled. Previous approaches in the literature to remedy this problem require time-consuming simulations to calibrate decision thresholds. In this article, we propose a sequential probability ratio test that combines Bayesian t tests with simple decision criteria developed by Abraham Wald in 1947. We discuss this sequential procedure, which we call Waldian t test, in the context of three recently proposed specifications of Bayesian t tests. Waldian t tests preserve the key idea of Bayesian t tests by assuming a distribution for the effect size under the alternative hypothesis. At the same time, they control expected frequentist error probabilities, with the nominal Type I and Type II error probabilities serving as upper bounds to the actual expected error rates under the specified statistical models. Thus, Waldian t tests are fully justified from both a Bayesian and a frequentist point of view. We highlight the relationship between Bayesian and frequentist error probabilities and critically discuss the implications of conventional stopping criteria for sequential Bayesian t tests. Finally, we provide a user-friendly web application that implements the proposed procedure for interested researchers.},
    osf = {https://osf.io/z5vsy}
    }
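
    A minimal sketch of the sequential procedure, assuming a one-sample design and the default prior of the BayesFactor package: the Bayes factor is monitored after each observation and compared against Wald's thresholds, whose nominal error probabilities serve as upper bounds to the expected error rates.

    library(BayesFactor)
    alpha <- .05; beta <- .05
    A <- (1 - beta) / alpha   # upper threshold: stop and decide for H1
    B <- beta / (1 - alpha)   # lower threshold: stop and decide for H0

    set.seed(1)
    x <- numeric(0)
    repeat {
      x <- c(x, rnorm(1, mean = 0.5))         # stream of observations
      if (length(x) < 4) next                 # collect a few data points first
      bf <- extractBF(ttestBF(x, mu = 0))$bf  # one-sample Bayes factor
      if (bf >= A || bf <= B) break
    }
    c(n = length(x), bf = bf)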

  • [PDF] Siepe, B. S., Kloft, M., & Heck, D. W. (in press). Bayesian estimation and comparison of idiographic network models. Psychological Methods. https://psyarxiv.com/uwfjc/
    [Abstract] [BibTeX] [Data & R Scripts] [GitHub]

    Idiographic network models are estimated on time-series data of a single individual and allow researchers to investigate person-specific associations between multiple variables over time. The most common approach for fitting graphical vector autoregressive (GVAR) models uses LASSO regularization to estimate a contemporaneous and a temporal network. However, estimation of idiographic networks can be unstable in relatively small data sets typical for psychological research. This bears the risk of misinterpreting differences in estimated networks as spurious heterogeneity between individuals. As a remedy, we evaluate the performance of a Bayesian alternative for fitting GVAR models that allows for regularization of parameters while accounting for estimation uncertainty. We also develop a novel test, implemented in the tsnet package in R, which assesses whether differences between estimated networks are reliable based on matrix norms. We first compare Bayesian and LASSO approaches across a range of conditions in a simulation study. Overall, LASSO estimation performs well, while a Bayesian GVAR without edge selection may perform better when the true network is dense. In an additional simulation study, the novel test is conservative and shows good false-positive rates. Finally, we apply Bayesian estimation and testing in an empirical example using daily data on clinical symptoms for 40 individuals. We additionally provide functionality to estimate Bayesian GVAR models in Stan within tsnet. Overall, Bayesian GVAR modelling facilitates the assessment of estimation uncertainty which is important for studying inter-individual differences of intra-individual dynamics. In doing so, the novel test serves as a safeguard against premature conclusions of heterogeneity.

    @article{siepe2024bayesian,
    title = {Bayesian Estimation and Comparison of Idiographic Network Models},
    author = {Siepe, Björn S. and Kloft, Matthias and Heck, Daniel W},
    date = {2024},
    journaltitle = {Psychological Methods},
    url = {https://psyarxiv.com/uwfjc/},
    abstract = {Idiographic network models are estimated on time-series data of a single individual and allow researchers to investigate person-specific associations between multiple variables over time. The most common approach for fitting graphical vector autoregressive (GVAR) models uses LASSO regularization to estimate a contemporaneous and a temporal network. However, estimation of idiographic networks can be unstable in relatively small data sets typical for psychological research. This bears the risk of misinterpreting differences in estimated networks as spurious heterogeneity between individuals. As a remedy, we evaluate the performance of a Bayesian alternative for fitting GVAR models that allows for regularization of parameters while accounting for estimation uncertainty. We also develop a novel test, implemented in the tsnet package in R, which assesses whether differences between estimated networks are reliable based on matrix norms. We first compare Bayesian and LASSO approaches across a range of conditions in a simulation study. Overall, LASSO estimation performs well, while a Bayesian GVAR without edge selection may perform better when the true network is dense. In an additional simulation study, the novel test is conservative and shows good false-positive rates. Finally, we apply Bayesian estimation and testing in an empirical example using daily data on clinical symptoms for 40 individuals. We additionally provide functionality to estimate Bayesian GVAR models in Stan within tsnet. Overall, Bayesian GVAR modelling facilitates the assessment of estimation uncertainty which is important for studying inter-individual differences of intra-individual dynamics. In doing so, the novel test serves as a safeguard against premature conclusions of heterogeneity.},
    github = {https://github.com/bsiepe/tsnet},
    osf = {https://osf.io/9byaj},
    pubstate = {inpress},
    keywords = {Bayesian estimation,dynamic network,idiographic,network analysis,Quantitative Methods,Quantitative Psychology,Social and Behavioral Sciences,Statistical Methods,Time series analysis}
    }
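
    For readers unfamiliar with GVAR models, the base-R sketch below simulates a small GVAR(1) process and recovers the temporal network with unregularized node-wise regressions; it only illustrates the data structure, not the Bayesian estimation or the norm-based test implemented in tsnet.

    # Simulate a 3-node GVAR(1): temporal network beta, innovation covariance sigma
    set.seed(1)
    p <- 3; n <- 200
    beta  <- matrix(c(.3, .1,  0,
                       0, .3, .1,
                      .1,  0, .3), p, p, byrow = TRUE)
    sigma <- diag(p); sigma[1, 2] <- sigma[2, 1] <- .3

    x <- matrix(0, n, p)
    for (t in 2:n)
      x[t, ] <- beta %*% x[t - 1, ] + MASS::mvrnorm(1, rep(0, p), sigma)

    # Temporal network estimate: one lagged regression per node
    b_hat <- sapply(1:p, function(j) coef(lm(x[-1, j] ~ x[-n, ] - 1)))
    round(t(b_hat), 2)  # rows: outcome node; columns: lagged predictors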

  • [PDF] Singmann, H., Heck, D. W., Barth, M., Erdfelder, E., Arnold, N. R., Aust, F., Calanchini, J., Gümüsdagli, F. E., Horn, S. S., Kellen, D., Klauer, K. C., Matzke, D., Meissner, F., Michalkiewicz, M., Schaper, M. L., Stahl, C., Kuhlmann, B. G., & Groß, J. (in press). Evaluating the robustness of parameter estimates in cognitive models: A meta-analytic review of multinomial processing tree models across the multiverse of estimation methods. Psychological Bulletin. https://osf.io/preprints/psyarxiv/sd4xp
    [Abstract] [BibTeX] [Data & R Scripts]

    Researchers have become increasingly aware that data-analysis decisions affect results. Here, we examine this issue systematically for multinomial processing tree (MPT) models, a popular class of cognitive models for categorical data. Specifically, we examine the robustness of MPT model parameter estimates that arise from two important decisions: the level of data aggregation (complete pooling, no pooling, or partial pooling) and the statistical framework (frequentist or Bayesian). These decisions span a multiverse of estimation methods. We synthesized the data from 13,956 participants (164 published data sets) with a meta-analytic strategy and analyzed the magnitude of divergence between estimation methods for the parameters of nine popular multinomial processing tree (MPT) models in psychology (e.g., process dissociation, source monitoring). We further examined moderators as potential sources of divergence. We found that the absolute divergence between estimation methods was small on average (< .04; with MPT parameters ranging between 0 and 1); in some cases, however, divergence amounted to nearly the maximum possible range (.97). Divergence was partly explained by few moderators (e.g., the specific MPT model parameter, uncertainty in parameter estimation), but not by other plausible candidate moderators (e.g., parameter trade-offs, parameter correlations) or their interactions. Partial-pooling methods showed the smallest divergence within and across levels of pooling and thus seem to be an appropriate default method. Using MPT models as an example, we show how transparency and robustness can be increased in the field of cognitive modeling.

    @article{singmann2024evaluating,
    title = {Evaluating the Robustness of Parameter Estimates in Cognitive Models: {{A}} Meta-Analytic Review of Multinomial Processing Tree Models across the Multiverse of Estimation Methods},
    author = {Singmann, Henrik and Heck, Daniel W and Barth, Marius and Erdfelder, Edgar and Arnold, Nina R. and Aust, Frederik and Calanchini, Jimmy and Gümüsdagli, F E and Horn, Sebastian S. and Kellen, David and Klauer, Karl C. and Matzke, Dora and Meissner, Franziska and Michalkiewicz, Martha and Schaper, Marie Luisa and Stahl, Christoph and Kuhlmann, Beatrice G. and Groß, Julia},
    date = {2024},
    journaltitle = {Psychological Bulletin},
    url = {https://osf.io/preprints/psyarxiv/sd4xp},
    abstract = {Researchers have become increasingly aware that data-analysis decisions affect results. Here, we examine this issue systematically for multinomial processing tree (MPT) models, a popular class of cognitive models for categorical data. Specifically, we examine the robustness of MPT model parameter estimates that arise from two important decisions: the level of data aggregation (complete pooling, no pooling, or partial pooling) and the statistical framework (frequentist or Bayesian). These decisions span a multiverse of estimation methods. We synthesized the data from 13,956 participants (164 published data sets) with a meta-analytic strategy and analyzed the magnitude of divergence between estimation methods for the parameters of nine popular multinomial processing tree (MPT) models in psychology (e.g., process dissociation, source monitoring). We further examined moderators as potential sources of divergence. We found that the absolute divergence between estimation methods was small on average (\< .04; with MPT parameters ranging between 0 and 1); in some cases, however, divergence amounted to nearly the maximum possible range (.97). Divergence was partly explained by few moderators (e.g., the specific MPT model parameter, uncertainty in parameter estimation), but not by other plausible candidate moderators (e.g., parameter trade-offs, parameter correlations) or their interactions. Partial-pooling methods showed the smallest divergence within and across levels of pooling and thus seem to be an appropriate default method. Using MPT models as an example, we show how transparency and robustness can be increased in the field of cognitive modeling.},
    osf = {https://osf.io/waen6},
    pubstate = {inpress}
    }
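
    As a minimal illustration of a single cell of this multiverse (complete pooling, frequentist estimation), the R sketch below fits a simplified two-high-threshold recognition model, a classic MPT, to aggregated response frequencies by maximum likelihood; the frequencies are invented.

    # Negative log-likelihood of a simplified two-high-threshold MPT model
    nll <- function(par, hits, misses, fas, crs) {
      d <- plogis(par[1]); g <- plogis(par[2])  # logit-scale parameters
      p_hit <- d + (1 - d) * g                  # old item: detect or guess "old"
      p_fa  <- (1 - d) * g                      # new item: miss detection, guess "old"
      -(hits * log(p_hit) + misses * log(1 - p_hit) +
          fas * log(p_fa) +   crs  * log(1 - p_fa))
    }
    fit <- optim(c(0, 0), nll, hits = 75, misses = 25, fas = 20, crs = 80)
    plogis(fit$par)  # estimates of d (detection) and g (guessing)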

  • [PDF] Thielmann, I., Hilbig, B. E., Klein, S. A., Seidl, A., & Heck, D. W. (in press). Cheating to benefit others? On the relation between Honesty-Humility and prosocial lies. Journal of Personality. https://doi.org/10.1111/jopy.12835
    [Abstract] [BibTeX] [Data & R Scripts]

    Objective: Among basic personality traits, Honesty-Humility yields the most consistent, negative link with dishonest behavior. The theoretical conceptualization of Honesty-Humility, however, suggests a potential boundary condition of this relation, namely, when lying is prosocial. We therefore tested the hypothesis that the association between Honesty-Humility and dishonesty weakens once lying benefits someone else, particularly so if this other is needy. Methods: In two online studies (Study 1: N = 775 in Germany; Study 2: N = 737 in the UK, preregistered), we measured self-reported Honesty-Humility and dishonest behavior in incentivized cheating paradigms in which the beneficiary of participants’ dishonesty was either the participants themselves, a “non-needy” other (e.g., another participant), or a “needy” other (e.g., a charity). Results: We found support for the robustness of the negative association between Honesty-Humility and dishonesty, even if lying was prosocial. Conclusion: Individuals high in Honesty-Humility largely prioritize honesty, even if there is a strong moral imperative to lie; those low in Honesty-Humility, by contrast, tend to lie habitually and thus even if they themselves do not directly profit monetarily. This suggests that (un)truthfulness may be an absolute rather than a relative aspect of Honesty-Humility, although further systematic tests of this proposition are needed.

    @article{thielmann2024cheating,
    title = {Cheating to Benefit Others? {{On}} the Relation between {{Honesty-Humility}} and Prosocial Lies},
    author = {Thielmann, Isabel and Hilbig, Benjamin E. and Klein, Sina A and Seidl, Alicia and Heck, Daniel W},
    date = {2024},
    journaltitle = {Journal of Personality},
    doi = {10.1111/jopy.12835},
    abstract = {Objective: Among basic personality traits, Honesty-Humility yields the most consistent, negative link with dishonest behavior. The theoretical conceptualization of Honesty-Humility, however, suggests a potential boundary condition of this relation, namely, when lying is prosocial. We therefore tested the hypothesis that the association between Honesty-Humility and dishonesty weakens once lying benefits someone else, particularly so if this other is needy. Methods: In two online studies (Study 1: N = 775 in Germany; Study 2: N = 737 in the UK, preregistered), we measured self-reported Honesty-Humility and dishonest behavior in incentivized cheating paradigms in which the beneficiary of participants’ dishonesty was either the participants themselves, a “non-needy” other (e.g., another participant), or a “needy” other (e.g., a charity). Results: We found support for the robustness of the negative association between Honesty-Humility and dishonesty, even if lying was prosocial. Conclusion: Individuals high in Honesty-Humility largely prioritize honesty, even if there is a strong moral imperative to lie; those low in Honesty-Humility, by contrast, tend to lie habitually and thus even if they themselves do not directly profit monetarily. This suggests that (un)truthfulness may be an absolute rather than a relative aspect of Honesty-Humility, although further systematic tests of this proposition are needed.},
    osf = {https://osf.io/g8bqh},
    pubstate = {inpress}
    }

2023

  • [PDF] Heck, D. W., & Bockting, F. (2023). Benefits of Bayesian model averaging for mixed-effects modeling. Computational Brain & Behavior, 6, 35–49. https://doi.org/10.1007/s42113-021-00118-x
    [Abstract] [BibTeX] [Preprint] [Data & R Scripts]

    Bayes factors allow researchers to test the effects of experimental manipulations in within-subjects designs using mixed-effects models. van Doorn et al. (2021) showed that such hypothesis tests can be performed by comparing different pairs of models which vary in the specification of the fixed- and random-effect structure for the within-subjects factor. To discuss the question of which model comparison is most appropriate, van Doorn et al. compared three corresponding Bayes factors using a case study. We argue that researchers should not only focus on pairwise comparisons of two nested models but rather use Bayesian model selection for the direct comparison of a larger set of mixed models reflecting different auxiliary assumptions regarding the heterogeneity of effect sizes across individuals. In a standard one-factorial, repeated-measures design, the comparison should include four mixed-effects models: fixed-effects H0, fixed-effects H1, random-effects H0, and random-effects H1. Thereby, one can test both the average effect of condition and the heterogeneity of effect sizes across individuals. Bayesian model averaging provides an inclusion Bayes factor which quantifies the evidence for or against the presence of an average effect of condition while taking model-selection uncertainty about the heterogeneity of individual effects into account. We present a simulation study showing that model averaging among a larger set of mixed models performs well in recovering the true, data-generating model.

    @article{heck2023benefits,
    title = {Benefits of {{Bayesian}} Model Averaging for Mixed-Effects Modeling},
    author = {Heck, Daniel W and Bockting, Florence},
    date = {2023},
    journaltitle = {Computational Brain \& Behavior},
    volume = {6},
    pages = {35--49},
    doi = {10.1007/s42113-021-00118-x},
    url = {https://psyarxiv.com/zusd2},
    abstract = {Bayes factors allow researchers to test the effects of experimental manipulations in within-subjects designs using mixed-effects models. van Doorn et al. (2021) showed that such hypothesis tests can be performed by comparing different pairs of models which vary in the specification of the fixed- and random-effect structure for the within-subjects factor. To discuss the question of which model comparison is most appropriate, van Doorn et al. compared three corresponding Bayes factors using a case study. We argue that researchers should not only focus on pairwise comparisons of two nested models but rather use Bayesian model selection for the direct comparison of a larger set of mixed models reflecting different auxiliary assumptions regarding the heterogeneity of effect sizes across individuals. In a standard one-factorial, repeated-measures design, the comparison should include four mixed-effects models: fixed-effects H0, fixed-effects H1, random-effects H0, and random-effects H1. Thereby, one can test both the average effect of condition and the heterogeneity of effect sizes across individuals. Bayesian model averaging provides an inclusion Bayes factor which quantifies the evidence for or against the presence of an average effect of condition while taking model-selection uncertainty about the heterogeneity of individual effects into account. We present a simulation study showing that model averaging among a larger set of mixed models performs well in recovering the true, data-generating model.},
    osf = {https://osf.io/tavnf}
    }
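
    The inclusion Bayes factor described above can be computed directly from the posterior probabilities of the four mixed-effects models. A sketch with toy Bayes factors (expressed relative to the fixed-effects H0 and assuming equal prior model probabilities):

    bf    <- c(fixed_H0 = 1, fixed_H1 = 8, random_H0 = 2, random_H1 = 12)
    prior <- rep(1 / 4, 4)                  # equal prior model probabilities
    post  <- bf * prior / sum(bf * prior)   # posterior model probabilities

    h1 <- c(FALSE, TRUE, FALSE, TRUE)       # models including the condition effect
    (sum(post[h1]) / sum(post[!h1])) /
      (sum(prior[h1]) / sum(prior[!h1]))    # inclusion Bayes factor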

  • [PDF] Heck, D. W., Boehm, U., Böing-Messing, F., Bürkner, P.-C., Derks, K., Dienes, Z., Fu, Q., Gu, X., Karimova, D., Kiers, H., Klugkist, I., Kuiper, R. M., Lee, M. D., Leenders, R., Leplaa, H. J., Linde, M., Ly, A., Meijerink-Bosman, M., Moerbeek, M., Mulder, J., Palfi, B., Schönbrodt, F., Tendeiro, J., van den Bergh, D., van Lissa, C. J., van Ravenzwaaij, D., Vanpaemel, W., Wagenmakers, E.-J., Williams, D. R., Zondervan-Zwijnenburg, M., & Hoijtink, H. (2023). A review of applications of the Bayes factor in psychological research. Psychological Methods, 28, 558–579. https://doi.org/10.1037/met0000454
    [Abstract] [BibTeX] [Preprint] [Data & R Scripts]

    The last 25 years have shown a steady increase in attention for the Bayes factor as a tool for hypothesis evaluation and model selection. The present review highlights the potential of the Bayes factor in psychological research. We discuss six types of applications: Bayesian evaluation of point null, interval, and informative hypotheses, Bayesian evidence synthesis, Bayesian variable selection and model averaging, and Bayesian evaluation of cognitive models. We elaborate what each application entails, give illustrative examples, and provide an overview of key references and software with links to other applications. The paper is concluded with a discussion of the opportunities and pitfalls of Bayes factor applications and a sketch of corresponding future research lines.

    @article{heck2023review,
    title = {A Review of Applications of the {{Bayes}} Factor in Psychological Research},
    author = {Heck, Daniel W and Boehm, Udo and Böing-Messing, Florian and Bürkner, Paul-Christian and Derks, Koen and Dienes, Zoltan and Fu, Qianrao and Gu, Xin and Karimova, Diana and Kiers, Henk and Klugkist, Irene and Kuiper, Rebecca M. and Lee, Michael D. and Leenders, Roger and Leplaa, Hidde Jelmer and Linde, Maximilian and Ly, Alexander and Meijerink-Bosman, Marlyne and Moerbeek, Mirjam and Mulder, Joris and Palfi, Bence and Schönbrodt, Felix and Tendeiro, Jorge and van den Bergh, Don and van Lissa, Caspar J. and van Ravenzwaaij, Don and Vanpaemel, Wolf and Wagenmakers, Eric-Jan and Williams, Donald R. and Zondervan-Zwijnenburg, Marielle and Hoijtink, Herbert},
    date = {2023},
    journaltitle = {Psychological Methods},
    volume = {28},
    pages = {558--579},
    doi = {10.1037/met0000454},
    url = {https://psyarxiv.com/cu43g},
    abstract = {The last 25 years have shown a steady increase in attention for the Bayes factor as a tool for hypothesis evaluation and model selection. The present review highlights the potential of the Bayes factor in psychological research. We discuss six types of applications: Bayesian evaluation of point null, interval, and informative hypotheses, Bayesian evidence synthesis, Bayesian variable selection and model averaging, and Bayesian evaluation of cognitive models. We elaborate what each application entails, give illustrative examples, and provide an overview of key references and software with links to other applications. The paper is concluded with a discussion of the opportunities and pitfalls of Bayes factor applications and a sketch of corresponding future research lines.},
    osf = {https://osf.io/k9c5q},
    keywords = {Bayes factor,Computational Modeling,evidence,hypothesis testing,informative hypotheses,Meta-science,model selection,Quantitative Methods,Quantitative Psychology,Social and Behavioral Sciences,statistical inference,Statistical Methods,theory evaluation}
    }

  • [PDF] Kloft, M., Hartmann, R., Voss, A., & Heck, D. W. (2023). The Dirichlet dual response model: An item response model for continuous bounded interval responses. Psychometrika, 88, 888–916. https://doi.org/10.1007/s11336-023-09924-7
    [Abstract] [BibTeX] [Preprint] [Data & R Scripts]

    Standard response formats such as rating or visual analogue scales require respondents to condense distributions of latent states or behaviors into a single value. Whereas this is suitable to measure central tendency, it neglects the variance of distributions. As a remedy, variability may be measured using interval-response formats, more specifically the dual-range slider (RS2). Given the lack of an appropriate item response model for the RS2, we develop the Dirichlet dual response model (DDRM), an extension of the beta response model (BRM; Noel & Dauvier, 2007). We evaluate the DDRM’s performance by assessing parameter recovery in a simulation study. Results indicate overall good parameter recovery, although parameters concerning interval width (which reflect variability in behavior or states) perform worse than parameters concerning central tendency. We also test the model empirically by jointly fitting the BRM and the DDRM to single-range slider (RS1) and RS2 responses for two extraversion scales. While the DDRM has an acceptable fit, it shows some misfit regarding the RS2 interval widths. Nonetheless, the model indicates substantial differences between respondents concerning variability in behavior. High correlations between person parameters of the BRM and DDRM suggest convergent validity between the RS1 and the RS2 interval location. Both the simulation and the empirical study demonstrate that the latent parameter space of the DDRM addresses an important issue of the RS2 response format, namely, the scale-inherent interdependence of interval location and interval width (i.e., intervals at the boundaries are necessarily smaller).

    @article{kloft2023dirichlet,
    title = {The {{Dirichlet}} Dual Response Model: {{An}} Item Response Model for Continuous Bounded Interval Responses},
    author = {Kloft, Matthias and Hartmann, Raphael and Voss, Andreas and Heck, Daniel W},
    date = {2023},
    journaltitle = {Psychometrika},
    volume = {88},
    pages = {888--916},
    doi = {10.1007/s11336-023-09924-7},
    url = {https://psyarxiv.com/h4f8a/},
    abstract = {Standard response formats such as rating or visual analogue scales require respondents to condense distributions of latent states or behaviors into a single value. Whereas this is suitable to measure central tendency, it neglects the variance of distributions. As a remedy, variability may be measured using interval-response formats, more specifically the dual-range slider (RS2). Given the lack of an appropriate item response model for the RS2, we develop the Dirichlet dual response model (DDRM), an extension of the beta response model (BRM; Noel \& Dauvier, 2007). We evaluate the DDRM’s performance by assessing parameter recovery in a simulation study. Results indicate overall good parameter recovery, although parameters concerning interval width (which reflect variability in behavior or states) perform worse than parameters concerning central tendency. We also test the model empirically by jointly fitting the BRM and the DDRM to single-range slider (RS1) and RS2 responses for two extraversion scales. While the DDRM has an acceptable fit, it shows some misfit regarding the RS2 interval widths. Nonetheless, the model indicates substantial differences between respondents concerning variability in behavior. High correlations between person parameters of the BRM and DDRM suggest convergent validity between the RS1 and the RS2 interval location. Both the simulation and the empirical study demonstrate that the latent parameter space of the DDRM addresses an important issue of the RS2 response format, namely, the scale-inherent interdependence of interval location and interval width (i.e., intervals at the boundaries are necessarily smaller).},
    osf = {https://osf.io/br8fa}
    }
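
    The Dirichlet mechanism at the heart of the DDRM is easy to illustrate: a dual-range slider response splits the bounded scale [0, 1] into three segments (below, inside, and above the marked interval), and these three segments can be drawn jointly from a Dirichlet distribution. A minimal base-R sketch under this assumption (not the authors' implementation; the concentration values are invented for illustration):

    # Draw n RS2 responses: each splits [0, 1] into (below, width, above).
    rdirichlet3 <- function(n, alpha) {
      g <- matrix(rgamma(n * 3, shape = rep(alpha, each = n)), nrow = n)
      g / rowSums(g)                            # normalized gammas ~ Dirichlet
    }
    set.seed(1)
    seg <- rdirichlet3(5, alpha = c(4, 2, 4))   # invented concentrations
    cbind(lower = seg[, 1],                     # lower slider position
          upper = seg[, 1] + seg[, 2],          # upper slider position
          width = seg[, 2])                     # interval width

    Because the three segments must sum to one, intervals located near the scale boundaries are necessarily narrower, which is exactly the scale-inherent interdependence of interval location and width addressed by the model.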

  • [PDF] Laukenmann, R., Erdfelder, E., Heck, D. W., & Moshagen, M. (2023). Cognitive processes underlying the weapon identification task: A comparison of models accounting for both response frequencies and response times. Social Cognition, 41, 137–164. https://doi.org/10.1521/soco.2023.41.2.137
    [Abstract] [BibTeX] [Data & R Scripts]

    The weapon identification task (WIT) is a sequential priming paradigm designed to assess effects of racial priming on visual discrimination between weapons (guns) and innocuous objects (tools). We compare four process models that differ in their assumptions on the nature and interplay of cognitive processes underlying prime-related weapon-bias effects in the WIT. All four models are variants of the process dissociation procedure, a widely used measurement model to disentangle effects of controlled and automatic processes. We formalized these models as response time-extended multinomial processing tree models and applied them to eight data sets. Overall, the default interventionist model (DIM) and the preemptive conflict-resolution model (PCRM) provided good model fit. Both assume fast automatic and slow controlled process routes. Additional comparisons favored the former model. In line with the DIM, we thus conclude that automatically evoked stereotype associations interfere with correct object identification from the outset of each WIT trial.

    @article{laukenmann2023cognitive,
    title = {Cognitive Processes Underlying the Weapon Identification Task: {{A}} Comparison of Models Accounting for Both Response Frequencies and Response Times},
    author = {Laukenmann, Ruben and Erdfelder, Edgar and Heck, Daniel W and Moshagen, Morten},
    date = {2023},
    journaltitle = {Social Cognition},
    volume = {41},
    pages = {137--164},
    doi = {10.1521/soco.2023.41.2.137},
    abstract = {The weapon identification task (WIT) is a sequential priming paradigm designed to assess effects of racial priming on visual discrimination between weapons (guns) and innocuous objects (tools). We compare four process models that differ in their assumptions on the nature and interplay of cognitive processes underlying prime-related weapon-bias effects in the WIT. All four models are variants of the process dissociation procedure, a widely used measurement model to disentangle effects of controlled and automatic processes. We formalized these models as response time-extended multinomial processing tree models and applied them to eight data sets. Overall, the default interventionist model (DIM) and the preemptive conflict-resolution model (PCRM) provided good model fit. Both assume fast automatic and slow controlled process routes. Additional comparisons favored the former model. In line with the DIM, we thus conclude that automatically evoked stereotype associations interfere with correct object identification from the outset of each WIT trial.},
    osf = {https://osf.io/7vjrq}
    }
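
    All four candidate models extend the process dissociation (PD) logic, which in its classic two-equation form (without response times) can be written down directly: on congruent trials a correct response arises from control or, failing that, from automaticity; on incongruent trials automaticity produces errors. A sketch of the textbook estimator with invented proportions, as a backdrop to the response time-extended variants compared in the paper:

    # Classic process dissociation: solve for control (C) and automaticity (A).
    p_correct_congruent <- 0.90   # P(correct | congruent)   = C + (1 - C) * A
    p_error_incongruent <- 0.30   # P(error   | incongruent) = (1 - C) * A
    C <- p_correct_congruent - p_error_incongruent
    A <- p_error_incongruent / (1 - C)
    c(control = C, automaticity = A)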

  • [PDF] Mayer, M., & Heck, D. W. (2023). Cultural consensus theory for two-dimensional location judgments. Journal of Mathematical Psychology, 113, 102742. https://doi.org/10.1016/j.jmp.2022.102742
    [Abstract] [BibTeX] [Preprint] [Data & R Scripts]

    Cultural consensus theory is a model-based approach for analyzing responses of informants when correct answers are unknown. The model provides aggregate estimates of the latent consensus knowledge at the group level while accounting for heterogeneity in informant competence and item difficulty. We develop a new version of cultural consensus theory for two-dimensional continuous judgments which are obtained when asking informants to locate a set of unknown sites on a geographic map. The new model is fitted using hierarchical Bayesian modeling. A simulation study shows satisfactory parameter recovery for realistic numbers of informants and items. We also assess the accuracy of the aggregate location estimates by comparing the new model against simply computing the unweighted average of the informants’ judgments. A simulation study shows that, due to weighing judgments by the inferred competence of the informants, cultural consensus theory provides more accurate location estimates than unweighted averaging. The new model also showed a higher accuracy in an empirical study in which individuals judged the location of 57 European cities on maps.

    @article{mayer2023cultural,
    title = {Cultural Consensus Theory for Two-Dimensional Location Judgments},
    author = {Mayer, Maren and Heck, Daniel W},
    date = {2023},
    journaltitle = {Journal of Mathematical Psychology},
    volume = {113},
    pages = {102742},
    doi = {10.1016/j.jmp.2022.102742},
    url = {https://psyarxiv.com/unhvc/},
    abstract = {Cultural consensus theory is a model-based approach for analyzing responses of informants when correct answers are unknown. The model provides aggregate estimates of the latent consensus knowledge at the group level while accounting for heterogeneity in informant competence and item difficulty. We develop a new version of cultural consensus theory for two-dimensional continuous judgments which are obtained when asking informants to locate a set of unknown sites on a geographic map. The new model is fitted using hierarchical Bayesian modeling. A simulation study shows satisfactory parameter recovery for realistic numbers of informants and items. We also assess the accuracy of the aggregate location estimates by comparing the new model against simply computing the unweighted average of the informants’ judgments. A simulation study shows that, due to weighing judgments by the inferred competence of the informants, cultural consensus theory provides more accurate location estimates than unweighted averaging. The new model also showed a higher accuracy in an empirical study in which individuals judged the location of 57 European cities on maps.},
    osf = {https://osf.io/jbzk7}
    }
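
    The central idea, weighting informants' judgments by their inferred competence, can be previewed without the full hierarchical model: if each informant's two-dimensional judgments scatter around the true location with an informant-specific error variance, a precision-weighted average beats the unweighted mean. A toy sketch (the paper fits the full Bayesian model instead; all values below are invented):

    set.seed(2)
    truth <- c(x = 10, y = 20)                  # true (unknown) site location
    sds   <- c(0.5, 1, 4, 8)                    # informant error SDs
    judg  <- t(sapply(sds, function(s) truth + rnorm(2, 0, s)))
    w     <- (1 / sds^2) / sum(1 / sds^2)       # precision weights ~ competence
    rbind(unweighted = colMeans(judg),          # simple average of judgments
          weighted   = colSums(judg * w),       # competence-weighted average
          truth      = truth)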

  • [PDF] Mayer, M., Broß, M., & Heck, D. W. (2023). Expertise determines frequency and accuracy of contributions in sequential collaboration. Judgment and Decision Making, 18, e2. https://doi.org/10.1017/jdm.2023.3
    [Abstract] [BibTeX] [Preprint] [Data & R Scripts]

    Many collaborative online projects such as Wikipedia and OpenStreetMap organize collaboration among their contributors sequentially. In sequential collaboration, one contributor creates an entry which is then consecutively encountered by other contributors who decide whether to adjust or maintain the presented entry. For numeric and geographical judgments, sequential collaboration yields improved judgments over the course of a sequential chain and results in accurate final estimates. We hypothesize that these benefits emerge since contributors adjust entries according to their expertise, implying that judgments of experts have a larger impact compared with those of novices. In three preregistered studies, we measured and manipulated expertise to investigate whether expertise leads to higher change probabilities and larger improvements in judgment accuracy. Moreover, we tested whether expertise results in an increase in accuracy over the course of a sequential chain. As expected, experts adjusted entries more frequently, made larger improvements, and contributed more to the final estimates of sequential chains. Overall, our findings suggest that the high accuracy of sequential collaboration is due to an implicit weighting of judgments by expertise.

    @article{mayer2023expertise,
    title = {Expertise Determines Frequency and Accuracy of Contributions in Sequential Collaboration},
    author = {Mayer, Maren and Broß, Marcel and Heck, Daniel W},
    date = {2023},
    journaltitle = {Judgment and Decision Making},
    volume = {18},
    pages = {e2},
    doi = {10.1017/jdm.2023.3},
    url = {https://psyarxiv.com/s7vtg/},
    abstract = {Many collaborative online projects such as Wikipedia and OpenStreetMap organize collaboration among their contributors sequentially. In sequential collaboration, one contributor creates an entry which is then consecutively encountered by other contributors who decide whether to adjust or maintain the presented entry. For numeric and geographical judgments, sequential collaboration yields improved judgments over the course of a sequential chain and results in accurate final estimates. We hypothesize that these benefits emerge since contributors adjust entries according to their expertise, implying that judgments of experts have a larger impact compared with those of novices. In three preregistered studies, we measured and manipulated expertise to investigate whether expertise leads to higher change probabilities and larger improvements in judgment accuracy. Moreover, we tested whether expertise results in an increase in accuracy over the course of a sequential chain. As expected, experts adjusted entries more frequently, made larger improvements, and contributed more to the final estimates of sequential chains. Overall, our findings suggest that the high accuracy of sequential collaboration is due to an implicit weighting of judgments by expertise.},
    osf = {https://osf.io/z2cxv}
    }
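
    The proposed mechanism, an implicit weighting of judgments by expertise, can be mimicked in a toy simulation: contributors with higher expertise adjust an entry more often and, when they do, with less error. A sketch with invented numbers, purely to convey the idea of a sequential chain:

    set.seed(3)
    truth <- 100
    final_error <- replicate(2000, {
      expertise <- runif(6)                             # six contributors
      entry <- truth + rnorm(1, 0, 30) * (1 - expertise[1])
      for (e in expertise[-1]) {
        if (runif(1) < e)                               # experts adjust more often
          entry <- truth + rnorm(1, 0, 30) * (1 - e)    # ... and more accurately
      }
      abs(entry - truth)
    })
    mean(final_error)   # final entries lie close to the truth on average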

  • [PDF] Schmidt, O., Erdfelder, E., & Heck, D. W. (2023). How to develop, test, and extend multinomial processing tree models: A tutorial. Psychological Methods, 28, 558–579. https://doi.org/10.1037/met0000561
    [Abstract] [BibTeX] [Preprint] [Data & R Scripts]

    Many psychological theories assume that observable responses are determined by multiple latent processes. Multinomial processing tree (MPT) models are a class of cognitive models for discrete responses that allow researchers to disentangle and measure such processes. Before applying MPT models to specific psychological theories, it is necessary to tailor a model to specific experimental designs. In this tutorial, we explain how to develop, fit, and test MPT models using the classical pair-clustering model as a running example. The first part covers the required data structures, model equations, identifiability, model validation, maximum-likelihood estimation, hypothesis tests, and power analyses using the software multiTree. The second part introduces hierarchical MPT modeling which allows researchers to account for individual differences and to estimate the correlations of latent processes among each other and with additional covariates using the TreeBUGS package in R. All examples including data and annotated analysis scripts are provided at the Open Science Framework (https://osf.io/24pbm/).

    @article{schmidt2023how,
    title = {How to Develop, Test, and Extend Multinomial Processing Tree Models: {{A}} Tutorial},
    author = {Schmidt, Oliver and Erdfelder, Edgar and Heck, Daniel W},
    date = {2023},
    journaltitle = {Psychological Methods},
    volume = {28},
    pages = {558--579},
    doi = {10.1037/met0000561},
    url = {https://psyarxiv.com/gh8md/},
    abstract = {Many psychological theories assume that observable responses are determined by multiple latent processes. Multinomial processing tree (MPT) models are a class of cognitive models for discrete responses that allow researchers to disentangle and measure such processes. Before applying MPT models to specific psychological theories, it is necessary to tailor a model to specific experimental designs. In this tutorial, we explain how to develop, fit, and test MPT models using the classical pair-clustering model as a running example. The first part covers the required data structures, model equations, identifiability, model validation, maximum-likelihood estimation, hypothesis tests, and power analyses using the software multiTree. The second part introduces hierarchical MPT modeling which allows researchers to account for individual differences and to estimate the correlations of latent processes among each other and with additional covariates using the TreeBUGS package in R. All examples including data and annotated analysis scripts are provided at the Open Science Framework (https://osf.io/24pbm/).},
    osf = {https://osf.io/24pbm}
    }
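
    The tutorial's running example, the pair-clustering model, has three parameters (cluster storage c, cluster retrieval r, single-item retrieval u) and four response categories per pair, so a minimal maximum-likelihood fit needs only optim(). A base-R sketch with invented frequencies; the tutorial itself works with multiTree and the TreeBUGS package:

    # Pair clustering (Batchelder & Riefer): E1 = both items recalled adjacently,
    # E2 = both recalled non-adjacently, E3 = exactly one item, E4 = neither.
    probs <- function(theta) {
      c <- theta[1]; r <- theta[2]; u <- theta[3]
      c(E1 = c * r,
        E2 = (1 - c) * u^2,
        E3 = 2 * (1 - c) * u * (1 - u),
        E4 = c * (1 - r) + (1 - c) * (1 - u)^2)
    }
    n   <- c(E1 = 42, E2 = 7, E3 = 21, E4 = 30)         # invented frequencies
    nll <- function(theta) -sum(n * log(probs(theta)))  # multinomial likelihood
    fit <- optim(c(0.5, 0.5, 0.5), nll, method = "L-BFGS-B",
                 lower = rep(0.01, 3), upper = rep(0.99, 3))
    round(fit$par, 2)                                   # estimates of c, r, u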

  • [PDF] van Doorn, J., Haaf, J. M., Stefan, A. M., Wagenmakers, E., Cox, G. E., Davis-Stober, C., Heathcote, A., Heck, D. W., Kalish, M., Kellen, D., Matzke, D., Morey, R. D., Nicenboim, B., van Ravenzwaaij, D., Rouder, J., Schad, D., Shiffrin, R., Singmann, H., Vasishth, S., Veríssimo, J., Bockting, F., Chandramouli, S., Dunn, J. C., Gronau, Q. F., Linde, M., McMullin, S. D., Navarro, D., Schnuerch, M., Yadav, H., & Aust, F. (2023). Bayes factors for mixed models: A discussion. Computational Brain & Behavior, 6, 140–158. https://doi.org/10.1007/s42113-022-00160-3
    [Abstract] [BibTeX] [Preprint]

    van Doorn et al. (2021) outlined various questions that arise when conducting Bayesian model comparison for mixed effects models. Seven response articles offered their own perspective on the preferred setup for mixed model comparison, on the most appropriate specification of prior distributions, and on the desirability of default recommendations. This article presents a round-table discussion that aims to clarify outstanding issues, explore common ground, and outline practical considerations for any researcher wishing to conduct a Bayesian mixed effects model comparison.

    @article{vandoorn2023bayes,
    title = {Bayes Factors for Mixed Models: {{A}} Discussion},
    author = {van Doorn, Johnny and Haaf, Julia M. and Stefan, Angelika M. and Wagenmakers, Eric-Jan and Cox, Gregory Edward and Davis-Stober, Clintin and Heathcote, Andrew and Heck, Daniel W and Kalish, Michael and Kellen, David and Matzke, Dora and Morey, Richard D. and Nicenboim, Bruno and van Ravenzwaaij, Don and Rouder, Jeffrey and Schad, Daniel and Shiffrin, Richard and Singmann, Henrik and Vasishth, Shravan and Veríssimo, João and Bockting, Florence and Chandramouli, Suyog and Dunn, John C. and Gronau, Quentin Frederik and Linde, Maximilian and McMullin, Sara D. and Navarro, Danielle and Schnuerch, Martin and Yadav, Himanshu and Aust, Frederik},
    options = {useprefix=true},
    date = {2023},
    journaltitle = {Computational Brain \& Behavior},
    volume = {6},
    pages = {140--158},
    doi = {10.1007/s42113-022-00160-3},
    url = {https://psyarxiv.com/yjs95/},
    urldate = {2022-10-18},
    abstract = {van Doorn et al. (2021) outlined various questions that arise when conducting Bayesian model comparison for mixed effects models. Seven response articles offered their own perspective on the preferred setup for mixed model comparison, on the most appropriate specification of prior distributions, and on the desirability of default recommendations. This article presents a round-table discussion that aims to clarify outstanding issues, explore common ground, and outline practical considerations for any researcher wishing to conduct a Bayesian mixed effects model comparison.},
    langid = {american},
    keywords = {Bayes factor,Mixed effects model,Quantitative Methods,Social and Behavioral Sciences,Statistical Methods}
    }

2022

  • [PDF] Kaufmann, T. H., Lilleholt, L., Böhm, R., Zettler, I., & Heck, D. W. (2022). Sensitive attitudes and adherence to recommendations during the COVID-19 pandemic: Comparing direct and indirect questioning techniques. Personality and Individual Differences, 190, 111525. https://doi.org/10.1016/j.paid.2022.111525
    [Abstract] [BibTeX] [Preprint] [Data & R Scripts]

    During the COVID-19 pandemic, various behavioral measures were imposed to curb the spread of the virus. In a preregistered study based on a quota-representative sample of adult Danish citizens (N = 1,031), we compared the prevalence estimates of self-reported handwashing, physical distancing, and attitudes toward the behavioral measures between people surveyed with a direct and an indirect questioning approach (i.e., the crosswise model). Moreover, we investigated two possible predictors of sensitive behaviors and attitudes, namely empathy for people vulnerable to the virus and Honesty-Humility from the HEXACO Model of Personality. We also examined the interaction of both predictors with the questioning format. Survey participants reported more violations of guidelines regarding handwashing and physical distancing when asked indirectly rather than directly, whereas attitudes regarding the behavioral measures did not differ between the two questioning formats. Respondents with less empathy for people vulnerable to COVID-19 reported more violations of handwashing and physical distancing, and those low on Honesty-Humility reported more violations of physical distancing.

    @article{kaufmann2022sensitive,
    title = {Sensitive Attitudes and Adherence to Recommendations during the {{COVID-19}} Pandemic: {{Comparing}} Direct and Indirect Questioning Techniques},
    author = {Kaufmann, Tabea Hanna and Lilleholt, Lau and Böhm, Robert and Zettler, Ingo and Heck, Daniel W},
    date = {2022},
    journaltitle = {Personality and Individual Differences},
    volume = {190},
    pages = {111525},
    doi = {10.1016/j.paid.2022.111525},
    url = {https://psyarxiv.com/tp6ja},
    abstract = {During the COVID-19 pandemic, various behavioral measures were imposed to curb the spread of the virus. In a preregistered study based on a quota-representative sample of adult Danish citizens (N = 1,031), we compared the prevalence estimates of self-reported handwashing, physical distancing, and attitudes toward the behavioral measures between people surveyed with a direct and an indirect questioning approach (i.e., the crosswise model). Moreover, we investigated two possible predictors of sensitive behaviors and attitudes, namely empathy for people vulnerable to the virus and Honesty-Humility from the HEXACO Model of Personality. We also examined the interaction of both predictors with the questioning format. Survey participants reported more violations of guidelines regarding handwashing and physical distancing when asked indirectly rather than directly, whereas attitudes regarding the behavioral measures did not differ between the two questioning formats. Respondents with less empathy for people vulnerable to COVID-19 reported more violations of handwashing and physical distancing, and those low on Honesty-Humility reported more violations of physical distancing.},
    osf = {https://osf.io/m6kdy}
    }
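
    The crosswise model used here never asks the sensitive question directly: respondents report only whether their answers to the sensitive question and to an innocuous question with known prevalence p are the same or different. A sketch of the standard prevalence estimator; the observed proportion and sample size below are invented:

    # Crosswise model: P("same") = pi * p + (1 - pi) * (1 - p)
    p      <- 0.25    # known prevalence of the innocuous attribute
    p_same <- 0.65    # observed proportion of "same" responses (invented)
    n      <- 1000    # number of respondents (invented)
    pi_hat <- (p_same + p - 1) / (2 * p - 1)             # prevalence estimate
    se_hat <- sqrt(p_same * (1 - p_same) / n) / abs(2 * p - 1)
    c(estimate = pi_hat, se = se_hat)                    # 0.20 (SE ~ 0.03)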

  • [PDF] Malejka, S., Heck, D. W., & Erdfelder, E. (2022). Recognition-memory models and ranking tasks: The importance of auxiliary assumptions for tests of the two-high-threshold model. Journal of Memory and Language, 127, 104356. https://doi.org/10.1016/j.jml.2022.104356
    [Abstract] [BibTeX] [Data & R Scripts]

    The question of whether recognition memory should be measured assuming continuous memory strength (signal detection theory) or discrete memory states (threshold theory) has become a prominent point of discussion. In light of limitations associated with receiver operating characteristics, comparisons of the rival models based on simple qualitative predictions derived from their core properties were proposed. In particular, K-alternative ranking tasks (KARTs) yield a conditional probability of targets being assigned Rank 2, given that they were not assigned Rank 1, which is higher for strong than for weak targets. This finding has been argued to be incompatible with the two-high-threshold (2HT) model (Kellen & Klauer, 2014). However, we show that the incompatibility only holds under the auxiliary assumption that the probability of detecting lures is invariant under target-strength manipulations. We tested this assumption in two different ways: by developing new model versions of 2HT theory tailored to KARTs and by employing novel forced-choice-then-ranking tasks. Our results show that 2HT models can explain increases in the conditional probability of targets being assigned Rank 2 with target strength. This effect is due to larger 2HT lure-detection probabilities in test displays in which lures are ranked jointly with strong (as compared to weak) targets. We conclude that lure-detection probabilities vary with target strength and recommend that 2HT models should allow for this variation. As such models are compatible with KART performance, our work highlights the importance of carefully adapting measurement models to new paradigms.

    @article{malejka2022recognitionmemory,
    title = {Recognition-Memory Models and Ranking Tasks: {{The}} Importance of Auxiliary Assumptions for Tests of the Two-High-Threshold Model},
    author = {Malejka, Simone and Heck, Daniel W and Erdfelder, Edgar},
    date = {2022},
    journaltitle = {Journal of Memory and Language},
    volume = {127},
    pages = {104356},
    doi = {10.1016/j.jml.2022.104356},
    abstract = {The question of whether recognition memory should be measured assuming continuous memory strength (signal detection theory) or discrete memory states (threshold theory) has become a prominent point of discussion. In light of limitations associated with receiver operating characteristics, comparisons of the rival models based on simple qualitative predictions derived from their core properties were proposed. In particular, K-alternative ranking tasks (KARTs) yield a conditional probability of targets being assigned Rank 2, given that they were not assigned Rank 1, which is higher for strong than for weak targets. This finding has been argued to be incompatible with the two-high-threshold (2HT) model (Kellen \& Klauer, 2014). However, we show that the incompatibility only holds under the auxiliary assumption that the probability of detecting lures is invariant under target-strength manipulations. We tested this assumption in two different ways: by developing new model versions of 2HT theory tailored to KARTs and by employing novel forced-choice-then-ranking tasks. Our results show that 2HT models can explain increases in the conditional probability of targets being assigned Rank 2 with target strength. This effect is due to larger 2HT lure-detection probabilities in test displays in which lures are ranked jointly with strong (as compared to weak) targets. We conclude that lure-detection probabilities vary with target strength and recommend that 2HT models should allow for this variation. As such models are compatible with KART performance, our work highlights the importance of carefully adapting measurement models to new paradigms.},
    osf = {https://osf.io/ca8dp}
    }
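
    The role of the auxiliary assumption can be made concrete in a toy simulation of a 2HT account of ranking: a detected target receives Rank 1, detected lures are pushed to the bottom ranks, and all undetected items are ranked randomly in between. If lure detection is fixed at zero, P(Rank 2 | not Rank 1) stays at 1/(K−1) regardless of target strength; letting lure detection rise with target strength produces the effect discussed above. A sketch under these simplified assumptions (parameter values invented):

    # Toy 2HT ranking task with K alternatives (1 target, K - 1 lures).
    sim_rank <- function(dt, dl, K = 4, reps = 2e4) {
      ranks <- replicate(reps, {
        if (runif(1) < dt) {
          1                                   # detected target receives Rank 1
        } else {
          n_det <- rbinom(1, K - 1, dl)       # detected lures go to the bottom
          sample(K - n_det, 1)                # random rank among undetected items
        }
      })
      mean(ranks == 2) / mean(ranks != 1)     # P(Rank 2 | not Rank 1)
    }
    set.seed(4)
    c(weak   = sim_rank(dt = 0.3, dl = 0),    # exactly 1/3 without lure detection
      strong = sim_rank(dt = 0.7, dl = 0.4))  # clearly above 1/3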

2021

  • [PDF] Bröder, A., Platzer, C., & Heck, D. W. (2021). Salience effects in memory-based decisions: An improved replication. Journal of Cognitive Psychology, 33, 64–76. https://doi.org/10.1080/20445911.2020.1869752
    [Abstract] [BibTeX] [Data & R Scripts]

    A brief experimental report by Platzer and Bröder (2012) claimed that in memory-based decisions, salient attributes or cues are often not ignored even if they are less valid than other cues. When the rank order of cue validities was congruent with their salience hierarchy, people predominantly used a noncompensatory take-the-best strategy (TTB) based on the most valid cue that was also most salient, whereas they used more compensatory strategies when hierarchies were incongruent (i.e. the least valid cue was most salient). Given the recent replication crisis in psychology and methodological shortcomings of the original study, a better-controlled replication with new stimuli and a larger sample was conducted. Two different tasks in a pilot study established convergent evidence for an unequivocal visual salience hierarchy of the cues used. The main experiment clearly replicated the salience effect at the strategy selection level and the longer response times for compensatory strategies compared to TTB. A response time interaction of strategy and condition did not replicate.

    @article{broder2021salience,
    title = {Salience Effects in Memory-Based Decisions: {{An}} Improved Replication},
    author = {Bröder, Arndt and Platzer, Christine and Heck, Daniel W},
    date = {2021},
    journaltitle = {Journal of Cognitive Psychology},
    volume = {33},
    pages = {64--76},
    doi = {10.1080/20445911.2020.1869752},
    abstract = {A brief experimental report by Platzer and Bröder (2012) claimed that in memory-based decisions, salient attributes or cues are often not ignored even if they are less valid than other cues. When the rank order of cue validities was congruent with their salience hierarchy, people predominantly used a noncompensatory take-the-best strategy (TTB) based on the most valid cue that was also most salient, whereas they used more compensatory strategies when hierarchies were incongruent (i.e. the least valid cue was most salient). Given the recent replication crisis in psychology and methodological shortcomings of the original study, a better-controlled replication with new stimuli and a larger sample was conducted. Two different tasks in a pilot study established convergent evidence for an unequivocal visual salience hierarchy of the cues used. The main experiment clearly replicated the salience effect at the strategy selection level and the longer response times for compensatory strategies compared to TTB. A response time interaction of strategy and condition did not replicate.},
    osf = {https://osf.io/gpsuj}
    }
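
    The take-the-best strategy at issue is itself a simple algorithm: inspect the cues in order of their validity and choose the option favored by the first cue that discriminates, ignoring all remaining cues. A sketch with invented cue patterns (1 = positive, 0 = negative; cues already ordered by validity):

    # Take-the-best: the first discriminating cue decides.
    ttb <- function(a, b) {
      for (i in seq_along(a)) {
        if (a[i] != b[i]) return(if (a[i] > b[i]) "A" else "B")
      }
      "guess"                               # no cue discriminates
    }
    ttb(a = c(1, 0, 1), b = c(1, 1, 0))     # cue 2 decides: "B"

    A compensatory strategy would instead integrate all cues before deciding, which is one reason response times can help separate the two strategy classes.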

  • [PDF] Gronau, Q. F., Heck, D. W., Berkhout, S. W., Haaf, J. M., & Wagenmakers, E. (2021). A primer on Bayesian model-averaged meta-analysis. Advances in Methods and Practices in Psychological Science, 4, 1–19. https://doi.org/10.1177/25152459211031256
    [Abstract] [BibTeX] [Preprint] [Data & R Scripts]

    Meta-analysis is the predominant approach for quantitatively synthesizing a set of studies. If the studies themselves are of high quality, meta-analysis can provide valuable insights into the current scientific state of knowledge about a particular phenomenon. In psychological science, the most common approach is to conduct frequentist meta-analysis. In this primer, we discuss an alternative method, Bayesian model-averaged meta-analysis. This procedure combines the results of four Bayesian meta-analysis models: (1) fixed-effect null hypothesis, (2) fixed-effect alternative hypothesis, (3) random-effects null hypothesis, and (4) random-effects alternative hypothesis. These models are combined according to their plausibilities in light of the observed data to address the two key questions “Is the overall effect non-zero?” and “Is there between-study variability in effect size?”. Bayesian model-averaged meta-analysis therefore avoids the need to select either a fixed-effect or random-effects model and instead takes into account model uncertainty in a principled manner.

    @article{gronau2021primer,
    title = {A Primer on {{Bayesian}} Model-Averaged Meta-Analysis},
    author = {Gronau, Quentin F. and Heck, Daniel W and Berkhout, Sophie W. and Haaf, Julia M. and Wagenmakers, Eric-Jan},
    date = {2021},
    journaltitle = {Advances in Methods and Practices in Psychological Science},
    volume = {4},
    pages = {1--19},
    doi = {10.1177/25152459211031256},
    url = {https://psyarxiv.com/97qup},
    abstract = {Meta-analysis is the predominant approach for quantitatively synthesizing a set of studies. If the studies themselves are of high quality, meta-analysis can provide valuable insights into the current scientific state of knowledge about a particular phenomenon. In psychological science, the most common approach is to conduct frequentist meta-analysis. In this primer, we discuss an alternative method, Bayesian model-averaged meta-analysis. This procedure combines the results of four Bayesian meta-analysis models: (1) fixed-effect null hypothesis, (2) fixed-effect alternative hypothesis, (3) random-effects null hypothesis, and (4) random-effects alternative hypothesis. These models are combined according to their plausibilities in light of the observed data to address the two key questions "Is the overall effect non-zero?" and "Is there between-study variability in effect size?". Bayesian model-averaged meta-analysis therefore avoids the need to select either a fixed-effect or random-effects model and instead takes into account model uncertainty in a principled manner.},
    osf = {https://osf.io/npw5c},
    keywords = {Bayesian meta-analysis}
    }
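
    The model-averaging step itself is a small computation: given the marginal likelihoods of the four models and prior model probabilities, posterior model probabilities follow from Bayes' rule, and the two key questions are answered by summing them over the relevant models. A schematic sketch; the marginal likelihoods below are placeholders (in practice they are computed by software such as the metaBMA package):

    # Four models: fixed/random effects (FE/RE) x null/alternative (0/1).
    marglik <- c(FE0 = 0.8, FE1 = 1.6, RE0 = 0.5, RE1 = 2.1)  # placeholders
    prior   <- rep(1 / 4, 4)                  # equal prior model probabilities
    post    <- marglik * prior / sum(marglik * prior)
    round(post, 3)                            # posterior model probabilities
    # Inclusion Bayes factor for "Is the overall effect non-zero?":
    sum(marglik[c("FE1", "RE1")]) / sum(marglik[c("FE0", "RE0")])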

  • [PDF] Heck, D. W. (2021). Assessing the ‘paradox’ of converging evidence by modeling the joint distribution of individual differences: Comment on Davis-Stober and Regenwetter (2019). Psychological Review, 128, 1187–1196. https://doi.org/10.1037/rev0000316
    [Abstract] [BibTeX] [Preprint] [Data & R Scripts]

    Davis-Stober and Regenwetter (2019; D&R) showed that even if all predictions of a theory hold in separate studies, not even a single individual may be described by all predictions jointly. To illustrate this ‘paradox’ of converging evidence, D&R derived upper and lower bounds on the proportion of individuals for whom all predictions of a theory hold. These bounds reflect extreme positive and negative stochastic dependence of individual differences across predictions. However, psychological theories often make more specific and plausible assumptions, such as that true individual differences are independent or show a certain degree of consistency (e.g., due to a common underlying trait). Based on this psychometric perspective, I extend D&R’s conceptual framework by developing a multivariate normal model of individual effects. Assuming perfect consistency (i.e., a correlation of one) of individual effects across predictions, the proportion of individuals described by all predictions of a theory is identical to D&R’s upper bound. The proportion drops substantially when assuming independence of individual effects. However, irrespective of the assumed correlation, the multivariate normal model implies a lower bound that is strictly above D&R’s lower bound if a theory makes at least three predictions. The multivariate model thus mitigates the ‘paradox’ of converging evidence even though it does not resolve it. Overall, scholars can improve the scope of their theories by assuming that individual effects are highly correlated across predictions.

    @article{heck2021assessing,
    title = {Assessing the 'paradox' of Converging Evidence by Modeling the Joint Distribution of Individual Differences: {{Comment}} on {{Davis-Stober}} and {{Regenwetter}} (2019)},
    author = {Heck, Daniel W},
    date = {2021},
    journaltitle = {Psychological Review},
    volume = {128},
    pages = {1187--1196},
    doi = {10.1037/rev0000316},
    url = {https://psyarxiv.com/ca8z4/},
    abstract = {Davis-Stober and Regenwetter (2019; D\&R) showed that even if all predictions of a theory hold in separate studies, not even a single individual may be described by all predictions jointly. To illustrate this 'paradox' of converging evidence, D\&R derived upper and lower bounds on the proportion of individuals for whom all predictions of a theory hold. These bounds reflect extreme positive and negative stochastic dependence of individual differences across predictions. However, psychological theories often make more specific and plausible assumptions, such as that true individual differences are independent or show a certain degree of consistency (e.g., due to a common underlying trait). Based on this psychometric perspective, I extend D\&R's conceptual framework by developing a multivariate normal model of individual effects. Assuming perfect consistency (i.e., a correlation of one) of individual effects across predictions, the proportion of individuals described by all predictions of a theory is identical to D\&R's upper bound. The proportion drops substantially when assuming independence of individual effects. However, irrespective of the assumed correlation, the multivariate normal model implies a lower bound that is strictly above D\&R's lower bound if a theory makes at least three predictions. The multivariate model thus mitigates the 'paradox' of converging evidence even though it does not resolve it. Overall, scholars can improve the scope of their theories by assuming that individual effects are highly correlated across predictions.},
    osf = {https://osf.io/7fk49},
    keywords = {effect size,heterogeneity,Mathematical Psychology,Meta-science,psychometrics,Psychometrics,Quantitative Methods,Social and Behavioral Sciences,theoretical scope,Theory and Philosophy of Science,theory development}
    }
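
    The quantity at stake, the proportion of individuals for whom all predictions hold jointly, is an orthant probability of the multivariate normal model and is easy to approximate by simulation. A sketch assuming three predictions that each hold for 80% of individuals marginally, with a common correlation rho between individual effects (values invented):

    # P(all k individual effects > 0) under a multivariate normal model.
    orthant <- function(rho, p_marginal = 0.8, k = 3, reps = 1e5) {
      mu <- qnorm(p_marginal)                 # each prediction holds w.p. 0.8
      S  <- matrix(rho, k, k); diag(S) <- 1   # equicorrelation matrix
      z  <- matrix(rnorm(reps * k), reps) %*% chol(S)
      mean(rowSums(z + mu > 0) == k)
    }
    set.seed(5)
    c(independent = orthant(rho = 0),         # ~ 0.8^3 = 0.512
      consistent  = orthant(rho = 0.99))      # approaches the upper bound 0.8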

2020

  • [PDF] Bott, F. M., Heck, D. W., & Meiser, T. (2020). Parameter validation in hierarchical MPT models by functional dissociation with continuous covariates: An application to contingency inference. Journal of Mathematical Psychology, 98, 102388. https://doi.org/10.1016/j.jmp.2020.102388
    [Abstract] [BibTeX] [Data & R Scripts]

    In traditional multinomial processing tree (MPT) models for aggregate frequency data, parameters have usually been validated by means of experimental manipulations, thereby testing selective effects of discrete independent variables on specific model parameters. More recently, hierarchical MPT models which account for parameter heterogeneity between participants have been introduced. These models offer a new possibility of parameter validation by analyzing selective covariations of interindividual differences in MPT model parameters with continuous covariates. The new approach enables researchers to test parameter validity in terms of functional dissociations, including convergent validity and discriminant validity in a nomological network. Here, we apply the novel approach to a multidimensional source-monitoring model in the domain of stereotype formation based on pseudocontingency inference. Using hierarchical Bayesian MPT models, we test the validity of source-guessing parameters as indicators of specific source evaluations on the individual level. First, analyzing experimental data on stereotype formation (N = 130), we replicated earlier findings of biased source-guessing parameters while taking parameter heterogeneity across participants into account. Second, we investigated the specificity of covariations between conditional guessing parameters and continuous direct measures of source evaluations. Interindividual differences in direct evaluations predicted interindividual differences in specific source-guessing parameters, thus validating their substantive interpretation. Third, in an exploratory analysis, we examined relations of memory parameters and guessing parameters with cognitive performance measures from a standardized cognitive assessment battery.

    @article{bott2020parameter,
    title = {Parameter Validation in Hierarchical {{MPT}} Models by Functional Dissociation with Continuous Covariates: {{An}} Application to Contingency Inference},
    author = {Bott, Franziska M. and Heck, Daniel W and Meiser, Thorsten},
    date = {2020},
    journaltitle = {Journal of Mathematical Psychology},
    volume = {98},
    pages = {102388},
    doi = {10.1016/j.jmp.2020.102388},
    abstract = {In traditional multinomial processing tree (MPT) models for aggregate frequency data, parameters have usually been validated by means of experimental manipulations, thereby testing selective effects of discrete independent variables on specific model parameters. More recently, hierarchical MPT models which account for parameter heterogeneity between participants have been introduced. These models offer a new possibility of parameter validation by analyzing selective covariations of interindividual differences in MPT model parameters with continuous covariates. The new approach enables researchers to test parameter validity in terms of functional dissociations, including convergent validity and discriminant validity in a nomological network. Here, we apply the novel approach to a multidimensional source-monitoring model in the domain of stereotype formation based on pseudocontingency inference. Using hierarchical Bayesian MPT models, we test the validity of source-guessing parameters as indicators of specific source evaluations on the individual level. First, analyzing experimental data on stereotype formation (N = 130), we replicated earlier findings of biased source-guessing parameters while taking parameter heterogeneity across participants into account. Second, we investigated the specificity of covariations between conditional guessing parameters and continuous direct measures of source evaluations. Interindividual differences in direct evaluations predicted interindividual differences in specific source-guessing parameters, thus validating their substantive interpretation. Third, in an exploratory analysis, we examined relations of memory parameters and guessing parameters with cognitive performance measures from a standardized cognitive assessment battery.},
    osf = {https://osf.io/a6fcz}
    }

  • [PDF] Heck, D. W., & Erdfelder, E. (2020). Benefits of response time-extended multinomial processing tree models: A reply to Starns (2018). Psychonomic Bulletin & Review, 27, 571–580. https://doi.org/10.3758/s13423-019-01663-0
    [Abstract] [BibTeX] [Data & R Scripts]

    In his comment on Heck and Erdfelder (2016), Starns (2018) focuses on the response time-extended two-high-threshold (2HT-RT) model for yes-no recognition tasks, a specific example for the general class of response time-extended multinomial processing tree models (MPT-RTs) we proposed. He argues that the 2HT-RT model cannot accommodate the speed-accuracy trade-off, a key mechanism in speeded recognition tasks. As a remedy, he proposes a specific discrete-state model for recognition memory that assumes a race mechanism for detection and guessing. In this reply, we clarify our motivation for using the 2HT-RT model as an example and highlight the importance and benefits of MPT-RTs as a flexible class of general-purpose, simple-to-use models. By binning RTs into discrete categories, the MPT-RT approach facilitates the joint modeling of discrete responses and response times in a variety of psychological paradigms. In fact, many paradigms either lack a clear-cut accuracy criterion or show performance levels at ceiling, making corrections for incautious responding redundant. Moreover, we show that some forms of speed-accuracy trade-off can in fact not only be accommodated but also be measured by appropriately designed MPT-RTs.

    @article{heck2020benefits,
    title = {Benefits of Response Time-Extended Multinomial Processing Tree Models: {{A}} Reply to {{Starns}} (2018)},
    author = {Heck, Daniel W and Erdfelder, Edgar},
    date = {2020},
    journaltitle = {Psychonomic Bulletin \& Review},
    volume = {27},
    pages = {571--580},
    doi = {10.3758/s13423-019-01663-0},
    abstract = {In his comment on Heck and Erdfelder (2016), Starns (2018) focuses on the response time-extended two-high-threshold (2HT-RT) model for yes-no recognition tasks, a specific example for the general class of response time-extended multinomial processing tree models (MPT-RTs) we proposed. He argues that the 2HT-RT model cannot accommodate the speed-accuracy trade-off, a key mechanism in speeded recognition tasks. As a remedy, he proposes a specific discrete-state model for recognition memory that assumes a race mechanism for detection and guessing. In this reply, we clarify our motivation for using the 2HT-RT model as an example and highlight the importance and benefits of MPT-RTs as a flexible class of general-purpose, simple-to-use models. By binning RTs into discrete categories, the MPT-RT approach facilitates the joint modeling of discrete responses and response times in a variety of psychological paradigms. In fact, many paradigms either lack a clear-cut accuracy criterion or show performance levels at ceiling, making corrections for incautious responding redundant. Moreover, we show that some forms of speed-accuracy trade-off can in fact not only be accommodated but also be measured by appropriately designed MPT-RTs.},
    osf = {https://osf.io/qkfxz}
    }
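
    The core trick of the MPT-RT approach, binning response times into discrete categories so that standard multinomial machinery applies, takes a single line of R. A sketch with simulated data and one boundary at the median (the boundary choice here is ours, purely for illustration):

    set.seed(6)
    rt      <- rlnorm(100, meanlog = -0.3, sdlog = 0.4)   # toy response times
    correct <- rbinom(100, 1, 0.8)                        # toy accuracy
    speed   <- cut(rt, c(0, median(rt), Inf), labels = c("fast", "slow"))
    table(correct, speed)    # four response categories for an RT-extended MPT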

  • [PDF] Heck, D. W., Thielmann, I., Klein, S. A., & Hilbig, B. E. (2020). On the limited generality of air pollution and anxiety as causal determinants of unethical behavior: Commentary on Lu, Lee, Gino, & Galinsky (2018). Psychological Science, 31, 741–747. https://doi.org/10.1177/0956797619866627
    [Abstract] [BibTeX] [Data & R Scripts]

    Lu, Lee, Gino, and Galinsky (2018; LLGG) tested the hypotheses that air pollution causes unethical behavior and that this effect is mediated by increased anxiety. Here, we provide theoretical and empirical arguments against the generality of the effects of air pollution and anxiety on unethical behavior. First, we collected and analyzed monthly longitudinal data on air pollution and crimes for 103 districts in the UK. Contrary to LLGG’s proposition, seasonal trends in air pollution were exactly opposed to monthly crime rates. Moreover, our data provide evidence against the more restrictive hypothesis that air pollution has incremental validity beyond seasonal trends. Second, based on a large-scale reanalysis of incentivized cheating behavior in standard dice-roll and coin-toss tasks, we found that trait anxiety, operationalized by the personality trait Emotionality and its facet Anxiety, are not predictive of dishonesty. Overall, this suggests that LLGG’s theory is too broad and requires further specification.

    @article{heck2020limited,
    title = {On the Limited Generality of Air Pollution and Anxiety as Causal Determinants of Unethical Behavior: {{Commentary}} on {{Lu}}, {{Lee}}, {{Gino}}, \& {{Galinsky}} (2018)},
    author = {Heck, Daniel W and Thielmann, Isabel and Klein, Sina A and Hilbig, Benjamin E},
    date = {2020},
    journaltitle = {Psychological Science},
    volume = {31},
    pages = {741--747},
    doi = {10.1177/0956797619866627},
    abstract = {Lu, Lee, Gino, and Galinsky (2018; LLGG) tested the hypotheses that air pollution causes unethical behavior and that this effect is mediated by increased anxiety. Here, we provide theoretical and empirical arguments against the generality of the effects of air pollution and anxiety on unethical behavior. First, we collected and analyzed monthly longitudinal data on air pollution and crimes for 103 districts in the UK. Contrary to LLGG’s proposition, seasonal trends in air pollution were exactly opposed to monthly crime rates. Moreover, our data provide evidence against the more restrictive hypothesis that air pollution has incremental validity beyond seasonal trends. Second, based on a large-scale reanalysis of incentivized cheating behavior in standard dice-roll and coin-toss tasks, we found that trait anxiety, operationalized by the personality trait Emotionality and its facet Anxiety, are not predictive of dishonesty. Overall, this suggests that LLGG’s theory is too broad and requires further specification.},
    osf = {https://osf.io/k76b2}
    }

  • [PDF] Heck, D. W., Seiling, L., & Bröder, A. (2020). The love of large numbers revisited: A coherence model of the popularity bias. Cognition, 195, 104069. https://doi.org/10.1016/j.cognition.2019.104069
    [Abstract] [BibTeX] [Data & R Scripts]

    Preferences are often based on social information such as experiences and recommendations of other people. The reliance on social information is especially relevant in the case of online shopping, where buying decisions for products may often be based on online reviews by other customers. Recently, Powell, Yu, DeWolf, and Holyoak (2017, Psychological Science, 28, 1432-1442) showed that, when deciding between two products, people do not consider the number of product reviews in a statistically appropriate way as predicted by a Bayesian model but rather exhibit a bias for popular products (i.e., products with many reviews). In the present work, we propose a coherence model of the cognitive mechanism underlying this empirical phenomenon. The new model assumes that people strive for a coherent representation of the available information (i.e., the average review score and the number of reviews). To test this theoretical account, we reanalyzed the data of Powell and colleagues and ran an online study with 244 participants using a wider range of stimulus material than in the original study. Besides replicating the popularity bias, the study provided clear evidence for the predicted coherence effect, that is, decisions became more confident and faster when the available information about popularity and quality was congruent.

    @article{heck2020love,
    title = {The Love of Large Numbers Revisited: {{A}} Coherence Model of the Popularity Bias},
    author = {Heck, Daniel W and Seiling, Lukas and Bröder, Arndt},
    date = {2020},
    journaltitle = {Cognition},
    volume = {195},
    pages = {104069},
    doi = {10.1016/j.cognition.2019.104069},
    abstract = {Preferences are often based on social information such as experiences and recommendations of other people. The reliance on social information is especially relevant in the case of online shopping, where buying decisions for products may often be based on online reviews by other customers. Recently, Powell, Yu, DeWolf, and Holyoak (2017, Psychological Science, 28, 1432-1442) showed that, when deciding between two products, people do not consider the number of product reviews in a statistically appropriate way as predicted by a Bayesian model but rather exhibit a bias for popular products (i.e., products with many reviews). In the present work, we propose a coherence model of the cognitive mechanism underlying this empirical phenomenon. The new model assumes that people strive for a coherent representation of the available information (i.e., the average review score and the number of reviews). To test this theoretical account, we reanalyzed the data of Powell and colleagues and ran an online study with 244 participants using a wider range of stimulus material than in the original study. Besides replicating the popularity bias, the study provided clear evidence for the predicted coherence effect, that is, decisions became more confident and faster when the available information about popularity and quality was congruent.},
    osf = {https://osf.io/mzb7n}
    }
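
    The normative benchmark in this literature treats each positive review as a Bernoulli observation of latent product quality, so that a Beta prior shrinks products with few reviews toward the prior mean. A sketch of such a Beta-Binomial baseline (a generic stand-in, not necessarily the exact model of Powell and colleagues; all counts invented):

    # Posterior mean quality under a Beta(a, b) prior: a slightly lower average
    # based on many reviews can beat a higher average based on few reviews.
    post_mean <- function(pos, n, a = 1, b = 1) (a + pos) / (a + b + n)
    c(A = post_mean(pos = 9,   n = 10),   # 90% positive, but only 10 reviews
      B = post_mean(pos = 170, n = 200))  # 85% positive across 200 reviews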

  • [PDF] Heck, D. W., & Noventa, S. (2020). Representing probabilistic models of knowledge space theory by multinomial processing tree models. Journal of Mathematical Psychology, 96, 102329. https://doi.org/10.1016/j.jmp.2020.102329
    [Abstract] [BibTeX] [Data & R Scripts]

    Knowledge Space Theory (KST) aims at modeling the hierarchical relations between items or skills in a learning process. For example, when studying mathematics in school, students first need to master the rules of summation before being able to learn multiplication. In KST, the knowledge states of individuals are represented by means of partially ordered latent classes. In probabilistic KST models, conditional probability parameters are introduced to model transitions from latent knowledge states to observed response patterns. Since these models account for discrete data by assuming a finite number of latent states, they can be represented by Multinomial Processing Tree (MPT) models (i.e., binary decision trees with parameters referring to the conditional probabilities of entering different states). Extending previous work on the link between MPT and KST models for procedural assessments of knowledge, we prove that standard probabilistic models of KST such as the Basic Local Independence Model (BLIM) and the Simple Learning Model (SLM) can be represented as specific instances of MPT models. Given this close link, MPT methods may be applied to address theoretical and practical issues in KST. Using a simulation study, we show that model-selection methods recently implemented for MPT models (e.g., the Bayes factor) allow KST researchers to test and account for violations of local independence, a fundamental assumption in Item Response Theory (IRT) and psychological testing in general. By highlighting the MPT-KST link and its implications for IRT, we hope to facilitate an exchange of theoretical results, statistical methods, and software across these different domains of mathematical psychology.

    @article{heck2020representing,
    title = {Representing Probabilistic Models of Knowledge Space Theory by Multinomial Processing Tree Models},
    author = {Heck, Daniel W and Noventa, Stefano},
    date = {2020},
    journaltitle = {Journal of Mathematical Psychology},
    volume = {96},
    pages = {102329},
    doi = {10.1016/j.jmp.2020.102329},
    abstract = {Knowledge Space Theory (KST) aims at modeling the hierarchical relations between items or skills in a learning process. For example, when studying mathematics in school, students first need to master the rules of summation before being able to learn multiplication. In KST, the knowledge states of individuals are represented by means of partially ordered latent classes. In probabilistic KST models, conditional probability parameters are introduced to model transitions from latent knowledge states to observed response patterns. Since these models account for discrete data by assuming a finite number of latent states, they can be represented by Multinomial Processing Tree (MPT) models (i.e., binary decision trees with parameters referring to the conditional probabilities of entering different states). Extending previous work on the link between MPT and KST models for procedural assessments of knowledge, we prove that standard probabilistic models of KST such as the Basic Local Independence Model (BLIM) and the Simple Learning Model (SLM) can be represented as specific instances of MPT models. Given this close link, MPT methods may be applied to address theoretical and practical issues in KST. Using a simulation study, we show that model-selection methods recently implemented for MPT models (e.g., the Bayes factor) allow KST researchers to test and account for violations of local independence, a fundamental assumption in Item Response Theory (IRT) and psychological testing in general. By highlighting the MPT-KST link and its implications for IRT, we hope to facilitate an exchange of theoretical results, statistical methods, and software across these different domains of mathematical psychology.},
    osf = {https://osf.io/4wma7}
    }
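
    The BLIM mentioned above assumes that an item inside a person's knowledge state is solved unless a careless error occurs (probability beta), while an item outside the state is solved only by a lucky guess (probability eta); mixing over states yields the probabilities of complete response patterns. A compact sketch for two items with invented values:

    # BLIM for items {a, b} with knowledge states {}, {a}, {a, b}.
    states  <- list(c(0, 0), c(1, 0), c(1, 1))   # 1 = item mastered
    p_state <- c(0.3, 0.4, 0.3)                  # state probabilities (invented)
    beta <- c(0.1, 0.1); eta <- c(0.2, 0.2)      # careless errors, lucky guesses
    p_item <- function(k) k * (1 - beta) + (1 - k) * eta   # P(correct | state)
    pattern_prob <- function(r)                  # r = observed response pattern
      sum(mapply(function(k, w)
            w * prod(ifelse(r == 1, p_item(k), 1 - p_item(k))),
          states, p_state))
    pattern_prob(c(1, 0))    # P(solves item a but not item b)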

  • [PDF] Jobst, L. J., Heck, D. W., & Moshagen, M. (2020). A comparison of correlation and regression approaches for multinomial processing tree models. Journal of Mathematical Psychology, 98, 102400. https://doi.org/10.1016/j.jmp.2020.102400
    [Abstract] [BibTeX] [Data & R Scripts]

    Multinomial processing tree (MPT) models are a class of stochastic models for categorical data that have recently been extended to account for heterogeneity in individuals by assuming separate parameters per participant. These extensions enable the estimation of correlations among model parameters and correlations between model parameters and external covariates. The present study compares different approaches regarding their ability to estimate both types of correlations. For parameter–parameter correlations, we considered two Bayesian hierarchical MPT models – the beta-MPT approach and the latent-trait approach – and two frequentist approaches that fit the data of each participant separately, either involving a correction for attenuation or not (corrected and uncorrected individual-model approach). Regarding parameter-covariate correlations, we additionally considered the latent-trait regression. Recovery performance was determined via a Monte Carlo simulation varying sample size, number of items, extent of heterogeneity, and magnitude of the true correlation. The results indicate the smallest bias regarding parameter–parameter correlations for the latent-trait approach and the corrected individual-model approach and the smallest bias regarding parameter-covariate correlations for the latent-trait regression and the corrected individual-model approach. However, adequately recovering correlations of MPT parameters generally requires a sufficiently large number of observations and sufficient heterogeneity.

    @article{jobst2020comparison,
    title = {A Comparison of Correlation and Regression Approaches for Multinomial Processing Tree Models},
    author = {Jobst, Lisa Jasmin and Heck, Daniel W and Moshagen, Morten},
    date = {2020},
    journaltitle = {Journal of Mathematical Psychology},
    volume = {98},
    pages = {102400},
    doi = {10.1016/j.jmp.2020.102400},
    abstract = {Multinomial processing tree (MPT) models are a class of stochastic models for categorical data that have recently been extended to account for heterogeneity in individuals by assuming separate parameters per participant. These extensions enable the estimation of correlations among model parameters and correlations between model parameters and external covariates. The present study compares different approaches regarding their ability to estimate both types of correlations. For parameter–parameter correlations, we considered two Bayesian hierarchical MPT models – the beta-MPT approach and the latent-trait approach – and two frequentist approaches that fit the data of each participant separately, either involving a correction for attenuation or not (corrected and uncorrected individual-model approach). Regarding parameter-covariate correlations, we additionally considered the latent-trait regression. Recovery performance was determined via a Monte Carlo simulation varying sample size, number of items, extent of heterogeneity, and magnitude of the true correlation. The results indicate the smallest bias regarding parameter–parameter correlations for the latent-trait approach and the corrected individual-model approach and the smallest bias regarding parameter-covariate correlations for the latent-trait regression and the corrected individual-model approach. However, adequately recovering correlations of MPT parameters generally requires a sufficiently large number of observations and sufficient heterogeneity.},
    osf = {https://osf.io/85duk}
    }
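
    The correction for attenuation used in the corrected individual-model approach is the classical Spearman formula: the observed correlation is divided by the square root of the product of the two reliabilities. A one-line sketch with invented values:

    # Spearman's correction: r_true = r_obs / sqrt(rel_x * rel_y)
    r_obs <- 0.30; rel_x <- 0.60; rel_y <- 0.75   # invented values
    r_obs / sqrt(rel_x * rel_y)                   # disattenuated correlation ~ 0.45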

  • [PDF] Klein, S. A., Thielmann, I., Hilbig, B. E., & Heck, D. W. (2020). On the robustness of the association between Honesty-Humility and dishonest behavior for varying incentives. Journal of Research in Personality, 88, 104006. https://doi.org/10.1016/j.jrp.2020.104006
    [Abstract] [BibTeX] [Data & R Scripts]

    Previous research consistently showed a negative link between Honesty-Humility (HH) and dishonest behavior. However, most prior research neglected the influence of situational factors and their potential interaction with HH. In two incentivized experiments (N = 322, N = 552), we thus tested whether the (subjective) utility of incentives moderates the HH-dishonesty link. Replicating prior evidence, HH showed a consistent negative link to dishonesty. However, the utility of incentives did not moderate this association, neither when manipulated through incentive size (BF01 = 5.7) nor when manipulated through gain versus loss framing (BF01 = 20.4). These results demonstrate the robustness of the HH-dishonesty link.

    @article{klein2020robustness,
    title = {On the Robustness of the Association between {{Honesty-Humility}} and Dishonest Behavior for Varying Incentives},
    author = {Klein, Sina A and Thielmann, Isabel and Hilbig, Benjamin E and Heck, Daniel W},
    date = {2020},
    journaltitle = {Journal of Research in Personality},
    volume = {88},
    pages = {104006},
    doi = {10.1016/j.jrp.2020.104006},
    abstract = {Previous research consistently showed a negative link between Honesty-Humility (HH) and dishonest behavior. However, most prior research neglected the influence of situational factors and their potential interaction with HH. In two incentivized experiments (N = 322, N = 552), we thus tested whether the (subjective) utility of incentives moderates the HH-dishonesty link. Replicating prior evidence, HH showed a consistent negative link to dishonesty. However, the utility of incentives did not moderate this association, neither when manipulated through incentive size (BF01 = 5.7) nor when manipulated through gain versus loss framing (BF01 = 20.4). These results demonstrate the robustness of the HH-dishonesty link.},
    osf = {https://osf.io/k73dv}
    }

  • [PDF] Kroneisen, M., & Heck, D. W. (2020). Interindividual differences in the sensitivity for consequences, moral norms and preferences for inaction: Relating personality to the CNI model. Personality and Social Psychology Bulletin, 46, 1013–1026. https://doi.org/10.1177/0146167219893994
    [Abstract] [BibTeX] [Data & R Scripts]

    Research on moral decision-making usually focuses on two ethical principles: The principle of utilitarianism (=morality of an action is determined by its consequences) and the principle of deontology (=morality of an action is valued according to the adherence to moral norms regardless of the consequences). Criticism on traditional moral dilemma research includes the reproach that consequences and norms are confounded in standard paradigms. As a remedy, a multinomial model (the CNI model) was developed to disentangle and measure sensitivity to consequences (C), sensitivity to moral norms (N), and general preference for inaction versus action (I). In two studies, we examined the link of basic personality traits to moral judgments by fitting a hierarchical Bayesian version of the CNI model. As predicted, high Honesty-Humility was selectively associated with sensitivity for norms, whereas high Emotionality was selectively associated with sensitivity for consequences. However, Conscientiousness was not associated with a preference for inaction.

    @article{kroneisen2020interindividual,
    title = {Interindividual Differences in the Sensitivity for Consequences, Moral Norms and Preferences for Inaction: {{Relating}} Personality to the {{CNI}} Model},
    author = {Kroneisen, Meike and Heck, Daniel W},
    date = {2020},
    journaltitle = {Personality and Social Psychology Bulletin},
    volume = {46},
    pages = {1013--1026},
    doi = {10.1177/0146167219893994},
    abstract = {Research on moral decision-making usually focuses on two ethical principles: The principle of utilitarianism (=morality of an action is determined by its consequences) and the principle of deontology (=morality of an action is valued according to the adherence to moral norms regardless of the consequences). Criticism on traditional moral dilemma research includes the reproach that consequences and norms are confounded in standard paradigms. As a remedy, a multinomial model (the CNI model) was developed to disentangle and measure sensitivity to consequences (C), sensitivity to moral norms (N), and general preference for inaction versus action (I). In two studies, we examined the link of basic personality traits to moral judgments by fitting a hierarchical Bayesian version of the CNI model. As predicted, high Honesty-Humility was selectively associated with sensitivity for norms, whereas high Emotionality was selectively associated with sensitivity for consequences. However, Conscientiousness was not associated with a preference for inaction.},
    osf = {https://osf.io/b7c9z}
    }

  • [PDF] Schnuerch, M., Erdfelder, E., & Heck, D. W. (2020). Sequential hypothesis tests for multinomial processing tree models. Journal of Mathematical Psychology, 95, 102326. https://doi.org/10.1016/j.jmp.2020.102326
    [Abstract] [BibTeX] [Data & R Scripts]

    Stimulated by William H. Batchelder’s seminal contributions in the 1980s and 1990s, multinomial processing tree (MPT) modeling has become a powerful and frequently used method in various research fields, most prominently in cognitive psychology and social cognition research. MPT models allow for estimation of, and statistical tests on, parameters that represent psychological processes underlying responses to cognitive tasks. Therefore, their use has also been proposed repeatedly for purposes of psychological assessment, for example, in clinical settings to identify specific cognitive deficits in individuals. However, a considerable drawback of individual MPT analyses emerges from the limited number of data points per individual, resulting in estimation bias, large standard errors, and low power of statistical tests. Classical test procedures such as Neyman–Pearson tests often require very large sample sizes to ensure sufficiently low Type 1 and Type 2 error probabilities. Herein, we propose sequential probability ratio tests (SPRTs) as an efficient alternative. Unlike Neyman–Pearson tests, sequential tests continuously monitor the data and terminate when a predefined criterion is met. As a consequence, SPRTs typically require only about half of the Neyman–Pearson sample size without compromising error probability control. We illustrate the SPRT approach to statistical inference for simple hypotheses in single-parameter MPT models. Moreover, a large-sample approximation, based on ML theory, is presented for typical MPT models with more than one unknown parameter. We evaluate the properties of the proposed test procedures by means of simulations. Finally, we discuss benefits and limitations of sequential MPT analysis.

    @article{schnuerch2020sequential,
    title = {Sequential Hypothesis Tests for Multinomial Processing Tree Models},
    author = {Schnuerch, Martin and Erdfelder, Edgar and Heck, Daniel W},
    date = {2020},
    journaltitle = {Journal of Mathematical Psychology},
    volume = {95},
    pages = {102326},
    doi = {10.1016/j.jmp.2020.102326},
    abstract = {Stimulated by William H. Batchelder’s seminal contributions in the 1980s and 1990s, multinomial processing tree (MPT) modeling has become a powerful and frequently used method in various research fields, most prominently in cognitive psychology and social cognition research. MPT models allow for estimation of, and statistical tests on, parameters that represent psychological processes underlying responses to cognitive tasks. Therefore, their use has also been proposed repeatedly for purposes of psychological assessment, for example, in clinical settings to identify specific cognitive deficits in individuals. However, a considerable drawback of individual MPT analyses emerges from the limited number of data points per individual, resulting in estimation bias, large standard errors, and low power of statistical tests. Classical test procedures such as Neyman–Pearson tests often require very large sample sizes to ensure sufficiently low Type 1 and Type 2 error probabilities. Herein, we propose sequential probability ratio tests (SPRTs) as an efficient alternative. Unlike Neyman–Pearson tests, sequential tests continuously monitor the data and terminate when a predefined criterion is met. As a consequence, SPRTs typically require only about half of the Neyman–Pearson sample size without compromising error probability control. We illustrate the SPRT approach to statistical inference for simple hypotheses in single-parameter MPT models. Moreover, a large-sample approximation, based on ML theory, is presented for typical MPT models with more than one unknown parameter. We evaluate the properties of the proposed test procedures by means of simulations. Finally, we discuss benefits and limitations of sequential MPT analysis.},
    osf = {https://osf.io/98erb}
    }
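
    A base-R sketch of the sequential logic described above: Wald's SPRT for a simple hypothesis on a single Bernoulli probability, standing in for a one-parameter MPT. Thresholds use Wald's classical approximations; all settings are illustrative.

    # Minimal SPRT for a simple hypothesis test on a Bernoulli success
    # probability (e.g., a single MPT parameter with two categories).
    sprt_bernoulli <- function(x, theta0, theta1, alpha = .05, beta = .05) {
      logA <- log((1 - beta) / alpha)   # accept H1 when LLR >= logA
      logB <- log(beta / (1 - alpha))   # accept H0 when LLR <= logB
      llr <- cumsum(x * log(theta1 / theta0) +
                    (1 - x) * log((1 - theta1) / (1 - theta0)))
      n <- which(llr >= logA | llr <= logB)[1]  # first boundary crossing
      if (is.na(n)) return(list(decision = "continue sampling", n = length(x)))
      list(decision = if (llr[n] >= logA) "accept H1" else "accept H0", n = n)
    }

    set.seed(1)
    x <- rbinom(500, size = 1, prob = 0.60)   # data generated under H1
    sprt_bernoulli(x, theta0 = 0.50, theta1 = 0.60)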

2019

  • [PDF] Arnold, N. R., Heck, D. W., Bröder, A., Meiser, T., & Boywitt, D. C. (2019). Testing hypotheses about binding in context memory with a hierarchical multinomial modeling approach: A preregistered study. Experimental Psychology, 66, 239–251. https://doi.org/10.1027/1618-3169/a000442
    [Abstract] [BibTeX] [Data & R Scripts]

    In experiments on multidimensional source memory, a stochastic dependency of source memory for different facets of an episode has been repeatedly demonstrated. This may suggest an integrated representation leading to mutual cuing in context retrieval. However, experiments involving a manipulated reinstatement of one source feature have often failed to affect retrieval of the other feature, suggesting unbound features or rather item-feature binding. The stochastic dependency found in former studies might be a spurious correlation due to aggregation across participants varying in memory strength. We test this artifact explanation by applying a hierarchical multinomial model. Observing stochastic dependency when accounting for interindividual differences would rule out the artifact explanation. A second goal is to elucidate the nature of feature binding: Contrasting encoding conditions with integrated feature judgments versus separate feature judgments are expected to induce different levels of stochastic dependency despite comparable overall source memory if integrated representations include feature-feature binding. The experiment replicated the finding of stochastic dependency and, thus, ruled out an artifact interpretation. However, we did not find different levels of stochastic dependency between conditions. Therefore, the current findings do not reveal decisive evidence to distinguish between the feature-feature binding and the item-context binding account.

    @article{arnold2019testing,
    title = {Testing Hypotheses about Binding in Context Memory with a Hierarchical Multinomial Modeling Approach: {{A}} Preregistered Study},
    author = {Arnold, Nina R. and Heck, Daniel W and Bröder, Arndt and Meiser, Thorsten and Boywitt, C. Dennis},
    date = {2019},
    journaltitle = {Experimental Psychology},
    volume = {66},
    pages = {239--251},
    doi = {10.1027/1618-3169/a000442},
    abstract = {In experiments on multidimensional source memory, a stochastic dependency of source memory for different facets of an episode has been repeatedly demonstrated. This may suggest an integrated representation leading to mutual cuing in context retrieval. However, experiments involving a manipulated reinstatement of one source feature have often failed to affect retrieval of the other feature, suggesting unbound features or rather item-feature binding. The stochastic dependency found in former studies might be a spurious correlation due to aggregation across participants varying in memory strength. We test this artifact explanation by applying a hierarchical multinomial model. Observing stochastic dependency when accounting for interindividual differences would rule out the artifact explanation. A second goal is to elucidate the nature of feature binding: Contrasting encoding conditions with integrated feature judgments versus separate feature judgments are expected to induce different levels of stochastic dependency despite comparable overall source memory if integrated representations include feature-feature binding. The experiment replicated the finding of stochastic dependency and, thus, ruled out an artifact interpretation. However, we did not find different levels of stochastic dependency between conditions. Therefore, the current findings do not reveal decisive evidence to distinguish between the feature-feature binding and the item-context binding account.},
    osf = {https://osf.io/kw3pv}
    }

  • [PDF] Erdfelder, E., & Heck, D. W. (2019). Detecting evidential value and p-hacking with the p-curve tool: A word of caution. Zeitschrift für Psychologie, 227, 249–260. https://doi.org/10.1027/2151-2604/a000383
    [Abstract] [BibTeX]

    Simonsohn, Nelson, and Simmons (2014a) proposed p-curve – the distribution of statistically significant p-values for a set of studies – as a tool to assess the evidential value of these studies. They argued that, whereas right-skewed p-curves indicate true underlying effects, left-skewed p-curves indicate selective reporting of significant results when there is no true effect (“p-hacking”). We first review previous research showing that, in contrast to the first claim, null effects may produce right-skewed p-curves under some conditions. We then question the second claim by showing that not only selective reporting but also selective non-reporting of significant results due to a significant outcome of a more popular alternative test of the same hypothesis may produce left-skewed p-curves, even if all studies reflect true effects. Hence, just as right-skewed p-curves do not necessarily imply evidential value, left-skewed p-curves do not necessarily imply p-hacking and absence of true effects in the studies involved.

    @article{erdfelder2019detecting,
    title = {Detecting Evidential Value and P-Hacking with the p-Curve Tool: {{A}} Word of Caution},
    author = {Erdfelder, Edgar and Heck, Daniel W},
    date = {2019},
    journaltitle = {Zeitschrift für Psychologie},
    volume = {227},
    pages = {249--260},
    doi = {10.1027/2151-2604/a000383},
    abstract = {Simonsohn, Nelson, and Simmons (2014a) proposed p-curve – the distribution of statistically significant p-values for a set of studies – as a tool to assess the evidential value of these studies. They argued that, whereas right-skewed p-curves indicate true underlying effects, left-skewed p-curves indicate selective reporting of significant results when there is no true effect (“p-hacking”). We first review previous research showing that, in contrast to the first claim, null effects may produce right-skewed p-curves under some conditions. We then question the second claim by showing that not only selective reporting but also selective non-reporting of significant results due to a significant outcome of a more popular alternative test of the same hypothesis may produce left-skewed p-curves, even if all studies reflect true effects. Hence, just as right-skewed p-curves do not necessarily imply evidential value, left-skewed p-curves do not necessarily imply p-hacking and absence of true effects in the studies involved.}
    }
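
    A small base-R sketch of the core p-curve computation discussed above: significant p-values are rescaled to pp-values (uniform under the null of no effect) and aggregated with Stouffer's method to test for right skew. Details of the published tool (e.g., the half p-curve test) are omitted.

    pcurve_right_skew <- function(p, alpha = .05) {
      p <- p[p < alpha]                  # p-curve uses significant results only
      pp <- p / alpha                    # under H0: pp ~ Uniform(0, 1)
      z <- sum(qnorm(pp)) / sqrt(length(p))  # Stouffer's Z; negative => right skew
      c(z = z, p_right_skew = pnorm(z))
    }

    # Example: p-values simulated from studies with a true effect
    set.seed(1)
    tstat <- replicate(20, t.test(rnorm(30, mean = 0.5))$statistic)
    p <- 2 * pt(abs(tstat), df = 29, lower.tail = FALSE)
    pcurve_right_skew(p)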

  • [PDF] Gronau, Q. F., Wagenmakers, E., Heck, D. W., & Matzke, D. (2019). A simple method for comparing complex models: Bayesian model comparison for hierarchical multinomial processing tree models using Warp-III bridge sampling. Psychometrika, 84, 261–284. https://doi.org/10.1007/s11336-018-9648-3
    [Abstract] [BibTeX] [Preprint] [Data & R Scripts]

    Multinomial processing trees (MPTs) are a popular class of cognitive models for categorical data. In typical applications, researchers compare several MPTs, each equipped with many parameters, especially when the models are implemented in a hierarchical framework. The principled Bayesian solution is to compute posterior model probabilities and Bayes factors. Both quantities, however, rely on the marginal likelihood, a high-dimensional integral that cannot be evaluated analytically. We show how Warp-III bridge sampling can be used to compute the marginal likelihood for hierarchical MPTs. We illustrate the procedure with two published data sets.

    @article{gronau2019simple,
    title = {A Simple Method for Comparing Complex Models: {{Bayesian}} Model Comparison for Hierarchical Multinomial Processing Tree Models Using Warp-{{III}} Bridge Sampling},
    author = {Gronau, Quentin F. and Wagenmakers, Eric-Jan and Heck, Daniel W and Matzke, Dora},
    date = {2019},
    journaltitle = {Psychometrika},
    volume = {84},
    pages = {261--284},
    doi = {10.1007/s11336-018-9648-3},
    url = {https://psyarxiv.com/yxhfm/},
    abstract = {Multinomial processing trees (MPTs) are a popular class of cognitive models for categorical data. In typical applications, researchers compare several MPTs, each equipped with many parameters, especially when the models are implemented in a hierarchical framework. The principled Bayesian solution is to compute posterior model probabilities and Bayes factors. Both quantities, however, rely on the marginal likelihood, a high-dimensional integral that cannot be evaluated analytically. We show how Warp-III bridge sampling can be used to compute the marginal likelihood for hierarchical MPTs. We illustrate the procedure with two published data sets.},
    osf = {https://osf.io/rycg6},
    keywords = {Bayesian meta-analysis}
    }
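
    To illustrate the general machinery, a self-contained base-R sketch of iterative bridge sampling (Meng & Wong's scheme) for a conjugate normal model whose marginal likelihood is known exactly; the Warp-III step of matching the mean, scale, and skew of the proposal to the posterior is not shown.

    set.seed(1)
    y <- rnorm(50, mean = 0.3, sd = 1)             # data (sigma = 1 known)
    mu0 <- 0; tau0 <- 1                            # prior: theta ~ N(mu0, tau0^2)
    n <- length(y)
    tau_n <- sqrt(1 / (1 / tau0^2 + n))            # conjugate posterior sd
    mu_n  <- tau_n^2 * (mu0 / tau0^2 + sum(y))     # conjugate posterior mean

    log_q <- function(th)                          # unnormalized log posterior
      dnorm(th, mu0, tau0, log = TRUE) +
      sapply(th, function(th1) sum(dnorm(y, th1, 1, log = TRUE)))

    N1 <- N2 <- 5000
    post <- rnorm(N1, mu_n, tau_n)                 # "MCMC" draws (exact here)
    g_m <- mean(post); g_s <- sd(post)             # normal proposal fitted to draws
    prop <- rnorm(N2, g_m, g_s)
    l1 <- exp(log_q(post) - dnorm(post, g_m, g_s, log = TRUE))
    l2 <- exp(log_q(prop) - dnorm(prop, g_m, g_s, log = TRUE))
    s1 <- N1 / (N1 + N2); s2 <- N2 / (N1 + N2)

    m <- 1                                         # iterate the bridge identity
    for (iter in 1:100)
      m <- mean(l2 / (s1 * l2 + s2 * m)) / mean(1 / (s1 * l1 + s2 * m))

    # Exact log marginal likelihood via the conjugate identity, for comparison
    log_m_true <- log_q(mu_n) - dnorm(mu_n, mu_n, tau_n, log = TRUE)
    c(bridge = log(m), exact = log_m_true)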

  • [PDF] Heck, D. W. (2019). Accounting for estimation uncertainty and shrinkage in Bayesian within-subject intervals: A comment on Nathoo, Kilshaw, and Masson (2018). Journal of Mathematical Psychology, 88, 27–31. https://doi.org/10.1016/j.jmp.2018.11.002
    [Abstract] [BibTeX] [Preprint] [Data & R Scripts]

    To facilitate the interpretation of systematic mean differences in within-subject designs, Nathoo, Kilshaw, and Masson (2018) proposed a Bayesian within-subject highest-density interval (HDI). However, their approach rests on independent maximum-likelihood estimates for the random effects which do not take estimation uncertainty and shrinkage into account. I propose an extension of Nathoo et al.’s method using a fully Bayesian, two-step approach. First, posterior samples are drawn for the linear mixed model. Second, the within-subject HDI is computed repeatedly based on the posterior samples, thereby accounting for estimation uncertainty and shrinkage. After marginalizing over the posterior distribution, the two-step approach results in a Bayesian within-subject HDI with a width similar to that of the classical within-subject confidence interval proposed by Loftus and Masson (1994).

    @article{heck2019accounting,
    title = {Accounting for Estimation Uncertainty and Shrinkage in {{Bayesian}} Within-Subject Intervals: {{A}} Comment on {{Nathoo}}, {{Kilshaw}}, and {{Masson}} (2018)},
    author = {Heck, Daniel W},
    date = {2019},
    journaltitle = {Journal of Mathematical Psychology},
    volume = {88},
    pages = {27--31},
    doi = {10.1016/j.jmp.2018.11.002},
    url = {https://psyarxiv.com/whp8t},
    abstract = {To facilitate the interpretation of systematic mean differences in within-subject designs, Nathoo, Kilshaw, and Masson (2018) proposed a Bayesian within-subject highest-density interval (HDI). However, their approach rests on independent maximum-likelihood estimates for the random effects which do not take estimation uncertainty and shrinkage into account. I propose an extension of Nathoo et al.’s method using a fully Bayesian, two-step approach. First, posterior samples are drawn for the linear mixed model. Second, the within-subject HDI is computed repeatedly based on the posterior samples, thereby accounting for estimation uncertainty and shrinkage. After marginalizing over the posterior distribution, the two-step approach results in a Bayesian within-subject HDI with a width similar to that of the classical within-subject confidence interval proposed by Loftus and Masson (1994).},
    osf = {https://osf.io/mrud9}
    }
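
    A base-R sketch of the generic ingredients: a highest-density interval computed from samples, applied to posterior draws pooled across iterations as in the proposed two-step approach. Fitting the linear mixed model and the exact within-subject centering are omitted; the draws below are stand-ins.

    hdi <- function(x, level = .95) {
      x <- sort(x)
      k <- ceiling(level * length(x))         # number of points inside
      starts <- seq_len(length(x) - k + 1)    # candidate interval starts
      width <- x[starts + k - 1] - x[starts]  # width of each candidate
      i <- which.min(width)                   # shortest = highest density
      c(lower = x[i], upper = x[i + k - 1])
    }

    set.seed(1)
    mu_draws <- rnorm(4000, mean = 0.5, sd = 0.05)  # pooled posterior draws
    hdi(mu_draws)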

  • [PDF] Heck, D. W. (2019). A caveat on the Savage-Dickey density ratio: The case of computing Bayes factors for regression parameters. British Journal of Mathematical and Statistical Psychology, 72, 316–333. https://doi.org/10.1111/bmsp.12150
    [Abstract] [BibTeX] [Preprint] [Data & R Scripts]

    The Savage–Dickey density ratio is a simple method for computing the Bayes factor for an equality constraint on one or more parameters of a statistical model. In regression analysis, this includes the important scenario of testing whether one or more of the covariates have an effect on the dependent variable. However, the Savage–Dickey ratio only provides the correct Bayes factor if the prior distribution of the nuisance parameters under the nested model is identical to the conditional prior under the full model given the equality constraint. This condition is violated for multiple regression models with a Jeffreys–Zellner–Siow prior, which is often used as a default prior in psychology. Besides linear regression models, the limitation of the Savage–Dickey ratio is especially relevant when analytical solutions for the Bayes factor are not available. This is the case for generalized linear models, non‐linear models, or cognitive process models with regression extensions. As a remedy, the correct Bayes factor can be computed using a generalized version of the Savage–Dickey density ratio.

    @article{heck2019caveat,
    title = {A Caveat on the {{Savage-Dickey}} Density Ratio: {{The}} Case of Computing {{Bayes}} Factors for Regression Parameters},
    author = {Heck, Daniel W},
    date = {2019},
    journaltitle = {British Journal of Mathematical and Statistical Psychology},
    volume = {72},
    pages = {316--333},
    doi = {10.1111/bmsp.12150},
    url = {https://psyarxiv.com/7dzsj},
    abstract = {The Savage–Dickey density ratio is a simple method for computing the Bayes factor for an equality constraint on one or more parameters of a statistical model. In regression analysis, this includes the important scenario of testing whether one or more of the covariates have an effect on the dependent variable. However, the Savage–Dickey ratio only provides the correct Bayes factor if the prior distribution of the nuisance parameters under the nested model is identical to the conditional prior under the full model given the equality constraint. This condition is violated for multiple regression models with a Jeffreys–Zellner–Siow prior, which is often used as a default prior in psychology. Besides linear regression models, the limitation of the Savage–Dickey ratio is especially relevant when analytical solutions for the Bayes factor are not available. This is the case for generalized linear models, non‐linear models, or cognitive process models with regression extensions. As a remedy, the correct Bayes factor can be computed using a generalized version of the Savage–Dickey density ratio.},
    osf = {https://osf.io/5hpuc},
    keywords = {Polytope\_Sampling}
    }
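
    For reference, the basic Savage–Dickey ratio in a one-parameter conjugate normal model, where it is valid because there are no nuisance parameters; the paper's caveat concerns models where this simple ratio fails.

    set.seed(1)
    y <- rnorm(40, mean = 0.2, sd = 1)         # data, sigma = 1 known
    tau0 <- 1                                  # prior: theta ~ N(0, tau0^2)
    n <- length(y)
    tau_n <- sqrt(1 / (1 / tau0^2 + n))        # conjugate posterior sd
    mu_n  <- tau_n^2 * sum(y)                  # conjugate posterior mean

    # BF01 = posterior density at theta = 0 over prior density at theta = 0
    BF01 <- dnorm(0, mu_n, tau_n) / dnorm(0, 0, tau0)
    BF01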

  • [PDF] Heck, D. W., & Erdfelder, E. (2019). Maximizing the expected information gain of cognitive modeling via design optimization. Computational Brain & Behavior, 2, 202–209. https://doi.org/10.1007/s42113-019-00035-0
    [Abstract] [BibTeX] [Preprint] [Data & R Scripts]

    To ensure robust scientific conclusions, cognitive modelers should optimize planned experimental designs a priori in order to maximize the expected information gain for answering the substantive question of interest. Both from the perspective of philosophy of science, but also within classical and Bayesian statistics, it is crucial to tailor empirical studies to the specific cognitive models under investigation before collecting any new data. In practice, methods such as design optimization, classical power analysis, and Bayesian design analysis provide indispensable tools for planning and designing informative experiments. Given that cognitive models provide precise predictions for future observations, we especially highlight the benefits of model-based Monte Carlo simulations to judge the expected information gain provided by different possible designs for cognitive modeling.

    @article{heck2019maximizing,
    title = {Maximizing the Expected Information Gain of Cognitive Modeling via Design Optimization},
    author = {Heck, Daniel W and Erdfelder, Edgar},
    date = {2019},
    journaltitle = {Computational Brain \& Behavior},
    volume = {2},
    pages = {202--209},
    doi = {10.1007/s42113-019-00035-0},
    url = {https://psyarxiv.com/6cy9n},
    abstract = {To ensure robust scientific conclusions, cognitive modelers should optimize planned experimental designs a priori in order to maximize the expected information gain for answering the substantive question of interest. Both from the perspective of philosophy of science, but also within classical and Bayesian statistics, it is crucial to tailor empirical studies to the specific cognitive models under investigation before collecting any new data. In practice, methods such as design optimization, classical power analysis, and Bayesian design analysis provide indispensable tools for planning and designing informative experiments. Given that cognitive models provide precise predictions for future observations, we especially highlight the benefits of model-based Monte Carlo simulations to judge the expected information gain provided by different possible designs for cognitive modeling.},
    osf = {https://osf.io/xehk5}
    }
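
    A base-R sketch of the kind of model-based Monte Carlo design analysis advocated above: data are simulated from the assumed model for several candidate designs, and each design is scored by how often it yields an informative result (here simply the power of a binomial test; all numbers are illustrative).

    power_for_design <- function(n_trials, theta = 0.7, theta0 = 0.5,
                                 n_sim = 2000, alpha = .05) {
      sig <- replicate(n_sim, {
        k <- rbinom(1, n_trials, theta)                 # data from the model
        binom.test(k, n_trials, p = theta0)$p.value < alpha
      })
      mean(sig)                                         # estimated power
    }

    set.seed(1)
    designs <- c(20, 50, 100, 200)                      # candidate trial numbers
    sapply(setNames(designs, designs), power_for_design)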

  • [PDF] Heck, D. W., & Davis-Stober, C. P. (2019). Multinomial models with linear inequality constraints: Overview and improvements of computational methods for Bayesian inference. Journal of Mathematical Psychology, 91, 70–87. https://doi.org/10.1016/j.jmp.2019.03.004
    [Abstract] [BibTeX] [Preprint] [Data & R Scripts] [GitHub]

    Many psychological theories can be operationalized as linear inequality constraints on the parameters of multinomial distributions (e.g., discrete choice analysis). These constraints can be described in two equivalent ways: Either as the solution set to a system of linear inequalities or as the convex hull of a set of extremal points (vertices). For both representations, we describe a general Gibbs sampler for drawing posterior samples in order to carry out Bayesian analyses. We also summarize alternative sampling methods for estimating Bayes factors for these model representations using the encompassing Bayes factor method. We introduce the R package multinomineq, which provides an easily accessible interface to a computationally efficient implementation of these techniques.

    @article{heck2019multinomial,
    title = {Multinomial Models with Linear Inequality Constraints: {{Overview}} and Improvements of Computational Methods for {{Bayesian}} Inference},
    author = {Heck, Daniel W and Davis-Stober, Clintin P},
    date = {2019},
    journaltitle = {Journal of Mathematical Psychology},
    volume = {91},
    pages = {70--87},
    doi = {10.1016/j.jmp.2019.03.004},
    abstract = {Many psychological theories can be operationalized as linear inequality constraints on the parameters of multinomial distributions (e.g., discrete choice analysis). These constraints can be described in two equivalent ways: Either as the solution set to a system of linear inequalities or as the convex hull of a set of extremal points (vertices). For both representations, we describe a general Gibbs sampler for drawing posterior samples in order to carry out Bayesian analyses. We also summarize alternative sampling methods for estimating Bayes factors for these model representations using the encompassing Bayes factor method. We introduce the R package multinomineq, which provides an easily accessible interface to a computationally efficient implementation of these techniques.},
    arxivnumber = {1808.07140},
    github = {https://github.com/danheck/multinomineq},
    osf = {https://osf.io/xv9u3}
    }
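
    A toy base-R illustration of the encompassing Bayes factor method summarized above, for two binomial probabilities under the inequality constraint theta1 < theta2; the multinomineq package implements this machinery for general linear inequality constraints.

    set.seed(1)
    k <- c(14, 25); n <- c(30, 30)            # observed successes per condition

    n_samp <- 1e5                             # samples from prior and posterior
    prior_1 <- rbeta(n_samp, 1, 1);           prior_2 <- rbeta(n_samp, 1, 1)
    post_1  <- rbeta(n_samp, 1 + k[1], 1 + n[1] - k[1])
    post_2  <- rbeta(n_samp, 1 + k[2], 1 + n[2] - k[2])

    # BF (constrained vs. encompassing) = posterior / prior probability
    # that the constraint holds under the unconstrained model
    bf_ce <- mean(post_1 < post_2) / mean(prior_1 < prior_2)
    bf_ce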

  • [PDF] Heck, D. W., Overstall, A., Gronau, Q. F., & Wagenmakers, E. (2019). Quantifying uncertainty in transdimensional Markov chain Monte Carlo using discrete Markov models. Statistics & Computing, 29, 631–643. https://doi.org/10.1007/s11222-018-9828-0
    [Abstract] [BibTeX] [Preprint] [Data & R Scripts] [GitHub]

    Bayesian analysis often concerns an evaluation of models with different dimensionality as is necessary in, for example, model selection or mixture models. To facilitate this evaluation, transdimensional Markov chain Monte Carlo (MCMC) relies on sampling a discrete indexing variable to estimate the posterior model probabilities. However, little attention has been paid to the precision of these estimates. If only few switches occur between the models in the transdimensional MCMC output, precision may be low and assessment based on the assumption of independent samples misleading. Here, we propose a new method to estimate the precision based on the observed transition matrix of the model-indexing variable. Assuming a first order Markov model, the method samples from the posterior of the stationary distribution. This allows assessment of the uncertainty in the estimated posterior model probabilities, model ranks, and Bayes factors. Moreover, the method provides an estimate for the effective sample size of the MCMC output. In two model-selection examples, we show that the proposed approach provides a good assessment of the uncertainty associated with the estimated posterior model probabilities.

    @article{heck2019quantifying,
    title = {Quantifying Uncertainty in Transdimensional {{Markov}} Chain {{Monte Carlo}} Using Discrete {{Markov}} Models},
    author = {Heck, Daniel W and Overstall, Antony and Gronau, Quentin F and Wagenmakers, Eric-Jan},
    date = {2019},
    journaltitle = {Statistics \& Computing},
    volume = {29},
    pages = {631--643},
    doi = {10.1007/s11222-018-9828-0},
    abstract = {Bayesian analysis often concerns an evaluation of models with different dimensionality as is necessary in, for example, model selection or mixture models. To facilitate this evaluation, transdimensional Markov chain Monte Carlo (MCMC) relies on sampling a discrete indexing variable to estimate the posterior model probabilities. However, little attention has been paid to the precision of these estimates. If only few switches occur between the models in the transdimensional MCMC output, precision may be low and assessment based on the assumption of independent samples misleading. Here, we propose a new method to estimate the precision based on the observed transition matrix of the model-indexing variable. Assuming a first order Markov model, the method samples from the posterior of the stationary distribution. This allows assessment of the uncertainty in the estimated posterior model probabilities, model ranks, and Bayes factors. Moreover, the method provides an estimate for the effective sample size of the MCMC output. In two model-selection examples, we show that the proposed approach provides a good assessment of the uncertainty associated with the estimated posterior model probabilities.},
    arxivnumber = {1703.10364},
    github = {https://github.com/danheck/MCMCprecision},
    osf = {https://osf.io/kjrkz},
    keywords = {heckfirst,Polytope\_Sampling}
    }
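
    A self-contained base-R sketch of the core idea (the MCMCprecision package linked above provides the full implementation): treat the model-indexing variable as a first-order Markov chain, sample transition matrices from the posterior implied by the observed transition counts, and derive the posterior of the stationary model probabilities. The Dirichlet(1/M) row prior is an assumption for illustration.

    stationary_posterior <- function(z, n_draws = 500) {
      M <- max(z)
      N <- table(factor(z[-length(z)], 1:M), factor(z[-1], 1:M))  # transitions
      replicate(n_draws, {
        P <- t(apply(N, 1, function(row) {        # Dirichlet draw per row
          g <- rgamma(M, row + 1 / M)
          g / sum(g)
        }))
        e <- eigen(t(P))                              # stationary distribution:
        v <- Re(e$vectors[, which.max(Re(e$values))]) # leading left eigenvector
        v / sum(v)
      })
    }

    set.seed(1)
    z <- sample(1:3, 2000, replace = TRUE, prob = c(.6, .3, .1))  # toy index chain
    draws <- stationary_posterior(z)
    apply(draws, 1, quantile, c(.025, .5, .975))   # uncertainty per model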

  • [PDF] Klein, S. A., Heck, D. W., Reese, G., & Hilbig, B. E. (2019). On the relationship between Openness to Experience, political orientation, and pro-environmental behavior. Personality and Individual Differences, 138, 344–348. https://doi.org/10.1016/j.paid.2018.10.017
    [Abstract] [BibTeX] [Data & R Scripts]

    Previous research consistently showed that Openness to Experience is positively linked to pro-environmental behavior. However, this does not appear to hold whenever pro-environmental behavior is mutually exclusive with cooperation. The present study aimed to replicate this null effect of Openness and to test political orientation as explanatory variable: Openness is associated with a left-wing/liberal political orientation, which, in turn, is associated with both cooperation and pro-environmental behavior, thus creating a decision conflict whenever the latter are mutually exclusive. In an online study (N = 355) participants played the Greater Good Game, a social dilemma involving choice conflict between pro-environmental behavior and cooperation. Results both replicated prior findings and suggested that political orientation could indeed account for the null effect of Openness.

    @article{klein2019relationship,
    title = {On the Relationship between {{Openness}} to {{Experience}}, Political Orientation, and pro-Environmental Behavior},
    author = {Klein, Sina A and Heck, Daniel W and Reese, Gerhard and Hilbig, Benjamin E},
    date = {2019},
    journaltitle = {Personality and Individual Differences},
    volume = {138},
    pages = {344--348},
    doi = {10.1016/j.paid.2018.10.017},
    abstract = {Previous research consistently showed that Openness to Experience is positively linked to pro-environmental behavior. However, this does not appear to hold whenever pro-environmental behavior is mutually exclusive with cooperation. The present study aimed to replicate this null effect of Openness and to test political orientation as explanatory variable: Openness is associated with a left-wing/liberal political orientation, which, in turn, is associated with both cooperation and pro-environmental behavior, thus creating a decision conflict whenever the latter are mutually exclusive. In an online study (N = 355) participants played the Greater Good Game, a social dilemma involving choice conflict between pro-environmental behavior and cooperation. Results both replicated prior findings and suggested that political orientation could indeed account for the null effect of Openness.},
    osf = {https://osf.io/gxjc9}
    }

  • [PDF] Schild, C., Heck, D. W., Ścigała, K. A., & Zettler, I. (2019). Revisiting REVISE: (Re)Testing unique and combined effects of REminding, VIsibility, and SElf-engagement manipulations on cheating behavior. Journal of Economic Psychology, 75, 102161. https://doi.org/10.1016/j.joep.2019.04.001
    [Abstract] [BibTeX] [Data & R Scripts]

    Dishonest behavior poses a crucial threat to individuals and societies at large. To highlight situation factors that potentially reduce the occurrence and/or extent of dishonesty, Ayal, Gino, Barkan, and Ariely (2015) introduced the REVISE framework, consisting of three principles: REminding, VIsibility, and SElf-engagement. The evidence that the three REVISE principles actually reduce dishonesty is not always strong and sometimes even inconsistent, however. We herein thus conceptually replicate three suggested manipulations, each serving as an operationalization of one principle. In a large study with eight conditions and 5,039 participants, we link the REminding, VIsibility, and SElf-engagement manipulations to dishonesty, compare their effectiveness with each other, and test for potential interactions between them. Overall, we find that VIsibility (in terms of overtly monitoring responses) and SElf-engagement (in terms of retyping an honesty statement) reduce dishonest behavior. We find no support for the effectiveness of REminding (in terms of ethical priming) or for any interaction between the REVISE principles. We also report two preregistered manipulation-check studies and discuss policy implications of our findings.

    @article{schild2019revisiting,
    title = {Revisiting {{REVISE}}: ({{Re}}){{Testing}} Unique and Combined Effects of {{REminding}}, {{VIsibility}}, and {{SElf-engagement}} Manipulations on Cheating Behavior},
    author = {Schild, Christoph and Heck, Daniel W and Ścigała, Karolina Aleksandra and Zettler, Ingo},
    date = {2019},
    journaltitle = {Journal of Economic Psychology},
    volume = {75},
    pages = {102161},
    doi = {10.1016/j.joep.2019.04.001},
    abstract = {Dishonest behavior poses a crucial threat to individuals and societies at large. To highlight situation factors that potentially reduce the occurrence and/or extent of dishonesty, Ayal, Gino, Barkan, and Ariely (2015) introduced the REVISE framework, consisting of three principles: REminding, VIsibility, and SElf-engagement. The evidence that the three REVISE principles actually reduce dishonesty is not always strong and sometimes even inconsistent, however. We herein thus conceptually replicate three suggested manipulations, each serving as an operationalization of one principle. In a large study with eight conditions and 5,039 participants, we link the REminding, VIsibility, and SElf-engagement manipulations to dishonesty, compare their effectiveness with each other, and test for potential interactions between them. Overall, we find that VIsibility (in terms of overtly monitoring responses) and SElf-engagement (in terms of retyping an honesty statement) reduce dishonest behavior. We find no support for the effectiveness of REminding (in terms of ethical priming) or for any interaction between the REVISE principles. We also report two preregistered manipulation-check studies and discuss policy implications of our findings.},
    osf = {https://osf.io/m6cnu}
    }

  • [PDF] Ścigała, K. A., Schild, C., Heck, D. W., & Zettler, I. (2019). Who deals with the devil: Interdependence, personality, and corrupted collaboration. Social Psychological and Personality Science, 10, 1019–1027. https://doi.org/10.1177/1948550618813419
    [Abstract] [BibTeX] [Data & R Scripts]

    Corrupted collaboration, i.e., gaining personal profits through collaborative immoral acts, is a common and destructive phenomenon in societies. Despite the societal relevance of corrupted collaboration, the role of one’s own as well as one’s partner’s characteristics has hitherto remained unexplained. In the present study, we test these roles using the sequential dyadic die-rolling paradigm (N = 499 across five conditions). Our results indicate that interacting with a fully dishonest partner leads to higher cheating rates than interacting with a fully honest partner, although being paired with a fully honest partner does not eliminate dishonesty completely. Furthermore, we found that the basic personality dimension of Honesty-Humility is consistently negatively related to collaborative dishonesty irrespective of whether participants interact with fully honest or fully dishonest partners. Overall, our investigation provides a comprehensive view of the role of interaction partner’s characteristics in settings allowing for corrupted collaboration.

    @article{scigala2019who,
    title = {Who Deals with the Devil: {{Interdependence}}, Personality, and Corrupted Collaboration},
    author = {Ścigała, Karolina Aleksandra and Schild, Christoph and Heck, Daniel W and Zettler, Ingo},
    date = {2019},
    journaltitle = {Social Psychological and Personality Science},
    volume = {10},
    pages = {1019--1027},
    doi = {10.1177/1948550618813419},
    abstract = {Corrupted collaboration, i.e., gaining personal profits through collaborative immoral acts, is a common and destructive phenomenon in societies. Despite the societal relevance of corrupted collaboration, the role of one's own as well as one's partner's characteristics has hitherto remained unexplained. In the present study, we test these roles using the sequential dyadic die-rolling paradigm (N = 499 across five conditions). Our results indicate that interacting with a fully dishonest partner leads to higher cheating rates than interacting with a fully honest partner, although being paired with a fully honest partner does not eliminate dishonesty completely. Furthermore, we found that the basic personality dimension of Honesty-Humility is consistently negatively related to collaborative dishonesty irrespective of whether participants interact with fully honest or fully dishonest partners. Overall, our investigation provides a comprehensive view of the role of interaction partner’s characteristics in settings allowing for corrupted collaboration.},
    osf = {https://osf.io/t7r3h}
    }

  • [PDF] Starns, J. J., Cataldo, A. M., Rotello, C. M., Annis, J., Aschenbrenner, A., Bröder, A., Cox, G., Criss, A., Curl, R. A., Dobbins, I. G., Dunn, J., Enam, T., Evans, N. J., Farrell, S., Fraundorf, S. H., Gronlund, S. D., Heathcote, A., Heck, D. W., Hicks, J. L., Huff, M. J., Kellen, D., Key, K. N., Kilic, A., Klauer, K. C., Kraemer, K. R., Leite, F. P., Lloyd, M. E., Malejka, S., Mason, A., McAdoo, R. M., McDonough, I. M., Michael, R. B., Mickes, L., Mizrak, E., Morgan, D. P., Mueller, S. T., Osth, A., Reynolds, A., Seale-Carlisle, T. M., Singmann, H., Sloane, J. F., Smith, A. M., Tillman, G., van Ravenzwaaij, D., Weidemann, C. T., Wells, G. L., White, C. N., & Wilson, J. (2019). Assessing theoretical conclusions with blinded inference to investigate a potential inference crisis. Advances in Methods and Practices in Psychological Science, 2, 335–349. https://doi.org/10.1177/2515245919869583
    [Abstract] [BibTeX] [Data & R Scripts]

    Scientific advances across a range of disciplines hinge on the ability to make inferences about unobservable theoretical entities on the basis of empirical data patterns. Accurate inferences rely on both discovering valid, replicable data patterns and accurately interpreting those patterns in terms of their implications for theoretical constructs. The replication crisis in science has led to widespread efforts to improve the reliability of research findings, but comparatively little attention has been devoted to the validity of inferences based on those findings. Using an example from cognitive psychology, we demonstrate a blinded-inference paradigm for assessing the quality of theoretical inferences from data. Our results reveal substantial variability in experts’ judgments on the very same data, hinting at a possible inference crisis.

    @article{starns2019assessing,
    title = {Assessing Theoretical Conclusions with Blinded Inference to Investigate a Potential Inference Crisis},
    author = {Starns, Jeffrey J. and Cataldo, Andrea M. and Rotello, Caren M. and Annis, Jeffrey and Aschenbrenner, Andrew and Bröder, Arndt and Cox, Gregory and Criss, Amy and Curl, Ryan A. and Dobbins, Ian G. and Dunn, John and Enam, Tasnuva and Evans, Nathan J. and Farrell, Simon and Fraundorf, Scott H. and Gronlund, Scott D. and Heathcote, Andrew and Heck, Daniel W and Hicks, Jason L. and Huff, Mark J. and Kellen, David and Key, Kylie N. and Kilic, Asli and Klauer, Karl Christoph and Kraemer, Kyle R. and Leite, Fábio P. and Lloyd, Marianne E. and Malejka, Simone and Mason, Alice and McAdoo, Ryan M. and McDonough, Ian M. and Michael, Robert B. and Mickes, Laura and Mizrak, Eda and Morgan, David P. and Mueller, Shane T. and Osth, Adam and Reynolds, Angus and Seale-Carlisle, Travis M. and Singmann, Henrik and Sloane, Jennifer F. and Smith, Andrew M. and Tillman, Gabriel and van Ravenzwaaij, Don and Weidemann, Christoph T. and Wells, Gary L. and White, Corey N. and Wilson, Jack},
    options = {useprefix=true},
    date = {2019},
    journaltitle = {Advances in Methods and Practices in Psychological Science},
    volume = {2},
    pages = {335--349},
    doi = {10.1177/2515245919869583},
    abstract = {Scientific advances across a range of disciplines hinge on the ability to make inferences about unobservable theoretical entities on the basis of empirical data patterns. Accurate inferences rely on both discovering valid, replicable data patterns and accurately interpreting those patterns in terms of their implications for theoretical constructs. The replication crisis in science has led to widespread efforts to improve the reliability of research findings, but comparatively little attention has been devoted to the validity of inferences based on those findings. Using an example from cognitive psychology, we demonstrate a blinded-inference paradigm for assessing the quality of theoretical inferences from data. Our results reveal substantial variability in experts’ judgments on the very same data, hinting at a possible inference crisis.},
    osf = {https://osf.io/92ahy}
    }

2018

  • [PDF] Heck, D. W., Hoffmann, A., & Moshagen, M. (2018). Detecting nonadherence without loss in efficiency: A simple extension of the crosswise model. Behavior Research Methods, 50, 1895–1905. https://doi.org/10.3758/s13428-017-0957-8
    [Abstract] [BibTeX] [Data & R Scripts]

    In surveys concerning sensitive behavior or attitudes, respondents often do not answer truthfully, because of social desirability bias. To elicit more honest responding, the randomized-response (RR) technique aims at increasing perceived and actual anonymity by prompting respondents to answer with a randomly modified and thus uninformative response. In the crosswise model, as a particularly promising variant of the RR, this is achieved by adding a second, nonsensitive question and by prompting respondents to answer both questions jointly. Despite increased privacy protection and empirically higher prevalence estimates of socially undesirable behaviors, evidence also suggests that some respondents might still not adhere to the instructions, in turn leading to questionable results. Herein we propose an extension of the crosswise model (ECWM) that makes it possible to detect several types of response biases with adequate power in realistic sample sizes. Importantly, the ECWM allows for testing the validity of the model’s assumptions without any loss in statistical efficiency. Finally, we provide an empirical example supporting the usefulness of the ECWM.

    @article{heck2018detecting,
    title = {Detecting Nonadherence without Loss in Efficiency: {{A}} Simple Extension of the Crosswise Model},
    author = {Heck, Daniel W and Hoffmann, Adrian and Moshagen, Morten},
    date = {2018},
    journaltitle = {Behavior Research Methods},
    volume = {50},
    pages = {1895--1905},
    doi = {10.3758/s13428-017-0957-8},
    abstract = {In surveys concerning sensitive behavior or attitudes, respondents often do not answer truthfully, because of social desirability bias. To elicit more honest responding, the randomized-response (RR) technique aims at increasing perceived and actual anonymity by prompting respondents to answer with a randomly modified and thus uninformative response. In the crosswise model, as a particularly promising variant of the RR, this is achieved by adding a second, nonsensitive question and by prompting respondents to answer both questions jointly. Despite increased privacy protection and empirically higher prevalence estimates of socially undesirable behaviors, evidence also suggests that some respondents might still not adhere to the instructions, in turn leading to questionable results. Herein we propose an extension of the crosswise model (ECWM) that makes it possible to detect several types of response biases with adequate power in realistic sample sizes. Importantly, the ECWM allows for testing the validity of the model’s assumptions without any loss in statistical efficiency. Finally, we provide an empirical example supporting the usefulness of the ECWM.},
    langid = {english},
    osf = {https://osf.io/mxjgf},
    keywords = {heckfirst,Measurement model,Randomized response,Sensitive questions,Social desirability,Survey design}
    }
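
    For context, a base-R sketch of the standard crosswise-model estimator that the proposed extension builds on: with a nonsensitive question of known prevalence p, the probability of a "both yes or both no" response is lambda = pi * p + (1 - pi) * (1 - p), which is inverted to estimate the sensitive prevalence pi. The ECWM's additional adherence checks are not shown, and the input numbers are made up.

    cwm_estimate <- function(n_same, n_total, p) {
      stopifnot(p != 0.5)                       # p = 0.5 is not identifiable
      lambda_hat <- n_same / n_total            # observed "same answer" rate
      pi_hat <- (lambda_hat + p - 1) / (2 * p - 1)
      se <- sqrt(lambda_hat * (1 - lambda_hat) / n_total) / abs(2 * p - 1)
      c(pi_hat = pi_hat, se = se)               # delta-method standard error
    }

    cwm_estimate(n_same = 700, n_total = 1000, p = 0.15)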

  • [PDF] Heck, D. W., Erdfelder, E., & Kieslich, P. J. (2018). Generalized processing tree models: Jointly modeling discrete and continuous variables. Psychometrika, 83, 893–918. https://doi.org/10.1007/s11336-018-9622-0
    [Abstract] [BibTeX] [Data & R Scripts] [GitHub]

    Multinomial processing tree models assume that discrete cognitive states determine observed response frequencies. Generalized processing tree (GPT) models extend this conceptual framework to continuous variables such as response times, process-tracing measures, or neurophysiological variables. GPT models assume finite-mixture distributions, with weights determined by a processing tree structure, and continuous components modeled by parameterized distributions such as Gaussians with separate or shared parameters across states. We discuss identifiability, parameter estimation, model testing, a modeling syntax, and the improved precision of GPT estimates. Finally, a GPT version of the feature comparison model of semantic categorization is applied to computer-mouse trajectories.

    @article{heck2018generalized,
    title = {Generalized Processing Tree Models: {{Jointly}} Modeling Discrete and Continuous Variables},
    author = {Heck, Daniel W and Erdfelder, Edgar and Kieslich, Pascal J},
    date = {2018},
    journaltitle = {Psychometrika},
    volume = {83},
    pages = {893--918},
    doi = {10.1007/s11336-018-9622-0},
    abstract = {Multinomial processing tree models assume that discrete cognitive states determine observed response frequencies. Generalized processing tree (GPT) models extend this conceptual framework to continuous variables such as response times, process-tracing measures, or neurophysiological variables. GPT models assume finite-mixture distributions, with weights determined by a processing tree structure, and continuous components modeled by parameterized distributions such as Gaussians with separate or shared parameters across states. We discuss identifiability, parameter estimation, model testing, a modeling syntax, and the improved precision of GPT estimates. Finally, a GPT version of the feature comparison model of semantic categorization is applied to computer-mouse trajectories.},
    github = {https://github.com/danheck/gpt},
    osf = {https://osf.io/fyeum},
    keywords = {heckfirst}
    }
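
    A minimal base-R skeleton of a GPT-style likelihood: a latent two-state tree determines the mixture weights of Gaussian components for a continuous variable. The two-state structure and all parameter values are illustrative.

    gpt_negll <- function(par, rt) {
      d <- plogis(par[1])                        # detection probability in (0,1)
      mu <- par[2:3]; sigma <- exp(par[4])       # state means, shared sd > 0
      -sum(log(d * dnorm(rt, mu[1], sigma) +     # mixture density implied
               (1 - d) * dnorm(rt, mu[2], sigma)))  # by the tree structure
    }

    set.seed(1)
    rt <- c(rnorm(300, 0.6, 0.15), rnorm(200, 1.0, 0.15))  # simulated RTs
    fit <- optim(c(0, 0.5, 1.1, log(0.2)), gpt_negll, rt = rt)
    c(d = plogis(fit$par[1]), mu = fit$par[2:3], sigma = exp(fit$par[4]))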

  • [PDF] Heck, D. W., & Moshagen, M. (2018). RRreg: An R package for correlation and regression analyses of randomized response data. Journal of Statistical Software, 85(2), 1–29. https://doi.org/10.18637/jss.v085.i02
    [Abstract] [BibTeX] [GitHub]

    The randomized-response (RR) technique was developed to improve the validity of measures assessing attitudes, behaviors, and attributes threatened by social desirability bias. The RR removes any direct link between individual responses and the sensitive attribute to maximize the anonymity of respondents and, in turn, to elicit more honest responding. Since multivariate analyses are no longer feasible using standard methods, we present the R package RRreg that allows for multivariate analyses of RR data in a user-friendly way. We show how to compute bivariate correlations, how to predict an RR variable in an adapted logistic regression framework (with or without random effects), and how to use RR predictors in a modified linear regression. In addition, the package allows for power-analysis and robustness simulations. To facilitate the application of these methods, we illustrate the benefits of multivariate methods for RR variables using an empirical example.

    @article{heck2018rrreg,
    title = {{{RRreg}}: {{An R}} Package for Correlation and Regression Analyses of Randomized Response Data},
    author = {Heck, Daniel W and Moshagen, Morten},
    date = {2018},
    journaltitle = {Journal of Statistical Software},
    volume = {85},
    number = {2},
    pages = {1--29},
    doi = {10.18637/jss.v085.i02},
    abstract = {The randomized-response (RR) technique was developed to improve the validity of measures assessing attitudes, behaviors, and attributes threatened by social desirability bias. The RR removes any direct link between individual responses and the sensitive attribute to maximize the anonymity of respondents and, in turn, to elicit more honest responding. Since multivariate analyses are no longer feasible using standard methods, we present the R package RRreg that allows for multivariate analyses of RR data in a user-friendly way. We show how to compute bivariate correlations, how to predict an RR variable in an adapted logistic regression framework (with or without random effects), and how to use RR predictors in a modified linear regression. In addition, the package allows for power-analysis and robustness simulations. To facilitate the application of these methods, we illustrate the benefits of multivariate methods for RR variables using an empirical example.},
    github = {https://github.com/danheck/RRreg}
    }
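
    A sketch of a typical RRreg workflow based on the function names documented in this article (RRgen, RRcor); treat the exact argument details shown here as approximate rather than authoritative.

    library(RRreg)

    set.seed(1)
    # Simulate Warner-model responses for a sensitive attribute with
    # true prevalence .30 and randomization probability p = .20
    rr <- RRgen(n = 1000, pi.true = 0.3, model = "Warner", p = 0.2)
    cov <- rnorm(1000) + rr$true               # covariate related to the attribute

    # Bivariate correlation between the RR variable and a direct measure
    RRcor(x = rr$response, y = cov,
          models = c("Warner", "direct"), p.list = list(0.2))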

  • [PDF] Heck, D. W., Arnold, N. R., & Arnold, D. (2018). TreeBUGS: An R package for hierarchical multinomial-processing-tree modeling. Behavior Research Methods, 50, 264–284. https://doi.org/10.3758/s13428-017-0869-7
    [Abstract] [BibTeX] [Data & R Scripts] [GitHub]

    Multinomial processing tree (MPT) models are a class of measurement models that account for categorical data by assuming a finite number of underlying cognitive processes. Traditionally, data are aggregated across participants and analyzed under the assumption of independently and identically distributed observations. Hierarchical Bayesian extensions of MPT models explicitly account for participant heterogeneity by assuming that the individual parameters follow a continuous hierarchical distribution. We provide an accessible introduction to hierarchical MPT modeling and present the user-friendly and comprehensive R package TreeBUGS, which implements the two most important hierarchical MPT approaches for participant heterogeneity—the beta-MPT approach (Smith & Batchelder, Journal of Mathematical Psychology 54:167-183, 2010) and the latent-trait MPT approach (Klauer, Psychometrika 75:70-98, 2010). TreeBUGS reads standard MPT model files and obtains Markov-chain Monte Carlo samples that approximate the posterior distribution. The functionality and output are tailored to the specific needs of MPT modelers and provide tests for the homogeneity of items and participants, individual and group parameter estimates, fit statistics, and within- and between-subjects comparisons, as well as goodness-of-fit and summary plots. We also propose and implement novel statistical extensions to include continuous and discrete predictors (as either fixed or random effects) in the latent-trait MPT model.

    @article{heck2018treebugs,
    title = {{{TreeBUGS}}: {{An R}} Package for Hierarchical Multinomial-Processing-Tree Modeling},
    author = {Heck, Daniel W and Arnold, Nina R. and Arnold, Denis},
    date = {2018},
    journaltitle = {Behavior Research Methods},
    volume = {50},
    pages = {264--284},
    doi = {10.3758/s13428-017-0869-7},
    abstract = {Multinomial processing tree (MPT) models are a class of measurement models that account for categorical data by assuming a finite number of underlying cognitive processes. Traditionally, data are aggregated across participants and analyzed under the assumption of independently and identically distributed observations. Hierarchical Bayesian extensions of MPT models explicitly account for participant heterogeneity by assuming that the individual parameters follow a continuous hierarchical distribution. We provide an accessible introduction to hierarchical MPT modeling and present the user-friendly and comprehensive R package TreeBUGS, which implements the two most important hierarchical MPT approaches for participant heterogeneity—the beta-MPT approach (Smith \& Batchelder, Journal of Mathematical Psychology 54:167-183, 2010) and the latent-trait MPT approach (Klauer, Psychometrika 75:70-98, 2010). TreeBUGS reads standard MPT model files and obtains Markov-chain Monte Carlo samples that approximate the posterior distribution. The functionality and output are tailored to the specific needs of MPT modelers and provide tests for the homogeneity of items and participants, individual and group parameter estimates, fit statistics, and within- and between-subjects comparisons, as well as goodness-of-fit and summary plots. We also propose and implement novel statistical extensions to include continuous and discrete predictors (as either fixed or random effects) in the latent-trait MPT model.},
    github = {https://github.com/denis-arnold/TreeBUGS},
    langid = {english},
    osf = {https://osf.io/s82bw},
    keywords = {heckfirst}
    }
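
    A minimal TreeBUGS call pattern corresponding to the two hierarchical approaches named in the abstract; the file names are hypothetical placeholders.

    library(TreeBUGS)

    # "2htm.eqn" defines the MPT model in the standard EQN format;
    # "data.csv" holds one row of category frequencies per participant
    # (both files are hypothetical).
    fit_trait <- traitMPT(eqnfile = "2htm.eqn", data = "data.csv")
    fit_beta  <- betaMPT(eqnfile = "2htm.eqn", data = "data.csv")

    summary(fit_trait)          # group-level and individual estimates
    plotFit(fit_trait)          # goodness-of-fit plot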

  • [PDF] Heck, D. W., Thielmann, I., Moshagen, M., & Hilbig, B. E. (2018). Who lies? A large-scale reanalysis linking basic personality traits to unethical decision making. Judgment and Decision Making, 13, 356–371. https://doi.org/10.1017/S1930297500009232
    [Abstract] [BibTeX] [Preprint] [Data & R Scripts]

    Previous research has established that higher levels of trait Honesty-Humility (HH) are associated with less dishonest behavior in cheating paradigms. However, only imprecise effect size estimates of this HH-cheating link are available. Moreover, evidence is inconclusive on whether other basic personality traits from the HEXACO or Big Five models are associated with unethical decision making and whether such effects have incremental validity beyond HH. We address these issues in a highly powered reanalysis of 16 studies assessing dishonest behavior in an incentivized, one-shot cheating paradigm (N = 5,002). For this purpose, we rely on a newly developed logistic regression approach for the analysis of nested data in cheating paradigms. We also test theoretically derived interactions of HH with other basic personality traits (i.e., Emotionality and Conscientiousness) and situational factors (i.e., the baseline probability of observing a favorable outcome) as well as the incremental validity of HH over demographic characteristics. The results show a medium to large effect of HH (odds ratio = 0.53), which was independent of other personality, situational, or demographic variables. Only one other trait (Big Five Agreeableness) was associated with unethical decision making, although it failed to show any incremental validity beyond HH.

    @article{heck2018who,
    title = {Who Lies? {{A}} Large-Scale Reanalysis Linking Basic Personality Traits to Unethical Decision Making},
    author = {Heck, Daniel W and Thielmann, Isabel and Moshagen, Morten and Hilbig, Benjamin E},
    date = {2018},
    journaltitle = {Judgment and Decision Making},
    volume = {13},
    pages = {356--371},
    doi = {10.1017/S1930297500009232},
    url = {http://journal.sjdm.org/18/18322/jdm18322.pdf},
    abstract = {Previous research has established that higher levels of trait Honesty-Humility (HH) are associated with less dishonest behavior in cheating paradigms. However, only imprecise effect size estimates of this HH-cheating link are available. Moreover, evidence is inconclusive on whether other basic personality traits from the HEXACO or Big Five models are associated with unethical decision making and whether such effects have incremental validity beyond HH. We address these issues in a highly powered reanalysis of 16 studies assessing dishonest behavior in an incentivized, one-shot cheating paradigm (N = 5,002). For this purpose, we rely on a newly developed logistic regression approach for the analysis of nested data in cheating paradigms. We also test theoretically derived interactions of HH with other basic personality traits (i.e., Emotionality and Conscientiousness) and situational factors (i.e., the baseline probability of observing a favorable outcome) as well as the incremental validity of HH over demographic characteristics. The results show a medium to large effect of HH (odds ratio = 0.53), which was independent of other personality, situational, or demographic variables. Only one other trait (Big Five Agreeableness) was associated with unethical decision making, although it failed to show any incremental validity beyond HH.},
    osf = {https://osf.io/56hw4},
    keywords = {heckfirst}
    }
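
    A deliberately simplified base-R sketch of the modeling idea (ignoring the nested data structure handled in the paper): a favorable outcome is reported either honestly, with baseline probability b, or because the participant cheats, so P(favorable) = c + (1 - c) * b, with the cheating probability c regressed on predictors through a logit link.

    cheat_negll <- function(beta, claim, X, b) {
      c_i <- plogis(drop(X %*% beta))             # individual cheating prob.
      p_i <- c_i + (1 - c_i) * b                  # prob. of a favorable report
      -sum(dbinom(claim, 1, p_i, log = TRUE))
    }

    set.seed(1)
    n <- 500
    hh <- rnorm(n)                                # standardized HH scores
    c_true <- plogis(-0.5 - 0.7 * hh)             # higher HH -> less cheating
    claim <- rbinom(n, 1, c_true + (1 - c_true) * 1/6)  # b = 1/6 (die roll)
    X <- cbind(1, hh)
    fit <- optim(c(0, 0), cheat_negll, claim = claim, X = X, b = 1/6)
    fit$par                                       # intercept and HH effect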

  • [PDF] Miller, R., Scherbaum, S., Heck, D. W., Goschke, T., & Enge, S. (2018). On the relation between the (censored) shifted Wald and the Wiener distribution as measurement models for choice response times. Applied Psychological Measurement, 42, 116–135. https://doi.org/10.1177/0146621617710465
    [Abstract] [BibTeX]

    Inferring processes or constructs from performance data is a major hallmark of cognitive psychometrics. Particularly, diffusion modeling of response times (RTs) from correct and erroneous responses using the Wiener distribution has become a popular measurement tool because it provides a set of psychologically interpretable parameters. However, an important precondition to identify all of these parameters is a sufficient number of RTs from erroneous responses. In the present article, we show by simulation that the parameters of the Wiener distribution can be recovered from tasks yielding very high or even perfect response accuracies using the shifted Wald distribution. Specifically, we argue that error RTs can be modeled as correct RTs that have undergone censoring by using techniques from parametric survival analysis. We illustrate our reasoning by fitting the Wiener and (censored) shifted Wald distribution to RTs from six participants who completed a Go/No-go task. In accordance with our simulations, diffusion modeling using the Wiener and the shifted Wald distribution yielded identical parameter estimates when the number of erroneous responses was predicted to be low. Moreover, the modeling of error RTs as censored correct RTs substantially improved the recovery of these diffusion parameters when premature trial timeout was introduced to increase the number of omission errors. Thus, the censored shifted Wald distribution provides a suitable means for diffusion modeling in situations when the Wiener distribution cannot be fitted without parametric constraints.

    @article{miller2018relation,
    title = {On the Relation between the (Censored) Shifted {{Wald}} and the {{Wiener}} Distribution as Measurement Models for Choice Response Times},
    author = {Miller, Robert and Scherbaum, Stefan and Heck, Daniel W and Goschke, Thomas and Enge, Sören},
    date = {2018},
    journaltitle = {Applied Psychological Measurement},
    volume = {42},
    pages = {116--135},
    doi = {10.1177/0146621617710465},
    abstract = {Inferring processes or constructs from performance data is a major hallmark of cognitive psychometrics. Particularly, diffusion modeling of response times (RTs) from correct and erroneous responses using the Wiener distribution has become a popular measurement tool because it provides a set of psychologically interpretable parameters. However, an important precondition to identify all of these parameters is a sufficient number of RTs from erroneous responses. In the present article, we show by simulation that the parameters of the Wiener distribution can be recovered from tasks yielding very high or even perfect response accuracies using the shifted Wald distribution. Specifically, we argue that error RTs can be modeled as correct RTs that have undergone censoring by using techniques from parametric survival analysis. We illustrate our reasoning by fitting the Wiener and (censored) shifted Wald distribution to RTs from six participants who completed a Go/No-go task. In accordance with our simulations, diffusion modeling using the Wiener and the shifted Wald distribution yielded identical parameter estimates when the number of erroneous responses was predicted to be low. Moreover, the modeling of error RTs as censored correct RTs substantially improved the recovery of these diffusion parameters when premature trial timeout was introduced to increase the number of omission errors. Thus, the censored shifted Wald distribution provides a suitable means for diffusion modeling in situations when the Wiener distribution cannot be fitted without parametric constraints.}
    }
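
    The censoring idea can be made concrete with a small R sketch (assumed parameterization: threshold alpha, drift nu, shift theta; this is an illustration, not the authors' implementation). Observed RTs enter the likelihood via the shifted Wald density, whereas trials that timed out contribute their survival probability.

    dswald <- function(t, alpha, nu, theta) {  # shifted Wald density
      x <- t - theta
      out <- numeric(length(x)); ok <- x > 0
      out[ok] <- alpha / sqrt(2 * pi * x[ok]^3) *
        exp(-(alpha - nu * x[ok])^2 / (2 * x[ok]))
      out
    }
    pswald <- function(t, alpha, nu, theta) {  # shifted Wald CDF
      x <- t - theta
      out <- numeric(length(x)); ok <- x > 0
      out[ok] <- pnorm((nu * x[ok] - alpha) / sqrt(x[ok])) +
        exp(2 * alpha * nu) * pnorm(-(nu * x[ok] + alpha) / sqrt(x[ok]))
      out
    }
    ## Censored log-likelihood: observed RTs contribute the density; trials
    ## that timed out at t_max contribute the survival probability.
    negll <- function(par, rt, censored, t_max) {
      ll_obs  <- sum(log(dswald(rt[!censored], par[1], par[2], par[3])))
      ll_cens <- sum(censored) * log(1 - pswald(t_max, par[1], par[2], par[3]))
      -(ll_obs + ll_cens)
    }
    ## e.g.: optim(c(2, 3, 0.2), negll, rt = rt, censored = cens, t_max = 1.5)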

  • [PDF] Plieninger, H., & Heck, D. W. (2018). A new model for acquiescence at the interface of psychometrics and cognitive psychology. Multivariate Behavioral Research, 53, 633–654. https://doi.org/10.1080/00273171.2018.1469966
    [Abstract] [BibTeX] [GitHub]

    When measuring psychological traits, one has to consider that respondents often show content-unrelated response behavior in answering questionnaires. To disentangle the target trait and two such response styles, extreme responding and midpoint responding, Böckenholt (2012, Psychological Methods, 17, 665–678) developed an item response model based on a latent processing tree structure. We propose a theoretically motivated extension of this model to also measure acquiescence, the tendency to agree with both regular and reversed items. Substantively, our approach builds on multinomial processing tree (MPT) models that are used in cognitive psychology to disentangle qualitatively distinct processes. Accordingly, the new model for response styles assumes a mixture distribution of affirmative responses, which are either determined by the underlying target trait or by acquiescence. In order to estimate the model parameters, we rely on Bayesian hierarchical estimation of MPT models. In simulations, we show that the model provides unbiased estimates of response styles and the target trait, and we compare the new model and Böckenholt’s model in a recovery study. An empirical example from personality psychology is used for illustrative purposes.

    @article{plieninger2018new,
    title = {A New Model for Acquiescence at the Interface of Psychometrics and Cognitive Psychology},
    author = {Plieninger, Hansjörg and Heck, Daniel W},
    date = {2018},
    journaltitle = {Multivariate Behavioral Research},
    volume = {53},
    pages = {633--654},
    doi = {10.1080/00273171.2018.1469966},
    abstract = {When measuring psychological traits, one has to consider that respondents often show content-unrelated response behavior in answering questionnaires. To disentangle the target trait and two such response styles, extreme responding and midpoint responding, Böckenholt (2012, Psychological Methods, 17, 665–678) developed an item response model based on a latent processing tree structure. We propose a theoretically motivated extension of this model to also measure acquiescence, the tendency to agree with both regular and reversed items. Substantively, our approach builds on multinomial processing tree (MPT) models that are used in cognitive psychology to disentangle qualitatively distinct processes. Accordingly, the new model for response styles assumes a mixture distribution of affirmative responses, which are either determined by the underlying target trait or by acquiescence. In order to estimate the model parameters, we rely on Bayesian hierarchical estimation of MPT models. In simulations, we show that the model provides unbiased estimates of response styles and the target trait, and we compare the new model and Böckenholt's model in a recovery study. An empirical example from personality psychology is used for illustrative purposes.},
    github = {https://github.com/hplieninger/mpt2irt}
    }
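
    The core mixture assumption can be illustrated with a toy calculation in R: an affirmative response arises either from acquiescence or, with the complementary probability, from the target trait. The parameterization below is hypothetical and far simpler than the published processing-tree model, which additionally separates midpoint and extreme responding.

    ## Toy illustration (hypothetical parameterization, not the full model):
    p_agree <- function(trait, acq, reversed = FALSE) {
      p_trait <- plogis(if (reversed) -trait else trait)  # trait-driven agreement
      acq + (1 - acq) * p_trait  # agree either via acquiescence or via the trait
    }
    p_agree(trait = 0.5, acq = 0.2)                   # regular item
    p_agree(trait = 0.5, acq = 0.2, reversed = TRUE)  # reversed item: also inflated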

2017

  • [PDF] Gronau, Q. F., Van Erp, S., Heck, D. W., Cesario, J., Jonas, K. J., & Wagenmakers, E.-J. (2017). A Bayesian model-averaged meta-analysis of the power pose effect with informed and default priors: The case of felt power. Comprehensive Results in Social Psychology, 2, 123–138. https://doi.org/10.1080/23743603.2017.1326760
    [Abstract] [BibTeX] [Data & R Scripts]

    Earlier work found that – compared to participants who adopted constrictive body postures – participants who adopted expansive body postures reported feeling more powerful, showed an increase in testosterone and a decrease in cortisol, and displayed an increased tolerance for risk. However, these power pose effects have recently come under considerable scrutiny. Here, we present a Bayesian meta-analysis of six preregistered studies from this special issue, focusing on the effect of power posing on felt power. Our analysis improves on standard classical meta-analyses in several ways. First and foremost, we considered only preregistered studies, eliminating concerns about publication bias. Second, the Bayesian approach enables us to quantify evidence for both the alternative and the null hypothesis. Third, we use Bayesian model-averaging to account for the uncertainty with respect to the choice for a fixed-effect model or a random-effect model. Fourth, based on a literature review, we obtained an empirically informed prior distribution for the between-study heterogeneity of effect sizes. This empirically informed prior can serve as a default choice not only for the investigation of the power pose effect but for effects in the field of psychology more generally. For effect size, we considered a default and an informed prior. Our meta-analysis yields very strong evidence for an effect of power posing on felt power. However, when the analysis is restricted to participants unfamiliar with the effect, the meta-analysis yields evidence that is only moderate.

    @article{gronau2017bayesian,
    title = {A {{Bayesian}} Model-Averaged Meta-Analysis of the Power Pose Effect with Informed and Default Priors: The Case of Felt Power},
    author = {Gronau, Quentin F. and Van Erp, Sara and Heck, Daniel W and Cesario, Joseph and Jonas, Kai J. and Wagenmakers, Eric-Jan},
    date = {2017},
    journaltitle = {Comprehensive Results in Social Psychology},
    volume = {2},
    pages = {123--138},
    doi = {10.1080/23743603.2017.1326760},
    abstract = {Earlier work found that – compared to participants who adopted constrictive body postures – participants who adopted expansive body postures reported feeling more powerful, showed an increase in testosterone and a decrease in cortisol, and displayed an increased tolerance for risk. However, these power pose effects have recently come under considerable scrutiny. Here, we present a Bayesian meta-analysis of six preregistered studies from this special issue, focusing on the effect of power posing on felt power. Our analysis improves on standard classical meta-analyses in several ways. First and foremost, we considered only preregistered studies, eliminating concerns about publication bias. Second, the Bayesian approach enables us to quantify evidence for both the alternative and the null hypothesis. Third, we use Bayesian model-averaging to account for the uncertainty with respect to the choice for a fixed-effect model or a random-effect model. Fourth, based on a literature review, we obtained an empirically informed prior distribution for the between-study heterogeneity of effect sizes. This empirically informed prior can serve as a default choice not only for the investigation of the power pose effect but for effects in the field of psychology more generally. For effect size, we considered a default and an informed prior. Our meta-analysis yields very strong evidence for an effect of power posing on felt power. However, when the analysis is restricted to participants unfamiliar with the effect, the meta-analysis yields evidence that is only moderate.},
    osf = {https://osf.io/k5avt},
    keywords = {Bayesian meta-analysis}
    }
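
    Model-averaged meta-analysis of this kind is implemented in the R package metaBMA. The sketch below shows plausible usage with placeholder data; the prior settings and argument names follow my reading of the package documentation and should be verified against the installed version.

    # install.packages("metaBMA")
    library(metaBMA)

    d_obs <- c(0.21, 0.14, 0.30, 0.09, 0.17, 0.25)  # placeholder effect sizes
    se    <- c(0.10, 0.09, 0.12, 0.11, 0.08, 0.10)  # placeholder standard errors

    ## Informed normal prior on the effect size, inverse-gamma prior on tau:
    fit <- meta_bma(y = d_obs, SE = se,
                    d   = prior("norm", c(mean = 0, sd = 0.3)),
                    tau = prior("invgamma", c(shape = 1, scale = 0.15)))
    fit  # posterior model probabilities and model-averaged effect estimate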

  • [PDF] Heck, D. W., Hilbig, B. E., & Moshagen, M. (2017). From information processing to decisions: Formalizing and comparing probabilistic choice models. Cognitive Psychology, 96, 26–40. https://doi.org/10.1016/j.cogpsych.2017.05.003
    [Abstract] [BibTeX] [Data & R Scripts]

    Decision strategies explain how people integrate multiple sources of information to make probabilistic inferences. In the past decade, increasingly sophisticated methods have been developed to determine which strategy explains decision behavior best. We extend these efforts to test psychologically more plausible models (i.e., strategies), including a new, probabilistic version of the take-the-best (TTB) heuristic that implements a rank order of error probabilities based on sequential processing. Within a coherent statistical framework, deterministic and probabilistic versions of TTB and other strategies can directly be compared using model selection by minimum description length or the Bayes factor. In an experiment with inferences from given information, only three of 104 participants were best described by the psychologically plausible, probabilistic version of TTB. Similar as in previous studies, most participants were classified as users of weighted-additive, a strategy that integrates all available information and approximates rational decisions.

    @article{heck2017information,
    title = {From Information Processing to Decisions: {{Formalizing}} and Comparing Probabilistic Choice Models},
    author = {Heck, Daniel W and Hilbig, Benjamin E and Moshagen, Morten},
    date = {2017},
    journaltitle = {Cognitive Psychology},
    volume = {96},
    pages = {26--40},
    doi = {10.1016/j.cogpsych.2017.05.003},
    abstract = {Decision strategies explain how people integrate multiple sources of information to make probabilistic inferences. In the past decade, increasingly sophisticated methods have been developed to determine which strategy explains decision behavior best. We extend these efforts to test psychologically more plausible models (i.e., strategies), including a new, probabilistic version of the take-the-best (TTB) heuristic that implements a rank order of error probabilities based on sequential processing. Within a coherent statistical framework, deterministic and probabilistic versions of TTB and other strategies can directly be compared using model selection by minimum description length or the Bayes factor. In an experiment with inferences from given information, only three of 104 participants were best described by the psychologically plausible, probabilistic version of TTB. Similar as in previous studies, most participants were classified as users of weighted-additive, a strategy that integrates all available information and approximates rational decisions.},
    osf = {https://osf.io/jcd2c},
    keywords = {heckfirst,Polytope\_Sampling,popularity\_bias}
    }
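
    The comparison of probabilistic strategies rests on likelihoods of error frequencies per item type. In a hedged sketch: under a probabilistic take-the-best variant, the error probabilities are constrained to increase with cue rank and to stay below .5, and the likelihood is a product of binomials. The setup below is hypothetical.

    ## Log-likelihood of a strategy with rank-ordered error probabilities:
    loglik_strategy <- function(eps, k_errors, n_trials) {
      if (is.unsorted(eps) || any(eps > 0.5)) return(-Inf)  # order constraint
      sum(dbinom(k_errors, n_trials, eps, log = TRUE))
    }
    loglik_strategy(eps      = c(0.05, 0.10, 0.20),  # increasing error rates
                    k_errors = c(2, 4, 7),           # errors per item type
                    n_trials = c(30, 30, 30))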

  • [PDF] Heck, D. W., & Erdfelder, E. (2017). Linking process and measurement models of recognition-based decisions. Psychological Review, 124, 442–471. https://doi.org/10.1037/rev0000063
    [Abstract] [BibTeX] [Data & R Scripts]

    When making inferences about pairs of objects, one of which is recognized and the other is not, the recognition heuristic states that participants choose the recognized object in a noncompensatory way without considering any further knowledge. In contrast, information-integration theories such as parallel constraint satisfaction (PCS) assume that recognition is merely one of many cues that is integrated with further knowledge in a compensatory way. To test both process models against each other without manipulating recognition or further knowledge, we include response times into the r-model, a popular multinomial processing tree model for memory-based decisions. Essentially, this response-time-extended r-model allows one to test a crucial prediction of PCS, namely, that the integration of recognition-congruent knowledge leads to faster decisions compared to the consideration of recognition only—even though more information is processed. In contrast, decisions due to recognition-heuristic use are predicted to be faster than decisions affected by any further knowledge. Using the classical German-cities example, simulations show that the novel measurement model discriminates between both process models based on choices, decision times, and recognition judgments only. In a reanalysis of 29 data sets including more than 400,000 individual trials, noncompensatory choices of the recognized option were estimated to be slower than choices due to recognition-congruent knowledge. This corroborates the parallel information-integration account of memory-based decisions, according to which decisions become faster when the coherence of the available information increases.

    @article{heck2017linking,
    title = {Linking Process and Measurement Models of Recognition-Based Decisions},
    author = {Heck, Daniel W and Erdfelder, Edgar},
    date = {2017},
    journaltitle = {Psychological Review},
    volume = {124},
    pages = {442--471},
    doi = {10.1037/rev0000063},
    abstract = {When making inferences about pairs of objects, one of which is recognized and the other is not, the recognition heuristic states that participants choose the recognized object in a noncompensatory way without considering any further knowledge. In contrast, information-integration theories such as parallel constraint satisfaction (PCS) assume that recognition is merely one of many cues that is integrated with further knowledge in a compensatory way. To test both process models against each other without manipulating recognition or further knowledge, we include response times into the r-model, a popular multinomial processing tree model for memory-based decisions. Essentially, this response-time-extended r-model allows one to test a crucial prediction of PCS, namely, that the integration of recognition-congruent knowledge leads to faster decisions compared to the consideration of recognition only—even though more information is processed. In contrast, decisions due to recognition-heuristic use are predicted to be faster than decisions affected by any further knowledge. Using the classical German-cities example, simulations show that the novel measurement model discriminates between both process models based on choices, decision times, and recognition judgments only. In a reanalysis of 29 data sets including more than 400,000 individual trials, noncompensatory choices of the recognized option were estimated to be slower than choices due to recognition-congruent knowledge. This corroborates the parallel information-integration account of memory-based decisions, according to which decisions become faster when the coherence of the available information increases.},
    osf = {https://osf.io/4kv87},
    keywords = {heckfirst,heckpaper,popularity\_bias}
    }

  • [PDF] Klein, S. A., Hilbig, B. E., & Heck, D. W. (2017). Which is the greater good? A social dilemma paradigm disentangling environmentalism and cooperation. Journal of Environmental Psychology, 53, 40–49. https://doi.org/10.1016/j.jenvp.2017.06.001
    [Abstract] [BibTeX] [Data & R Scripts]

    In previous research, pro-environmental behavior (PEB) was almost exclusively aligned with in-group cooperation. However, PEB and in-group cooperation can also be mutually exclusive or directly conflict. To provide first evidence on behavior in these situations, the present work develops the Greater Good Game (GGG), a social dilemma paradigm with a selfish, a cooperative, and a pro-environmental choice option. In Study 1, the GGG and a corresponding measurement model were experimentally validated using different payoff structures. Results show that in-group cooperation is the dominant behavior in a situation of mutual exclusiveness, whereas selfish behavior becomes more dominant in a situation of conflict. Study 2 examined personality influences on choices in the GGG. High Honesty-Humility was associated with less selfishness, whereas Openness was not associated with more PEB. Results corroborate the paradigm as a valid instrument for investigating the conflict between in-group cooperation and PEB and provide first insights into personality influences.

    @article{klein2017which,
    title = {Which Is the Greater Good? {{A}} Social Dilemma Paradigm Disentangling Environmentalism and Cooperation},
    author = {Klein, Sina A. and Hilbig, Benjamin E. and Heck, Daniel W},
    date = {2017},
    journaltitle = {Journal of Environmental Psychology},
    volume = {53},
    pages = {40--49},
    doi = {10.1016/j.jenvp.2017.06.001},
    abstract = {In previous research, pro-environmental behavior (PEB) was almost exclusively aligned with in-group cooperation. However, PEB and in-group cooperation can also be mutually exclusive or directly conflict. To provide first evidence on behavior in these situations, the present work develops the Greater Good Game (GGG), a social dilemma paradigm with a selfish, a cooperative, and a pro-environmental choice option. In Study 1, the GGG and a corresponding measurement model were experimentally validated using different payoff structures. Results show that in-group cooperation is the dominant behavior in a situation of mutual exclusiveness, whereas selfish behavior becomes more dominant in a situation of conflict. Study 2 examined personality influences on choices in the GGG. High Honesty-Humility was associated with less selfishness, whereas Openness was not associated with more PEB. Results corroborate the paradigm as a valid instrument for investigating the conflict between in-group cooperation and PEB and provide first insights into personality influences.},
    osf = {https://osf.io/zw2ze},
    keywords = {Actual behavior,Cognitive psychometrics,Externalities,HEXACO,Public goods}
    }

2016

  • [PDF] Heck, D. W., & Wagenmakers, E.-J. (2016). Adjusted priors for Bayes factors involving reparameterized order constraints. Journal of Mathematical Psychology, 73, 110–116. https://doi.org/10.1016/j.jmp.2016.05.004
    [Abstract] [BibTeX] [Preprint] [Data & R Scripts]

    Many psychological theories that are instantiated as statistical models imply order constraints on the model parameters. To fit and test such restrictions, order constraints of the form theta_i < theta_j can be reparameterized with auxiliary parameters eta in [0,1] to replace the original parameters by theta_i = eta*theta_j. This approach is especially common in multinomial processing tree (MPT) modeling because the reparameterized, less complex model also belongs to the MPT class. Here, we discuss the importance of adjusting the prior distributions for the auxiliary parameters of a reparameterized model. This adjustment is important for computing the Bayes factor, a model selection criterion that measures the evidence in favor of an order constraint by trading off model fit and complexity. We show that uniform priors for the auxiliary parameters result in a Bayes factor that differs from the one that is obtained using a multivariate uniform prior on the order-constrained original parameters. As a remedy, we derive the adjusted priors for the auxiliary parameters of the reparameterized model. The practical relevance of the problem is underscored with a concrete example using the multi-trial pair-clustering model.

    @article{heck2016adjusted,
    title = {Adjusted Priors for {{Bayes}} Factors Involving Reparameterized Order Constraints},
    author = {Heck, Daniel W and Wagenmakers, Eric-Jan},
    date = {2016},
    journaltitle = {Journal of Mathematical Psychology},
    volume = {73},
    pages = {110--116},
    doi = {10.1016/j.jmp.2016.05.004},
    abstract = {Many psychological theories that are instantiated as statistical models imply order constraints on the model parameters. To fit and test such restrictions, order constraints of the form theta\_i {$<$} theta\_j can be reparameterized with auxiliary parameters eta in [0,1] to replace the original parameters by theta\_i = eta*theta\_j. This approach is especially common in multinomial processing tree (MPT) modeling because the reparameterized, less complex model also belongs to the MPT class. Here, we discuss the importance of adjusting the prior distributions for the auxiliary parameters of a reparameterized model. This adjustment is important for computing the Bayes factor, a model selection criterion that measures the evidence in favor of an order constraint by trading off model fit and complexity. We show that uniform priors for the auxiliary parameters result in a Bayes factor that differs from the one that is obtained using a multivariate uniform prior on the order-constrained original parameters. As a remedy, we derive the adjusted priors for the auxiliary parameters of the reparameterized model. The practical relevance of the problem is underscored with a concrete example using the multi-trial pair-clustering model.},
    arxivnumber = {1511.08775},
    osf = {https://osf.io/cz827},
    keywords = {heckfirst,Polytope\_Sampling}
    }
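
    The central point lends itself to a short Monte Carlo demonstration: a uniform prior on the auxiliary parameter eta does not induce a uniform prior on the order-constrained pair (theta_i, theta_j). Under a uniform prior on the triangle theta_i < theta_j, the marginal density of theta_j is 2 * theta_j; under the reparameterization it remains uniform.

    set.seed(1)
    n <- 1e5
    theta_j <- runif(n)
    eta     <- runif(n)
    theta_i <- eta * theta_j  # reparameterization theta_i = eta * theta_j

    ## The histogram is flat, unlike the triangle-uniform marginal 2 * theta_j,
    ## so the two priors (and hence the resulting Bayes factors) differ:
    hist(theta_j, freq = FALSE, main = "Implied prior on theta_j")
    curve(2 * x, add = TRUE, lwd = 2)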

  • [PDF] Heck, D. W., & Erdfelder, E. (2016). Extending multinomial processing tree models to measure the relative speed of cognitive processes. Psychonomic Bulletin & Review, 23, 1440–1465. https://doi.org/10.3758/s13423-016-1025-6
    [Abstract] [BibTeX] [Data & R Scripts]

    Multinomial processing tree (MPT) models account for observed categorical responses by assuming a finite number of underlying cognitive processes. We propose a general method that allows for the inclusion of response times (RTs) into any kind of MPT model to measure the relative speed of the hypothesized processes. The approach relies on the fundamental assumption that observed RT distributions emerge as mixtures of latent RT distributions that correspond to different underlying processing paths. To avoid auxiliary assumptions about the shape of these latent RT distributions, we account for RTs in a distribution-free way by splitting each observed category into several bins from fast to slow responses, separately for each individual. Given these data, latent RT distributions are parameterized by probability parameters for these RT bins, and an extended MPT model is obtained. Hence, all of the statistical results and software available for MPT models can easily be used to fit, test, and compare RT-extended MPT models. We demonstrate the proposed method by applying it to the two-high-threshold model of recognition memory.

    @article{heck2016extending,
    title = {Extending Multinomial Processing Tree Models to Measure the Relative Speed of Cognitive Processes},
    author = {Heck, Daniel W and Erdfelder, Edgar},
    date = {2016},
    journaltitle = {Psychonomic Bulletin \& Review},
    volume = {23},
    pages = {1440--1465},
    doi = {10.3758/s13423-016-1025-6},
    abstract = {Multinomial processing tree (MPT) models account for observed categorical responses by assuming a finite number of underlying cognitive processes. We propose a general method that allows for the inclusion of response times (RTs) into any kind of MPT model to measure the relative speed of the hypothesized processes. The approach relies on the fundamental assumption that observed RT distributions emerge as mixtures of latent RT distributions that correspond to different underlying processing paths. To avoid auxiliary assumptions about the shape of these latent RT distributions, we account for RTs in a distribution-free way by splitting each observed category into several bins from fast to slow responses, separately for each individual. Given these data, latent RT distributions are parameterized by probability parameters for these RT bins, and an extended MPT model is obtained. Hence, all of the statistical results and software available for MPT models can easily be used to fit, test, and compare RT-extended MPT models. We demonstrate the proposed method by applying it to the two-high-threshold model of recognition memory.},
    osf = {https://osf.io/msca9},
    keywords = {heckfirst,heckpaper}
    }
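
    The distribution-free binning step takes only a few lines of R. The sketch below (assuming two RT bins split at individual quantiles) produces the response-by-speed frequency table that enters an RT-extended MPT model; it is an illustration, not the original analysis code.

    bin_responses <- function(rt, correct, n_bins = 2) {
      breaks <- quantile(rt, probs = seq(0, 1, length.out = n_bins + 1))
      speed  <- cut(rt, breaks, labels = FALSE, include.lowest = TRUE)
      table(response = ifelse(correct, "correct", "error"), speed = speed)
    }
    set.seed(1)
    rt      <- rgamma(200, shape = 4, rate = 5) + 0.3  # hypothetical RTs in seconds
    correct <- rbinom(200, 1, 0.8) == 1
    bin_responses(rt, correct)  # frequencies for the RT-extended MPT model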

  • [PDF] Thielmann, I., Heck, D. W., & Hilbig, B. E. (2016). Anonymity and incentives: An investigation of techniques to reduce socially desirable responding in the Trust Game. Judgment and Decision Making, 11, 527–536. https://doi.org/10.1017/S1930297500004605
    [Abstract] [BibTeX] [Preprint] [Data & R Scripts]

    Economic games offer a convenient approach for the study of prosocial behavior. As an advantage, they allow for straightforward implementation of different techniques to reduce socially desirable responding. We investigated the effectiveness of the most prominent of these techniques, namely providing behavior-contingent incentives and maximizing anonymity in three versions of the Trust Game: (i) a hypothetical version without monetary incentives and with a typical level of anonymity, (ii) an incentivized version with monetary incentives and the same (typical) level of anonymity, and (iii) an indirect questioning version without incentives but with a maximum level of anonymity, rendering responses inconclusive due to adding random noise via the Randomized Response Technique. Results from a large (N = 1,267) and heterogeneous sample showed comparable levels of trust for the hypothetical and incentivized versions using direct questioning. However, levels of trust decreased when maximizing the inconclusiveness of responses through indirect questioning. This implies that levels of trust might be particularly sensitive to changes in individuals’ anonymity but not necessarily to monetary incentives.

    @article{thielmann2016anonymity,
    title = {Anonymity and Incentives: {{An}} Investigation of Techniques to Reduce Socially Desirable Responding in the {{Trust Game}}},
    author = {Thielmann, Isabel and Heck, Daniel W and Hilbig, Benjamin E},
    date = {2016},
    journaltitle = {Judgment and Decision Making},
    volume = {11},
    pages = {527--536},
    doi = {10.1017/S1930297500004605},
    url = {http://journal.sjdm.org/16/16613/jdm16613.pdf},
    abstract = {Economic games offer a convenient approach for the study of prosocial behavior. As an advantage, they allow for straightforward implementation of different techniques to reduce socially desirable responding. We investigated the effectiveness of the most prominent of these techniques, namely providing behavior-contingent incentives and maximizing anonymity in three versions of the Trust Game: (i) a hypothetical version without monetary incentives and with a typical level of anonymity, (ii) an incentivized version with monetary incentives and the same (typical) level of anonymity, and (iii) an indirect questioning version without incentives but with a maximum level of anonymity, rendering responses inconclusive due to adding random noise via the Randomized Response Technique. Results from a large (N = 1,267) and heterogeneous sample showed comparable levels of trust for the hypothetical and incentivized versions using direct questioning. However, levels of trust decreased when maximizing the inconclusiveness of responses through indirect questioning. This implies that levels of trust might be particularly sensitive to changes in individuals’ anonymity but not necessarily to monetary incentives.},
    osf = {https://osf.io/h7p5t}
    }
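
    For the indirect-questioning condition, the true prevalence can be recovered from the noisy responses with a moment estimator. The sketch below assumes a forced-response design with placeholder probabilities; the study's actual randomization parameters may differ.

    ## p_obs = p_random * p_forced_yes + (1 - p_random) * pi, solved for pi:
    rrt_estimate <- function(p_obs, p_random, p_forced_yes) {
      (p_obs - p_random * p_forced_yes) / (1 - p_random)
    }
    rrt_estimate(p_obs = 0.55, p_random = 0.25, p_forced_yes = 0.5)  # about 0.57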

2015

  • [PDF] Erdfelder, E., Castela, M., Michalkiewicz, M., & Heck, D. W. (2015). The advantages of model fitting compared to model simulation in research on preference construction. Frontiers in Psychology, 6, 140. https://doi.org/10.3389/fpsyg.2015.00140
    [BibTeX]
    @article{erdfelder2015advantages,
    title = {The Advantages of Model Fitting Compared to Model Simulation in Research on Preference Construction},
    author = {Erdfelder, Edgar and Castela, Marta and Michalkiewicz, Martha and Heck, Daniel W},
    date = {2015},
    journaltitle = {Frontiers in Psychology},
    volume = {6},
    pages = {140},
    doi = {10.3389/fpsyg.2015.00140}
    }

  • [PDF] Heck, D. W., Wagenmakers, E.-J., & Morey, R. D. (2015). Testing order constraints: Qualitative differences between Bayes factors and normalized maximum likelihood. Statistics & Probability Letters, 105, 157–162. https://doi.org/10.1016/j.spl.2015.06.014
    [Abstract] [BibTeX] [Preprint]

    We compared Bayes factors to normalized maximum likelihood for the simple case of selecting between an order-constrained versus a full binomial model. This comparison revealed two qualitative differences in testing order constraints regarding data dependence and model preference.

    @article{heck2015testing,
    title = {Testing Order Constraints: {{Qualitative}} Differences between {{Bayes}} Factors and Normalized Maximum Likelihood},
    author = {Heck, Daniel W and Wagenmakers, Eric-Jan and Morey, Richard D.},
    date = {2015},
    journaltitle = {Statistics \& Probability Letters},
    volume = {105},
    pages = {157--162},
    doi = {10.1016/j.spl.2015.06.014},
    abstract = {We compared Bayes factors to normalized maximum likelihood for the simple case of selecting between an order-constrained versus a full binomial model. This comparison revealed two qualitative differences in testing order constraints regarding data dependence and model preference.},
    arxivnumber = {1411.2778},
    keywords = {Inequality constraint,Minimum description length,model,Model complexity,model selection,Model selection,Polytope\_Sampling,selection}
    }
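
    For the binomial case, the Bayes factor for an order constraint follows from the encompassing-prior identity: the posterior probability that the constraint holds, divided by its prior probability, under the unconstrained model. A minimal sketch with uniform priors and hypothetical data:

    set.seed(1)
    k <- c(28, 14); n <- c(40, 40)  # hypothetical successes in two conditions
    prior_theta <- matrix(runif(2e5), ncol = 2)
    post_theta  <- cbind(rbeta(1e5, k[1] + 1, n[1] - k[1] + 1),
                         rbeta(1e5, k[2] + 1, n[2] - k[2] + 1))
    holds <- function(th) th[, 1] > th[, 2]  # order constraint theta1 > theta2
    mean(holds(post_theta)) / mean(holds(prior_theta))  # BF: constrained vs. full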

2014

  • [PDF] Heck, D. W., Moshagen, M., & Erdfelder, E. (2014). Model selection by minimum description length: Lower-bound sample sizes for the Fisher information approximation. Journal of Mathematical Psychology, 60, 29–34. https://doi.org/10.1016/j.jmp.2014.06.002
    [Abstract] [BibTeX] [Preprint] [GitHub]

    The Fisher information approximation (FIA) is an implementation of the minimum description length principle for model selection. Unlike information criteria such as AIC or BIC, it has the advantage of taking the functional form of a model into account. Unfortunately, FIA can be misleading in finite samples, resulting in an inversion of the correct rank order of complexity terms for competing models in the worst case. As a remedy, we propose a lower-bound N' for the sample size that suffices to preclude such errors. We illustrate the approach using three examples from the family of multinomial processing tree models.

    @article{heck2014model,
    title = {Model Selection by Minimum Description Length: {{Lower-bound}} Sample Sizes for the {{Fisher}} Information Approximation},
    author = {Heck, Daniel W and Moshagen, Morten and Erdfelder, Edgar},
    date = {2014},
    journaltitle = {Journal of Mathematical Psychology},
    volume = {60},
    pages = {29--34},
    doi = {10.1016/j.jmp.2014.06.002},
    abstract = {The Fisher information approximation (FIA) is an implementation of the minimum description length principle for model selection. Unlike information criteria such as AIC or BIC, it has the advantage of taking the functional form of a model into account. Unfortunately, FIA can be misleading in finite samples, resulting in an inversion of the correct rank order of complexity terms for competing models in the worst case. As a remedy, we propose a lower-bound N' for the sample size that suffices to preclude such errors. We illustrate the approach using three examples from the family of multinomial processing tree models.},
    arxivnumber = {1808.00212},
    github = {https://github.com/danheck/FIAminimumN},
    keywords = {heckfirst}
    }
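
    Assuming the usual FIA penalty (k/2) * log(N / (2 * pi)) + log(C), where C is the integrated square root of the Fisher-information determinant, the sample size at which the penalty ordering of two models flips has a closed form; the paper's lower bound N' is derived in this spirit. The values below are hypothetical.

    ## N at which the two complexity penalties intersect:
    fia_crossover <- function(k1, logC1, k2, logC2) {
      stopifnot(k2 > k1)
      2 * pi * exp(2 * (logC1 - logC2) / (k2 - k1))
    }
    ## Simpler model (k1 = 2) with the larger geometric-complexity term:
    fia_crossover(k1 = 2, logC1 = 1.8, k2 = 3, logC2 = 0.9)
    ## For N above this value, the penalties are ordered by parameter count.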

  • [PDF] Platzer, C., Bröder, A., & Heck, D. W. (2014). Deciding with the eye: How the visually manipulated accessibility of information in memory influences decision behavior. Memory & Cognition, 42, 595–608. https://doi.org/10.3758/s13421-013-0380-z
    [Abstract] [BibTeX]

    Decision situations are typically characterized by uncertainty: Individuals do not know the values of different options on a criterion dimension. For example, consumers do not know which is the healthiest of several products. To make a decision, individuals can use information about cues that are probabilistically related to the criterion dimension, such as sugar content or the concentration of natural vitamins. In two experiments, we investigated how the accessibility of cue information in memory affects which decision strategy individuals rely on. The accessibility of cue information was manipulated by means of a newly developed paradigm, the spatial-memory-cueing paradigm, which is based on a combination of the looking-at-nothing phenomenon and the spatial-cueing paradigm. The results indicated that people use different decision strategies, depending on the validity of easily accessible information. If the easily accessible information is valid, people stop information search and decide according to a simple take-the-best heuristic. If, however, information that comes to mind easily has a low predictive validity, people are more likely to integrate all available cue information in a compensatory manner.

    @article{platzer2014deciding,
    title = {Deciding with the Eye: {{How}} the Visually Manipulated Accessibility of Information in Memory Influences Decision Behavior},
    author = {Platzer, Christine and Bröder, Arndt and Heck, Daniel W},
    date = {2014},
    journaltitle = {Memory \& Cognition},
    volume = {42},
    pages = {595--608},
    doi = {10.3758/s13421-013-0380-z},
    abstract = {Decision situations are typically characterized by uncertainty: Individuals do not know the values of different options on a criterion dimension. For example, consumers do not know which is the healthiest of several products. To make a decision, individuals can use information about cues that are probabilistically related to the criterion dimension, such as sugar content or the concentration of natural vitamins. In two experiments, we investigated how the accessibility of cue information in memory affects which decision strategy individuals rely on. The accessibility of cue information was manipulated by means of a newly developed paradigm, the spatial-memory-cueing paradigm, which is based on a combination of the looking-at-nothing phenomenon and the spatial-cueing paradigm. The results indicated that people use different decision strategies, depending on the validity of easily accessible information. If the easily accessible information is valid, people stop information search and decide according to a simple take-the-best heuristic. If, however, information that comes to mind easily has a low predictive validity, people are more likely to integrate all available cue information in a compensatory manner.},
    keywords = {Accessibility,Decision Making,memory,Spatial attention,Visual salience}
    }

Keynotes, Invited Talks & Conference Presentations

2024

  • Heck, D. W. (2024). Modeling uncertainty in stepwise estimation approaches. Bayesian Methods for the Social Sciences. Amsterdam, Netherlands. https://bayesforshs2.sciencesconf.org
    @inproceedings{heck2024modeling,
    title = {Modeling Uncertainty in Stepwise Estimation Approaches},
    booktitle = {Bayesian Methods for the {{Social Sciences}}},
    author = {Heck, Daniel W},
    date = {2024},
    location = {Amsterdam, Netherlands},
    url = {https://bayesforshs2.sciencesconf.org},
    keywords = {heckinvited}
    }

  • Heck, D. W., & Schmidt, O. (2024). Multinomial models of the repetition-based truth effect: Shift in response bias or reduced discrimination ability? Virtual MathPsych.
    @inproceedings{heck2024multinomial,
    title = {Multinomial Models of the Repetition-Based Truth Effect: {{Shift}} in Response Bias or Reduced Discrimination Ability?},
    booktitle = {Virtual {{MathPsych}}},
    author = {Heck, Daniel W and Schmidt, Oliver},
    date = {2024},
    keywords = {hecktalk}
    }

2023

  • Heck, D. W. (2023). metaBMA: Bayesian model averaging for meta-analysis in R. ESMARConf2023. online. https://youtu.be/DcsRnRgY_co
    @inproceedings{heck2023metabma,
    title = {{{metaBMA}}: {{Bayesian}} Model Averaging for Meta-Analysis in {{R}}},
    booktitle = {{{ESMARConf2023}}},
    author = {Heck, Daniel W},
    date = {2023},
    location = {online},
    url = {https://youtu.be/DcsRnRgY_co},
    keywords = {hecktalk}
    }

  • Heck, D. W. (2023). Modeling the link between plausibility and the repetition-based truth effect. Virtual MathPsych. https://youtu.be/q1mcr2912bI
    @inproceedings{heck2023modeling,
    title = {Modeling the Link between Plausibility and the Repetition-Based Truth Effect},
    booktitle = {Virtual {{MathPsych}}},
    author = {Heck, Daniel W},
    date = {2023},
    url = {https://youtu.be/q1mcr2912bI},
    keywords = {hecktalk}
    }

  • Heck, D. W. (2023). A tutorial on Bayesian model averaging for meta-analysis using the metaBMA package in R. ESMARConf2023. online. https://youtu.be/e68YX1VTe_A
    @inproceedings{heck2023tutorial,
    title = {A Tutorial on {{Bayesian}} Model Averaging for Meta-Analysis Using the {{metaBMA}} Package in {{R}}},
    booktitle = {{{ESMARConf2023}}},
    author = {Heck, Daniel W},
    date = {2023},
    location = {online},
    url = {https://youtu.be/e68YX1VTe_A},
    keywords = {hecktalk}
    }

2022

  • Heck, D. W. (2022). Cognitive psychometrics: Measuring latent capacities with multinomial processing tree models. Department of Analysis and Modeling of Complex Data (Klaus Oberauer). Zürich, Switzerland.
    @inproceedings{heck2022cognitive,
    title = {Cognitive Psychometrics: {{Measuring}} Latent Capacities with Multinomial Processing Tree Models},
    booktitle = {Department of {{Analysis}} and {{Modeling}} of {{Complex Data}} ({{Klaus Oberauer}})},
    author = {Heck, Daniel W},
    date = {2022},
    location = {Zürich, Switzerland},
    abstract = {Many psychological theories assume that different cognitive processes can result in the same observable responses. Multinomial processing tree (MPT) models allow researchers to disentangle mixtures of latent processes based on observed response frequencies. MPT models have recently been extended to account for participant and item heterogeneity by assuming hierarchical group-level distributions. Thereby, it has become possible to link latent cognitive processes to external covariates such as personality traits and other person characteristics. Independently, item response trees (IRTrees) have become popular for modeling response styles. Whereas cognitive and social psychology has usually focused on the experimental validation of MPT parameters at the group level, psychometric approaches consider both the item and person level, thus allowing researchers to test the convergent and discriminant validity of measurements. Bridging these different modeling approaches, Bayesian hierarchical MPT models provide an opportunity to connect traditionally isolated disciplines in psychology.},
    keywords = {heckinvited}
    }

  • Heck, D. W., & Mayer, M. (2022). Cultural consensus theory for two-dimensional location judgments. In-Person MathPsych. Toronto.
    @inproceedings{heck2022cultural,
    title = {Cultural Consensus Theory for Two-Dimensional Location Judgments},
    booktitle = {In-{{Person MathPsych}}},
    author = {Heck, Daniel W and Mayer, Maren},
    date = {2022},
    location = {Toronto},
    keywords = {hecktalk}
    }

  • Heck, D. W. (2022). Modeling the link between plausibility and the illusory truth effect. 64. Tagung experimentell arbeitender Psychologen. Köln, Germany.
    @inproceedings{heck2022modeling,
    title = {Modeling the Link between Plausibility and the Illusory Truth Effect},
    booktitle = {64. {{Tagung}} Experimentell Arbeitender {{Psychologen}}},
    author = {Heck, Daniel W},
    date = {2022},
    location = {Köln, Germany},
    keywords = {hecktalk}
    }

  • Heck, D. W. (2022). Recent advances in multinomial modeling. Keynote, In-Person MathPsych. Toronto.
    @inproceedings{heck2022recent,
    title = {Recent Advances in Multinomial Modeling},
    booktitle = {Keynote, {{In-Person MathPsych}}},
    author = {Heck, Daniel W},
    date = {2022},
    location = {Toronto},
    keywords = {keynote}
    }

2021

  • Heck, D. W. (2021). Cognitive psychometrics: Measuring interindividual differences in latent processes. Center for Cognitive Science (Constantin Rothkopf). TU Darmstadt, Germany.
    @inproceedings{heck2021cognitive,
    title = {Cognitive Psychometrics: {{Measuring}} Interindividual Differences in Latent Processes},
    booktitle = {Center for {{Cognitive Science}} ({{Constantin Rothkopf}})},
    author = {Heck, Daniel W},
    date = {2021},
    location = {TU Darmstadt, Germany},
    abstract = {Many psychological theories assume that qualitatively different cognitive processes can result in identical responses. Multinomial processing tree (MPT) models allow researchers to disentangle latent cognitive processes based on observed response frequencies. Recently, MPT models have been extended to explicitly account for participant and item heterogeneity. These hierarchical Bayesian MPT models provide the opportunity to connect two traditionally isolated disciplines. Whereas cognitive psychology has often focused on the experimental validation of MPT model parameters on the group level, psychometrics provides the necessary concepts and tools for measuring differences in MPT parameters on the item or person level. Moreover, MPT parameters can be regressed on covariates to model latent processes as a function of personality traits or other person characteristics.},
    keywords = {heckinvited}
    }

  • Heck, D. W. (2021). Cognitive psychometrics: Measuring latent capacities with multinomial processing tree models. Department of Analysis and Modeling of Complex Data (Anna-Lena Schubert). Mainz, Germany.
    @inproceedings{heck2021cognitive-1,
    title = {Cognitive Psychometrics: {{Measuring}} Latent Capacities with Multinomial Processing Tree Models},
    booktitle = {Department of {{Analysis}} and {{Modeling}} of {{Complex Data}} ({{Anna-Lena Schubert}})},
    author = {Heck, Daniel W},
    date = {2021},
    location = {Mainz, Germany},
    abstract = {Many psychological theories assume that different cognitive processes can result in the same observable responses. Multinomial processing tree (MPT) models allow researchers to disentangle mixtures of latent processes based on observed response frequencies. MPT models have recently been extended to account for participant and item heterogeneity by assuming hierarchical group-level distributions. Thereby, it has become possible to link latent cognitive processes to external covariates such as personality traits and other person characteristics. Independently, item response trees (IRTrees) have become popular for modeling response styles. Whereas cognitive and social psychology has usually focused on the experimental validation of MPT parameters at the group level, psychometric approaches consider both the item and person level, thus allowing researchers to test the convergent and discriminant validity of measurements. Bridging these different modeling approaches, Bayesian hierarchical MPT models provide an opportunity to connect traditionally isolated disciplines in psychology.},
    keywords = {heckinvited}
    }

  • Heck, D. W. (2021). Cognitive psychometrics using hierarchical multinomial processing tree models. Invited Talk, International Meeting of the Psychometric Society (IMPS).
    @inproceedings{heck2021cognitive-2,
    title = {Cognitive Psychometrics Using Hierarchical Multinomial Processing Tree Models},
    booktitle = {Invited {{Talk}}, {{International Meeting}} of the {{Psychometric Society}} ({{IMPS}})},
    author = {Heck, Daniel W},
    date = {2021},
    keywords = {keynote}
    }

  • Heck, D. W., & Bockting, F. (2021). Bayes factors for repeated-measures designs: Benefits of model selection and model averaging. 15th Conference of the Section Methods and Evaluation. Mannheim, Germany.
    @inproceedings{heck2021fgme,
    title = {Bayes Factors for Repeated-Measures Designs: {{Benefits}} of Model Selection and Model Averaging},
    booktitle = {15th {{Conference}} of the {{Section Methods}} and {{Evaluation}}},
    author = {Heck, Daniel W and Bockting, Florence},
    date = {2021},
    location = {Mannheim, Germany},
    keywords = {hecktalk}
    }

  • Heck, D. W. (2021). Assessing the 'paradox' of converging evidence by modeling the joint distribution of individual differences. Virtual MathPsych. https://youtu.be/2t3DiMwVsoI
    @inproceedings{heck2021mathpsych,
    title = {Assessing the 'paradox' of Converging Evidence by Modeling the Joint Distribution of Individual Differences},
    booktitle = {Virtual {{MathPsych}}},
    author = {Heck, Daniel W},
    date = {2021},
    url = {https://youtu.be/2t3DiMwVsoI},
    keywords = {hecktalk}
    }

  • Heck, D. W. (2021). Modeling the proportion of individuals described by a theory: About the relevance of multivariate assumptions. 63. Tagung experimentell arbeitender Psychologen. Ulm, Germany.
    @inproceedings{heck2021teap,
    title = {Modeling the Proportion of Individuals Described by a Theory: {{About}} the Relevance of Multivariate Assumptions},
    booktitle = {63. {{Tagung}} Experimentell Arbeitender {{Psychologen}}},
    author = {Heck, Daniel W},
    date = {2021},
    location = {Ulm, Germany},
    keywords = {hecktalk}
    }

2019

  • Heck, D. W., & Davis-Stober, C. P. (2019). Bayesian inference for multinomial models with linear inequality constraints. Meeting of the European Mathematical Psychology Group. Heidelberg, Germany.
    @inproceedings{heck2019bayesian,
    title = {Bayesian Inference for Multinomial Models with Linear Inequality Constraints},
    booktitle = {Meeting of the {{European Mathematical Psychology Group}}},
    author = {Heck, Daniel W and Davis-Stober, Clintin P},
    date = {2019},
    location = {Heidelberg, Germany},
    keywords = {hecktalk}
    }

  • Heck, D. W. (2019). Bayesian inference for multinomial models with convex linear inequality constraints. Department of Psychology (Eric-Jan Wagenmakers). Amsterdam, Netherlands.
    @inproceedings{heck2019bayesian-1,
    title = {Bayesian Inference for Multinomial Models with Convex Linear Inequality Constraints},
    booktitle = {Department of {{Psychology}} ({{Eric-Jan Wagenmakers}})},
    author = {Heck, Daniel W},
    date = {2019},
    location = {Amsterdam, Netherlands},
    abstract = {Many theories in psychology make predictions about the relative size of probabilities underlying response frequencies for different stimulus material, experimental conditions, or preexisting groups. In such scenarios, multinomial models with inequality constraints are ideally suited for testing informative hypotheses and theoretical orderings on choice probabilities (e.g., whether choice probabilities monotonically increase across conditions). Even though different research groups have developed custom-tailored methods for specific applications and theories, no standardized methods and software are available for the general class of inequality-constrained multinomial models. To facilitate the application of multinomial models by applied and substantive researchers, the user-friendly R package “multinomineq” (Heck \& Davis-Stober, 2018) implements and extends computational methods to fit and test multinomial models with linear inequality constraints. Besides model fitting via Markov chain Monte Carlo sampling, the package facilitates model testing with posterior-predictive p-values and encompassing Bayes factors.},
    keywords = {heckinvited}
    }
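
    The multinomineq package mentioned in the abstract above encodes hypotheses as linear inequalities on the choice probabilities. The sketch below shows plausible usage for a monotonicity hypothesis with placeholder data; the A/b representation matches the package documentation as I recall it, but argument names should be verified against the installed version.

    # install.packages("multinomineq")
    library(multinomineq)

    k <- c(25, 27, 32)  # hypothetical choice frequencies in three conditions
    n <- c(40, 40, 40)  # trials per condition

    ## Monotonicity theta1 <= theta2 <= theta3, written as A %*% theta <= b:
    A <- rbind(c(1, -1,  0),
               c(0,  1, -1))
    b <- c(0, 0)

    bf_binom(k = k, n = n, A = A, b = b)  # encompassing Bayes factor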

  • Heck, D. W. (2019). Cognitive psychometrics with Bayesian hierarchical multinomial processing tree models. Meeting of the Working Group Structural Equation Modeling. Tübingen, Germany.
    @inproceedings{heck2019cognitive,
    title = {Cognitive Psychometrics with {{Bayesian}} Hierarchical Multinomial Processing Tree Models},
    booktitle = {Meeting of the {{Working Group Structural Equation Modeling}}},
    author = {Heck, Daniel W},
    date = {2019},
    location = {Tübingen, Germany},
    keywords = {hecktalk}
    }

  • Heck, D. W. (2019). Multinomial models with convex linear inequality constraints. Stochastics in Mannheim (Leif Döring). Mannheim, Germany.
    @inproceedings{heck2019multinomial-1,
    title = {Multinomial Models with Convex Linear Inequality Constraints},
    booktitle = {Stochastics in {{Mannheim}} ({{Leif Döring}})},
    author = {Heck, Daniel W},
    date = {2019},
    location = {Mannheim, Germany},
    abstract = {Many theories in psychology make predictions about the relative size of probabilities underlying response frequencies for different stimulus material, experimental conditions, or preexisting groups. In such scenarios, multinomial models with inequality constraints are ideally suited for testing informative hypotheses and theoretical orderings on choice probabilities (e.g., whether choice probabilities monotonically increase across conditions). Even though different research groups have developed custom-tailored methods for specific applications and theories, no standardized methods and software are available for the general class of inequality-constrained multinomial models. To facilitate the application of multinomial models by applied and substantive researchers, the user-friendly R package “multinomineq” (Heck \& Davis-Stober, 2018) implements and extends computational methods to fit and test multinomial models with linear inequality constraints. Besides model fitting via Markov chain Monte Carlo sampling, the package facilitates model testing with posterior-predictive p-values and encompassing Bayes factors.},
    keywords = {heckinvited}
    }

  • Heck, D. W. (2019). Multinomial models with convex linear inequality constraints. Department of Psychology (Herbert Hoijtink). Utrecht, Netherlands.
    @inproceedings{heck2019multinomial-3,
    title = {Multinomial Models with Convex Linear Inequality Constraints},
    booktitle = {Department of {{Psychology}} ({{Herbert Hoijtink}})},
    author = {Heck, Daniel W},
    date = {2019},
    location = {Utrecht, Netherlands},
    abstract = {Many theories in psychology make predictions about the relative size of probabilities underlying response frequencies for different stimulus material, experimental conditions, or preexisting groups. In such scenarios, multinomial models with inequality constraints are ideally suited for testing informative hypotheses and theoretical orderings on choice probabilities (e.g., whether choice probabilities monotonically increase across conditions). Even though different research groups have developed custom-tailored methods for specific applications and theories, no standardized methods and software are available for the general class of inequality-constrained multinomial models. To facilitate the application of multinomial models by applied and substantive researchers, the user-friendly R package “multinomineq” (Heck \& Davis-Stober, 2018) implements and extends computational methods to fit and test multinomial models with linear inequality constraints. Besides model fitting via Markov chain Monte Carlo sampling, the package facilitates model testing with posterior-predictive p-values and encompassing Bayes factors.},
    keywords = {heckinvited}
    }

  • Heck, D. W. (2019). Processing tree models for discrete and continuous variables. Cognition and Perception (Rolf Ulrich). Tübingen, Germany.
    @inproceedings{heck2019processing,
    title = {Processing Tree Models for Discrete and Continuous Variables},
    booktitle = {Cognition and {{Perception}} ({{Rolf Ulrich}})},
    author = {Heck, Daniel W},
    date = {2019},
    location = {Tübingen, Germany},
    keywords = {heckinvited}
    }

  • Heck, D. W., Noventa, S., & Erdfelder, E. (2019). Representing probabilistic models of knowledge space theory by multinomial processing tree models. 52nd Annual Meeting of the Society for Mathematical Psychology. Montreal, Canada.
    @inproceedings{heck2019representing,
    title = {Representing Probabilistic Models of Knowledge Space Theory by Multinomial Processing Tree Models},
    booktitle = {52nd {{Annual Meeting}} of the {{Society}} for {{Mathematical Psychology}}},
    author = {Heck, Daniel W and Noventa, Stefano and Erdfelder, Edgar},
    date = {2019},
    location = {Montreal, Canada},
    keywords = {hecktalk}
    }

  • Heck, D. W., Davis-Stober, C. P., & Cavagnaro, D. R. (2019). Testing informative hypotheses about latent classes of strategy users based on probabilistic classifications. 52nd Annual Meeting of the Society for Mathematical Psychology. Montreal, Canada.
    @inproceedings{heck2019testing,
    title = {Testing Informative Hypotheses about Latent Classes of Strategy Users Based on Probabilistic Classifications},
    booktitle = {52nd {{Annual Meeting}} of the {{Society}} for {{Mathematical Psychology}}},
    author = {Heck, Daniel W and Davis-Stober, Clintin P. and Cavagnaro, Daniel R.},
    date = {2019},
    location = {Montreal, Canada},
    keywords = {hecktalk}
    }

2018

  • Heck, D. W. (2018). Bayesian hierarchical multinomial processing tree models: A general framework for cognitive psychometrics. 51. Kongress der Deutschen Gesellschaft für Psychologie. Frankfurt, Germany.
    @inproceedings{heck2018bayesian,
    title = {Bayesian Hierarchical Multinomial Processing Tree Models: {{A}} General Framework for Cognitive Psychometrics},
    booktitle = {51. {{Kongress}} Der {{Deutschen Gesellschaft}} Für {{Psychologie}}},
    author = {Heck, Daniel W},
    date = {2018},
    location = {Frankfurt, Germany},
    keywords = {hecktalk}
    }

  • Heck, D. W. (2018). A caveat on using the Savage-Dickey density ratio in regression models. Department of Psychology (Eric-Jan Wagenmakers). Amsterdam, Netherlands.
    @inproceedings{heck2018caveat-1,
    title = {A Caveat on Using the {{Savage-Dickey}} Density Ratio in Regression Models},
    booktitle = {Department of {{Psychology}} ({{Eric-Jan Wagenmakers}})},
    author = {Heck, Daniel W},
    date = {2018},
    location = {Amsterdam, Netherlands},
    abstract = {In regression analysis, researchers are usually interested in testing whether one or more covariates have an effect on the dependent variable. To compute the Bayes factor for such an effect, the Savage-Dickey density ratio (SDDR) is often used. However, the SDDR only provides the correct Bayes factor if the prior distribution under the nested model is identical to the conditional prior under the full model. This assumption does not hold for regression models with the Jeffreys-Zellner-Siow (JZS) prior on multiple predictors. Beyond standard linear regression, this limitation of the SDDR is especially relevant when analytical solutions for the Bayes factor are not available (e.g., as in generalized linear models, nonlinear models, or cognitive process models with regression extensions). As a remedy, a generalization of the SDDR allows computing the correct Bayes factor.},
    keywords = {heckinvited}
    }

  • Heck, D. W., Erdfelder, E., & Kieslich, P. J. (2018). Jointly modeling mouse trajectories and accuracies with generalized processing trees. 60. Tagung experimentell arbeitender Psychologen. Marburg, Germany.
    @inproceedings{heck2018jointly,
    title = {Jointly Modeling Mouse Trajectories and Accuracies with Generalized Processing Trees},
    booktitle = {60. {{Tagung}} Experimentell Arbeitender {{Psychologen}}},
    author = {Heck, Daniel W and Erdfelder, E and Kieslich, Pascal J},
    date = {2018},
    location = {Marburg, Germany},
    keywords = {hecktalk}
    }

  • Heck, D. W., Seiling, L., & Bröder, A. (2018). The love of large numbers revisited: A coherence model of the popularity bias. Meeting of the Society for Judgment and Decision Making. New Orleans, LA.
    @inproceedings{heck2018love,
    title = {The Love of Large Numbers Revisited: {{A}} Coherence Model of the Popularity Bias},
    booktitle = {Meeting of the {{Society}} for {{Judgment}} and {{Decision Making}}},
    author = {Heck, Daniel W and Seiling, Lukas and Bröder, Arndt},
    date = {2018},
    location = {New Orleans, LA},
    keywords = {heckposter}
    }

  • Heck, D. W. (2018). Towards a measurement model for advice taking. SMiP Winter Retreat. St. Martin, Germany.
    @inproceedings{heck2018measurement,
    title = {Towards a Measurement Model for Advice Taking},
    booktitle = {{{SMiP Winter Retreat}}},
    author = {Heck, Daniel W},
    date = {2018},
    location = {St. Martin, Germany},
    keywords = {hecktalk}
    }

  • Heck, D. W. (2018). Multinomial models with convex linear inequality constraints. SMiP Summer Retreat. Wiesneck, Germany.
    @inproceedings{heck2018multinomial,
    title = {Multinomial Models with Convex Linear Inequality Constraints},
    booktitle = {{{SMiP Summer Retreat}}},
    author = {Heck, Daniel W},
    date = {2018},
    location = {Wiesneck, Germany},
    keywords = {hecktalk}
    }

  • Heck, D. W. (2018). Computing Bayes factors for cognitive models: A caveat on the Savage-Dickey density ratio. Psychonomic Society 59th Annual Meeting. New Orleans, LA.
    @inproceedings{heck2018psychonomics,
    title = {Computing {{Bayes}} Factors for Cognitive Models: {{A}} Caveat on the {{Savage-Dickey}} Density Ratio},
    booktitle = {Psychonomic {{Society}} 59th {{Annual Meeting}}},
    author = {Heck, Daniel W},
    date = {2018},
    location = {New Orleans, LA},
    keywords = {hecktalk}
    }

  • Heck, D. W. (2018). TreeBUGS: Hierarchical multinomial processing tree models in R. Keynote, Psychoco 2018: International Workshop on Psychometric Computing. Tübingen, Germany.
    @inproceedings{heck2018treebugs-3,
    title = {{{TreeBUGS}}: {{Hierarchical}} Multinomial Processing Tree Models in {{R}}},
    booktitle = {Keynote, {{Psychoco}} 2018: {{International Workshop}} on {{Psychometric Computing}}},
    author = {Heck, Daniel W},
    date = {2018},
    location = {Tübingen, Germany},
    keywords = {keynote}
    }

2017

  • Heck, D. W., & Erdfelder, E. (2017). Discrete-state modeling of discrete and continuous variables: A generalized processing tree framework. 13. Tagung der Fachgruppe Methoden & Evaluation. Tübingen, Germany.
    @inproceedings{heck2017discretestate,
    title = {Discrete-State Modeling of Discrete and Continuous Variables: {{A}} Generalized Processing Tree Framework},
    booktitle = {13. {{Tagung}} Der {{Fachgruppe Methoden}} \& {{Evaluation}}},
    author = {Heck, Daniel W and Erdfelder, E.},
    date = {2017},
    location = {Tübingen, Germany},
    keywords = {hecktalk}
    }

  • Heck, D. W. (2017). Extending multinomial processing tree models to account for response times and other continuous variables. Social Psychology and Methodology (Christoph Klauer). Freiburg, Germany.
    @inproceedings{heck2017extending,
    title = {Extending Multinomial Processing Tree Models to Account for Response Times and Other Continuous Variables},
    booktitle = {Social {{Psychology}} and {{Methodology}} ({{Christoph Klauer}})},
    author = {Heck, Daniel W},
    date = {2017},
    location = {Freiburg, Germany},
    keywords = {heckinvited}
    }

  • Heck, D. W. (2017). Extending multinomial processing tree models to response times: The case of the recognition heuristic. Center for Adaptive Rationality (Thorsten Pachur). Max Planck Institute, Berlin, Germany.
    @inproceedings{heck2017extending-1,
    title = {Extending Multinomial Processing Tree Models to Response Times: {{The}} Case of the Recognition Heuristic},
    booktitle = {Center for {{Adaptive Rationality}} ({{Thorsten Pachur}})},
    author = {Heck, Daniel W},
    date = {2017},
    location = {Max Planck Institute, Berlin, Germany},
    keywords = {heckinvited}
    }

  • Heck, D. W., Hilbig, B. E., & Moshagen, M. (2017). Formalizing and comparing psychologically plausible models of multiattribute decisions. Meeting of the Society for Judgment and Decision Making. Vancouver, BC.
    @inproceedings{heck2017formalizing,
    title = {Formalizing and Comparing Psychologically Plausible Models of Multiattribute Decisions},
    booktitle = {Meeting of the {{Society}} for {{Judgment}} and {{Decision Making}}},
    author = {Heck, Daniel W and Hilbig, Benjamin E and Moshagen, Morten},
    date = {2017},
    location = {Vancouver, BC},
    keywords = {heckposter}
    }

  • Heck, D. W., & Erdfelder, E. (2017). A generalized processing tree framework for discrete-state modeling of discrete and continuous variables. Psychonomic Society 58th Annual Meeting. Vancouver, BC.
    @inproceedings{heck2017generalized-1,
    title = {A Generalized Processing Tree Framework for Discrete-State Modeling of Discrete and Continuous Variables},
    booktitle = {Psychonomic {{Society}} 58th {{Annual Meeting}}},
    author = {Heck, Daniel W and Erdfelder, Edgar},
    date = {2017},
    location = {Vancouver, BC},
    keywords = {heckposter}
    }

  • Heck, D. W., & Erdfelder, E. (2017). Jointly modeling discrete and continuous variables: A generalized processing tree framework. 59. Tagung experimentell arbeitender Psychologen. Dresden, Germany.
    @inproceedings{heck2017jointly,
    title = {Jointly Modeling Discrete and Continuous Variables: {{A}} Generalized Processing Tree Framework},
    booktitle = {59. {{Tagung}} Experimentell Arbeitender {{Psychologen}}},
    author = {Heck, Daniel W and Erdfelder, E.},
    date = {2017},
    location = {Dresden, Germany},
    keywords = {hecktalk}
    }

  • Heck, D. W., Erdfelder, E., & Kieslich, P. J. (2017). Modeling mouse-tracking trajectories with generalized processing tree models. 50th Annual Meeting of the Society for Mathematical Psychology. Warwick, UK.
    @inproceedings{heck2017modeling,
    title = {Modeling Mouse-Tracking Trajectories with Generalized Processing Tree Models},
    booktitle = {50th {{Annual Meeting}} of the {{Society}} for {{Mathematical Psychology}}},
    author = {Heck, Daniel W and Erdfelder, E and Kieslich, Pascal J},
    date = {2017},
    location = {Warwick, UK},
    abstract = {Multinomial processing tree models assume a finite number of cognitive states that determine the frequencies of discrete responses. Generalized processing tree (GPT) models extend this conceptual framework to continuous variables such as response times, process-tracing measures, or neurophysiological variables. Essentially, GPT models assume a finite mixture distribution in which the mixture weights are determined by a processing-tree structure, while the continuous components are modeled by parameterized distributions such as Gaussians with separate or shared means across states. Using a simple modeling syntax, GPT models can easily be adapted to different experimental designs. We develop and test a GPT model for a mouse-tracking paradigm with a semantic categorization task, based on the feature comparison model (Smith, Shoben, \& Rips, 1974). The model jointly accounts for the frequencies of correct responses and the maximum deviation of mouse trajectories relative to a direct path.},
    keywords = {hecktalk}
    }

  • Heck, D. W. (2017). Quantifying uncertainty in transdimensional Markov chain Monte Carlo. Stochastics in Mannheim (Leif Döring). Mannheim, Germany.
    @inproceedings{heck2017quantifying,
    title = {Quantifying Uncertainty in Transdimensional {{Markov}} Chain {{Monte Carlo}}},
    booktitle = {Stochastics in {{Mannheim}} ({{Leif Döring}})},
    author = {Heck, Daniel W},
    date = {2017},
    location = {Mannheim, Germany},
    keywords = {heckinvited}
    }

  • Heck, D. W., Arnold, N. R., & Arnold, D. (2017). TreeBUGS: A user-friendly software for hierarchical multinomial processing tree modeling. Meeting of the Society for Computers in Psychology. Vancouver, BC.
    @inproceedings{heck2017treebugs-1,
    title = {{{TreeBUGS}}: {{A}} User-Friendly Software for Hierarchical Multinomial Processing Tree Modeling},
    booktitle = {Meeting of the {{Society}} for {{Computers}} in {{Psychology}}},
    author = {Heck, Daniel W and Arnold, Nina R. and Arnold, Denis},
    date = {2017},
    location = {Vancouver, BC},
    keywords = {hecktalk}
    }

2016

  • Heck, D. W., & Erdfelder, E. (2016). Generalized processing tree models: Modeling discrete and continuous variables simultaneously. 47th European Mathematical Psychology Group Meeting. Copenhagen, Denmark.
    @inproceedings{heck2016generalized,
    title = {Generalized Processing Tree Models: {{Modeling}} Discrete and Continuous Variables Simultaneously},
    booktitle = {47th {{European Mathematical Psychology Group Meeting}}},
    author = {Heck, Daniel W and Erdfelder, E.},
    date = {2016},
    location = {Copenhagen, Denmark},
    keywords = {hecktalk}
    }

  • Heck, D. W., & Erdfelder, E. (2016). Model-based evidence on response-time predictions of the recognition heuristic versus compensatory accounts of recognition use. 50. Kongress der Deutschen Gesellschaft für Psychologie. Leipzig, Germany.
    @inproceedings{heck2016modelbased,
    title = {Model-Based Evidence on Response-Time Predictions of the Recognition Heuristic versus Compensatory Accounts of Recognition Use},
    booktitle = {50. {{Kongress}} Der {{Deutschen Gesellschaft}} Für {{Psychologie}}},
    author = {Heck, Daniel W and Erdfelder, E.},
    date = {2016},
    location = {Leipzig, Germany},
    keywords = {hecktalk}
    }

  • Heck, D. W. (2016). A parallel-constraint satisfaction account of recognition-based decisions. Coherence-Based Approaches to Decision Making, Cognition, and Communication. Berlin, Germany.
    @inproceedings{heck2016parallelconstraint,
    title = {A Parallel-Constraint Satisfaction Account of Recognition-Based Decisions},
    booktitle = {Coherence-{{Based Approaches}} to {{Decision Making}}, {{Cognition}}, and {{Communication}}},
    author = {Heck, Daniel W},
    date = {2016},
    location = {Berlin, Germany},
    keywords = {hecktalk}
    }

  • Heck, D. W. (2016). Die Rekognitions-Heuristik als Spezialfall allgemeiner Informationsintegrations-Theorien: Erkenntnisse durch Antwortzeitmodellierung mit MPT Modellen [The recognition heuristic as a special case of general information-integration theories: Insights from response-time modeling with MPT models]. Department of General Psychology II (Klaus Rothermund). Jena, Germany.
    @inproceedings{heck2016rekognitionsheuristik,
    title = {Die {{Rekognitions-Heuristik}} Als {{Spezialfall}} Allgemeiner {{Informationsintegrations-Theorien}}: {{Erkenntnisse}} Durch {{Antwortzeitmodellierung}} Mit {{MPT Modellen}}},
    booktitle = {Department of {{General Psychology II}} ({{Klaus Rothermund}})},
    author = {Heck, Daniel W},
    date = {2016},
    location = {Jena, Germany},
    keywords = {heckinvited}
    }

  • Heck, D. W. (2016). RRreg: Ein R Package für Multivariate Analysen der Randomized Response Technik [RRreg: An R package for multivariate analyses of the randomized response technique]. Lehrstuhl für Diagnostik und Differentielle Psychologie (Jochen Musch). Düsseldorf, Germany.
    @inproceedings{heck2016rrreg-1,
    title = {{{RRreg}}: {{Ein R Package}} Für {{Multivariate Analysen}} Der {{Randomized Response Technik}}},
    booktitle = {Lehrstuhl Für {{Diagnostik}} Und {{Differentielle Psychologie}} ({{Jochen Musch}})},
    author = {Heck, Daniel W},
    date = {2016},
    location = {Düsseldorf, Germany},
    keywords = {heckinvited}
    }

  • Heck, D. W., & Erdfelder, E. (2016). Testing between information integration and heuristic accounts of recognition-based decisions. 58. Tagung experimentell arbeitender Psychologen. Heidelberg, Germany.
    @inproceedings{heck2016testing,
    title = {Testing between Information Integration and Heuristic Accounts of Recognition-Based Decisions},
    booktitle = {58. {{Tagung}} Experimentell Arbeitender {{Psychologen}}},
    author = {Heck, Daniel W and Erdfelder, E.},
    date = {2016},
    location = {Heidelberg, Germany},
    keywords = {hecktalk}
    }

  • Heck, D. W., & Erdfelder, E. (2016). Testing between serial and parallel theories of recognition-based heuristic decisions. 2nd International Meeting of the Psychonomic Society. Granada, Spain.
    @inproceedings{heck2016testing-1,
    title = {Testing between Serial and Parallel Theories of Recognition-Based Heuristic Decisions},
    booktitle = {2nd {{International Meeting}} of the {{Psychonomic Society}}},
    author = {Heck, Daniel W and Erdfelder, E.},
    date = {2016},
    location = {Granada, Spain},
    keywords = {heckposter}
    }

2015

  • Heck, D. W., & Erdfelder, E. (2015). Comparing the relative processing speed of the recognition heuristic and information integration: Extending the r-model to response times. 46th European Mathematical Psychology Group Meeting. Padua, Italy.
    @inproceedings{heck2015comparing,
    title = {Comparing the Relative Processing Speed of the Recognition Heuristic and Information Integration: {{Extending}} the r-Model to Response Times},
    booktitle = {46th {{European Mathematical Psychology Group Meeting}}},
    author = {Heck, Daniel W and Erdfelder, E.},
    date = {2015},
    location = {Padua, Italy},
    keywords = {hecktalk}
    }

  • Heck, D. W., & Erdfelder, E. (2015). Measuring the relative speed of the recognition heuristic. International Summer School on "Theories and Methods in Judgment and Decision Making Research". Nürnberg, Germany.
    @inproceedings{heck2015measuring,
    title = {Measuring the Relative Speed of the Recognition Heuristic},
    booktitle = {International {{Summer School}} on "{{Theories}} and {{Methods}} in {{Judgment}} and {{Decision Making Research}}"},
    author = {Heck, Daniel W and Erdfelder, E.},
    date = {2015},
    location = {Nürnberg, Germany},
    keywords = {heckposter}
    }

  • Heck, D. W., & Erdfelder, E. (2015). Modeling response times within the multinomial processing tree framework. 12. Tagung der Fachgruppe Methoden & Evaluation. Jena, Germany.
    @inproceedings{heck2015modeling,
    title = {Modeling Response Times within the Multinomial Processing Tree Framework},
    booktitle = {12. {{Tagung}} Der {{Fachgruppe Methoden}} \& {{Evaluation}}},
    author = {Heck, Daniel W and Erdfelder, E.},
    date = {2015},
    location = {Jena, Germany},
    keywords = {hecktalk}
    }

  • Heck, D. W., & Erdfelder, E. (2015). Response time modeling for finite-state models of recognition. 57. Tagung experimentell arbeitender Psychologen. Hildesheim, Germany.
    @inproceedings{heck2015response,
    title = {Response Time Modeling for Finite-State Models of Recognition},
    booktitle = {57. {{Tagung}} Experimentell Arbeitender {{Psychologen}}},
    author = {Heck, Daniel W and Erdfelder, E.},
    date = {2015},
    location = {Hildesheim, Germany},
    keywords = {hecktalk}
    }

2014

  • Heck, D. W., Moshagen, M., & Erdfelder, E. (2014). Modellselektion anhand Minimum Description Length: Wie groß muss die Stichprobengröße bei Anwendung der Fisher Information Approximation mindestens sein? [Model selection using minimum description length: How large must the sample size be when applying the Fisher information approximation?]. 49. Kongress der Deutschen Gesellschaft für Psychologie. Bochum, Germany.
    @inproceedings{heck2014modellselektion,
    title = {Modellselektion Anhand {{Minimum Description Length}}: {{Wie}} Groß Muss Die {{Stichprobengröße}} Bei {{Anwendung}} Der {{Fisher Information Approximation}} Mindestens Sein?},
    booktitle = {49. {{Kongress}} Der {{Deutschen Gesellschaft}} Für {{Psychologie}}},
    author = {Heck, Daniel W and Moshagen, Morten and Erdfelder, E.},
    date = {2014},
    location = {Bochum, Germany},
    keywords = {hecktalk}
    }

  • Heck, D. W., & Erdfelder, E. (2014). Response time modeling for finite-state models of recognition. Third European Summer School on Computational Modeling of Cognition with Applications to Society. Laufen, Germany.
    @inproceedings{heck2014response,
    title = {Response Time Modeling for Finite-State Models of Recognition},
    booktitle = {Third {{European Summer School}} on {{Computational Modeling}} of {{Cognition}} with {{Applications}} to {{Society}}},
    author = {Heck, Daniel W and Erdfelder, E.},
    date = {2014},
    location = {Laufen, Germany},
    keywords = {heckposter}
    }

2013

  • Heck, D. W., & Moshagen, M. (2013). Model selection by minimum description length: Performance of the Fisher information approximation. 46th Annual Meeting of the Society for Mathematical Psychology. Potsdam, Germany.
    @inproceedings{heck2013model,
    title = {Model Selection by Minimum Description Length: {{Performance}} of the {{Fisher}} Information Approximation},
    booktitle = {46th {{Annual Meeting}} of the {{Society}} for {{Mathematical Psychology}}},
    author = {Heck, Daniel W and Moshagen, Morten},
    date = {2013},
    location = {Potsdam, Germany},
    keywords = {heckposter}
    }

  • Heck, D. W., & Moshagen, M. (2013). Model selection of multinomial processing tree models – A Monte Carlo simulation. 55. Tagung experimentell arbeitender Psychologen. Vienna, Austria.
    @inproceedings{heck2013model-1,
    title = {Model Selection of Multinomial Processing Tree Models – {{A Monte Carlo}} Simulation},
    booktitle = {55. {{Tagung}} Experimentell Arbeitender {{Psychologen}}},
    author = {Heck, Daniel W and Moshagen, Morten},
    date = {2013},
    location = {Vienna, Austria},
    keywords = {heckposter}
    }