Peer Review History
Original Submission: June 13, 2022
PONE-D-22-16988
Differentiating Bayesian model updating and model revision based on their prediction error dynamics
PLOS ONE

Dear Dr. Rutar,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE's publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

The paper requires major revision before it can be reconsidered for publication. For details, please refer to the reviewers' comments, which I believe are detailed and helpful. I would like to draw your attention to the comments raised by the reviewers about clarifying the assumptions made and discussing the potential limitations of those assumptions with reference to the results presented. Please also ensure that you improve the description of your experiments for clarity, as per the reviewers' comments. If you decide to resubmit a revised version, please provide point-by-point responses to each of the comments made by the reviewers. In your response, clearly explain what revisions have been made to address each of the points raised. If a comment is not addressed, please justify this decision in your response to the reviewer.

Please submit your revised manuscript by Oct 30 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,
Anthony C Constantinou
Academic Editor
PLOS ONE

Journal Requirements: When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf.

2. Please change "female" or "male" to "woman" or "man" as appropriate, when used as a noun (see for instance https://apastyle.apa.org/style-grammar-guidelines/bias-free-language/gender).

3. We note that the grant information you provided in the 'Funding Information' and 'Financial Disclosure' sections does not match. When you resubmit, please ensure that you provide the correct grant numbers for the awards you received for your study in the 'Funding Information' section.

4. Please include your full ethics statement in the 'Methods' section of your manuscript file. In your statement, please include the full name of the IRB or ethics committee that approved or waived your study, as well as whether or not you obtained informed written or verbal consent. If consent was waived for your study, please include this information in your statement as well.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly
Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: I Don't Know

**********

3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data (e.g., participant privacy or use of data from a third party), those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes

**********

5. Review Comments to the Author. Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters.)

Reviewer #1: I enjoyed reading this clever and nicely motivated study of model revision and updating. I thought the overall motivation was excellent. However, the design is so complicated that I did not fully understand what was being reversed between the revision and updating phases. Furthermore, you have made some rather superficial assumptions in building your hypotheses. This means it is difficult to assess the significance or implication of your empirical findings. Finally, although it is pleasingly honest of you to acknowledge that you started with pupillary responses as the primary dependent measure, I think this was fundamentally misguided for several reasons (please see below). In short, could you think about the following points, and whether you can restructure your paper along the lines suggested?

First, you need to be slightly more formal and specific about the distinction between model revision and model updating. I appreciate that these are terms that you have put in the literature, and that you will want to retain. However, there is an unfortunate conflation of the word 'updating' (in the sense of Bayesian belief updating versus model updating) that needs to be resolved. Furthermore, you seem to have a purely narrative understanding of predictive processing and the distinction between parameter learning and structure learning. I say this because you talk about revising hypotheses in the introduction. In predictive processing, there is only one model or hypothesis, and it is the parameters of this model that are updated (through revising prior beliefs to posterior beliefs) on the basis of experience. When you talk about model updating, you are referring to the structure of the model, as opposed to its parameters. I think you should make this clear with the following:

"Predictive processing can be regarded as an umbrella term for active inference and learning. Crucially, learning comes in two flavours: it can refer to the updating of model parameters (i.e., parameter learning of the sort associated with activity- or experience-dependent plasticity in the brain). Conversely, the model itself can be updated (i.e., structure learning mediated by the addition or removal of connections in the brain). In this context, model revision refers to the revision of model parameters or connection weights under a specific generative model or architecture, while model updating refers to the selection or reduction of models in terms of their structure [1-3]. There are two approaches to this kind of structure learning. One can start from an overcomplete generative model and then eliminate redundant parameters (i.e., Bayesian model reduction [4]).
Conversely, one can explore model space by adding extra parameters or connections (e.g., in the spirit of nonparametric Bayes [5, 6]). In both instances, the alternative models or hypotheses are compared in terms of their marginal likelihood or log evidence, rendering structure learning an instance of Bayesian model selection [7]. In the case of Bayesian model reduction from an overcomplete model, there are neurobiologically plausible and simple rules that can implement model updating, and that may underwrite aha moments or, indeed, a functional explanation for sleep and its associated synaptic homeostasis [8-11]."

The second big issue is your use of pupillary diameter as a proxy for prediction error. I think that this is an unfounded and misguided move. The link between various belief-updating processes in predictive processing and pupillary responses has yet to be established. I would leverage this in the way that you frame your report. In other words, instead of starting off by assuming that pupillary dilatation reflects this or that, you can identify the best explanations for pupillary dilatation on the basis of your results. I suggest this because most of the available evidence and computational work in predictive processing suggests that pupillary dilatation does not reflect prediction errors per se, but the precision or confidence placed in prediction errors of a particular sort. I would recommend you read [12] and then say something along the following lines:

"The precise belief updates or learning that underlie pupillary dilatation in predictive processing have yet to be fully established. However, early considerations suggest that the noradrenergic basis of pupillary dilatation links it to the encoding of precision or confidence about contingencies (i.e., transition probabilities) in the generative models that underlie active inference (a.k.a. predictive processing). In other words, pupillary responses may reflect the predictability or salience of a stimulus, where salience refers here to the propensity to revise or update latent or hidden states that are being inferred. However, the evidence for phasic pupillary responses reflecting, e.g., prediction errors, precision or precision-weighted prediction errors is much less clear. One might imagine that pupillary dilatation could play the role of electrophysiological correlates, such as the mismatch negativity, in reflecting the information gain or surprise inherent in a particular stimulus. In light of this, we characterised the time course of model revision and updating in terms of behavioural responses (i.e., predictive accuracy) and asked: what is the best predictor of the accompanying pupillary responses? In this study we were primarily concerned with phasic pupillary responses and, specifically, responses evoked by surprising or informative stimuli relative to predicted stimuli."

What I am proposing here is that you use the behavioural responses to track learning, and then use the argument that only after learning can there be predictions, and that only after there are predictions is a stimulus informative or surprising. In other words, you would expect to see a monotonic relationship between model updating or revision, as expressed in behavioural learning, and pupillary responses. The nature of this relationship is, I think, open. For example, it could reflect the confidence or precision about a prediction. In this case, the evoked responses to correct and incorrect targets should be the same.
Alternatively, pupillary responses could reflect an update to the predictions of predictability (i.e., precision). In this case, the interesting differences will emerge in terms of the difference between correct and incorrect target stimuli.

In terms of your experimental design, I think you need to be more careful in distinguishing your design from a simple reversal-learning paradigm. I would recommend something along the following lines:

"To disambiguate model revision from model updating, it is necessary to evince aha moments or model updating, in the sense that a pre-existing model is not fit for purpose after a change in contingencies. This requires a paradigm that goes beyond conventional reversal learning (i.e., where contingencies simply change and the parameters encoding those contingencies are revised via parametric learning). To examine putative model updating, we used a two-phase protocol, in which a simple (revision) model of associative contingencies was sufficient to explain observable outcomes. In the second (update) phase, we changed the contingencies in a structural or qualitative fashion by adding a conditional dependency or context sensitivity. Specifically, in the simple model there was no interaction between the predictive validity of visual cues and auditory cues. However, in the update phase the predictive validity of visual cues depended upon the presence of auditory cues. This allowed us to examine model revision and updating as subjects learned a simple model and then learned a more structured model."

I think at this stage, you have to think carefully about your hypotheses. Generally speaking, to look at model updating (i.e., Bayesian model selection or structure learning) one has to have a rather delicate paradigm that elicits aha moments; in other words, a sudden switch associated with the act of selecting one model over another, revealed by an abrupt change in inference and subsequent task performance. I do not think you have got this in your paradigm. In other words, there will be a degree of model updating in both the updating and revision phases. It may be that the simple model allows for a shorter latency of model updating, while the context-sensitive (update phase) model has a more protracted update. One could address this by assuming that each subject commits to a selected model at the point of model updating and estimating the most likely time point of this updating. The idea here would be that for the revision phase, most subjects discover or select their model early in the trials, while for the update phase, some subjects find the model more quickly while it takes other subjects much longer. This might be an interesting way of using your intersubject variability. Notice that this suggestion rests upon using the behavioural responses as a more efficient measure of learning. Once you have tied down the dynamics of model revision and updating, you can then turn to the pupillary responses and ask what they are most likely to reflect. In this spirit, you might also add the following to your discussion (to your paragraph about ways forward):

"Ultimately, to establish the construct validity of pupillary responses in terms of model revision and updating, it will be necessary to have efficient estimates of various belief states and learning. These can only be inferred from observable behaviour (e.g., choice behaviour or reaction times), under the ideal Bayesian observer assumptions afforded by active inference.
Early work along these lines has looked at baseline pupillary dilatation using a Markov decision process as the generative model [12]. It would be interesting to repeat this kind of exercise using paradigms that can elicit model updating and accompanying aha moments. See [8] for a numerical example of synthetic model updating."

Finally, I think you need to be clearer about the experimental design. There were too many factors and changes for the reader to make sense of. For example, I did not understand whether Mappings 1 and 2 referred to the precision (i.e., 80% versus 20%) or to the mapping per se (i.e., square means left). Crucially, it was not clear what was reversed and what was not reversed. I think the simplest thing to do would be to have a figure in which you draw the mappings for the two phases of the paradigm separately. The maps should connect the cues to the targets with little arrows. The precision of these mappings can then be indicated with 80% or 20% beside the arrows. This should also resolve confusion about your counterbalancing. For example, when you said that Mappings 1 and 2 were counterbalanced over subjects, does this mean that certain subjects never experienced one of the two Mappings?

I hope that these suggestions help, should any revision be required.

References:
1. Smith, R., et al. An Active Inference Approach to Modeling Structure Learning: Concept Learning as an Example Case. Front Comput Neurosci, 2020. 14: p. 41.
2. Gershman, S.J. and Niv, Y. Learning latent structure: carving nature at its joints. Curr Opin Neurobiol, 2010. 20(2): p. 251-6.
3. Tervo, D.G., Tenenbaum, J.B., and Gershman, S.J. Toward the neural implementation of structure learning. Curr Opin Neurobiol, 2016. 37: p. 99-105.
4. Friston, K., Parr, T., and Zeidman, P. Bayesian model reduction. arXiv preprint arXiv:1805.07092, 2018.
5. Goldwater, S. Nonparametric Bayesian Models of Lexical Acquisition. 2006, Brown University.
6. Gershman, S.J. and Blei, D.M. A tutorial on Bayesian nonparametric models. Journal of Mathematical Psychology, 2012. 56(1): p. 1-12.
7. Hoeting, J.A., et al. Bayesian Model Averaging: A Tutorial. Statistical Science, 1999. 14(4): p. 382-401.
8. Friston, K.J., et al. Active Inference, Curiosity and Insight. Neural Comput, 2017. 29(10): p. 2633-2683.
9. Hobson, J.A. and Friston, K.J. Consciousness, Dreams, and Inference: The Cartesian Theatre Revisited. Journal of Consciousness Studies, 2014. 21(1-2): p. 6-32.
10. Tononi, G. and Cirelli, C. Sleep function and synaptic homeostasis. Sleep Med Rev, 2006. 10(1): p. 49-62.
11. Hinton, G.E., et al. The "wake-sleep" algorithm for unsupervised neural networks. Science, 1995. 268(5214): p. 1158-61.
12. Vincent, P., et al. With an eye on uncertainty: Modelling pupillary responses to environmental volatility. PLOS Computational Biology, 2019. 15(7): p. e1007126.

Reviewer #2: I enjoyed reading this paper. This paper differentiates model updating and model revision using behavioural experiments, two concepts which have been, according to the authors, recently distinguished theoretically. It does so by proposing a behavioural experiment consisting of updating and revision phases, and assesses the participants' predictions and prediction errors throughout these phases, showing that these two phases have different predictive processing characteristics.
The paper assumes that "existing accounts of learning in the predictive-processing framework currently lack a crucial component: a constructive learning mechanism that accounts for changing models structurally when new hypotheses need to be learnt". As such, it has the ambition to inform the theoretical development of mechanisms that reproduce human model learning. One possibility, which the experiments suggest, is that "participants first built multiple models from scratch in the updating phase and update them in the revision phase". The paper is compelling and very well written, and I would recommend it for publication if the authors could say something about the following queries.

My main comment (detailed below) is that I do not entirely agree with the premise of the paper. I believe that the field has competing hypotheses about how humans learn their model of the world. While I think the experiments from the paper are a valuable contribution, I believe that framing them in light of the recent literature in computational cognitive science would increase the impact of the paper. In particular, maybe it will be possible to say something about whether the experiments provide evidence for or against different computational mechanisms that have been proposed to account for human model learning within predictive processing. A caveat: I am not qualified to assess the validity and soundness of the behavioural experiments.

Major comment: The paper, on several occasions, claims that "existing accounts of learning in the predictive-processing framework currently lack a crucial component: a constructive learning mechanism that accounts for changing models structurally when new hypotheses need to be learnt" (l501-502). It then proceeds by noting that "Kwisthout and colleagues (2017) proposed that model revision is a learning mechanism that is distinct from Bayesian model updating and accounts for such a structural change in generative models" (l504-505). While there are currently no algorithms that can reproduce human model learning at scale, the field of predictive processing has proposed several mechanistic explanations for human model learning. As I see it, these are split into two main categories:

1) Model revision as model updating: Model revision is cast as Bayesian belief updating over spaces of models. This is the view that has been developed by Tenenbaum, Gershman and colleagues. Human model learning can be done (in theory) by performing Bayesian inference over big spaces of generative models, often written as probabilistic programs. How do we add a factor, or hypothesis, to an existing model? One way to do this is via the toolkit of nonparametric Bayes, whereby the number of, say, hidden state factors in a model is updated via Bayesian inference (a toy numerical sketch of this kind of model-space updating is given at the end of this letter). Mathematically, this may require priors over spaces of models that are infinitely large, but this is not a problem, either theoretically or computationally. A nice review of Bayesian nonparametrics is: A tutorial on Bayesian nonparametric models by Gershman and Blei (2012). A nice review of model learning as Bayesian inference on large (but finite) spaces of generative models is: Bayesian Models of Conceptual Development: Learning as Building Models of the World by Ullman et al (2020).
A couple of nice papers that have implemented the latter in practice, showing human-level learning efficiency in some tasks, are: Human-Level Reinforcement Learning through Theory-Based Modeling, Exploration, and Planning by Tsividis et al (2021); and Inductive biases in theory-based reinforcement learning by Pouncy and Gershman (2022).

2) Model revision as free energy minimisation: This view describes human model learning as a process of (variational) free energy minimisation on spaces of generative models. This is the view advocated by Friston and colleagues, and it is not very dissimilar to the one above. Free energy minimisation entails Bayesian updating with maximisation of the model evidence, which is equivalent to minimising model complexity while maximising accuracy (this decomposition is written out at the end of this letter). In short, the added imperative to Bayesian updating entails regularising the model: fitting the Bayesian posterior while staying within models that are computationally manageable. In practice, this leads to building abstractions and hierarchical depth. A nice review of all this is: Active inference on discrete state-spaces: A synthesis by Da Costa et al (2020). Much remains unexplored regarding the use of free energy minimisation to learn models; but, from the current literature, two algorithms stand out (these are discussed in the previous paper):

a) Bayesian model reduction, which enables efficient model reduction thanks to free energy minimisation; see Bayesian model reduction by Friston et al (2019). This has been used to model sleep, synaptic pruning, and insight, e.g., Active Inference, Curiosity and Insight by Friston et al (2017).

b) Bayesian model expansion, which is about adding hypotheses to a model (i.e., growing a model); see An Active Inference Approach to Modeling Structure Learning: Concept Learning as an Example Case by Smith et al (2020).

It would be great if the authors could, either qualitatively or quantitatively, say whether these experiments bring evidence in favour of or against either of these hypotheses, which to my understanding are the main hypotheses advanced by the field in terms of describing model learning. My hunch is that, since model updating and revision are shown to have different predictive-processing characteristics, it could be a point in favour of the free energy view of things (which adds something to Bayesian updating). That said, Bayesian model updating is so flexible that maybe this framework could account for the data as well. Also, it might be possible to say something about the model revision phase in relation to mechanism 2. At the very least, the authors should mention this theoretical work on human model learning in the introduction.

Minor comments:

- L69-72: "the entirety of human cognition and behaviour from visual processing (Rao & Ballard, 1999; Edwards et al., 2017; Petro & Muckli, 2016) to mentalizing (Kilner et al., 2007; Koster-Hale & Saxe, 2013)". Here it may also be worth adding "and action" or "control", e.g., Action and behavior: a free-energy formulation by Friston et al (2010).

- L100-101: "Model revision, unlike model updating, changes the structure of a generative model by altering its causal connections or by adding and removing hypotheses (Kwisthout et al., 2017)".
Here it is worth mentioning the other terms in the literature that are synonymous with model revision:

  - Structure learning: Learning latent structure: carving nature at its joints by Gershman and Niv (2010); Active inference on discrete state-spaces: A synthesis by Da Costa et al (2020); An Active Inference Approach to Modeling Structure Learning: Concept Learning as an Example Case by Smith et al (2020).
  - Causal inference: Elements of Causal Inference by Peters et al (2017).

- Regarding the suggestion that "participants first built multiple models from scratch in the updating phase and update them in the revision phase", there may be a connection with the computational account of model learning in terms of Bayesian model updating presented in Inductive biases in theory-based reinforcement learning by Pouncy and Gershman (2022), which considers a handful of models (i.e., competing hypotheses) at each point in time.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Karl Friston
Reviewer #2: Yes: Lancelot Da Costa

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
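For reference, the formal machinery invoked in both reviews above (scoring candidate structures by their log evidence, Bayesian model reduction, the complexity-accuracy decomposition of free energy, and precision-weighted prediction errors) can be stated compactly. The following is a generic formulation in standard notation, assuming a generative model p(o, θ) = p(o | θ) p(θ); it is not an excerpt from the reviews or from the manuscript under review:

```latex
% Structure learning as Bayesian model selection: choose the candidate
% structure m with the greatest log (marginal) evidence for observations o.
m^{\ast} = \arg\max_{m} \ln p(o \mid m)

% Bayesian model reduction: for a reduced prior \tilde{p}(\theta) nested
% within a full prior p(\theta), with posterior q(\theta) under the full
% model, the change in log evidence is obtained without refitting:
\Delta F = \ln \mathbb{E}_{q(\theta)}\!\left[ \frac{\tilde{p}(\theta)}{p(\theta)} \right]

% Variational free energy decomposes into complexity minus accuracy; since
% -F is a lower bound on \ln p(o), minimising F maximises a bound on the
% model evidence:
F = \underbrace{D_{\mathrm{KL}}\!\left[ q(\theta) \,\middle\|\, p(\theta) \right]}_{\text{complexity}}
  - \underbrace{\mathbb{E}_{q(\theta)}\!\left[ \ln p(o \mid \theta) \right]}_{\text{accuracy}}

% Precision-weighted prediction error, distinguishing the three candidate
% pupillary correlates discussed above: the raw prediction error o - g(\mu),
% the precision \Pi, and their product:
\varepsilon = \Pi \left( o - g(\mu) \right)
```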
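Similarly, the "model revision as model updating" view (Bayesian belief updating over a space of candidate models) admits the toy numerical sketch referenced from Reviewer #2's first category above. It scores a context-free structure against a context-sensitive one on simulated outcomes, loosely mirroring the two-phase cue-validity paradigm described by Reviewer #1; the 80%/20% validities and the two candidate structures are illustrative assumptions, not parameters taken from the manuscript:

```python
import numpy as np

# Toy sketch (illustrative, not from the manuscript) of Bayesian belief
# updating over a discrete space of two candidate model structures.
# m0: the visual cue predicts the target with a fixed 80% validity.
# m1: cue validity depends on an auditory context
#     (80% when a tone is present, 20% when it is absent).

def likelihood_m0(correct, tone):
    # Context-free structure: the tone is ignored.
    return 0.8 if correct else 0.2

def likelihood_m1(correct, tone):
    # Context-sensitive structure: validity is conditioned on the tone.
    validity = 0.8 if tone else 0.2
    return validity if correct else 1.0 - validity

models = [likelihood_m0, likelihood_m1]
log_posterior = np.log([0.5, 0.5])  # flat prior over the two structures

rng = np.random.default_rng(0)
for _ in range(100):
    tone = bool(rng.random() < 0.5)
    # Generate outcomes from the context-sensitive ("update phase") structure.
    correct = bool(rng.random() < (0.8 if tone else 0.2))
    # Bayesian updating over models: accumulate log likelihoods.
    log_posterior += np.log([m(correct, tone) for m in models])

posterior = np.exp(log_posterior - log_posterior.max())
posterior /= posterior.sum()
print(dict(zip(["context-free m0", "context-sensitive m1"], posterior)))
```

Because the log posterior over structures simply accumulates log likelihoods, posterior mass concentrates on the context-sensitive structure once the data contain the conditional dependency; this is Bayesian model selection by accumulated log evidence, the same quantity that appears in the first line of the formulation above.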
Revision 1
Differentiating between Bayesian parameter learning and structure learning based on behavioural and pupil measures
PONE-D-22-16988R1

Dear Dr. Rutar,

We're pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you'll receive an e-mail detailing the required amendments. When these have been addressed, you'll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance.

To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double-check that your user information is up to date. If you have any billing-related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible, no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Anthony C Constantinou
Academic Editor
PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the "Comments to the Author" section, enter your conflict of interest statement in the "Confidential to Editor" section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed
Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes
Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes
Reviewer #2: I Don't Know

**********

4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data (e.g., participant privacy or use of data from a third party), those must be specified.

Reviewer #1: Yes
Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes
Reviewer #2: Yes

**********

6. Review Comments to the Author. Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters.)

Reviewer #1: Many thanks for responding to my previous comments, and congratulations on a very thoughtful piece of work.

Reviewer #2: I thank you for your detailed responses to our comments and thorough revision of the manuscript. I especially liked the reframing of the paper in terms of parameter learning versus structure learning, which resolved many of my queries.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Karl Friston
Reviewer #2: Yes: Lancelot Da Costa

**********
Formally Accepted
PONE-D-22-16988R1
Differentiating between Bayesian parameter learning and structure learning based on behavioural and pupil measures

Dear Dr. Rutar:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Dr. Anthony C Constantinou
Academic Editor
PLOS ONE
Open letter on the publication of peer review reports
PLOS recognizes the benefits of transparency in the peer review process. Therefore, we enable the publication of all of the content of peer review and author responses alongside final, published articles. Reviewers remain anonymous, unless they choose to reveal their names.
We encourage other journals to join us in this initiative. We hope that our action inspires the community, including researchers, research funders, and research institutions, to recognize the benefits of published peer review reports for all parts of the research system.
Learn more at ASAPbio.