Principles of open, transparent and reproducible science in author guidelines of sleep research and chronobiology journals

Background: "Open science" is an umbrella term describing various aspects of transparent and open science practices. The adoption of practices at different levels of the scientific process (e.g., individual researchers, laboratories, institutions) has been rapidly changing the scientific research landscape in the past years, but their uptake differs from discipline to discipline. Here, we asked to what extent journals in the field of sleep research and chronobiology encourage or even require following transparent and open science principles in their author guidelines. Methods: We scored the author guidelines of a comprehensive set of 27 sleep and chronobiology journals, including the major outlets in the field, using the standardised Transparency and Openness (TOP) Factor. The TOP Factor is a quantitative summary of the extent to which journals encourage or require following various aspects of open science, including data citation, data transparency, analysis code transparency, materials transparency, design and analysis guidelines, study pre-registration, analysis plan pre-registration, replication, registered reports, and the use of open science badges. Results: Across the 27 journals, we find low values on the TOP Factor (median [25 th, 75 th percentile] 3 [1, 3], min. 0, max. 9, out of a total possible score of 29) in sleep research and chronobiology journals. Conclusions: Our findings suggest an opportunity for sleep research and chronobiology journals to further support recent developments in transparent and open science by implementing transparency and openness principles in their author guidelines.


Introduction
During the past few years, the open science movement has gained increasing popularity and is rapidly changing the way science is done, especially among early career researchers striving to improve scientific practice and overcome deficits in the current scientific status quo 1,2. The term "open science" is relatively ill-defined and includes a range of different methods, tools, platforms, and practices that are geared towards improving the quality of science through transparency 3. At present, it is still largely up to individual researchers and research groups to decide to what extent they want to engage in open science practices, and incentives that may promote open science are rare. Journals, as the main outlets for archival scientific dissemination, can support the movement and offer ways to make the scientific process more open and reproducible, and to emphasise good scientific practice. They may even speed up the process by requiring authors to adhere to open science standards. However, to what extent do journals in the fields of sleep and chronobiology encourage or even require following the standards of open science?
The scientific fields of sleep research and chronobiology concern all aspects of sleep and circadian rhythmicity. As almost all aspects of physiology and behaviour are under some type of circadian control, this cluster of scientific fields is fundamentally interdisciplinary, employing a wide variety of methodologies. Therefore, this research area is very heterogeneous, drawing from different 'core' disciplines (including neuroscience, psychology, molecular biology, and others), each with their own scientific history, and the degree to which open science principles are adopted may vary widely.
In this study, we asked to what extent scientific journals specialising in sleep research and chronobiology lay out open science principles in their author guidelines. Inspired by previous publications in the field of pain research 4,5, we assessed the implementation of research transparency and openness in journal guidelines using the quantitative Transparency and Openness Factor 6 (TOP Factor). The TOP Factor, launched in February 2020, is based on the Transparency and Openness Promotion (TOP) Guidelines and provides a normative standard for research transparency and openness. It contains ten sub-scales, corresponding to different aspects of openness and transparency in scientific research: data citation, data transparency, analysis code transparency, materials transparency, design and analysis guidelines, study pre-registration, analysis plan pre-registration, replication, registered reports, and open science badges.
The TOP Factor recognises different levels relating to mentioning, encouraging, requiring and enforcing specific transparency and openness practices, which are implemented in a verbally anchored rating scheme. The specific practices are:
• Data citation refers to the citation of data in a repository using standard means, including a digital object identifier (DOI). Citing the data set(s) that is/are reported on in a given publication facilitates access to these data, independent re-analyses by other authors, and validation of results, thereby likely increasing the probability that a given result can be reproduced by a different group, or at a later time.
• Data, analysis code, and materials transparency refers to making data, analysis code and materials available as part of the journal submission. The availability of original data again supports the reproducibility of a given reported research result, as other investigators will be able to analyse the data independently. Furthermore, the availability of data also supports meta-analysis or evidence synthesis efforts. The availability of code is a necessary step to be able to reproduce the exact same analysis again at a later time. Finally, availability of materials used in the study ensures that other investigators can independently replicate the research design and expand on it in a way that is consistent with the original publication.
• Design and analysis guidelines refers to the inclusion of instruments describing the study design and analysis formally, such as the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) or Consolidated Standards of Reporting Trials (CONSORT) standards. These standard methods aid in providing an overview of the methodological rigour of the study and facilitate appraisal of completeness of a given research study.
• Study pre-registration and analysis pre-registration refers to the pre-registration of data collection and/or analysis prior to their execution. Pre-registering study designs and analysis can be a countermeasure against HARKing (hypothesizing after the results are known) 7 and other questionable research practices (QRPs) 8 . Pre-registration can take many forms, and most often refers to specifying the research design in advance of data collection, as well as the data analysis plan, so as to avoid "fishing" for positive results.
• Replication refers to an explicit willingness of the journal to consider articles that are not based on novelty. This reflects the implicit mission of some journals to publish only novel work, and not simply replications of previous work. Publishing replications of previous work and their results to some degree supports independent confirmation of the previous results and, to a certain extent, addresses the file drawer problem 9.

Amendments from Version 1
This revised version incorporates comments and suggestions made by the reviewers, including: reorganisation of material between the Methods and Results sections, clarifications of methodology, clarifications related to the scoring method, a novel reliability analysis now included in the tables, more nuance in the Discussion and additional information in the Introduction section, and further minor changes. In addition to these reviewer-suggested changes, we have also changed the title to be clearer and to better reflect the work reported here.
Any further responses from the reviewers can be found at the end of the article
• The category Registered reports refers to prospective peer review, i.e. evaluation of a manuscript submitted to a journal prior to data collection and/or data analysis. Registered reports have recently gained significant traction, with a few high-profile journals, including Nature Human Behaviour, Nature Communications, Scientific Reports and PLOS Biology, accepting this article format that 'frontloads' peer review.
• Open-science badges refers to the use of so-called badges, which are awarded if a paper adheres to specific standards, thereby providing an incentive for promoting transparency and openness 10 .
In summary, the TOP Factor covers major dimensions of open science and provides a helpful and standardised tool that allows comparison, between journals or fields, of the extent to which they encourage or require adherence to open science principles.

TOP Factor
The TOP Factor (Transparency and Openness Factor; see extended data 11 ) is a quantitative score summarising the presence, requirement, and enforcement of transparent and open science practices in journals. It includes a total of ten sub-scales, of which nine score 0-3, and one scores 0-2, thereby resulting in a maximal summed score of 29. Higher values indicate a higher degree of adherence to the TOP practices.
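To make the scoring arithmetic concrete, the following minimal sketch shows how a summed TOP Factor could be computed from the ten sub-scale ratings. The sub-scale keys and the example ratings are our own illustration, and the assignment of the 0-2 range to one particular sub-scale is an assumption, not official TOP Factor documentation:

```python
# Illustrative sketch of the TOP Factor arithmetic; sub-scale keys and the
# choice of which sub-scale is capped at 2 are assumptions, not official data.
SUBSCALE_MAX = {
    "data_citation": 3,
    "data_transparency": 3,
    "analysis_code_transparency": 3,
    "materials_transparency": 3,
    "design_analysis_guidelines": 3,
    "study_preregistration": 3,
    "analysis_plan_preregistration": 3,
    "replication": 3,
    "registered_reports": 3,
    "open_science_badges": 2,  # the single 0-2 sub-scale (assumed here)
}
assert sum(SUBSCALE_MAX.values()) == 29  # maximal summed score

def top_factor(ratings: dict) -> int:
    """Sum the ten sub-scale ratings after a range check."""
    for name, value in ratings.items():
        if not 0 <= value <= SUBSCALE_MAX[name]:
            raise ValueError(f"{name}: rating {value} out of range")
    return sum(ratings.values())

# A journal rated 1 on every 0-3 sub-scale and 0 on the 0-2 sub-scale
example = {k: (0 if v == 2 else 1) for k, v in SUBSCALE_MAX.items()}
print(top_factor(example))  # -> 9
```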

Journal identification
Journals to be included in the rating were identified using a hybrid pre-registered strategy 12:
• Primary strategy. Relevant journals were identified using a search on the Web of Science Master Journal List (WoS MJL). The search terms, entered in separate searches, were:
• Secondary strategies. In addition to the primary search strategy, we used two supportive secondary strategies to identify relevant journals that may have been missed in the primary strategy:
◦ Own domain-relevant expertise in sleep and chronobiology;
◦ Informal consultation with a senior researcher with >25 years of experience in the field.
• Validation. The search results were merged, and duplicates were removed. We validated our search strategy by confirming that all journals listed in a recent publication on sleep research journals 13 were identified using this strategy, and by confirming that the above search terms produce the same list of journals in MEDLINE (https://www.ncbi.nlm.nih.gov/nlmcatalog?term=currentlyindexed%5BAll%20Fields%5D%20AND%20currentlyindexedelectronic%5BAll%20Fields%5D&cmd=DetailsSearch).
In addition to this strategy, we found two additional journals via searching the list of TOP signatories (https://www.cos.io/initiatives/top-guidelines) for "sleep", "chronobiology", "circadian", "biological rhythms" and "dream", and one through a search in the National Library of Medicine (NLM).

Journal meta-data extraction
We extracted the 2018 Journal Impact Factor (JIF) and the 5-year Journal Impact Factor from the Clarivate Analytics InCites platform. We obtained the NLM ID using a search of the NLM database, from which we also extracted the MEDLINE indexing status and the first year of publication. Information regarding support by scientific or professional societies was extracted from both the NLM entry and the journal website.

Journal guidelines extraction
We consulted the journal websites for author guidelines. Where possible, we archived journal guidelines either locally or on the Internet Archive's Wayback Machine.

Scoring and conflict resolution
Three scorers (authors of this study, M.S., M.H.S, and C.B.) independently assessed the 27 identified journals' TOP Factors in a total of 270 individual ratings (27 journals × 10 rating categories). In a first pass, the three scorers agreed in 77.7% of all ratings (210 out of 270 ratings; see underlying data 11 ). We then discussed and resolved major sources of discrepancy (e.g., we agreed that a clinical trial registration counted as preregistration), resolved some per-item disagreements and rescored the categories "Data citation" (initial disagreement rate: 13/27), "Reporting guidelines" (initial disagreement rate: 13/27) and "Study pre-registration" (initial disagreement rate: 19/27, see above) independently in a second pass (see underlying data 11 ). We report Fleiss' kappa 14 , intraclass correlation (ICC, calculated using the R package "irr"), and the disagreement rate for each item before resolving discrepancies. At the end of this second pass, we again entered a consensus process, by the end of which all ratings agreed. All scoring was completed between mid-May and mid-June 2020. Results are reported in Table 2.
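For readers who want to reproduce the reliability figures, the following sketch re-implements the two simplest quantities in Python. The study itself used Fleiss' kappa and the ICC from the R package "irr"; this stand-alone version is illustrative only, and the example data are random:

```python
import numpy as np

def fleiss_kappa(ratings: np.ndarray, n_categories: int) -> float:
    """Fleiss' kappa for a (subjects x raters) matrix of integer ratings
    coded 0..n_categories-1. Illustrative re-implementation; the study
    itself used the R package 'irr'."""
    n_subjects, n_raters = ratings.shape
    # counts[i, j]: number of raters assigning subject i to category j
    counts = np.stack([(ratings == j).sum(axis=1)
                       for j in range(n_categories)], axis=1)
    # observed agreement per subject, averaged over subjects
    p_i = (np.sum(counts ** 2, axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar = p_i.mean()
    # chance agreement from the marginal category proportions
    p_j = counts.sum(axis=0) / (n_subjects * n_raters)
    p_e = np.sum(p_j ** 2)
    return float((p_bar - p_e) / (1 - p_e))

def disagreement_rate(ratings: np.ndarray) -> float:
    """Fraction of subjects on which the raters do not all agree."""
    return float(np.mean(ratings.min(axis=1) != ratings.max(axis=1)))

# Example: 27 journals rated 0-3 by three scorers on one TOP category
rng = np.random.default_rng(0)
example = rng.integers(0, 4, size=(27, 3))
print(fleiss_kappa(example, n_categories=4), disagreement_rate(example))
```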

Journal identification
Through our search strategy, we identified a total of 28 journals. The 2018 JIF was available for 15 out of 28 journals (53.6%), and the 5-year JIF was available for all of these 15 except one (14 out of 28; 50%). Out of these 28 journals, 11 (39.3%) were not at present supported by a society. Three journals accepted submissions in a language other than English. Through our hybrid search strategy, we also found six additional journals (PLOS Neglected Tropical Diseases, International Journal of Gerontology, Lung India, Annals of Thoracic Medicine, Chronic Respiratory Disease, and Frontiers in Psychiatry), which we did not consider further, as they did not focus on sleep research or chronobiology, or were general purpose journals. One journal, Sleep Medicine Reviews, did not have any public author guidelines available, as it is an invite-only journal, and was therefore excluded, yielding a total number of 27 journals.
Low explicit implementation of transparency and openness in sleep research and chronobiology journals
Across the 27 journals we examined, we find an overall median TOP Factor of 3 (25th percentile 1, 75th percentile 4, minimum 0, maximum 9, IQR 3) out of a maximum of 29 points (Table 1 and Table 2). The three journals scoring highest on the TOP Factor were Clocks & Sleep (9), Sleep Science and Practice (7), and Sleep and Vigilance (6). Interestingly, these three journals were founded no earlier than 2017.
To determine whether journal prestige is correlated with the TOP Factor, we examined the rank correlations of the JIF and the 5-year JIF with the TOP Factor, finding no evidence for a significant correlation between the JIF and the TOP Factor (Spearman's ρ=-0.1012, p=0.7306; n=14) or between the 5-year JIF and the TOP Factor (Spearman's ρ=0.1389, p=0.6509; n=13).
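The rank correlation can be reproduced along the following lines; this is a sketch with placeholder values, as the actual JIF and TOP Factor values are reported in Table 1 and Table 2:

```python
from scipy.stats import spearmanr

# Placeholder values standing in for the n=14 journals with a 2018 JIF;
# substitute the actual values from Table 1 and Table 2.
jif = [4.9, 3.2, 5.1, 2.0, 1.4, 3.3, 2.8, 4.1, 1.9, 2.5, 3.0, 6.2, 1.1, 2.2]
top_factor = [3, 1, 0, 4, 9, 1, 3, 2, 6, 3, 0, 1, 7, 3]

rho, p = spearmanr(jif, top_factor)
print(f"Spearman's rho = {rho:.4f}, p = {p:.4f}")
```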

Discussion
Low uptake of transparency and openness principles
Our results, which focus on sleep research and chronobiology journals exclusively, are comparable to data on the uptake of the TOP guidelines across other disciplines. In 412 journals included in the Center for Open Science database (https://osf.io/qatkz/, Version: 19, 10 November 2020), the median TOP Factor is 2 (25th percentile 0, 75th percentile 8, minimum 0, maximum 27, IQR 8). Our subfield data are also comparable to a recent study examining transparency and openness in the field of pain research, which found a median TOP Factor of 3.5 (IQR 2.8) 4,5.
As the TOP Factor was only launched in February 2020, we see the low transparency and openness scores in sleep research and chronobiology journals as an opportunity to revisit how we do science, and how we report it. Importantly, while the TOP Factor is a useful instrument to assess transparency and openness at the journal level, it is clear that science policy makers, scientific and learned societies, funders, and research institutions play a key role in incentivizing open science.
Lack of a standard specification for journal guidelines
Across the 28 journals we examined, author guidelines varied widely in their accuracy, detail, and organisation of information.
Many journals appeared to follow standard publisher guidelines, with few or no modifications for the specific journal, and often even referred to the publisher guidelines for further information. An additional challenge comes from the fact that the public-facing journal guidelines are not fully indicative of the process that the journal will implement, as further guidelines or requirements may be hidden in the submission system, or in correspondence with the journal during peer review or after acceptance of the article.
For example, it is unclear to what extent a rule will be enforced in the submission process when the guidelines say that authors 'will be asked to' do something. Fundamentally, this unseen information may limit the extent to which public author guidelines truly reflect the enforcement of transparency and openness principles at a given journal. In one instance, the editorial celebrating the inaugural issue of the journal stated that it welcomes Registered Reports, but at present, the author guidelines do not explicitly state this 15. Unless one were to consult this additional information, it would remain unknown to the author. One way to improve transparency and openness may be to devise a standard, machine- and human-readable specification schema for submission guidelines, reflecting the categories in the TOP Factor.
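As a sketch of what such a specification could look like, a journal could publish a structured policy record alongside its prose guidelines. All field names, levels, and the journal name below are hypothetical, not an existing standard:

```python
# Hypothetical machine- and human-readable policy record mirroring the ten
# TOP Factor categories; all field names and values are illustrative.
GUIDELINE_SPEC = {
    "journal": "Example Journal of Sleep Research",  # hypothetical journal
    "top_policies": {
        # level: 0 = not mentioned, 1 = encouraged, 2 = required and
        # verified, 3 = required and enforced (illustrative coding)
        "data_citation":                 {"level": 2, "checked_at": "submission"},
        "data_transparency":             {"level": 1, "checked_at": "none"},
        "analysis_code_transparency":    {"level": 0, "checked_at": "none"},
        "materials_transparency":        {"level": 1, "checked_at": "none"},
        "design_analysis_guidelines":    {"level": 2, "checked_at": "peer_review"},
        "study_preregistration":         {"level": 1, "checked_at": "none"},
        "analysis_plan_preregistration": {"level": 0, "checked_at": "none"},
        "replication":                   {"level": 1, "checked_at": "none"},
        "registered_reports":            {"level": 1, "checked_at": "submission"},
        "open_science_badges":           {"level": 0, "checked_at": "none"},
    },
}
```

Such a record could be scored automatically, removing the ambiguity of parsing prose guidelines.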

Ambiguity in transparency and openness standards
There can be large ambiguity in the extent to which a journal implements specific transparency and openness standards. Take, for example, the category "Study pre-registration". There are four levels in this category:
• Level 0: Journal says nothing;
• Level 1: Articles will state if work was preregistered;
• Level 2: Article states whether work was preregistered and, if so, journal verifies adherence to preregistered plan;
• Level 3: Journal requires that confirmatory or inferential research must be preregistered.
According to the TOP Guidelines (v1.0.1), Level 1 is satisfied if the research was registered in an independent, institutional registry, specifying "study design, variables, and treatment conditions prior to conducting the research", leaving the level of detail open and rendering ratings ambiguous.
Indeed, there is debate and confusion regarding the use of the terms registration vs. pre-registration 16. While the registration of a clinical trial in a trial registry such as clinicaltrials.gov can be relatively lightweight and contain only minimal details, a pre-registration (as used in the open science community) typically refers to the prospective specification of concrete study details, including methodology, sample size, and analysis plan prior to data collection in registries such as AsPredicted.org (https://www.aspredicted.org/) or OSF Preregistration (https://osf.io/prereg/) 17.
In more detail, the registration of a clinical trial in a registry such as clinicaltrials.gov on the one hand, and the pre-registration of analysis procedures and hypotheses prior to conducting the research on the other hand, serve fundamentally different purposes, which is reflected in their nature. First, clinical studies that have not been registered are effectively impossible to publish in respected journals, rendering the process a necessity rather than a self-imposed step to improve scientific transparency. Generally, when authors register a clinical trial (e.g. on the German Clinical Trials Register 18), they have to provide a short description of the trial, name the study goals, describe the intervention, name the primary endpoints, inclusion and exclusion criteria, the final sample size (without rationale), and the sponsor. Clearly, although the degree of detail is of course also subject to variation among pre-registered studies, the required level of detail for registering clinical studies is rather low, with accountability consequently likewise being very low. In some jurisdictions (such as Switzerland), the submission of an ethics application as a clinical trial (which is required for some studies that modify sleep schedules) by default deposits the study in the (Swiss) clinical trial registry 19.
Further developments of the TOP guidelines should therefore reflect the extent to which something has been preregistered, possibly also including at which time point during the scientific process the registration has taken place. Likewise, journals should be clear about what level of preregistration they expect.
Linguistic details: When is 'should' mandatory?
The author guidelines also differed in the degree to which they used language to specify requirements. For example, many journals "encouraged" authors to do something, but this term carries essentially no power: authors may simply ignore it. The use of the verb "should" may be intended to signal mandatory requirements, but it leaves open the possibility of ignoring the requirement. Likewise, journals that "ask authors to do something" may still allow exceptions. This may not only be favourable for authors who do not comply with the requirements, but also allows editors to treat some submissions differently from others. If, in addition, the reward is too low or non-existent, or there are no tangible negative consequences, even diligent scientists may become a bit lazy, or simply prioritise non-optional tasks. Importantly, journal guidelines should frame requirements positively, and highlight the benefits of specific open science practices, including the discoverability of data and the prevention of the file-drawer effect.
Open review as an additional open science dimension
Some journals, including eLife, PLOS, and Clocks & Sleep, now offer to publish the pre-publication peer review, with the possibility of naming the reviewers (if they agree). Publishing the review report and disclosing the reviewer identity are independent aspects of the broader 'open review' concept, as it is possible to disclose the reviewer identity to the authors as part of the journal submission and publication process without publishing it as part of the article, and it is also possible to publish the review report in anonymised form. There may of course be negative effects of disclosing the identity of reviewers, and potential reviewers may self-censor or fear retaliation if their names are known. Publishing the review reports not only makes the journey of an article from submission to publication transparent, laying open the shaping of an article through the peer review process. It can also curtail unreasonable requests during peer review and may encourage reviewers to provide constructive feedback oriented towards the best scientific outcome. We therefore encourage including open review as an additional category in future developments of the TOP guidelines, covering the forms of open peer review described above.
The TOP Factor as a metric
Quantitative metrics can be subject to being 'gamed', i.e., manipulated in such a way that they no longer correspond to the original objective of the metric 20,21.
As principles of open science become more important in hiring and funding decisions, the TOP Factor may similarly be subject to being 'gamed', with 'openwashing', i.e. the possible practice of superficially claiming adherence to open science principles without 'true' engagement with them, being an immediate concern 22. Ultimately, whether or not this becomes a key driver will only be possible to study retrospectively.
While the TOP Factor represents a first and important step towards understanding the uptake of open and reproducible science systematically, it is important to note that some researchers may not be able to engage with open and reproducible science, due to institutional requirements, and not least due to the effort required to implement these practices. Furthermore, there may be privacy or intellectual property concerns that could prevent the release of data or code pipelines. It is also conceivable that journals may see open and reproducible practices as 'nice to have' rather than 'must have' features, or may think that they place a burden on authors, thereby making the journal less attractive. This highlights the complexity of translating concepts of open, transparent and reproducible science into practice.
We further want to highlight the need to consider open science not as one 'movement', but as a set of practices, principles and policies from a wide and diverse "buffet" of options 23. The TOP Factor represents but one way of summarising the extent to which one aspect of the research ecosystem, namely journals, implements these principles.

Conclusion
In a comprehensive analysis of the author guidelines of 27 sleep research and chronobiology journals, we found little evidence for the explicit implementation of open and transparent science principles, as assessed by the TOP Factor. We therefore encourage journals to make their requirements more explicit. Furthermore, to promote the recent developments in open, transparent and reproducible science, journals should provide incentives for following open science practices.

Open Peer Review
The TOP Factor is an appropriate framework to describe these issues, but it has some limitations. Notably, I would not describe TOP factor as an "instrument." Like the TOP guidelines, the TOP factor includes normative standards for transparency. It does not provide structured questions to determine whether those standards have been met. Some aspects of the TOP guidelines are difficult to rate because the standards were not developed to be used as a rating instrument. Each of the TOP guidelines includes multiple components, and journal policy language can be unclear and contradictory. For example, this manuscript says that "Replication refers to an explicit desire of the journal to include articles not based on novelty," which is part of, but not the entirety of, a TOP standard designed to promote the publication of replication studies. It is unclear how the authors handled this and other compound standards/questions. The discussion also raises the problem of unclear/imprecise language like "should," but the manuscript does not explain how the authors coded policies with unclear language.
The methods used to assess agreement are very limited. Given the proportion of journal policies that were rated level 0, the total agreement should be extremely high. For example, it is concerning that the authors disagreed about 13/28 journals for one item, which suggests that the "instrument" is not reliable. At a minimum, the authors should report kappa and specific agreement (e.g., agreement for the categories), and they should consider the implications of poor reliability for efforts to measure and improve transparency and openness.
The authors rated a small number of journals related to sleep research, a field in which I am not an expert. It seems possible that these journals would be representative of this specific field of research. Transparency is poor in many disciplines, but these specific results might have limited generalizability as well as limited internal validity.

If applicable, is the statistical analysis and its interpretation appropriate? Partly
Are all the source data underlying the results available to ensure full reproducibility? Yes

If applicable, is the statistical analysis and its interpretation appropriate? Not applicable
Are all the source data underlying the results available to ensure full reproducibility? Yes

We thank the reviewer for the comments.
Abstract: need more justification for why adherence should be mandatory.
We have now revised the abstract, and have removed the reference to mandatory adherence. Ultimately, we do not think that forcing authors to adhere to specific guidelines will lead to culture change.
P2 Introduction, 3rd paragraph: Refs 4 and 5 are both in the same field of pain; that is not evidence of "publications in other fields" (plural).
We have now revised this sentence to specifically refer to pain research.

P3 Introduction: Please add text about why each principle or group of principles are important for science.
We now added more text describing why each of these principles may be important for science, giving the reader more context.

P3 Methods: Reference 16 is to another version of this paper. It is not independent validation.
We have now moved this to the Validation section.

P4 Method, left panel: What strategy was used to find the two additional journals?
We have now included this information.

P4 Methods. Journal meta-data extraction: Information about what percent of journals were supported by a society and what percent were in a language that was not English belongs in Results (unless those criteria are exclusionary).
We have now moved this to the Results section.

P4 Methods. Journal guidelines extraction: Information about no public author guidelines belongs in Results.
We have now moved this to the Results section. We have now moved this to the Discussion section.

P4 Results "Lack of a standard specification for journal guidelines": It is not clear why the information in the first two sentences should be considered a problem. Most of the rest of the paragraph belongs in the Discussion.
This paragraph was moved to the Discussion. We have now added references to the two main pre-registration sites: AsPredicted.org and OSF Preregistration.
P9 Discussion: Another reason besides laziness why a researcher might not want to complete optional requirements is the many other tasks they might need to complete that are not optional.
We have revised this to include the nuance that researchers may simply prioritise non-optional tasks.

We have now revised this section to say: Some journals, including eLife, PLOS, and Clocks & Sleep, now offer to publish the pre-publication peer review, with the possibility of naming the reviewers (if they agree).
Publishing the review report and disclosing the reviewer identity are independent aspects of the broader 'open review' concept, as it is possible to disclose the reviewer identity to the authors as part of the journal submission and publication process without publishing it as part of the article, and it is also possible to publish the review report in anonymised form. There may of course be negative effects of disclosing the identity of reviewers, and potential reviewers may self-censor or fear retaliation if their names are known. Publishing the review reports not only makes the journey of an article from submission to publication transparent, laying open the shaping of an article through the peer review process. It can also curtail unreasonable requests during peer review and may encourage reviewers to provide constructive feedback oriented towards the best scientific outcome. We therefore encourage including open review as an additional category in future developments of the TOP guidelines, covering the forms of open peer review described above.

If applicable, is the statistical analysis and its interpretation appropriate? Yes
Are all the source data underlying the results available to ensure full reproducibility? Yes

Are the conclusions drawn adequately supported by the results? Yes
We thank the reviewers for their comments, which we think have greatly improved the manuscript. In revising it, we have revisited specific sections in the Methods and our calculations, some of which required corrections, which we have now made.

Reviewer 1
The authors assess the implementation of open science practices in journal author guidelines in the field of sleep and chronobiology. The authors achieved this by calculating the Transparency and Openness (TOP) factor for 28 sleep and chronobiology journals. They find that overall, open science practices are poorly represented within sleep and chronobiology journals and call for more explicit open science guidelines, e.g. via replacing "soft encouragement" with "hard requirements". The implementation of open science practices is an important matter and the manuscript helps to raise awareness that many publishers still do not encourage or require these practices. With that said, the present work only provides a limited overview for the field of sleep and chronobiology, without providing a goal post or a broader reference point (except for data from the field of pain research). I think a broader comparison across several fields would be more informative. However, given that few papers acknowledge open science issues in sleep research, this paper is a breath of fresh air towards an important discussion of such matters.
Issues that might be addressed in the current manuscript are:
1. What I am missing is a reference point. The authors mention the TOP factor in pain research on page 4, but I would like to see some general information about the distribution of TOP factors and a descriptive comparison of where sleep journals stand relative to others in general, as e.g. done in this blog article: https://www.natureindex.com/news-blog/top-factor-rates-journals-on-transparency-openness. After all, sleep research is not only published in sleep-related journals.

2. Maybe explicitly mention when the TOP factor was launched (Feb 2020), i.e. how much time there was to implement it. Of course, open science practices could (and should) have been implemented before the TOP factor, but given that the measure is around for a short period of time, it is reasonable to expect that journal requirements are not yet geared towards it.

3. Another point to discuss would be the potential misuse of the TOP factor, i.e. some form of "gamification" or "open washing", where the TOP factor becomes the new impact factor (https://go.nature.com/3gat5Vl). It may also be worth discussing whether the TOP factor is enough to encourage good science. Especially in sleep science there is a tendency to collect very small samples and as far as I can see there is no measurement for this included in the TOP factor (e.g., a priori power analyses with smallest effect size of interest). Also, funders and universities may play a much more important role in incentivizing open and reproducible science.

4. The authors collected data on the impact factor; it would be interesting to compare the general association between impact factor and TOP factor. However, the sample size (especially considering that the impact factor was only available for a subset of the journals) might be too small for a meaningful overview. Then it would be useful to nevertheless offer a scatter plot.
5. The authors rightly point out that journals should adopt hard requirements to encourage scientists to practice open science (page 9). This is then followed by a series of reasons why researchers have not adopted open science practices e.g., it costs time. It might be nice to remind the reader of the positive reasons to adopt open science practices (e.g., https://bit.ly/2X0nOrJ).

6. It is great that the authors have provided the data. However for the second excel file SuplementaryInformation_S2.xlsx, sheet "comparison", the syntax used does not appear to consistently generate the arrays of the rater ratings - might be worth checking this out to see what is going wrong. It might also be helpful to provide an aggregate sheet in that file as was the case in the SuplementaryInformation_S1.xlsx file.
7. The tables in the manuscript are a bit tedious to read. The authors could consider condensing them a bit. Ideally, the tables would be formatted to be publishable in portrait rather than landscape format. This may be achieved by choosing a smaller font and putting some of the labels into the table legend. In case the publisher intends to reformat these tables, this comment can largely be ignored.

We thank the reviewers for their comments. We have now included a section comparing the TOP Factors we found with journals registered in the Center for Open Science database: Our results, which focus on sleep research and chronobiology journals exclusively, are comparable to data on the uptake of the TOP guidelines across other disciplines. In 412 journals included in the Center for Open Science database (https://osf.io/qatkz/, Version: 19, 10 November 2020), the median TOP Factor is 2 (25th percentile 0, 75th percentile 8, minimum 0, maximum 27, IQR 8). Our subfield data are also comparable to a recent study examining transparency and openness in the field of pain research, which found a median TOP Factor of 3.5 (IQR 2.8) 4,5.
This now serves as a useful reference for the comparison, indicating that the results found here are very much comparable with other journals as well.
Maybe explicitly mention when the TOP factor was launched (Feb 2020), i.e. how much time there was to implement it. Of course, open science practices could (and should) have been implemented before the TOP factor, but given that the measure is around for a short period of time, it is reasonable to expect that journal requirements are not yet geared towards it.
Point 2: We have now addressed this in the introduction, stating that it was launched in February 2020, and also have added a section about this in the discussion. We write: As the TOP Factor was only launched in February 2020, we see the low transparency and openness scores in sleep research and chronobiology journals as an opportunity to revisit how we do science, and how we report it. Importantly, while the TOP Factor is a useful instrument to assess transparency and openness at the journal level, it is clear that science policy makers, scientific and learned societies, funders, and research institutions play a key role in incentivizing open science.
Another point to discuss would be the potential misuse of the TOP factor, i.e. some form of "gamification" or "open washing", where the TOP factor becomes the new impact factor ( https://go.nature.com/3gat5Vl). It may also be worth discussing whether the TOP factor is enough to encourage good science. Especially in sleep science there is a tendency to collect very small samples and as far as I can see there is no measurement for this included in the TOP factor (e.g., a priori power analyses with smallest effect size of interest). Also, funders and universities may play a much more important role in incentivizing open and reproducible science.
We have now addressed this in the discussion, where we write: Gaming the TOP Factor? Quantitative metrics can be subject to being 'gamed', i.e., manipulated in such a way that they no longer correspond to the original objective of the metric 17, 18. As principles of open science become more important in hiring and funding decisions, the TOP Factor may similarly be subject to being 'gamed', with 'openwashing' being an immediate concern 19. Ultimately, whether or not this becomes a key driver will only be possible to study retrospectively.
The authors collected data on the impact factor -it would be interesting to compare the general association between impact factor and TOP factor. However, the sample size (especially considering that the impact factor was only available for a subset of the journals) might be too small for a meaningful overview. Then it would be useful to nevertheless offer a scatter plot.
We now report the rank correlation; there is no convincing evidence of an association. As the data are available in the tables, any interested readers will be able to follow up with their own assessments.
The authors rightly point out that journals should adopt hard requirements to encourage scientists to practice open science (page 9). This is then followed by a series of reasons why researchers have not adopted open science practices e.g., it costs time. It might be nice to remind the reader of the positive reasons to adopt open science practices (e.g., https://bit.ly/2X0nOrJ).
We have now revised this and highlighted the positive aspects, and also stated that journal guidelines should be framed positively.
It is great that the authors have provided the data. However for the second excel file SuplementaryInformation_S2.xlsx, sheet "comparison", the syntax used does not appear to consistently generate the arrays of the rater ratings -might be worth checking this out to see what is going wrong. It might also be helpful to provide an aggregate sheet in that file as was the case in the SuplementaryInformation_S1.xlsx file.
We provide the second table only as a raw record of the intermediate data, and do not think that this will be useful for readers, but have included it for completeness. We would be happy to remove it.
The tables in the manuscript are a bit tedious to read. The authors could consider condensing them a bit. Ideally, the tables would be formatted to be publishable in portrait rather than landscape format. This may be achieved by choosing a smaller font and putting some of the labels into the table legend. In case the publisher intends to reformat these tables, this comment can largely be ignored.
Point 7: The formatting of the tables is ultimately up to the publisher and we will therefore leave it to them.

Competing Interests:
No competing interests were disclosed.