Dual Perspectives

Neuroscience Needs to Test Both Statistical and Scientific Hypotheses

Bradley E. Alger
Journal of Neuroscience 9 November 2022, 42 (45) 8432-8438; https://doi.org/10.1523/JNEUROSCI.1134-22.2022
Department of Physiology, Program in Neuroscience, University of Maryland School of Medicine, Baltimore, Maryland 21201

Published eLetters


RE: Author Response to Alger et al.
Robert Calin-Jageman, Author, Dominican University
Submitted on: 9 November 2022

    Alger’s defense of testing with p values is informed by a falsificationist approach to science, one in which scientists derive predictions from their theories, collect data, and then judge their predictions as corroborated or refuted.

    Alger argues that estimation is not suitable for a falsificationist approach to science. Are p values more suitable? Under current practice: no. Neuroscientists currently test the null hypothesis, not their own (often unspecified) predictions. In fact, the current approach to using p values does not allow the researcher’s predictions to be falsified. Non-significant results, which should be “valuable ‘negative information’ for scientific knowledge,” are published at implausibly low rates.

    What would it take to use p values according to falsificationist ideals of science? Estimation thinking. That is, researchers would need to think more about effect sizes and uncertainty, enough to derive quantitative predictions from their hypotheses (If our theory is correct, CREB1 enhancement will boost memory by >30%). Then, researchers would need to conduct tests that put those predictions at real risk. That would involve, at a minimum: a) planning for an adequate sample size and b) specifying an interval null hypothesis to provide a clear standard for falsifying their prediction (Our prediction will be falsified if memory retention changes by less than 20%). Current use of p values is simply not falsificationist; estimation thinking could help reform it to the ideals Alger convincingly extols.
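
    To make the interval-null idea concrete, the following is a minimal sketch (not part of the original letter): the data are simulated, the 30% predicted boost and 20% falsification bound are taken from the hypothetical CREB1 example above, and converting the confidence interval to a percent change by dividing by the control mean is a simplification that ignores uncertainty in that mean.

    # Hypothetical illustration of an interval-null / minimal-effects check.
    # The prediction (>30% boost) is corroborated only if the whole confidence
    # interval clears that value, and falsified only if the whole interval
    # falls below the pre-specified 20% falsification bound.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    control = rng.normal(loc=50, scale=10, size=40)   # simulated retention scores
    treated = rng.normal(loc=68, scale=10, size=40)   # simulated "CREB1-enhanced" scores

    # 95% CI for the difference in mean retention (pooled-df t interval).
    diff = treated.mean() - control.mean()
    se = np.sqrt(treated.var(ddof=1) / treated.size + control.var(ddof=1) / control.size)
    df = treated.size + control.size - 2
    ci_lo, ci_hi = diff + np.array(stats.t.interval(0.95, df)) * se

    # Express the effect as percent change relative to the control mean
    # (a simplification: uncertainty in the control mean is ignored).
    pct, pct_lo, pct_hi = (100 * x / control.mean() for x in (diff, ci_lo, ci_hi))
    print(f"Estimated boost: {pct:.1f}% (95% CI {pct_lo:.1f}% to {pct_hi:.1f}%)")

    PREDICTED_MIN = 30.0   # quantitative prediction: boost exceeds 30%
    FALSIFY_BELOW = 20.0   # interval null: prediction falsified below 20%

    if pct_lo > PREDICTED_MIN:
        print("Prediction corroborated: the entire CI exceeds the predicted 30% boost.")
    elif pct_hi < FALSIFY_BELOW:
        print("Prediction falsified: the entire CI falls below the 20% bound.")
    else:
        print("Inconclusive: the CI straddles the decision bounds; more data needed.")

    The decision rests on where the whole interval lies relative to the pre-specified bounds, not on whether a point null of zero can be rejected, which is the contrast the letter draws between estimation thinking and current practice.
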
    Although Alger acknowledges that hypothesis testing is not the only mode of scientific inquiry, he approves of the fact that it is now “a principal method and is pervasive”. Perhaps we should be troubled by the current hegemony of testing. For example, Hodgkin and Huxley developed their model of the action potential (1952) without p values. Moreover, their work does not reflect a falsificationist mode of inquiry, but rather a pragmatic epistemology best summarized by Box’s dictum that “all models are wrong, but some are useful” (1979, p. 202). Hodgkin and Huxley gladly made use of wrong but useful simplifications; they noted discrepancies between model and data not as refutations but as topics for additional research. This mode of inquiry seems fruitful and valuable for neuroscience. Perhaps hypothesis testing, even if improved, should become less pervasive.

    Finally, Alger presents a dour assessment of pre-registration, claiming it is not feasible at the cutting edge, would allow labs to be scooped, and lets researchers game the system. It is true that pre-registration is impossible for researchers pursuing projects so novel that “too little is known to be able to specify in advance all of the relevant variables or the likely outcomes of manipulations”. That is as it should be: by Alger’s standards, researchers so far out on the cutting edge are not yet ready to put their still-nascent theories to the test. When researchers have derived clear statistical predictions from their scientific hypotheses, they will not find pre-registration difficult or time-consuming. When that time comes, there is no need to worry about being scooped; the Open Science Framework allows an embargo period during which pre-registrations are kept private (but not forever!). And yes, pre-registration can be gamed. What’s notable is that the pre-registration process allows us to detect, quantify, and scold poor practices. Under current norms we can only guess at the degree to which questionable practices underlie our sparkling published literature. If we want improved testing with accountability, pre-registration is a useful tool.

    References

    Box GEP (1979) Robustness in the Strategy of Scientific Model Building. In: Robustness in Statistics (Launer RL, Wilkinson GN, eds), pp 201–236. Academic Press.

    Hodgkin AL, Huxley AF (1952) A quantitative description of membrane current and its application to conduction and excitation in nerve. The Journal of Physiology 117:500–544.

    Competing Interests: None declared.