Is there hard evidence that the grant peer review system performs significantly better than random?



























Funding agencies in academia use peer review to assess the quality of submitted grant proposals and to decide which proposals should be funded. I have doubts that this system actually works, based on the following findings:




  1. A recent study in PNAS "found no agreement among reviewers in evaluating the same application".

  2. We have ourselves performed bootstrapping on the reviews provided for the same proposal submitted to multiple calls in multiple years (yes, the guidelines allowed this) and obtained, on average, almost exactly the overall acceptance rate of the calls (see the sketch after this list).
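
A minimal sketch of that kind of resampling, assuming hypothetical review scores, an assumed panel size, and an assumed funding threshold (none of these numbers come from the actual analysis):

    import random

    # Hypothetical per-review scores the same proposal received across
    # several calls (illustrative numbers only; the real data are not shown).
    scores = [6.1, 3.8, 7.4, 5.0, 4.2, 6.8, 2.9, 5.5]

    PANEL_SIZE = 3           # assumed number of reviewers averaged per call
    FUNDING_THRESHOLD = 6.0  # assumed mean score needed to be funded
    N_RESAMPLES = 100_000

    funded = 0
    for _ in range(N_RESAMPLES):
        # Draw a synthetic panel (with replacement) from the observed scores.
        panel = random.choices(scores, k=PANEL_SIZE)
        if sum(panel) / PANEL_SIZE > FUNDING_THRESHOLD:
            funded += 1

    print(f"bootstrapped funding probability: {funded / N_RESAMPLES:.3f}")
    # If this lands near the calls' overall acceptance rate regardless of the
    # proposal, the reviews carry little proposal-specific signal.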


My question now goes in the opposite direction. Given that the grant review and panel system is quite costly: is there hard evidence that grant peer review produces results significantly better than random selection?



For example, do studies exist where 50 proposals were funded based on the standard review process and 50 randomly chosen proposals were funded in the same call, and then 5 or 7 years later the number of high-profile papers, citations, or similar outcomes was counted? Or do studies exist where re-evaluations show the same outcome as the original evaluations of the proposals?










peer-review funding

asked Apr 18 at 7:13 – lordylordy


















  • A quick comment: The PNAS study you mention could not reliably distinguish grants within a pool of grants previously deemed excellent (they had all been funded by the NIH). That is not the same as finding that the process could not distinguish mediocre from excellent grants. Just something to keep in mind. – user2705196, Apr 18 at 12:27

  • If proposals were decided on wholly by RNG, I know that I would have handed in a lot more proposals than I have so far. – sgf, Apr 18 at 13:06

  • Random allocation on which sample set of researchers? All university staff? All people who declare themselves interested in doing research on a certain topic (for instance, "vaccines" or "global warming")? All registered users on arXiv? All registered users on viXra? – Federico Poloni, Apr 18 at 14:49

  • @sgf and FedericoPoloni make the most compelling challenge to your question. The fact that there is known to be a review process will alter who submits what. Comparing accepted/rejected proposals submitted to review against a random sample of proposals submitted to review would likely yield very different results than comparing them against a random sample of proposals submitted to a funding source known to randomly select proposals for funding. – dwizum, Apr 18 at 15:14

  • You might want to check out this paper which, indeed, proposes to add randomness for efficient funding strategies. – Alberto Santini, Apr 19 at 18:16
8 Answers
































No such study exists.



You have to realize that the current model of funding research through grants is rather recent. Fifty years ago, research was funded with recurring credits: a lab/researcher got a fixed amount of money each year to do research as they saw fit. Only applied researchers got industrial grants to develop specific new applications, and even then this wasn't the core of their funding.



The current system arose due to the combination of two factors:




  • The growing hegemony of the neo-liberal ideology in the 80s-90s and the commodification of society as a whole. According to this ideology, research as a whole is supposedly more efficient (for an ill-defined notion of "efficient") if researchers compete with one another on the global market of research funding. They will "naturally" tend to research what works in order to keep their funding.

  • Politicians' will to govern research. Research, and especially fundamental research, is a notoriously difficult thing to govern: you never know where an idea might lead, or what new ideas will come up along the way. You don't know what will work and what won't. You don't know what will be applicable and what will remain a nice theoretical idea. Otherwise it wouldn't be research.

    But rulers want their say in this: they want to tell their voters that their policies improved lives in a very direct way as a result of such and such a decision they took. This is why words like "governance" and "steering" are becoming more and more prevalent in modern academia: administrators are not in the business of just helping researchers carry out their research; they are in the business of ensuring that researchers are researching what they are supposed to. And how do you force someone to do what you want, short of physical violence? Through economic violence.


Of course, this yields absurd situations, such as the fabled year when the French National Research Agency spent more on administration than on research. And such thinking paves the way for short-term research with no long-term goals, and especially none reachable in more than five years (the usual length of "long-term" grants). But these decisions are not driven by rational thought; they are driven by ideology.






– user106886, answered Apr 18 at 11:28



















  • Only the first sentence answers the question. – Anonymous Physicist, Apr 18 at 11:44

  • @AnonymousPhysicist Context and explanations are always welcome. I'm pretty sure an answer just saying "No" (technically correct, as you said) wouldn't be accepted, and if it were, it wouldn't be a good one. – user4052054, Apr 18 at 15:00

  • Comments are not for extended discussion; this conversation has been moved to chat. – eykanal, Apr 22 at 14:44

































I don't know about such studies, but I have served on ~20 panels to review proposals. While I'm entirely willing to believe that multiple panels will not agree on the relative ordering of proposals, I am quite convinced that they will in general agree on which proposals are "good" and which are "not good".

In any given round, a panel (at the National Science Foundation) will review on the order of 20 proposals. What may not be obvious to outsiders is that, of these, about 10 are pretty obviously not fundable, 3-5 are eventually rated as excellent, and the remainder are rated "pretty good". I am pretty convinced that if you ran multiple panels on these 20 proposals, this classification into three groups would be more or less stable. What I am also convinced of is that panels will produce different rankings within each group.

The consequence is that different panels may well result in a different set of proposals being funded, given that only ~4 proposals out of the 20 will be funded. But overall, the result will still be far from random: the top 3-5 are still likely to be funded, the bottom 10 will definitely not be funded, and it's a toss-up in between.






– Wolfgang Bangerth, answered Apr 18 at 13:16, edited Apr 18 at 19:06
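
A hedged illustration of the claim in this answer: the following toy simulation (assumed quality and noise levels, not data from real panels) shows how coarse tiers can agree across independent panels even while exact rankings churn.

    import random

    def panel_scores(quality, reviewers, noise, rng):
        # A panel's score for each proposal = true quality plus the mean of
        # `reviewers` independent Gaussian noise terms (all assumptions).
        return [q + sum(rng.gauss(0, noise) for _ in range(reviewers)) / reviewers
                for q in quality]

    def tier(rank, n):
        # Coarse classification with the ratios from the answer above:
        # top 5 "excellent", bottom 10 "not fundable", the rest "pretty good".
        if rank < 5:
            return "excellent"
        if rank >= n - 10:
            return "not fundable"
        return "pretty good"

    rng = random.Random(42)
    n = 20
    quality = [rng.uniform(0, 10) for _ in range(n)]  # hidden true qualities

    tier_agree = rank_agree = 0.0
    TRIALS = 1000
    for _ in range(TRIALS):
        a = panel_scores(quality, reviewers=4, noise=2.0, rng=rng)
        b = panel_scores(quality, reviewers=4, noise=2.0, rng=rng)
        rank_a = {p: r for r, p in enumerate(sorted(range(n), key=lambda i: -a[i]))}
        rank_b = {p: r for r, p in enumerate(sorted(range(n), key=lambda i: -b[i]))}
        tier_agree += sum(tier(rank_a[p], n) == tier(rank_b[p], n) for p in range(n)) / n
        rank_agree += sum(rank_a[p] == rank_b[p] for p in range(n)) / n

    print(f"mean tier agreement:       {tier_agree / TRIALS:.2f}")  # typically high
    print(f"mean exact-rank agreement: {rank_agree / TRIALS:.2f}")  # much lower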





















  • Having sat on a similar number of panels (for internal proposals), I am entirely in agreement on this. You get 10 people to rank 20 proposals, and they fall into those three categories in about those ratios. If anything, it is pleasantly surprising to see a room full of people with different backgrounds and priorities generally agree on 80% of the proposals (12 are bad, 4 are excellent) and only need to hash out which of the remaining 4 are worthy, and then only if there is enough money. – Jon Custer, Apr 18 at 13:46

  • Hmm, the study linked above finds the opposite: "discussion during study section meetings worsened rather than improved disagreement among different study sections" (study section meetings = panel meetings). – lordy, Apr 18 at 14:56

  • I've never served on a grant review board, but this agrees with my experience in other settings where I had to choose from a set of candidates (selecting papers for conferences, best paper awards, and recruiting). So from the outside it looks as if the final winner is a random choice from the top X submissions. But @Buffy notes above how peer review does improve the grant proposals. So we could use peer review to get the grant accepted into a pool, then randomly select from the pool? – rcriii, Apr 18 at 17:34

  • @rcriii: Yes, that's my experience: it would be enough to cluster proposals into pools. The NSF asks panelists to also rank proposals in the top and middle pools, presumably so that they have some backing when they make the call on whether a particular proposal is funded or not. I suspect that the outcome is not vastly better than just randomly picking from the pool. – Wolfgang Bangerth, Apr 18 at 19:06

  • @lordy I have never personally been part of a grant selection committee where what is described has occurred. I know anecdotally that, depending on the field, some will vigorously defend their choices, but quite honestly, true curmudgeons are rarely invited back to such panels, largely because committee chairs and grant managers seek consensus rather than division. – ZeroTheHero, Apr 19 at 1:00

































There is another aspect to the question at hand. A random assignment of monies to projects can be expected to fail because the system in place, whatever it is, induces certain behaviors. One wants to design a system so that positive behavior is encouraged and negative behavior discouraged.



In a system of peer review those seeking funds are induced to give their analysis of why their proposed project should be funded and why it is highly likely to be successful. They know that their words will have consequences and that the review will be carried out by knowledgeable people who have assumed a certain responsibility.



But if the system were random, and known to be random, the behavior induced would be quite different. There would be no real need to spend the time and effort to examine and explain the background and methodology to be used and to guarantee that it meets scientific rigor. The consequence would be that many more proposals would be made and most of them would be junk.



With that background, note that the proposals that are actually made to funding agencies are pretty much all guaranteed to be at least adequate. So the peer review boards are choosing from among the best proposals that might be expected, rather than from the trash that might be thrown out by charlatans.



This brings me to a thought experiment that I've used in other contexts. Suppose, in a system that repeats, you rank a set of things on some linear scale from best to worst, and you discard the worst. The worst is replaced. This is sometimes what happens in companies that rank employees on some numeric scale and fire the worst performer.



But if you started out with, say, the ten best performers in the world, this process would only worsen your team, since every replacement will, in expectation, be worse than the ten you started with. A toy simulation of this follows.
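
A minimal sketch of this thought experiment, with assumed performance scores (the 0-100 scale, group sizes, and number of rounds are illustrative only):

    import random

    rng = random.Random(1)

    # Assumed scale: performance scored 0-100 across the general population;
    # the starting team is elite (all scores above 90). Toy numbers only.
    team = [rng.uniform(90, 100) for _ in range(10)]
    print(f"starting average: {sum(team) / len(team):.1f}")

    for _ in range(50):
        team.remove(min(team))            # discard the worst performer
        team.append(rng.uniform(0, 100))  # replacement drawn from everyone

    print(f"average after 50 rounds: {sum(team) / len(team):.1f}")
    # Each replacement is, in expectation, worse than the member discarded,
    # so an elite team's average can only drift downward.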



Thus the question asked by the OP doesn't occur in a vacuum. Nor, I suggest, can the various proposals be ranked in theory on a best to worst scale. There are too many variables and something better on one scale isn't as good on another. So, while the system has flaws, it is in the nature of the world that some flaws will exist. But a system that induces and rewards good behavior is, in principle, about as good as we can hope for.



The various judgments made by the review boards are, in the larger sense of things, relatively minor.

























  • "The proposals that are actually made to funding agencies are pretty much all guaranteed to be at least adequate" -- this is empirically not true. About 20-30% of the proposals I've reviewed (which must now be in the range of 200-300) are not adequate by any measure. – Wolfgang Bangerth, Apr 18 at 19:08

  • But I'll add that these are not written by charlatans, but typically by younger scientists at smaller universities with inadequate mentoring in what winning proposals look like. – Wolfgang Bangerth, Apr 18 at 19:09

  • You are assuming that all submitted proposals enter the lottery. This need not be the case. You can still screen the proposals and dismiss those which are clearly inadequate. – Carl Christian, Apr 19 at 20:15

  • @CarlChristian That is not the definition of "random". Currently we have some degree of randomness in that different panels have different people, and different competition. People re-submit grants without any edits and get them funded just because the competition or the review panel changed. – aaaaaa, Apr 20 at 19:37

  • @aaaaaa I do not see that your comment has any bearing on the fact that Buffy assumes that all submitted proposals enter the lottery. Drawing lots among all adequate proposals eliminates any possibility of favoritism of any kind, including the well-documented bias against female scientists. If there is a downside to drawing lots that outweighs such gains, I simply do not see it. – Carl Christian, Apr 20 at 22:01

































Short answer:



Such hard evidence probably does not currently exist, but a random "funding lottery" scheme has finally been implemented in New Zealand and is being studied. However, the sample size is small, and it may be until at least 2026 before real data are obtained.





Why there is a lack of evidence:



In the RAND report Alternatives to Peer Review in Research Project Funding: 2013 Update, it was noted that "the idea of randomly allocating research funding has been developed theoretically and is used by some institutions for small grants". While the award of these small grants clearly avoids some of the biases inherent in peer-review-based funding, along with much of the administrative burden, the fact that the grants are small (e.g., some are sized for travel funding) means that one cannot really infer anything from them about large-scale random funding schemes.



Things have evolved somewhat since. A 2018 paper titled "Policy Considerations for Random Allocation of Research Funds" by Shahar Avin notes that at least three major funding bodies have started allocating some funds using a degree of randomization. These include:




  1. The Health Research Council (HRC) of New Zealand's “Explorer Grants”.

  2. New Zealand's Science for Technological Innovation (SfTI) "Seed Projects".

  3. Volkswagen Foundation’s “Experiment!” grants.


HRC describes the Explorer Grants as follows:




Explorer grants support transformative research ideas that have a good chance of making a revolutionary change to how we manage New Zealanders' health. They are available in any health research discipline and are worth $150,000 for a term of up to 24 months.

...
Applications for explorer grants are assessed by subpanels within the HRC's Explorer Grant Assessing Committee to see if they meet the criteria of being both transformative and viable. Unlike with any of our other grants, the assessment process for explorer grant applications is anonymous and all applications that meet the criteria are equally eligible to receive funding. A random number generator prioritises these applications.




The HRC does not use this scheme for other grants, but believes




that random funding is a fair and transparent way to choose between equally qualified applicants, and it's particularly suited to Explorer Grants where it may not be appropriate to rank or score high-risk applications with less predictable outcomes.




The SfTI Seed project proposals similarly undergo an initial assessment, after which some of them form a special pool from which proposals are randomly drawn. Proposals for the Volkswagen Foundation's Experiment! grants are screened by an interdisciplinary jury that funds some proposals, rejects some, and leaves some for a funding lottery.
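
A minimal sketch of the screen-then-lottery pattern these schemes share, assuming a boolean viability screen and a fixed number of funded slots (the function, names, and data below are hypothetical stand-ins, not any agency's actual process):

    import random

    def allocate(proposals, meets_criteria, budget_slots, seed=None):
        # Screen-then-lottery: drop proposals failing the minimum criteria,
        # then let a random number generator prioritize the rest.
        # `meets_criteria` stands in for the human viability assessment.
        rng = random.Random(seed)
        eligible = [p for p in proposals if meets_criteria(p)]
        rng.shuffle(eligible)           # random prioritization
        return eligible[:budget_slots]  # fund as many as the budget allows

    # Toy usage with made-up proposals: (name, passes_screen)
    proposals = [("A", True), ("B", False), ("C", True), ("D", True), ("E", True)]
    funded = allocate(proposals, lambda p: p[1], budget_slots=2, seed=7)
    print([name for name, _ in funded])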



As such, HRC's Explorer Grants are the most interesting from this perspective, as all proposals meeting the minimum requirements go into the lottery. This is why they are currently being studied:




A funding lottery creates a perfect randomized trial because we have equally worthy researchers who are funded at random. We can then track their careers from the point of randomization and compare them in terms of metrics such as publications, citations, and other funding, as well as perhaps more-complex outcomes, such as innovation. We are currently following researchers who applied for funding with the New Zealand Health Research Council and were randomly allocated funding (3); however, the sample size is small, and it may be at least a decade before we have accumulated enough data to show meaningful differences.






Summary:



In conclusion, there does not seem to be any hard evidence that the grant peer review system performs better than a process that randomly selects among proposals meeting minimum standards. (Like the other answerers, I'm fairly sure a random system with no quality checks would lead to opportunistic low-standard applications...) The reason is that such randomized funding has basically not been attempted. The same letter cited above hints at the real reason this hasn't been studied more, despite the lack of evidence for the peer-review-based system:




We have spoken with Australian funding agencies about using a lottery, and the reaction was strongly negative, with one staff member saying, “It would make it look like we don’t know what we’re doing.” A key concern is that politicians and the public would react negatively <...>




















































This is only a supplement to the answer here by user106886. That answer discusses an interpretation of the actions of political actors. However, not all funding comes from the government, so I'll write here about another option: private funding.

Back during the Renaissance, artists and scientists were funded by patrons, often kings and princes, who controlled vast sums. A vestige of patronage survives in some circles: web comics, for example, can now be funded via Patreon, through which people (like myself) pledge a few dollars a month to support the creators. We, the patrons, have no real influence on what gets created, as each of us is a small part of a larger whole.

But most private funding comes from profit-making companies, usually publicly traded companies who must justify expenditures. The rules of the game are that those deciding how to spend money have a fiduciary responsibility to shareholders to ensure that monies are properly spent in furtherance of the goals of the company. Therefore, money can't just be given out randomly; some assurance is required that it will be well spent. This results in a pre-review of some kind, though not exactly the same as that required for government funding.

There is a review, and it is similar, at least, to peer review. It could even be peer review, as most companies wanting to fund research in X normally also have research departments already skilled in X. If a company gets 10 proposals and can only fund 3 of them, it needs to do a cost-benefit analysis before it can award funds, even if the analysis is flawed. It has to appear valid on its face.

Bill Gates and Elon Musk could, in principle, use their own money to conduct a random experiment, but Microsoft and Tesla cannot. If a company gives you money, it needs some expectation that the money will be properly spent and not wasted.



















































Your original question assumes that the denominator (i.e., the set of proposals submitted) would stay the same, or similar, in quality across both pools (peer-reviewed vs. random chance). But that may no longer be true once a program puts out a call with a 50% random chance: the quality demographics of the submissions could change dramatically.



















































Unfortunately, I don't know of such a study, so this is more of a comment that was too long for the comment box.

Grants, especially big ones, also involve a lot of politics: "Is this research topic interesting for the international reputation of our country?", etc. So peer review is also needed because "I gave these guys 50 million because a coin flip decided it" doesn't sound as good as "a committee of experts discussed for 8 months and decided...".

I personally think (without the backing of any data or study) that it is really hard to determine high quality (number of papers, etc.) beforehand, so it might well be that random selection would be just as good. But as explained above, even if peer review is no better than random, we still need it for political reasons, and of course to prevent total garbage or spam submissions made just to get funds.



















































I do not know your definition of "hard evidence", but if it has any clause about anything needing to pass peer review, then you should think twice and notice the catch-22 in this situation.

Anyone trying to publish such a thing: do you think they might be afraid of not getting a grant next time? I would not expect them to get renewed grants anytime soon, as what they are trying to publish undermines the influence of how the grants work.






























  • "Next time" they won't have the same reviewers. Also, "the grants" and a venue publishing such a study are not identical. – henning, Apr 19 at 16:50

  • @henning The reviewers presumably also want grants in the future, no matter who they are at the moment. – mathreadler, Apr 19 at 18:05










          protected by Alexandros Apr 19 at 18:16



          Thank you for your interest in this question.
          Because it has attracted low-quality or spam answers that had to be removed, posting an answer now requires 10 reputation on this site (the association bonus does not count).



          Would you like to answer one of these unanswered questions instead?














          8 Answers
          8






          active

          oldest

          votes








          8 Answers
          8






          active

          oldest

          votes









          active

          oldest

          votes






          active

          oldest

          votes









          53














          No such study exists.



          You have to realize that the current model of funding research through grants is rather recent. Fifty years ago, research was funded with recurring credits: a lab/researcher got a fixed amount of money each year to do research as they saw fit. Only applied researchers got industrial grants to develop precise new applications, but this wasn't the core of their funding.



          The current system arose due to the combination of two factors:




          • The growing hegemony of the neo-liberal ideology in the 80s-90s and the commodification of society as a whole. According to this ideology, research as a whole is supposedly more efficient (for an ill-defined notion of "efficient") if researchers compete with one another on the global market of research funding. They will "naturally" tend to research what works in order to keep their funding.

          • Politicians' will to govern research. Research, and especially fundamental research, is a notoriously difficult thing to govern: you never know where an idea might lead, or what new ideas will come up along the way. You don't know what will work and what won't. You don't know what will be applicable and what will remain a nice theoretical idea. Otherwise it wouldn't be research.

            But rulers want their word in this: they want to tell their voters that their policies improved lives in a very direct way as a result of such and such decision they took. This is why words like "governance" and "steering" are becoming more and more prevalent in modern academia: administrators are not in the business of just helping researchers carry out their research, they are in the business of ensuring that researchers are researching what they are supposed to. And how do you force someone to do what you want, short of physical violence? Through economical violence.


          Of course, this yields absurd situations, such as the fabled year when the French National Research Agency spent more on administration than on research. And such thinking paves the way for short-term research with zero long term goals, and especially not goals reachable in more than five years (the usual length of "long-term" grants). But these decisions are not driven by rational thought; they are driven by ideology.






          share|improve this answer



















          • 9





            Only the first sentence answers the question.

            – Anonymous Physicist
            Apr 18 at 11:44






          • 21





            @AnonymousPhysicist Context and explanations are always welcome. I'm pretty sure an answer just saying "No" (technically correct as you said) wouldn't be accepted as an answer, and if so, it wouldn't be a good one.

            – user4052054
            Apr 18 at 15:00











          • Comments are not for extended discussion; this conversation has been moved to chat.

            – eykanal
            Apr 22 at 14:44
















          53














          No such study exists.



          You have to realize that the current model of funding research through grants is rather recent. Fifty years ago, research was funded with recurring credits: a lab/researcher got a fixed amount of money each year to do research as they saw fit. Only applied researchers got industrial grants to develop precise new applications, but this wasn't the core of their funding.



          The current system arose due to the combination of two factors:




          • The growing hegemony of the neo-liberal ideology in the 80s-90s and the commodification of society as a whole. According to this ideology, research as a whole is supposedly more efficient (for an ill-defined notion of "efficient") if researchers compete with one another on the global market of research funding. They will "naturally" tend to research what works in order to keep their funding.

          • Politicians' will to govern research. Research, and especially fundamental research, is a notoriously difficult thing to govern: you never know where an idea might lead, or what new ideas will come up along the way. You don't know what will work and what won't. You don't know what will be applicable and what will remain a nice theoretical idea. Otherwise it wouldn't be research.

            But rulers want their word in this: they want to tell their voters that their policies improved lives in a very direct way as a result of such and such decision they took. This is why words like "governance" and "steering" are becoming more and more prevalent in modern academia: administrators are not in the business of just helping researchers carry out their research, they are in the business of ensuring that researchers are researching what they are supposed to. And how do you force someone to do what you want, short of physical violence? Through economical violence.


          Of course, this yields absurd situations, such as the fabled year when the French National Research Agency spent more on administration than on research. And such thinking paves the way for short-term research with zero long term goals, and especially not goals reachable in more than five years (the usual length of "long-term" grants). But these decisions are not driven by rational thought; they are driven by ideology.






          share|improve this answer



















          • 9





            Only the first sentence answers the question.

            – Anonymous Physicist
            Apr 18 at 11:44






          • 21





            @AnonymousPhysicist Context and explanations are always welcome. I'm pretty sure an answer just saying "No" (technically correct as you said) wouldn't be accepted as an answer, and if so, it wouldn't be a good one.

            – user4052054
            Apr 18 at 15:00











          • Comments are not for extended discussion; this conversation has been moved to chat.

            – eykanal
            Apr 22 at 14:44














          53












          53








          53







          No such study exists.



          You have to realize that the current model of funding research through grants is rather recent. Fifty years ago, research was funded with recurring credits: a lab/researcher got a fixed amount of money each year to do research as they saw fit. Only applied researchers got industrial grants to develop precise new applications, but this wasn't the core of their funding.



          The current system arose due to the combination of two factors:




          • The growing hegemony of the neo-liberal ideology in the 80s-90s and the commodification of society as a whole. According to this ideology, research as a whole is supposedly more efficient (for an ill-defined notion of "efficient") if researchers compete with one another on the global market of research funding. They will "naturally" tend to research what works in order to keep their funding.

          • Politicians' will to govern research. Research, and especially fundamental research, is a notoriously difficult thing to govern: you never know where an idea might lead, or what new ideas will come up along the way. You don't know what will work and what won't. You don't know what will be applicable and what will remain a nice theoretical idea. Otherwise it wouldn't be research.

            But rulers want their word in this: they want to tell their voters that their policies improved lives in a very direct way as a result of such and such decision they took. This is why words like "governance" and "steering" are becoming more and more prevalent in modern academia: administrators are not in the business of just helping researchers carry out their research, they are in the business of ensuring that researchers are researching what they are supposed to. And how do you force someone to do what you want, short of physical violence? Through economical violence.


          Of course, this yields absurd situations, such as the fabled year when the French National Research Agency spent more on administration than on research. And such thinking paves the way for short-term research with zero long term goals, and especially not goals reachable in more than five years (the usual length of "long-term" grants). But these decisions are not driven by rational thought; they are driven by ideology.






          share|improve this answer













          No such study exists.



          You have to realize that the current model of funding research through grants is rather recent. Fifty years ago, research was funded with recurring credits: a lab/researcher got a fixed amount of money each year to do research as they saw fit. Only applied researchers got industrial grants to develop precise new applications, but this wasn't the core of their funding.



          The current system arose due to the combination of two factors:




          • The growing hegemony of the neo-liberal ideology in the 80s-90s and the commodification of society as a whole. According to this ideology, research as a whole is supposedly more efficient (for an ill-defined notion of "efficient") if researchers compete with one another on the global market of research funding. They will "naturally" tend to research what works in order to keep their funding.

          • Politicians' will to govern research. Research, and especially fundamental research, is a notoriously difficult thing to govern: you never know where an idea might lead, or what new ideas will come up along the way. You don't know what will work and what won't. You don't know what will be applicable and what will remain a nice theoretical idea. Otherwise it wouldn't be research.

            But rulers want their word in this: they want to tell their voters that their policies improved lives in a very direct way as a result of such and such decision they took. This is why words like "governance" and "steering" are becoming more and more prevalent in modern academia: administrators are not in the business of just helping researchers carry out their research, they are in the business of ensuring that researchers are researching what they are supposed to. And how do you force someone to do what you want, short of physical violence? Through economical violence.


          Of course, this yields absurd situations, such as the fabled year when the French National Research Agency spent more on administration than on research. And such thinking paves the way for short-term research with zero long term goals, and especially not goals reachable in more than five years (the usual length of "long-term" grants). But these decisions are not driven by rational thought; they are driven by ideology.







          share|improve this answer












          share|improve this answer



          share|improve this answer










          answered Apr 18 at 11:28









          user106886user106886

          53423




          53423








          • 9





            Only the first sentence answers the question.

            – Anonymous Physicist
            Apr 18 at 11:44






          • 21





            @AnonymousPhysicist Context and explanations are always welcome. I'm pretty sure an answer just saying "No" (technically correct as you said) wouldn't be accepted as an answer, and if so, it wouldn't be a good one.

            – user4052054
            Apr 18 at 15:00











          • Comments are not for extended discussion; this conversation has been moved to chat.

            – eykanal
            Apr 22 at 14:44














          • 9





            Only the first sentence answers the question.

            – Anonymous Physicist
            Apr 18 at 11:44






          • 21





            @AnonymousPhysicist Context and explanations are always welcome. I'm pretty sure an answer just saying "No" (technically correct as you said) wouldn't be accepted as an answer, and if so, it wouldn't be a good one.

            – user4052054
            Apr 18 at 15:00











          • Comments are not for extended discussion; this conversation has been moved to chat.

            – eykanal
            Apr 22 at 14:44








          9




          9





          Only the first sentence answers the question.

          – Anonymous Physicist
          Apr 18 at 11:44





          Only the first sentence answers the question.

          – Anonymous Physicist
          Apr 18 at 11:44




          21




          21





          @AnonymousPhysicist Context and explanations are always welcome. I'm pretty sure an answer just saying "No" (technically correct as you said) wouldn't be accepted as an answer, and if so, it wouldn't be a good one.

          – user4052054
          Apr 18 at 15:00





          @AnonymousPhysicist Context and explanations are always welcome. I'm pretty sure an answer just saying "No" (technically correct as you said) wouldn't be accepted as an answer, and if so, it wouldn't be a good one.

          – user4052054
          Apr 18 at 15:00













          Comments are not for extended discussion; this conversation has been moved to chat.

          – eykanal
          Apr 22 at 14:44





          Comments are not for extended discussion; this conversation has been moved to chat.

          – eykanal
          Apr 22 at 14:44











          38














          I don't know about such studies, but I have served on ~20 panels to review proposals. While I'm entirely willing to believe that multiple panels will not agree on the relative ordering of proposals, I am quite convinced that they will in general agree on which proposals are "good" and which are "not good".



          In any given round, a panel (at the National Science Foundation) will review on the order of 20 proposals. What may not be obvious to outsiders is that of these 10 are pretty obviously not fundable. 3-5 are eventually rated as excellent and the remainder as "pretty good". I am pretty convinced that if you ran multiple panels on these 20 proposals, that this classification into three groups will be more or less stable. What I am also convinced of is that panels will produce different rankings within each group.



          The consequence is that it is quite possible that different panels will results in a different set of proposals being funded, given that only ~4 proposals out of the 20 will be funded. But overall, the result will still be far from random. The top 3-5 are still likely to be funded, the bottom 10 will definitely not be funded, and it's a toss-up in between.






          share|improve this answer





















          • 11





            Having sat on a similar number of panels (for internal proposals) I am entirely in agreement on this. You get 10 people to rank 20 proposals, and they fall into those three categories in about those ratios. If anything, it is pleasantly surprising to see a room full of people with different backgrounds and priorities generally agree on 80% of the proposals (12 are bad, 4 are excellent) and only need to hash out which of the remaining 4 are worthy and then only if there is enough money.

            – Jon Custer
            Apr 18 at 13:46






          • 3





            Hmm the study linked above finds the opposite: "discussion during study section meetings worsened rather than improved disagreement among different study sections. " (study section meetings = panel meetings)

            – lordy
            Apr 18 at 14:56






          • 3





            I've never served on a grant review board, but this agrees with my experience in other settings where I had to choose from a set of candidate (selecting papers for conferences, best paper awards and recruiting). So from outside it looks as if the final winner is a random choice from the top X submissions. But @Buffy's notes above how the peer review does improve the grant proposals. So we could peer review to get the grant accepted in a pool, then randomly select from the pool?

            – rcriii
            Apr 18 at 17:34








          • 1





            @rcriii: Yes, that's my experience: It would be enough to cluster proposals into pools. The NSF asks panelists to also rank proposals in the top and middle pool, presumably so that they have some backing for when they make the call on a particular proposal to be funded or not. I suspect that the outcome is not vastly better than by just randomly picking from the pool.

            – Wolfgang Bangerth
            Apr 18 at 19:06






          • 1





            @lordy I have never personally been part of a grant selection committee where what is described has occurred. I know anecdotally that - depending on the field - some will vigorously depend their choices but quite honestly true curmudgeons are rarely regularly invited to such panels largely because committee chairs and grant managers seek consensus rather than division.

            – ZeroTheHero
            Apr 19 at 1:00
















          38














          I don't know about such studies, but I have served on ~20 panels to review proposals. While I'm entirely willing to believe that multiple panels will not agree on the relative ordering of proposals, I am quite convinced that they will in general agree on which proposals are "good" and which are "not good".



          In any given round, a panel (at the National Science Foundation) will review on the order of 20 proposals. What may not be obvious to outsiders is that of these 10 are pretty obviously not fundable. 3-5 are eventually rated as excellent and the remainder as "pretty good". I am pretty convinced that if you ran multiple panels on these 20 proposals, that this classification into three groups will be more or less stable. What I am also convinced of is that panels will produce different rankings within each group.



          The consequence is that it is quite possible that different panels will results in a different set of proposals being funded, given that only ~4 proposals out of the 20 will be funded. But overall, the result will still be far from random. The top 3-5 are still likely to be funded, the bottom 10 will definitely not be funded, and it's a toss-up in between.






          share|improve this answer





















          • 11





            Having sat on a similar number of panels (for internal proposals) I am entirely in agreement on this. You get 10 people to rank 20 proposals, and they fall into those three categories in about those ratios. If anything, it is pleasantly surprising to see a room full of people with different backgrounds and priorities generally agree on 80% of the proposals (12 are bad, 4 are excellent) and only need to hash out which of the remaining 4 are worthy and then only if there is enough money.

            – Jon Custer
            Apr 18 at 13:46






          • 3





            Hmm the study linked above finds the opposite: "discussion during study section meetings worsened rather than improved disagreement among different study sections. " (study section meetings = panel meetings)

            – lordy
            Apr 18 at 14:56






          • 3





            I've never served on a grant review board, but this agrees with my experience in other settings where I had to choose from a set of candidate (selecting papers for conferences, best paper awards and recruiting). So from outside it looks as if the final winner is a random choice from the top X submissions. But @Buffy's notes above how the peer review does improve the grant proposals. So we could peer review to get the grant accepted in a pool, then randomly select from the pool?

            – rcriii
            Apr 18 at 17:34








          • 1





            @rcriii: Yes, that's my experience: It would be enough to cluster proposals into pools. The NSF asks panelists to also rank proposals in the top and middle pool, presumably so that they have some backing for when they make the call on a particular proposal to be funded or not. I suspect that the outcome is not vastly better than by just randomly picking from the pool.

            – Wolfgang Bangerth
            Apr 18 at 19:06






          • 1





            @lordy I have never personally been part of a grant selection committee where what is described has occurred. I know anecdotally that - depending on the field - some will vigorously depend their choices but quite honestly true curmudgeons are rarely regularly invited to such panels largely because committee chairs and grant managers seek consensus rather than division.

            – ZeroTheHero
            Apr 19 at 1:00














          38












          38








          38







          I don't know about such studies, but I have served on ~20 panels to review proposals. While I'm entirely willing to believe that multiple panels will not agree on the relative ordering of proposals, I am quite convinced that they will in general agree on which proposals are "good" and which are "not good".



          In any given round, a panel (at the National Science Foundation) will review on the order of 20 proposals. What may not be obvious to outsiders is that of these 10 are pretty obviously not fundable. 3-5 are eventually rated as excellent and the remainder as "pretty good". I am pretty convinced that if you ran multiple panels on these 20 proposals, that this classification into three groups will be more or less stable. What I am also convinced of is that panels will produce different rankings within each group.



          The consequence is that it is quite possible that different panels will results in a different set of proposals being funded, given that only ~4 proposals out of the 20 will be funded. But overall, the result will still be far from random. The top 3-5 are still likely to be funded, the bottom 10 will definitely not be funded, and it's a toss-up in between.






          edited Apr 18 at 19:06
          answered Apr 18 at 13:16

          Wolfgang Bangerth


          • 11





            Having sat on a similar number of panels (for internal proposals) I am entirely in agreement on this. You get 10 people to rank 20 proposals, and they fall into those three categories in about those ratios. If anything, it is pleasantly surprising to see a room full of people with different backgrounds and priorities generally agree on 80% of the proposals (12 are bad, 4 are excellent) and only need to hash out which of the remaining 4 are worthy and then only if there is enough money.

            – Jon Custer
            Apr 18 at 13:46






          • 3





            Hmm, the study linked above finds the opposite: "discussion during study section meetings worsened rather than improved disagreement among different study sections" (study section meetings = panel meetings).

            – lordy
            Apr 18 at 14:56






          • 3





            I've never served on a grant review board, but this agrees with my experience in other settings where I had to choose from a set of candidates (selecting papers for conferences, best paper awards, and recruiting). So from outside it looks as if the final winner is a random choice from the top X submissions. But @Buffy notes above how peer review does improve the grant proposals. So we could use peer review to get grants accepted into a pool, then randomly select from the pool?

            – rcriii
            Apr 18 at 17:34








          • 1





            @rcriii: Yes, that's my experience: It would be enough to cluster proposals into pools. The NSF asks panelists to also rank proposals in the top and middle pool, presumably so that they have some backing for when they make the call on a particular proposal to be funded or not. I suspect that the outcome is not vastly better than by just randomly picking from the pool.

            – Wolfgang Bangerth
            Apr 18 at 19:06






          • 1





            @lordy I have never personally been part of a grant selection committee where what is described has occurred. I know anecdotally that - depending on the field - some will vigorously defend their choices, but quite honestly, true curmudgeons are rarely invited to such panels, largely because committee chairs and grant managers seek consensus rather than division.

            – ZeroTheHero
            Apr 19 at 1:00














          21














          There is another aspect to the question at hand. A random assignment of monies to projects can be expected to fail because any funding system, whatever it is, induces certain behaviors in those who apply. One wants to design a system so that positive behavior is encouraged and negative behavior discouraged.



          In a system of peer review those seeking funds are induced to give their analysis of why their proposed project should be funded and why it is highly likely to be successful. They know that their words will have consequences and that the review will be carried out by knowledgeable people who have assumed a certain responsibility.



          But if the system were random, and known to be random, the behavior induced would be quite different. There would be no real need to spend the time and effort to examine and explain the background and methodology to be used and to guarantee that it meets scientific rigor. The consequence would be that many more proposals would be made and most of them would be junk.



          With that background, note that the proposals that are actually made to funding agencies are pretty much all guaranteed to be at least adequate. So the peer review boards are choosing from among the best proposals that might be expected, rather than from the trash that charlatans might submit.



          This brings me to a thought experiment that I've used in other contexts. Suppose, in a system that repeats, you rank a set of things on some linear scale from best to worst, and you discard the worst. The worst is replaced. This is sometimes what happens in companies that rank employees on some numeric scale and fire the worst performer.



          But if you started out with, say, the ten best performers in the world, this process can only worsen your team, since every replacement will be worse than the ten you started with.
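
          A quick toy simulation of that thought experiment (all numbers invented for illustration): start with the ten best draws from a large talent pool, then repeatedly discard the worst member and hire a random replacement from the general pool.

              # Rank-and-yank toy model; numbers are made up for illustration.
              import random

              rng = random.Random(1)
              team = sorted(rng.random() for _ in range(10_000))[-10:]  # the 10 best of a big pool
              start = sum(team) / len(team)

              for _ in range(50):            # 50 rounds of "fire the worst"
                  team.remove(min(team))     # discard the lowest-ranked member...
                  team.append(rng.random())  # ...and replace from the general pool

              print(f"mean quality: {start:.3f} -> {sum(team)/len(team):.3f}")

          Because a random replacement is almost surely worse than anyone on the initial top-ten team, average quality can only drift down toward the pool average.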



          Thus the question asked by the OP doesn't occur in a vacuum. Nor, I suggest, can the various proposals be ranked, even in theory, on a single best-to-worst scale. There are too many variables, and something better on one scale isn't as good on another. So, while the system has flaws, it is in the nature of the world that some flaws will exist. But a system that induces and rewards good behavior is, in principle, about as good as we can hope for.



          The various judgments made by the review boards are, in the larger sense of things, relatively minor.






          answered Apr 18 at 13:47

          Buffy
          • 7





            "the proposals that are actually made to funding agencies are pretty much all guaranteed to be at least adequate" -- this is empirically not true. About 20-30% of the proposals I've reviewed (which must now be in the range of 200-300) are not adequate by any measure.

            – Wolfgang Bangerth
            Apr 18 at 19:08






          • 7





            But I'll add that these are not written by charlatans, but typically by younger scientists at smaller universities with inadequate mentoring from those who know what winning proposals look like.

            – Wolfgang Bangerth
            Apr 18 at 19:09






          • 1





            You are assuming that all submitted proposals enter the lottery. This need not be the case. You can still screen the proposals and dismiss those which are clearly inadequate.

            – Carl Christian
            Apr 19 at 20:15











          • @CarlChristian that is not the definition of "random". Currently we have some degree of randomness in that different panels have different people, and different competition. People re-submit grants without any edits and get them funded just because the competition or the review panel changed.

            – aaaaaa
            Apr 20 at 19:37











          • @aaaaaa I do not see that your comment has any bearing on the fact that Buffy assumes that all submitted proposals enter the lottery. Drawing lots among all adequate proposals eliminates any possibility of favoritism of any kind. This includes the well-documented bias against female scientists. If there is a downside to drawing lots that outweighs such gains, I simply do not see it.

            – Carl Christian
            Apr 20 at 22:01
















          5














          Short answer:



          Such hard evidence probably does not currently exist, but a random "funding lottery" scheme has finally been implemented in New Zealand and is being studied. However, the sample size is small, and it may take until at least 2026 before real data are obtained.





          Why there is a lack of evidence:



          In the RAND report Alternatives to Peer Review in Research Project Funding: 2013 Update it was noted that "the idea of randomly allocating research funding has been developed theoretically and is used by some institutions for small grants". While it's clear that the award of said small grants avoids some of the biases inherent in peer review-based funding and much of the administrative burden, the fact that the grants are small (e.g. some are appropriately sized for travel funding) means that one cannot really infer anything about large-scale random funding schemes.



          Things have evolved somewhat since. A 2018 paper titled "Policy Considerations for Random Allocation of Research Funds" by Shahar Avin notes that at least three major funding bodies have started allocating some funds using a degree of randomization. These include:




          1. The Health Research Council (HRC) of New Zealand's “Explorer Grants”.

          2. New Zealand's Science for Technological Innovation (SfTI) “Seed Projects”.

          3. Volkswagen Foundation’s “Experiment!” grants.


          HRC describes the Explorer Grants as follows:




          Explorer grants support transformative research ideas that have a good chance of making a revolutionary change to how we manage New Zealanders' health. They are available in any health research discipline and are worth $150,000 for a term of up to 24 months.

          ...
          Applications for explorer grants are assessed by subpanels within the HRC's Explorer Grant Assessing Committee to see if they meet the criteria of being both transformative and viable. Unlike with any of our other grants, the assessment process for explorer grant applications is anonymous and all applications that meet the criteria are equally eligible to receive funding. A random number generator prioritises these applications.




          The HRC does not use this scheme for other grants, but believes




          that random funding is a fair and transparent way to choose between equally qualified applicants, and it's particularly suited to Explorer Grants where it may not be appropriate to rank or score high-risk applications with less predictable outcomes.




          The SfTI Seed project proposals similarly undergo an initial assessment, and then some of them form a special pool from which proposals are randomly drawn. Proposals for the Volkswagen Foundation's Experiment! are screened by an interdisciplinary jury that funds some proposals, rejects some, and leaves some for a funding lottery.
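
          All three schemes thus share the same two-stage shape: a screening step, then a draw among the survivors. Here is a minimal sketch of that mechanism (the triage function and the proposal attributes "viable" and "flagship" are invented for illustration; this is not any agency's actual code):

              # Sketch of a screened funding lottery; the criteria are hypothetical.
              import random

              def triage(proposal):
                  """Hypothetical screening: fund outright, reject, or send to the lottery."""
                  if not proposal["viable"]:
                      return "reject"
                  if proposal["flagship"]:        # a jury may fund a few directly
                      return "fund"
                  return "lottery"

              def allocate(proposals, n_slots, rng):
                  funded = [p for p in proposals if triage(p) == "fund"]
                  pool = [p for p in proposals if triage(p) == "lottery"]
                  n_draw = max(0, min(n_slots - len(funded), len(pool)))
                  return funded + rng.sample(pool, n_draw)  # RNG prioritises the screened pool

              rng = random.Random(42)
              proposals = [{"id": i,
                            "viable": rng.random() > 0.3,
                            "flagship": rng.random() > 0.9} for i in range(30)]
              for p in allocate(proposals, n_slots=6, rng=rng):
                  print("funded:", p["id"])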



          As such, HRC's Explorer Grants are the most interesting in this respect, as all proposals meeting the minimum requirements go into the lottery. This is why they are currently being studied:




          A funding lottery creates a perfect randomized trial because we have equally worthy researchers who are funded at random. We can then track their careers from the point of randomization and compare them in terms of metrics such as publications, citations, and other funding, as well as perhaps more-complex outcomes, such as innovation. We are currently following researchers who applied for funding with the New Zealand Health Research Council and were randomly allocated funding (3); however, the sample size is small, and it may be at least a decade before we have accumulated enough data to show meaningful differences.
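
          To get a feel for why the sample size is the bottleneck, here is a rough two-sample power calculation (the effect sizes and thresholds are my own illustrative assumptions, not figures from the letter):

              # How many applicants per arm would a funded-vs-unfunded comparison need?
              from statsmodels.stats.power import TTestIndPower

              analysis = TTestIndPower()
              for d in (0.2, 0.3, 0.5):  # Cohen's d: small to medium effects
                  n = analysis.solve_power(effect_size=d, alpha=0.05, power=0.8)
                  print(f"d={d}: ~{n:.0f} applicants per arm")

          Detecting a small effect (d = 0.2) takes roughly 400 applicants per arm; a lottery that funds a few dozen proposals per year accumulates that slowly, which is consistent with the letter's "at least a decade" estimate.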






          Summary:



          In conclusion, there does not seem to be any hard evidence that the grant peer review system performs better than a process with random selection among proposals meeting minimum standards. (Like the other answer writers, I'm fairly sure a random system with no quality checks would lead to opportunistic low-standard applications...) The reason is that such randomized funding has basically not been attempted. The same letter quoted above hints at the real reason why this hasn't been studied more, despite the lack of evidence for the peer review-based system:




          We have spoken with Australian funding agencies about using a lottery, and the reaction was strongly negative, with one staff member saying, “It would make it look like we don’t know what we’re doing.” A key concern is that politicians and the public would react negatively <...>







          answered Apr 19 at 19:55

          Anyon














                  2














                  This is only a supplement to the answer here by user106886, which discusses an interpretation of the actions of political actors. However, not all funding is from the government, so I'll write here about another option: private funding.



                  Back during the Renaissance, artists and scientists were funded by patrons, often kings and princes, who controlled vast sums. There is still a vestige of patronage now in some circles. Web comics, for example, can now be funded through Patreon, through which people (like myself) pledge a few dollars a month to support the creators of the comics. We, the patrons, have no real influence on what gets created, as each of us is a small part of a larger whole.



                  But most private funding is from profit-making companies, usually publicly traded companies that must justify expenditures. The rules of the game are that those deciding how to spend money have a fiduciary responsibility to shareholders to ensure that monies are properly spent in furtherance of the goals of the company. Therefore, money can't just be given out randomly; some assurance is required that it will be well spent. This results in a pre-review of some kind, though not exactly the same as that required for government funding.



                  There is a review and it is similar, at least, to peer review. It could even be peer review, as most companies wanting to fund research in X normally also have research departments skilled already in X. If a company gets 10 proposals and can only fund 3 of them, it needs to do a cost-benefit analysis before it can award funds, even if the analysis is flawed. It has to appear to be valid on its face.



                  Bill Gates and Elon Musk could, in principle, use their own money to conduct a random experiment, but Microsoft and Tesla cannot. If a company gives you money, it needs some expectation that the money will be properly spent and not wasted.






                  share|improve this answer




























                    2














                    This will only be a supplement to the answer here of user106886. That answer discusses an interpretation of the actions of political actors. However, not all funding is from the government, so I'll write here about another option: private funding.



                    Back during the Renaissance, artists and scientists were funded by patrons, often kings and princes, who controlled vast sums. There is still a vestige of patronage now in some circles. Web comics can be funded now, for example, by Patreon, thorough which people (like myself) pledge a few dollars a month to support the creators of the comics. We, the patrons, have no real influence on what gets created as each is a small part of a larger whole.



                    But most private funding is from profit making companies, usually publicly traded companies who must justify expenditures. The rules of the game are that those deciding how to spend money have a fiduciary responsibility to shareholders that monies are properly spent in furtherance of the goals of the company. Therefore, money can't just be given out randomly, but requires some assurance that it will be well spent. This results in a pre-review of some kind, though not exactly the same as that required for government funding.



                    There is a review and it is similar, at least, to peer review. It could even be peer review, as most companies wanting to fund research in X normally also have research departments skilled already in X. If a company gets 10 proposals and can only fund 3 of them, it needs to do a cost-benefit analysis before it can award funds, even if the analysis is flawed. It has to appear to be valid on its face.



                    Bill Gates and Elon Musk, could, in principle use their own money to conduct a random experiment, but Microsoft and Tesla cannot. If the company gives you money they need some expectation that it will be properly spent and not wasted.






share|improve this answer

                      answered Apr 18 at 13:10









Buffy























                          1














Your original question assumes that the denominator (i.e., the set of proposals submitted) would stay the same, or similar, in quality across both pools (peer-reviewed vs. random chance). But that may no longer be true once a program puts out a call with a 50% random chance: the quality demographics of the submissions could change dramatically.
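
This selection effect can be illustrated with a toy Monte Carlo simulation. The sketch below is purely hypothetical: the quality distributions, pool sizes, the 50-grant budget, and the idealization of review as a noise-free quality ranking are all assumptions of mine, not data from any study.

    import random

    random.seed(0)  # reproducible toy run

    BUDGET = 50  # number of grants per call (assumed)

    def mean_funded_quality(pool, lottery_share):
        """Fund BUDGET proposals from `pool` (a list of latent quality scores).

        A fraction `lottery_share` of the grants is drawn uniformly at random
        from the non-top proposals; the rest go to the top-ranked proposals,
        standing in for an idealized, noise-free review. Returns the mean
        latent quality of the funded set.
        """
        n_lottery = int(BUDGET * lottery_share)
        n_review = BUDGET - n_lottery
        ranked = sorted(pool, reverse=True)
        funded = ranked[:n_review]
        if n_lottery:
            funded += random.sample(ranked[n_review:], n_lottery)
        return sum(funded) / len(funded)

    # Assumed submission behaviour: under pure review, authors self-select
    # and mostly strong proposals are submitted; once half the grants are
    # known to be a lottery, weaker proposals enter the call as well.
    review_only_pool = [random.gauss(0.7, 0.15) for _ in range(200)]
    half_lottery_pool = ([random.gauss(0.7, 0.15) for _ in range(200)]
                         + [random.gauss(0.4, 0.15) for _ in range(200)])

    print("review-only call :", round(mean_funded_quality(review_only_pool, 0.0), 3))
    print("50% lottery call :", round(mean_funded_quality(half_lottery_pool, 0.5), 3))

The specific numbers mean nothing; the point is that the funded-by-review set and the funded-by-lottery set would be drawn from two different submission populations, so comparing their later output would not isolate the effect of the selection mechanism itself.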






share|improve this answer

                              answered Apr 19 at 18:05









Ananth























                                  0














Unfortunately, I don't know of such a study, so this is more a comment that was too long for the comment box:



Grants, especially big ones, also involve a lot of politics: "Is this research topic interesting for the international reputation of our country?", etc. So peer review is also needed because "I gave these guys 50 million because a coin flip decided it" doesn't sound as good as "a committee of experts discussed for 8 months and decided...".

I personally think (without the backing of any data or study) that it is really hard to determine high quality (number of papers, etc.) beforehand, so it might well be that random selection would be just as good; but, as explained above, even if peer review is no better than random, we still need it for political reasons, and of course to filter out total garbage or spam submissions made just to get funds.






share|improve this answer

                                      answered Apr 18 at 9:22









Dirk























                                          -2














I do not know your definition of "hard evidence", but if it includes any clause about the evidence itself needing to pass peer review, then you should think twice and realize the catch-22 in this situation.



Do you think anyone trying to publish such a study might be afraid of not getting a grant next time? I would not expect them to have their grants renewed anytime soon, since what they would be publishing undermines the influence of the very system that awards the grants.






                                          share|improve this answer
























                                          • "Next time" they won't have the same reviewers. Also, "the grants" and a venue publishing such a study are not identical.

                                            – henning
                                            Apr 19 at 16:50













                                          • @henning the reviewers supposedly also want grants in the future no matter who they are for the moment.

                                            – mathreadler
                                            Apr 19 at 18:05
















                                          answered Apr 19 at 16:21









mathreadler













                                          • "Next time" they won't have the same reviewers. Also, "the grants" and a venue publishing such a study are not identical.

                                            – henning
                                            Apr 19 at 16:50













                                          • @henning the reviewers supposedly also want grants in the future no matter who they are for the moment.

                                            – mathreadler
                                            Apr 19 at 18:05



















                                          • "Next time" they won't have the same reviewers. Also, "the grants" and a venue publishing such a study are not identical.

                                            – henning
                                            Apr 19 at 16:50













                                          • @henning the reviewers supposedly also want grants in the future no matter who they are for the moment.

                                            – mathreadler
                                            Apr 19 at 18:05

















                                          "Next time" they won't have the same reviewers. Also, "the grants" and a venue publishing such a study are not identical.

                                          – henning
                                          Apr 19 at 16:50







                                          "Next time" they won't have the same reviewers. Also, "the grants" and a venue publishing such a study are not identical.

                                          – henning
                                          Apr 19 at 16:50















                                          @henning the reviewers supposedly also want grants in the future no matter who they are for the moment.

                                          – mathreadler
                                          Apr 19 at 18:05





                                          @henning the reviewers supposedly also want grants in the future no matter who they are for the moment.

                                          – mathreadler
                                          Apr 19 at 18:05





                                          protected by Alexandros Apr 19 at 18:16



                                          Thank you for your interest in this question.
                                          Because it has attracted low-quality or spam answers that had to be removed, posting an answer now requires 10 reputation on this site (the association bonus does not count).



                                          Would you like to answer one of these unanswered questions instead?


