A social experiment. What is the worst that can happen?



























I am a postdoc and I have been applying for jobs in both industry and academia. My h-index is good enough for junior faculty (~7).



Alongside my academic CV I have an industry-oriented CV, and I send both out accordingly. I have had a handful of final-stage interviews (faculty/scientist) in academia, but none in industry so far.



I suspect I am being interviewed as the "token diverse female" (I'm Asian), as my area of science is dominated by white men. The whole experience, along with prior job hunts, has led me to suspect that my gender and race may be hindering my earning potential. I believe I have the required qualifications and skills.



I am thinking of reapplying as a white male to the same industry jobs I got rejected from (especially the rejections without interview), just to see how far I would get. Only for industry jobs, because those CVs don't make it to the chief scientist's table. I might make a documentary or blog about this if there are significant findings. Now put your imagination to the test: what is the worst that can happen?



































  • Comments are not for extended discussion; this conversation has been moved to chat.

    – StrongBad
    8 hours ago






  • Hiring biases in industry are off-topic.

    – Azor Ahai
    6 hours ago
















postdocs job-search job gender ethnicity






edited yesterday by kubanczyk

asked Mar 22 at 23:42 by FrostedCentral





















3 Answers
3






































I like to think I have a pretty vivid imagination, so the worst thing I can reasonably imagine happening is that you would be seen as trying to perform an experiment with human participants without proper controls, questionable experimental design, possibly a lack of appropriately rigorous analysis (if this isn't your specialty), and lack of ethical review and oversight. From an ethics perspective, you propose to use deception on uninformed participants while possibly obtaining personally identifiable information on them which could lead to them facing serious social consequences if they were identified. With the mob mentality of the viral internet as it is, the consequences could be pretty darn severe - up to loss of job, death threats, actual violence, and more. Sensitive topics require sensitive handling, which is what any functioning IRB and responsible researcher would insist on.



In reporting on the results, even if informally on a blog, you could be seen as offering a pseudo-scientific view (or worse) on a controversial and important topic. If your experiment was performed poorly or your analysis done improperly, you could lend weight to an incorrect view - either providing what could be cited as evidence that discrimination does not exist where it does (thus making it harder for people discriminated against to make changes or be taken seriously), or supporting the view that discrimination does exist where it doesn't (leading to negative consequences for people who are doing nothing wrong and deflecting attention away from more pressing, extant issues). Bad science, even done with good intention, can easily make the world worse.



If some random blogger or journalist did this, most of us could grudgingly dismiss it as "they don't know any better", or just the world of click-bait, etc. But if you were a qualified scientist who should know better, people might not be so willing to dismiss such activity just because it wasn't intended to be scientific and wasn't intended for publication. Most people won't even know the difference between what is and isn't intended to be scientific when done by a scientist, and many who do know the difference might not consider it an excuse.



As Dawn pointed out in a comment, one version of this is called an audit study, and there is a pretty large body of literature that tries to do basically what you are suggesting in a systematic way. I cannot even try to count how many studies of this sort are published, but I'd be surprised if it wasn't already in the thousands, looking at everything from gender to race to the impact of varying lengths of time gaps on a resume.



Finally, the nature of this sort of field study is that even with everything going your way they are hard to do correctly. No simple analysis method works even if you did everything right and collected all the data appropriately. There is too much randomness, too much heterogeneity, too much structure, to allow any simple bit of statistics to give the correct interpretation. In short, unless this is your specialty, it would be trivially easy to get everything else right and still come to exactly the wrong conclusion.



For those who are not familiar with this kind of statistics, a classic example of how a simple analysis can go wrong is Sex Bias in Graduate Admissions: Data from Berkeley. In a simple aggregate analysis, it looked quite clear that women were being admitted at lower rates than men; the bias seemed so obvious that the deans of the school were concerned it could be the basis for a lawsuit. It turned out to be a textbook case of Simpson's Paradox: the cause of the difference was that women were more likely to apply to departments that were crowded and competitive, and thus harder to get into for everyone, while men were more likely to apply to departments that were less competitive.



If a similar condition existed in the employment sector, where you were applying to jobs in industry that turned out to vary in their selectivity in a way that you were not considering, this would mess up your analysis, and you cannot easily collect more information that would allow you to fix it. After all, I'm sure you weren't inclined to use random selection in your own employment search!
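To make that paradox concrete, here is a toy sketch with made-up numbers (not the actual Berkeley data). Within each hypothetical department, women are admitted at an equal or higher rate than men, yet the aggregate rate for women comes out lower, purely because more women apply to the harder department:

```python
# Toy illustration of Simpson's paradox with invented admissions data.
departments = {
    # dept: (men_applied, men_admitted, women_applied, women_admitted)
    "easy": (800, 480, 100, 70),   # men 60%, women 70%
    "hard": (200, 40, 900, 225),   # men 20%, women 25%
}

def aggregate_rates(data):
    """Pool all departments together and return (men_rate, women_rate)."""
    men_app = sum(ma for ma, _, _, _ in data.values())
    men_adm = sum(mad for _, mad, _, _ in data.values())
    wom_app = sum(wa for _, _, wa, _ in data.values())
    wom_adm = sum(wad for _, _, _, wad in data.values())
    return men_adm / men_app, wom_adm / wom_app

men_rate, women_rate = aggregate_rates(departments)
print(f"aggregate: men {men_rate:.1%}, women {women_rate:.1%}")

# Per department, women do at least as well as men in every case:
for dept, (ma, mad, wa, wad) in departments.items():
    print(f"{dept}: men {mad / ma:.1%}, women {wad / wa:.1%}")
```

The aggregate comparison and the per-department comparisons point in opposite directions, which is exactly the trap a naive analysis of callback rates could fall into.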



So, in summation, the worst that I could imagine happening is: you end up doing bad science that reflects badly on you and is not easily excused just because it wasn't intended for publication; you come to the wrong conclusions in a way that could hurt innocent people; you casually report information that could be used in dangerous and damaging ways; you end up being identified as the person responsible and it goes viral, so the most famous thing you'll ever be known for is this thing you didn't intend as a serious study (and which could have gone horribly wrong); and, as a bonus, you could just end up wasting your time and the time of others for no benefit.



And since it's the worst thing that can happen, I suppose you could also end up with a headache. Things can always be worse by adding a headache.

























  • The question was "what's the worst that can happen", so this answer rightly applies a strict standard. But we don't generally demand scientific rigour from blog posts, journalism, or political advocacy. For example, interviewing an arbitrary sample of people in the streets is not a representative opinion poll, yet it is common practice in TV and newspaper reporting.

    – henning
    yesterday








  • @henning Those organizations do pay a price for their lack of rigor, in lost reputation as well as in lawsuits.

    – A Simple Algorithm
    yesterday






  • @A Simple Algorithm Not really.

    – henning
    yesterday



































You're asking the wrong question.




... what is the worst that can happen?




Others have answered this. But it's the wrong question. What you should really ask is:




What's likely to happen?




You are unlikely to get any statistically significant information, and probably not even a sound hunch about why one persona got further than the other. You are likely to get into at least a mild amount of trouble with some of the potential workplaces when you retract your application, or when they call your references, former universities, etc. IMHO it is likely you will not have contributed anything, even marginally, with your experiment.



If you believe "affirmative-action"-type hiring is nothing but tokenism and is inappropriate or discriminatory, act against it where you are actually present and have access to information, such as your next workplace, and in wider social contexts (e.g. participating in public awareness-raising campaigns, lobbying elected officials, organizing petitions, demonstrations, etc.).



PS - Please do not construe this answer as an endorsement or criticism of "affirmative-action"-type hiring practices.





















































    The worst thing that can happen is that you get the study setup or the statistics wrong, and then publish the flawed interpretation on the blog. The world is already too full of oversimplified blogs and video documentaries on perceived gender biases.



    There is a lot of research by people who devote their scientific careers to this, and the statistics of "who studies what" are quite influential. For example, I (as a technical team lead doing a lot of technical interviews) observe that most women who started studying engineering 10 years ago (in Central Europe) had thought carefully about their choice of study well beforehand, while for many men it was "the default option". This alone makes it reasonable to select them for interviews at a higher rate. I can't quantify this; read the research on it.



    In the worst case (I assume you are in a STEM field), presenting a flawed analysis without peer review could cost you job opportunities.






























      3 Answers
      3






      active

      oldest

      votes








      3 Answers
      3






      active

      oldest

      votes









      active

      oldest

      votes






      active

      oldest

      votes









      129














      I like to think I have a pretty vivid imagination, so the worst thing I can reasonably imagine happening is that you would be seen as trying to perform an experiment with human participants without proper controls, questionable experimental design, possibly a lack of appropriately rigorous analysis (if this isn't your specialty), and lack of ethical review and oversight. From an ethics perspective, you propose to use deception on uninformed participants while possibly obtaining personally identifiable information on them which could lead to them facing serious social consequences if they were identified. With the mob mentality of the viral internet as it is, the consequences could be pretty darn severe - up to loss of job, death threats, actual violence, and more. Sensitive topics require sensitive handling, which is what any functioning IRB and responsible researcher would insist on.



      In reporting on the results, even if informally on a blog, you could be seen as offering a pseudo-scientific view (or worse) on a controversial and important topic. If your experiment was performed poorly or your analysis done improperly, you could lend weight to an incorrect view - either providing what could be cited as evidence that discrimination does not exist where it does (thus making it harder for people discriminated against to make changes or be taken seriously), or supporting the view that discrimination does exist where it doesn't (leading to negative consequences for people who are doing nothing wrong and deflecting attention away from more pressing, extant issues). Bad science, even done with good intention, can easily make the world worse.



      If some random blogger or journalist did this, most of us can gratingly dismiss it as "they don't know any better", or just the world of click-bait, etc. But if you were a qualified scientist who should know better, people might not be so willing to dismiss such activity just because it wasn't intended to be scientific and it wasn't intended for publication. Most people won't even know the difference in what is and isn't intended to be scientific when done by a scientist, and many that do know the difference might not consider it an excuse.



      As Dawn pointed out in a comment, one version of this is called an audit study, and there is a pretty large body of literature that tries to do basically what you are suggesting in a systematic way. I cannot even try to count how many studies of this sort are published, but I'd be surprised if it wasn't already in the thousands, looking at everything from gender to race to the impact of varying lengths of time gaps on a resume.



      Finally, the nature of this sort of field study is that even with everything going your way they are hard to do correctly. No simple analysis method works even if you did everything right and collected all the data appropriately. There is too much randomness, too much heterogeneity, too much structure, to allow any simple bit of statistics to give the correct interpretation. In short, unless this is your specialty, it would be trivially easy to get everything else right and still come to exactly the wrong conclusion.



      For those who are not familiar with this kind of statistics, a classic example of how a simple analysis can go wrong is Sex Bias in Graduate Admissions: Data from Berkeley. In a simple aggregate analysis, it looked quite clearly that women were being admitted at lower rates than men, and thus bias was quite obvious to the point that the deans of the school were concerned this could be the basis for a lawsuit. It turned out it was a nice example of Simpson's Paradox, as it turned out the cause for the difference was that women were more likely to apply to departments that were crowded and competitive and thus harder to get into for everyone, while men were more likely to apply to departments that were less competitive.



      If a similar condition existed in the employment sector, where you were applying to jobs in industry that turned out to vary in their selectivity in a way that you were not considering, this would mess up your analysis, and you cannot easily collect more information that would allow you to fix it. After all, I'm sure you weren't inclined to use random selection in your own employment search!



      So, in summation, the worst that I could imagine happening is: you end up doing bad science that would reflect badly on you and would not be easily excused just because it wasn't intended for publication; you come to the wrong conclusions and in a way which could hurt innocent people; you casually report information that could be used in dangerous and damaging way; you end up being identified as the person responsible and it goes viral, so now the most famous thing you'll ever be known for was this thing you didn't intend as a serious study (and which could have gone horribly wrong); and, as a bonus, you could just end up wasting your time and the time of others for no benefit.



      And since its the worst thing that can happen, I suppose you could also end up with a headache. Things can always be worse by adding a headache.






      share|improve this answer



















      • 13





        The question was "what's the worst that can happen", so this answer rightly applies a strict standard. But we don't generally demand scientific rigour from blog posts, journalism, or political advocacy. For example, interviewing an arbitrary sample of people in the streets is not a representative opinion poll, yet it is common practice in TV and newspaper reporting.

        – henning
        yesterday








      • 4





        @henning those organization do pay a price for their lack of rigor. In lost reputation as well as in lawsuits.

        – A Simple Algorithm
        yesterday






      • 6





        @A Simple Algorithm Not really.

        – henning
        yesterday


















      129














      I like to think I have a pretty vivid imagination, so the worst thing I can reasonably imagine happening is that you would be seen as trying to perform an experiment with human participants without proper controls, questionable experimental design, possibly a lack of appropriately rigorous analysis (if this isn't your specialty), and lack of ethical review and oversight. From an ethics perspective, you propose to use deception on uninformed participants while possibly obtaining personally identifiable information on them which could lead to them facing serious social consequences if they were identified. With the mob mentality of the viral internet as it is, the consequences could be pretty darn severe - up to loss of job, death threats, actual violence, and more. Sensitive topics require sensitive handling, which is what any functioning IRB and responsible researcher would insist on.



      In reporting on the results, even if informally on a blog, you could be seen as offering a pseudo-scientific view (or worse) on a controversial and important topic. If your experiment was performed poorly or your analysis done improperly, you could lend weight to an incorrect view - either providing what could be cited as evidence that discrimination does not exist where it does (thus making it harder for people discriminated against to make changes or be taken seriously), or supporting the view that discrimination does exist where it doesn't (leading to negative consequences for people who are doing nothing wrong and deflecting attention away from more pressing, extant issues). Bad science, even done with good intention, can easily make the world worse.



      If some random blogger or journalist did this, most of us can gratingly dismiss it as "they don't know any better", or just the world of click-bait, etc. But if you were a qualified scientist who should know better, people might not be so willing to dismiss such activity just because it wasn't intended to be scientific and it wasn't intended for publication. Most people won't even know the difference in what is and isn't intended to be scientific when done by a scientist, and many that do know the difference might not consider it an excuse.



      As Dawn pointed out in a comment, one version of this is called an audit study, and there is a pretty large body of literature that tries to do basically what you are suggesting in a systematic way. I cannot even try to count how many studies of this sort are published, but I'd be surprised if it wasn't already in the thousands, looking at everything from gender to race to the impact of varying lengths of time gaps on a resume.



      Finally, the nature of this sort of field study is that even with everything going your way they are hard to do correctly. No simple analysis method works even if you did everything right and collected all the data appropriately. There is too much randomness, too much heterogeneity, too much structure, to allow any simple bit of statistics to give the correct interpretation. In short, unless this is your specialty, it would be trivially easy to get everything else right and still come to exactly the wrong conclusion.



      For those who are not familiar with this kind of statistics, a classic example of how a simple analysis can go wrong is Sex Bias in Graduate Admissions: Data from Berkeley. In a simple aggregate analysis, it looked quite clearly that women were being admitted at lower rates than men, and thus bias was quite obvious to the point that the deans of the school were concerned this could be the basis for a lawsuit. It turned out it was a nice example of Simpson's Paradox, as it turned out the cause for the difference was that women were more likely to apply to departments that were crowded and competitive and thus harder to get into for everyone, while men were more likely to apply to departments that were less competitive.



      If a similar condition existed in the employment sector, where you were applying to jobs in industry that turned out to vary in their selectivity in a way that you were not considering, this would mess up your analysis, and you cannot easily collect more information that would allow you to fix it. After all, I'm sure you weren't inclined to use random selection in your own employment search!



      So, in summation, the worst that I could imagine happening is: you end up doing bad science that would reflect badly on you and would not be easily excused just because it wasn't intended for publication; you come to the wrong conclusions and in a way which could hurt innocent people; you casually report information that could be used in dangerous and damaging way; you end up being identified as the person responsible and it goes viral, so now the most famous thing you'll ever be known for was this thing you didn't intend as a serious study (and which could have gone horribly wrong); and, as a bonus, you could just end up wasting your time and the time of others for no benefit.



      And since its the worst thing that can happen, I suppose you could also end up with a headache. Things can always be worse by adding a headache.






      share|improve this answer



















      • 13





        The question was "what's the worst that can happen", so this answer rightly applies a strict standard. But we don't generally demand scientific rigour from blog posts, journalism, or political advocacy. For example, interviewing an arbitrary sample of people in the streets is not a representative opinion poll, yet it is common practice in TV and newspaper reporting.

        – henning
        yesterday








      • 4





        @henning those organization do pay a price for their lack of rigor. In lost reputation as well as in lawsuits.

        – A Simple Algorithm
        yesterday






      • 6





        @A Simple Algorithm Not really.

        – henning
        yesterday
















      129












      129








      129







      I like to think I have a pretty vivid imagination, so the worst thing I can reasonably imagine happening is that you would be seen as trying to perform an experiment with human participants without proper controls, questionable experimental design, possibly a lack of appropriately rigorous analysis (if this isn't your specialty), and lack of ethical review and oversight. From an ethics perspective, you propose to use deception on uninformed participants while possibly obtaining personally identifiable information on them which could lead to them facing serious social consequences if they were identified. With the mob mentality of the viral internet as it is, the consequences could be pretty darn severe - up to loss of job, death threats, actual violence, and more. Sensitive topics require sensitive handling, which is what any functioning IRB and responsible researcher would insist on.



      In reporting on the results, even if informally on a blog, you could be seen as offering a pseudo-scientific view (or worse) on a controversial and important topic. If your experiment was performed poorly or your analysis done improperly, you could lend weight to an incorrect view - either providing what could be cited as evidence that discrimination does not exist where it does (thus making it harder for people discriminated against to make changes or be taken seriously), or supporting the view that discrimination does exist where it doesn't (leading to negative consequences for people who are doing nothing wrong and deflecting attention away from more pressing, extant issues). Bad science, even done with good intention, can easily make the world worse.



I like to think I have a pretty vivid imagination, so the worst thing I can reasonably imagine happening is that you would be seen as trying to perform an experiment with human participants without proper controls, with questionable experimental design, possibly without appropriately rigorous analysis (if this isn't your specialty), and without ethical review and oversight. From an ethics perspective, you propose to use deception on uninformed participants while possibly obtaining personally identifiable information about them, which could lead to them facing serious social consequences if they were identified. With the mob mentality of the viral internet being what it is, the consequences could be pretty darn severe: up to loss of a job, death threats, actual violence, and more. Sensitive topics require sensitive handling, which is what any functioning IRB and responsible researcher would insist on.



In reporting the results, even informally on a blog, you could be seen as offering a pseudo-scientific view (or worse) on a controversial and important topic. If your experiment was performed poorly or your analysis done improperly, you could lend weight to an incorrect view: either providing what could be cited as evidence that discrimination does not exist where it does (making it harder for people who are discriminated against to effect change or be taken seriously), or supporting the view that discrimination exists where it doesn't (leading to negative consequences for people who are doing nothing wrong and deflecting attention from more pressing, extant issues). Bad science, even done with good intentions, can easily make the world worse.



If some random blogger or journalist did this, most of us could grudgingly dismiss it as "they don't know any better", or just the world of click-bait. But if you are a qualified scientist who should know better, people might not be so willing to dismiss such activity just because it wasn't intended to be scientific or intended for publication. Most people won't even recognize the difference between what is and isn't intended to be scientific when it's done by a scientist, and many who do know the difference might not consider it an excuse.



As Dawn pointed out in a comment, one version of this is called an audit study, and there is a pretty large body of literature that tries to do basically what you are suggesting in a systematic way. I won't try to count how many studies of this sort have been published, but I'd be surprised if the number weren't already in the thousands, covering everything from gender to race to the impact of resume gaps of varying length.



Finally, field studies of this sort are hard to do correctly even with everything going your way. No simple analysis method works, even if you did everything right and collected all the data appropriately. There is too much randomness, too much heterogeneity, and too much structure for any simple bit of statistics to yield the correct interpretation. In short, unless this is your specialty, it would be trivially easy to get everything else right and still come to exactly the wrong conclusion.



For those not familiar with this kind of statistics, a classic example of how a simple analysis can go wrong is Sex Bias in Graduate Admissions: Data from Berkeley. In a simple aggregate analysis, it looked quite clear that women were being admitted at lower rates than men; the bias seemed so obvious that the deans were concerned it could be the basis for a lawsuit. It turned out to be a textbook example of Simpson's Paradox: women were more likely to apply to departments that were crowded and competitive, and thus harder for everyone to get into, while men were more likely to apply to departments that were less competitive.
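The reversal is easy to reproduce numerically. The sketch below uses hypothetical admission counts (not the actual Berkeley figures), chosen only to illustrate the paradox with two departments:

```python
# Hypothetical admission counts (NOT the real Berkeley data), chosen to
# reproduce the reversal: (applied, admitted) per gender, per department.
data = {
    "Dept A (less competitive)": {"men": (800, 480), "women": (100, 70)},
    "Dept B (very competitive)": {"men": (200, 40), "women": (700, 175)},
}

def rate(applied, admitted):
    return admitted / applied

# Per-department rates: women are admitted at a HIGHER rate in each one.
for dept, g in data.items():
    print(dept,
          f"men {rate(*g['men']):.0%},",
          f"women {rate(*g['women']):.0%}")

# Aggregate rates: men appear favored overall, purely because women
# applied disproportionately to the competitive department.
men_applied = sum(g["men"][0] for g in data.values())
men_admitted = sum(g["men"][1] for g in data.values())
women_applied = sum(g["women"][0] for g in data.values())
women_admitted = sum(g["women"][1] for g in data.values())
agg_men = rate(men_applied, men_admitted)
agg_women = rate(women_applied, women_admitted)
print(f"aggregate: men {agg_men:.0%}, women {agg_women:.0%}")
```

With these made-up numbers, women do better within each department, yet the pooled rate for men comes out higher. The same pooling mistake is exactly what a naive resume experiment invites.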



If a similar condition existed in the employment sector, where the industry jobs you applied to varied in their selectivity in a way you were not considering, this would wreck your analysis, and you could not easily collect extra information to fix it. After all, I'm sure you weren't inclined to use random selection in your own job search!



So, in summation, the worst I could imagine happening is: you end up doing bad science that reflects badly on you and is not easily excused just because it wasn't intended for publication; you come to the wrong conclusions in a way that could hurt innocent people; you casually report information that could be used in dangerous and damaging ways; you are identified as the person responsible and it goes viral, so the most famous thing you'll ever be known for is this thing you didn't intend as a serious study (and which could have gone horribly wrong); and, as a bonus, you could simply waste your time and the time of others for no benefit.



And since it's the worst that can happen, I suppose you could also end up with a headache. Things can always be worse by adding a headache.







answered 2 days ago – BrianH
      • 13





        The question was "what's the worst that can happen", so this answer rightly applies a strict standard. But we don't generally demand scientific rigour from blog posts, journalism, or political advocacy. For example, interviewing an arbitrary sample of people in the streets is not a representative opinion poll, yet it is common practice in TV and newspaper reporting.

        – henning
        yesterday








      • 4





@henning those organizations do pay a price for their lack of rigor, in lost reputation as well as in lawsuits.

        – A Simple Algorithm
        yesterday






      • 6





        @A Simple Algorithm Not really.

        – henning
        yesterday
















      27














      You're asking the wrong question.




      ... what is the worst that can happen?




      Others have answered this. But it's the wrong question. What you should really ask is:




      What's likely to happen?




You are likely not to get any statistically significant information, and probably not even a sound hunch about why one version of your application fared better than the other. You are likely to get into at least a mild amount of trouble with some of the potential workplaces when you retract your application or when they call your references or former universities. IMHO it is likely your experiment will not have contributed anything, even marginally.
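To make the significance point concrete: detecting a realistic difference in callback rates takes far more applications than one person's job hunt produces. Here is a rough sketch using the standard normal-approximation sample-size formula for comparing two proportions; the 10% vs 15% callback rates are hypothetical:

```python
from statistics import NormalDist  # stdlib, Python 3.8+

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate applications needed per CV variant to detect a
    difference between callback rates p1 and p2 with a two-sided
    two-proportion z-test (normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return numerator / (p1 - p2) ** 2

# Hypothetical: 10% vs 15% callback rates require applications numbering
# in the hundreds per variant, not the handful a single search generates.
n = n_per_group(0.10, 0.15)
print(f"~{n:.0f} applications per CV variant")
```

Even before worrying about confounders, the raw sample-size requirement alone puts a one-person experiment well below any meaningful power.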



If you believe "affirmative-action"-type hiring is nothing but tokenism and is inappropriate or discriminatory, act against it where you are actually present and have access to information, such as your next workplace, and in wider social contexts (e.g. public awareness-raising campaigns, lobbying elected officials, organizing petitions, demonstrations, etc.).



      PS - Please do not construe this answer as an endorsement or criticism of "affirmative-action"-type hiring practices.






answered yesterday, edited yesterday – einpoklum
              3














The worst thing that can happen is that you get the study setup or the statistics wrong, and then publish the flawed interpretation on your blog. The world is already too full of oversimplified blogs and video documentaries on perceived gender biases.

There is a lot of research by people who devote their scientific careers to this, and the statistics of "who studies what" is quite influential. For example, I (as a technical team lead doing a lot of technical interviews) observe that most women who started studying engineering 10 years ago (Central Europe) had thought carefully about their choice well before enrolling, while for many men it was "the default option". This alone makes it reasonable to shortlist them for interviews at a higher rate. I can't quantify this myself; read the research on it.

In the worst case (I assume you are in a STEM field), presenting a flawed analysis without peer review could cost you job opportunities.






answered 2 days ago – Sascha