Training a classifier when some of the features are unknown

I am training a classifier in Matlab with a dataset that I created.
Unfortunately, some of the features in the dataset were not recorded.

I currently have the unknown features set to -99999.

So, for example, my dataset looks something like this:

class1: 10 1 12 -99999 6 8
class1: 11 2 13 7 6 10
...
class2: 5 -99999 4 3 2 -99999
class2: -99999 16 4 3 1 8
...
class3: 18 2 11 22 7 5
class3: 19 1 9 25 7 5
...

and so on, where the -99999 entries are the places where the features weren't able to be measured. In this case, each sample has 6 features.

I don't want to bias my classifier with the unknown features, so I thought it would be a good idea to set the unknowns to -99999 so they would be far outside the range of normal feature values.

I tested the classifier with the -99999 values and it was 78% accurate.
Then I changed the -99999 values to 0 and tested the classifier again; this time it was 91% accurate.

So my question is: what is a general rule for training a classifier when some of the features were not recorded? Was I right to set the unknowns to a very large negative value? And why was the classifier more accurate when I set the unknowns to 0?

Thanks for reading!










machine-learning classification dataset matlab

asked by Darklink9110


1 Answer

          Welcome to Data Science SE!



Well, we say that most of our job is wrangling data, and that is because data is usually trying to deceive us... jokes aside:

You have a missing data problem

That means you have to clean your data and fill in those missing values. To perform this cleaning process, you must channel the classical statistician inside of you and ask:

• Why is this data missing?

• How much data is missing?

There are many reasons why a specific piece of information might be unavailable. This will require you to make assumptions and decide how to deal with it.
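
Whichever strategy you end up choosing, a common first step in MATLAB is to mark the sentinels explicitly as missing instead of leaving them as fake numbers. Here is a minimal sketch, assuming the data is stored in a numeric matrix X with one row per sample and one column per feature (the variable name is only for illustration, it is not from the question):

% Mark the -99999 sentinel values as missing (NaN) so they can no longer
% be mistaken for real measurements.
X(X == -99999) = NaN;

% See how much data is missing, per feature and overall.
missingPerFeature      = sum(isnan(X), 1);
overallMissingFraction = mean(isnan(X(:)));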



          Jeff Sauro posted 7 Ways to Handle Missing Data at MeasuringU, some of which I list here:




          • Delete corrupted samples:


          If you have a large dataset and there is not much data missing, you can simply remove those corrupted data points and go on with life.




          • Recover the values:


          Some problems will allow you to go back and get the missing information.



          We usually aren't that lucky, so instead you can try:




          • Educated Guessing:


          Sometimes you can infer what the feature value would be by simply looking at its peers. That is a bit arbitrary, but it might work.




          • Average:


          This is the most common approach: simply use the average of that feature whenever its value is missing (see the MATLAB sketch after this list). This might artificially reduce your variance, but so does using 0 or -99999 for every missing value.




          • Regression Substitution:


          You can use a multiple regression to infer the missing value from the features that are available for each sample.
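
As a rough MATLAB sketch of the two simplest options above (deleting samples, and per-feature mean imputation), assuming X is the samples-by-features matrix with NaN marking the missing entries (as in the earlier snippet) and y is the vector of class labels; neither variable comes from the question:

% Option 1: delete samples (rows) that contain any missing feature.
keep   = ~any(isnan(X), 2);
Xclean = X(keep, :);
yclean = y(keep);

% Option 2: per-feature mean imputation, using only the observed values.
Ximp = X;
for j = 1:size(X, 2)
    col = X(:, j);
    col(isnan(col)) = mean(col(~isnan(col)));
    Ximp(:, j) = col;
end
% On recent MATLAB releases, fillmissing(X, 'constant', mean(X, 'omitnan'))
% should do roughly the same thing in one line (check your version).

Regression substitution follows the same pattern: fit a model for a feature on the rows where it was observed, then predict it on the rows where it is missing.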



          Some references on missing data are:




          • Allison, Paul D. 2001. Missing Data. Sage University Papers
            Series on Quantitative Applications in the Social Sciences.
            Thousand Oaks: Sage.

          • Enders, Craig. 2010. Applied Missing Data Analysis.
            Guilford Press: New York.

          • Little, Roderick J., Donald Rubin. 2002. Statistical Analysis
            with Missing Data. John Wiley & Sons, Inc: Hoboken.

          • Schafer, Joseph L., John W. Graham. 2002. “Missing Data:
            Our View of the State of the Art.” Psychological Methods.




          About your experiment:



          Filling with -99999 creates outliers, and that bit of information is heavy (numerically speaking, it is huge), so it will distort parameter tuning. For example, suppose you have this data:



          | Feature1 | Feature2 | 
          |----------|----------|
          | 0 | 8 |
          | -1 | 7 |
          | 1 | - |
          | - | 8 |


          If you fill the missing values with -99999 and then try to fit a linear regression through the data, you can see that you won't be able to fit it properly:



          [figure: the brown line]



          The line won't fit, and this will yield bad performance.



          Filling with 0, on the other hand, will give a slightly better line:



          [figure: the yellow line]



          It is still not good, but slightly better, since the scale of the parameters will be more realistic.



          Using the average will, in this case, give you an even better curve, and using regression substitution will give you a perfectly fitting line:



          [figure: the perfect line]



          Note: I need to remake those images, but these should do until I have the time for it.
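
To get a numerical feel for this effect, here is a small MATLAB sketch using the toy table above (made-up numbers, not the data behind the figures), comparing the fitted line under the three fills:

% Toy data from the table above: the third Feature2 value and the
% fourth Feature1 value were not recorded.
f1 = [0; -1; 1; NaN];    % Feature1
f2 = [8;  7; NaN; 8];    % Feature2

% Sentinel fill (-99999), zero fill, and mean fill of the observed values.
f1s = f1; f1s(isnan(f1)) = -99999;  f2s = f2; f2s(isnan(f2)) = -99999;
f1z = f1; f1z(isnan(f1)) = 0;       f2z = f2; f2z(isnan(f2)) = 0;
f1m = f1; f1m(isnan(f1)) = mean(f1(~isnan(f1)));
f2m = f2; f2m(isnan(f2)) = mean(f2(~isnan(f2)));

pSentinel = polyfit(f1s, f2s, 1);  % nonsense fit: the huge fake points dominate
pZero     = polyfit(f1z, f2z, 1);  % better scale, but the fabricated (1, 0) point drags the slope down
pMean     = polyfit(f1m, f2m, 1);  % closest to the trend of the observed points (roughly f2 = f1 + 8)

disp([pSentinel; pZero; pMean])    % each row: [slope, intercept]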






          answered by Pedro Henrique Monforte