Telemetry for feature health


I'm not sure how opinion-based my question is, but when you have a released (client-facing) feature, how do you use telemetry to evaluate whether it's healthy? By healthy I mean that users can interact with it (it's accessible) and that, when there is an interaction, the expected outcome happens.



Here is an example for the expected-outcome case: there is a delete button; when it's pressed, a request is sent to a server, and if that returns 200, the delete functionality can be assumed to work.



Let's say we send a metric DeleteButtonPressed when the user presses the button and DeleteSuccessful upon receiving the 200 response code. If there is a drop in the ratio DeleteSuccessful / DeleteButtonPressed, we can say the feature is not healthy.
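As a minimal sketch of that ratio check (the event names come from the question; the counts, the `success_ratio` helper, and the 0.95 threshold are assumptions for illustration):

```python
from collections import Counter

# Hypothetical event counts over some time window.
events = Counter({"DeleteButtonPressed": 200, "DeleteSuccessful": 184})

def success_ratio(counts, attempt="DeleteButtonPressed", success="DeleteSuccessful"):
    """Return the success/attempt ratio, or None if there were no attempts."""
    attempts = counts[attempt]
    if attempts == 0:
        return None  # no traffic: we cannot conclude anything about health
    return counts[success] / attempts

ALERT_THRESHOLD = 0.95  # assumed target; tune to your observed baseline
ratio = success_ratio(events)
healthy = ratio is not None and ratio >= ALERT_THRESHOLD
```

In practice you would compute this per window and alert on a sustained drop below the baseline rather than on a single sample.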



However, how do we know whether the feature is accessible? The problem is that we can't really distinguish between users being unable to use the feature (because, say, on a specific OS version the app behaves differently and the feature is not visible) and users simply not wanting to interact with it. Monitoring just the rate of DeleteButtonPressed is therefore not a good indicator.
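One partial mitigation is to segment the press rate by attributes such as OS version: a segment where the rate is exactly zero across many sessions hints at an accessibility problem rather than disinterest. A sketch, where the session records and segment names are hypothetical:

```python
# Hypothetical per-session records: (os_version, pressed_delete_at_least_once).
sessions = [
    ("ios-16", True), ("ios-16", False), ("ios-16", True),
    ("ios-17", False), ("ios-17", False), ("ios-17", False),
]

def press_rate_by_segment(records):
    """Fraction of sessions per segment in which the button was pressed at all."""
    totals, presses = {}, {}
    for segment, pressed in records:
        totals[segment] = totals.get(segment, 0) + 1
        presses[segment] = presses.get(segment, 0) + int(pressed)
    return {seg: presses[seg] / totals[seg] for seg in totals}

rates = press_rate_by_segment(sessions)
# A rate of exactly 0 in one segment, while other segments press normally,
# suggests the button may not be rendered there, not merely be ignored.
suspect_segments = [seg for seg, r in rates.items() if r == 0.0]
```

This still can't prove invisibility, but it narrows "users don't press" down to "users on this configuration never press", which is actionable.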



Maybe I'm missing something, but what are good metrics for monitoring feature health?

















Tags: metrics, logs, monitoring
















      asked Mar 20 at 15:27









Dániel Nagy




















3 Answers




















          Summary: It's a reasonable concern but not one that I encounter in practice.



Create high-level smoke tests that assert that basic functionality, such as visiting the first page, works. These should not rely on OS-specific peculiarities or edge cases.



Run these tests as part of the deploy-to-an-environment process, and only consider the application deployed if they all pass.



          Also, perform exploratory testing to ensure that the elements are truly visible.



For a specific feature like the one you mention, part of the puzzle is having quality unit tests, good integration tests, and good acceptance tests. Acceptance tests, including UI tests, can become part of your regression suite, but this should happen selectively; otherwise you start building a massive, slow regression suite. Most businesses struggle with this because adding everything seems attractive, except that the suite gets slower and slower, and businesses today want speed.



I've been writing Selenium tests for years, and I have not seen what you describe actually occur as a common issue. I can recall exactly one time when, in IE, you had to scroll down before the Selenium finder would work; even then it worked for actual users, just not for the automation without an additional scroll_to action.






— Michael Durrant (answered Mar 20 at 16:27, edited Mar 20 at 21:28)














































Telemetry can't always pinpoint problems, but it can often indicate that a problem exists.



If you expect certain problems to occur, you can sometimes add smarter telemetry and better analysis of other pieces of information, for example users skipping the Delete button and closing the application altogether.



A complementary approach is A/B testing: give some users a Delete button of type A and others one of type B, and compare the results. You can use A/B testing to assess designs, but also retrospectively to locate or fix problems.
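Comparing the two variants is a standard two-proportion comparison. A minimal sketch, where the sample counts are made up and the pooled z statistic is one common (assumed, not prescribed by the answer) way to judge the difference:

```python
import math

def two_proportion_z(success_a, total_a, success_b, total_b):
    """z statistic for comparing two conversion rates, using a pooled standard error."""
    p_a, p_b = success_a / total_a, success_b / total_b
    pooled = (success_a + success_b) / (total_a + total_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    return (p_a - p_b) / se

# Hypothetical numbers: variant A converts 90/1000, variant B converts 60/1000.
z = two_proportion_z(90, 1000, 60, 1000)
significant = abs(z) > 1.96  # roughly the 5% two-sided significance level
```

A dedicated statistics library would also give you confidence intervals and power calculations; the sketch only shows the shape of the comparison.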






— Rsf (answered Mar 20 at 15:44)












































To collect telemetry for a feature effectively, this capability should be included in the application's architecture from the start, and intentionally. This is not straightforward, since it depends on many things, such as whether you need real-time monitoring or, say, overnight analysis.



Generally speaking, saying that a feature is "alive" usually means that it passes through the whole sequence of steps needed to deliver a result to the end user. That result might be wrong (though in my understanding the feature is then still alive, just defective). To achieve this, each high-level logical step composing the feature should log its step definition to an audit store. With such entries associated with a user session and a feature identifier, you can analyze whether every feature produced its end result.



You would also need to define a termination mark, since sometimes users simply change their mind and do not complete the steps of the use case.



So the metric could be the number of sequences that reach no end-point.
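The step-logging idea above can be sketched as a small funnel analysis. The log entries, step names, and the `unfinished_sequences` helper are all hypothetical; the point is counting sessions that neither reached the end-point nor hit the explicit termination mark:

```python
# Hypothetical audit-log entries: (session_id, step). "delete_confirmed" is the
# end-point of the use case; "cancelled" is the termination mark for users
# who changed their mind.
audit_log = [
    ("s1", "open_dialog"), ("s1", "press_delete"), ("s1", "delete_confirmed"),
    ("s2", "open_dialog"), ("s2", "press_delete"),
    ("s3", "open_dialog"), ("s3", "cancelled"),
]

def unfinished_sequences(entries, end="delete_confirmed", abort="cancelled"):
    """Sessions that neither reached the end-point nor deliberately aborted."""
    steps_by_session = {}
    for session, step in entries:
        steps_by_session.setdefault(session, set()).add(step)
    return sorted(s for s, steps in steps_by_session.items()
                  if end not in steps and abort not in steps)

stuck = unfinished_sequences(audit_log)  # candidate health problems
```

Sessions like "s2" above, which started the flow but neither finished nor cancelled, are the ones the proposed metric would count.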






— Alexey R. (answered Mar 20 at 15:57)






















