Unnormalized Log Probability - RNN


I am going through the Deep Learning book by Goodfellow. In the RNN section I am stuck on the following:



The RNN is defined as follows:



[Figure 10.3 from the book: computational graph of the RNN]



And the equations are:

$$a^{(t)} = b + W h^{(t-1)} + U x^{(t)}$$
$$h^{(t)} = \tanh\left(a^{(t)}\right)$$
$$o^{(t)} = c + V h^{(t)}$$
$$\hat{y}^{(t)} = \operatorname{softmax}\left(o^{(t)}\right)$$



Now, the $o^{(t)}$ above is described as an unnormalized log probability. But if this is true, then the value of $o^{(t)}$ must be negative, because:



Probability is always defined as a number between 0 and 1, i.e. $P \in [0,1]$, where the brackets denote a closed interval, and $\log(P) \le 0$ on this interval. But nowhere in the equations above is the condition $o^{(t)} \le 0$ explicitly enforced.



What am I missing?
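(For concreteness, here is a minimal NumPy sketch of one forward step of the equations above. This is an editor's illustration, not from the book; the shapes and random initialization are arbitrary assumptions.)

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 3, 4, 5

# Randomly initialized parameters; nothing constrains their signs.
U = rng.normal(size=(n_hidden, n_in))
W = rng.normal(size=(n_hidden, n_hidden))
V = rng.normal(size=(n_out, n_hidden))
b = np.zeros(n_hidden)
c = np.zeros(n_out)

h_prev = np.zeros(n_hidden)   # h^{(t-1)}
x_t = rng.normal(size=n_in)   # x^{(t)}

a_t = b + W @ h_prev + U @ x_t            # hidden pre-activation
h_t = np.tanh(a_t)                        # hidden state
o_t = c + V @ h_t                         # affine output: sign-unconstrained
y_hat = np.exp(o_t) / np.exp(o_t).sum()   # softmax output

print(o_t)          # entries of mixed sign
print(y_hat.sum())  # 1.0
```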










Tags: deep-learning, lstm, recurrent-neural-net






edited Mar 17 at 9:38 by Siong Thye Goh










asked Mar 17 at 8:40 by user3001408




















2 Answers


















You are right, in the sense that it would better be called the log of an unnormalized probability. That way, the quantity can be positive or negative. For example, $\log(0.5) < 0$ and $\log(12) > 0$ are both valid logs of unnormalized probabilities. Here it is in more detail:

1. Probability: $P(i) = e^{o_i} / \sum_{k=1}^{K} e^{o_k}$ (using softmax, as mentioned in the caption of Figure 10.3, and assuming $\mathbf{o} = (o_1, \dots, o_K)$ is the output of the layer before the softmax),

2. Unnormalized probability: $\tilde{P}(i) = e^{o_i}$, which can be larger than 1,

3. Log of unnormalized probability: $\log \tilde{P}(i) = o_i$, which can be positive or negative.
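(A quick NumPy check of all three quantities, added as an editor's sketch rather than part of the original answer; the logit values are arbitrary.)

```python
import numpy as np

# Arbitrary logits o = (o_1, ..., o_K); their signs are unconstrained.
o = np.array([2.0, -1.0, 0.5])

p_tilde = np.exp(o)             # 2. unnormalized probability: may exceed 1
p = p_tilde / p_tilde.sum()     # 1. probability via softmax: in (0, 1), sums to 1
log_p_tilde = np.log(p_tilde)   # 3. log of unnormalized probability

print(p_tilde)                      # [7.389..., 0.367..., 1.648...]
print(p, p.sum())                   # a valid distribution summing to 1.0
print(np.allclose(log_p_tilde, o))  # True: o_i is exactly log(p_tilde_i)
```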






answered Mar 17 at 9:36 (edited Mar 17 at 13:29) by Esmailian

You are right: nothing stops $o_k^{(t)}$ from being nonnegative; the keyword here is "unnormalized".

If we let $o_k^{(t)} = \ln q_k^{(t)}$, then

$$\hat{y}^{(t)}_k = \frac{\exp\left(o^{(t)}_k\right)}{\sum_{j=1}^{K} \exp\left(o^{(t)}_j\right)} = \frac{q_k^{(t)}}{\sum_{j=1}^{K} q_j^{(t)}}$$

Here $q_k^{(t)}$ can be any positive number; the softmax simply normalizes these values so that they sum to $1$.
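(A short NumPy check of this identity, added as an editor's sketch rather than part of the original answer: applying softmax to $o$ gives the same distribution as directly normalizing $q = e^{o}$.)

```python
import numpy as np

o = np.array([1.5, -0.3, 0.0, 2.2])  # o_k = ln(q_k); signs unconstrained
q = np.exp(o)                        # q_k is positive by construction

softmax_o = np.exp(o) / np.exp(o).sum()  # softmax over the logits
normalized_q = q / q.sum()               # plain normalization of q

print(np.allclose(softmax_o, normalized_q))  # True
print(normalized_q.sum())                    # 1.0
```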






answered Mar 17 at 9:37 by Siong Thye Goh