What loss function to use when labels are probabilities?



What loss function is most appropriate when training a model with target values that are probabilities? For example, I have a 3-output model with x=[some features] and y=[0.2, 0.3, 0.5].



It seems like something like cross-entropy doesn't make sense here since it assumes that a single target is the correct label.



Would something like MSE (after applying softmax) make sense, or is there a better loss function?










neural-networks loss-functions probability-distribution






asked 5 hours ago by Thomas Johnson (a new contributor)
1 Answer

Actually, the cross-entropy loss function would be appropriate here, since it measures the "distance" between a distribution $q$ and the "true" distribution $p$.



You are right, though, that using a loss function called "cross_entropy" in many APIs would be a mistake. This is because these functions, as you said, assume a one-hot label. You would need to use the general cross-entropy function,



$$H(p,q)=-\sum_{x\in X} p(x) \log q(x).$$
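A minimal NumPy sketch of this general form (the logits, targets, and helper names below are illustrative assumptions, not from the original post):

```python
import numpy as np

def softmax(z):
    # Shift by the row max for numerical stability before exponentiating.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(p, q, eps=1e-12):
    # General cross-entropy H(p, q) = -sum_x p(x) log q(x),
    # averaged over the batch; eps guards against log(0).
    return -(p * np.log(q + eps)).sum(axis=-1).mean()

logits = np.array([[2.0, 1.0, 0.1]])  # hypothetical raw model outputs
target = np.array([[0.2, 0.3, 0.5]])  # the soft labels from the question

q = softmax(logits)
print(cross_entropy(target, q))       # a single scalar loss
```

In a framework such as PyTorch, the same quantity can be computed from log-probabilities, e.g. `-(p * F.log_softmax(logits, dim=-1)).sum(-1).mean()`.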



Note that one-hot labels would mean that
$$
p(x) =
\begin{cases}
1 & \text{if } x \text{ is the true label} \\
0 & \text{otherwise}
\end{cases}
$$



which causes the cross-entropy $H(p,q)$ to reduce to the form you're familiar with:



$$H(p,q) = -\log q(x_{\text{label}})$$
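A quick numerical check of this reduction (a self-contained sketch; the distribution values are made up):

```python
import numpy as np

q = np.array([0.6, 0.3, 0.1])        # hypothetical predicted distribution
one_hot = np.array([0.0, 0.0, 1.0])  # class 2 is the true label

h_general = -(one_hot * np.log(q)).sum()  # -sum_x p(x) log q(x)
h_reduced = -np.log(q[2])                 # -log q(x_label)

print(np.allclose(h_general, h_reduced))  # True
```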






answered 4 hours ago by Philip Raeisghasem





















