Proving $(\mathbf{x}\times \mathbf{y}\cdot \mathbf{N})\,\mathbf{z}+(\mathbf{y}\times \mathbf{z}\cdot \mathbf{N})\,\mathbf{x}+(\mathbf{z}\times \mathbf{x} \cdot \mathbf{N})\,\mathbf{y}= \mathbf{0}$ when $\mathbf{x},\mathbf{y},\mathbf{z}$ are coplanar and $\mathbf{N}$ is a unit normal vector

Prove that if $\mathbf{x},\mathbf{y},\mathbf{z} \in \mathbb{R}^3$ are coplanar vectors and $\mathbf{N}$ is a unit normal vector to the plane, then $$(\mathbf{x}\times\mathbf{y} \cdot \mathbf{N})\, \mathbf{z} + (\mathbf{y}\times\mathbf{z} \cdot \mathbf{N})\, \mathbf{x} + (\mathbf{z}\times\mathbf{x} \cdot \mathbf{N})\, \mathbf{y}=\mathbf{0}.$$




This is an elementary identity involving cross products which is used in the proof of the Gauss–Bonnet Theorem and whose proof was left as an exercise. I've tried it unsuccessfully. Initially I tried writing $\mathbf{N}=\frac{\mathbf{x}\times\mathbf{y}}{\|\mathbf{x}\times\mathbf{y}\|}=\frac{\mathbf{y}\times\mathbf{z}}{\|\mathbf{y}\times\mathbf{z}\|}=\frac{\mathbf{z}\times\mathbf{x}}{\|\mathbf{z}\times\mathbf{x}\|}$ and substituting into the equation to get $\|\mathbf{x}\times\mathbf{y}\|\,\mathbf{z} +\|\mathbf{y}\times\mathbf{z}\|\,\mathbf{x}+\|\mathbf{z}\times\mathbf{x}\|\,\mathbf{y}=\mathbf{0}$, but then I realised these expressions are only correct up to $\pm$ signs. You could write the norms in terms of sines of angles and divide by norms to get unit vectors with coefficients $\sin\theta,\sin\psi,\sin(\theta+\psi)$ (or $2\pi -(\theta+\psi)$, I suppose), but I don't know what to do from there, especially when the terms are only correct up to sign. Any hints on how to prove this identity? Perhaps there is a clever trick to it, but I can't see it. Edit: Maybe writing $\mathbf{z}=\lambda\mathbf{x}+\mu\mathbf{y}$ will help.

linear-algebra vectors cross-product

asked Mar 20 at 11:16 by AlephNull; edited Mar 21 at 9:56 by Asaf Karagila — 9 votes

Comments:

– Widawensen (Mar 20 at 12:51): What does $x \times y \cdot N$ mean? Dot product $(x \times y) \cdot N$?

– Marc van Leeuwen (Mar 20 at 12:53): @Widawensen Yes, what else could it mean?

– Taladris (Mar 21 at 2:54): @MarcvanLeeuwen: that could mean a badly written problem. That happens here sometimes.
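As a quick sanity check before the answers (my own addition, not part of the original post), the identity can be verified numerically for random coplanar vectors and the corresponding unit normal. A minimal NumPy sketch, with illustrative names and an arbitrary linear combination for $\mathbf{z}$:

    import numpy as np

    rng = np.random.default_rng(42)

    # Two random vectors spanning a plane, a coplanar third vector,
    # and the unit normal of that plane.
    x = rng.normal(size=3)
    y = rng.normal(size=3)
    z = 1.7 * x - 0.4 * y                      # any linear combination is coplanar
    N = np.cross(x, y) / np.linalg.norm(np.cross(x, y))

    lhs = (np.dot(np.cross(x, y), N) * z
           + np.dot(np.cross(y, z), N) * x
           + np.dot(np.cross(z, x), N) * y)
    print(lhs)                                  # ~ [0. 0. 0.] up to rounding error
    assert np.allclose(lhs, 0.0)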















7 Answers


















Answer by John Hughes (answered Mar 20 at 11:45) — 10 votes

Here's an observation: If $Q$ is a rotation matrix, then
$$(Qx) \times (Qy) = Q(x \times y)$$

You have to prove that, of course, but it's not too tough. Similarly,
$$(Qx) \cdot (Qy) = x \cdot y$$
and, for a scalar $\alpha$, we have
$$Q(\alpha x) = \alpha (Q x)$$

Now suppose that for some vector $v$, we have
$$(\mathbf{x}\times\mathbf{y} \cdot \mathbf{N})\, \mathbf{z} + (\mathbf{y}\times\mathbf{z} \cdot \mathbf{N})\, \mathbf{x} + (\mathbf{z}\times\mathbf{x} \cdot \mathbf{N})\, \mathbf{y}=\mathbf{v}.$$

Key idea 1: You can apply the rules above to show that for any rotation matrix $Q$, you can apply $Q$ to all the elements on the left to get $Qv$.

Key idea 2: You can choose $Q$ so that it takes $N$ to the vector $(0,0,1)$, and puts $x, y,$ and $z$ into the plane consisting of vectors of the form $(a, b, 0)$. And in that plane, it's easy to see that you get $0$, so $Qv = 0$. Hence $v = 0$, and you're done.

In short: by a change of basis, you can assume that $N$ is the vector $(0,0,1)$ and that the other vectors all lie in the $(a, b, 0)$ plane, and things get easy.
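As an illustrative aside (my addition, not part of the answer), the three identities above are easy to spot-check numerically. A minimal NumPy sketch with a randomly generated proper rotation, names mine:

    import numpy as np

    rng = np.random.default_rng(0)

    def random_rotation():
        # QR factorisation of a random matrix gives an orthogonal Q;
        # flip a column if necessary so that det(Q) = +1 (a proper rotation).
        q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
        if np.linalg.det(q) < 0:
            q[:, 0] = -q[:, 0]
        return q

    Q = random_rotation()
    x, y = rng.normal(size=3), rng.normal(size=3)
    a = rng.normal()

    assert np.allclose(np.cross(Q @ x, Q @ y), Q @ np.cross(x, y))  # Q(x cross y) = (Qx) cross (Qy)
    assert np.allclose(np.dot(Q @ x, Q @ y), np.dot(x, y))          # dot product preserved
    assert np.allclose(Q @ (a * x), a * (Q @ x))                    # linearity in a scalar multiple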






Comments:

– AlephNull (Mar 20 at 11:55): Ah so it is a clever approach with orthogonal matrices. I'm familiar with those identities. But I don't see how $Qv=0$. I can't see how we apply anything other than the third identity $Q(\alpha x)=\alpha (Qx)$.

– AlephNull (Mar 20 at 13:42): Oh I see, you're talking about the elements, not the terms. I understand the solution now.

– John Hughes (Mar 20 at 16:51): By the way, there's a general principle at work here, much loved by physicists, but worth remembering even if that's not your domain: "find the right coordinate system for your problem." (The math version is mostly "choose the right basis/generating set/...")


















Answer by Song (answered Mar 20 at 16:15, edited Mar 20 at 16:22) — 11 votes

If what is required is only to prove the validity of the given identity, there is another approach. Observe that if $x$ and $y$ are linearly dependent, i.e. for some $c$, $x=c y$ or $y=c x$, then the identity holds trivially because $w\times v =-(v\times w)$ and $v \times v=0$ for all $v,w$. Thus, we may assume $x$ and $y$ are linearly independent and hence $z$ is a linear combination of $x$ and $y$, that is, $z=ax+by$ for some $a,b$. Now, since the given identity is linear in each variable and it holds for both $z=x$ and $z=y$, it is also true for $z=ax+by$. This proves the identity. It can also be noted that $N$ being perpendicular to the plane containing $x,y,z$ plays no role in this proof.
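To illustrate this argument (my addition, not part of the answer), the identity can be checked symbolically for $z = ax + by$ with $N$ left completely arbitrary, which also confirms the closing remark that perpendicularity of $N$ plays no role. A minimal SymPy sketch, with symbol names of my choosing:

    import sympy as sp

    # Symbolic vectors x, y spanning the plane, an arbitrary N, and z = a*x + b*y.
    x = sp.Matrix(sp.symbols('x1 x2 x3'))
    y = sp.Matrix(sp.symbols('y1 y2 y3'))
    N = sp.Matrix(sp.symbols('n1 n2 n3'))
    a, b = sp.symbols('a b')
    z = a * x + b * y

    def term(u, v, w):
        # one summand (u x v . N) w of the identity
        return u.cross(v).dot(N) * w

    total = term(x, y, z) + term(y, z, x) + term(z, x, y)
    print(total.applyfunc(sp.expand))  # Matrix([[0], [0], [0]])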






Comments:

– John Hughes (Mar 20 at 16:54): very nice solution!

– AlephNull (Mar 20 at 17:06): Indeed, this is very elegant. So my last remark had some significance!

– Song (Mar 21 at 7:07): Thank you both :-)


















Answer by J.G. (answered Mar 20 at 12:45) — 5 votes

Writing $x=a\hat{i}+b\hat{j},\,y=c\hat{i}+d\hat{j},\,z=e\hat{i}+f\hat{j},\,N=N\hat{k}$ reduces the sum to $$N\big((ad-bc)(e\hat{i}+f\hat{j})+(cf-de)(a\hat{i}+b\hat{j})+(be-af)(c\hat{i}+d\hat{j})\big).$$ The $\hat{i}$ coefficient is $N(ade-bce+acf-ade+bce-acf)=0$. The $\hat{j}$ coefficient can be handled similarly.






Comments:

– AlephNull (Mar 20 at 13:45): I accepted a different answer but +1 because I appreciate that this finishes off the solution in that answer.



















Answer by alephzero (answered Mar 20 at 19:43) — 4 votes

Since $\mathbf{x}, \mathbf{y}, \mathbf{z}$ are coplanar, they are linearly dependent. Since the result to be proved is symmetric in $\mathbf{x}, \mathbf{y}, \mathbf{z}$, without loss of generality we can write $\mathbf{z} = \lambda \mathbf{x} + \mu \mathbf{y}$ for some scalars $\lambda, \mu$.

Now, $$\begin{aligned} & (\mathbf{y} \times \mathbf{z} \cdot \mathbf{N})\; \mathbf{x} \\ =\ & (\mathbf{y} \times (\lambda \mathbf{x} + \mu \mathbf{y}) \cdot \mathbf{N})\; \mathbf{x} \\ =\ & (\mathbf{y} \times \lambda \mathbf{x} \cdot \mathbf{N})\; \mathbf{x} \\ =\ & (\mathbf{y} \times \mathbf{x} \cdot \mathbf{N})\, (\lambda \mathbf{x}) \end{aligned}$$

and similarly $$\begin{aligned} & (\mathbf{z} \times \mathbf{x} \cdot \mathbf{N})\; \mathbf{y} \\ =\ & (\mathbf{y} \times \mathbf{x} \cdot \mathbf{N})\, (\mu \mathbf{y}) \end{aligned}$$

So $$\begin{aligned} & (\mathbf{y} \times \mathbf{z} \cdot \mathbf{N})\; \mathbf{x} + (\mathbf{z} \times \mathbf{x} \cdot \mathbf{N})\; \mathbf{y} \\ =\ & (\mathbf{y} \times \mathbf{x} \cdot \mathbf{N})\, (\lambda \mathbf{x} + \mu \mathbf{y})\\ =\ & -(\mathbf{x} \times \mathbf{y} \cdot \mathbf{N})\;\mathbf{z} \end{aligned}$$

and the result follows.
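A symbolic spot-check of the intermediate step above (my addition, not part of the answer), again with SymPy and symbol names of my choosing:

    import sympy as sp

    # z = lambda*x + mu*y, with N left arbitrary.
    x = sp.Matrix(sp.symbols('x1 x2 x3'))
    y = sp.Matrix(sp.symbols('y1 y2 y3'))
    N = sp.Matrix(sp.symbols('n1 n2 n3'))
    lam, mu = sp.symbols('lambda mu')
    z = lam * x + mu * y

    lhs = y.cross(z).dot(N) * x + z.cross(x).dot(N) * y
    rhs = -x.cross(y).dot(N) * z
    print((lhs - rhs).applyfunc(sp.expand))  # Matrix([[0], [0], [0]])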






Answer by Travis (answered Mar 21 at 5:02) — 2 votes

If you know a little about the exterior algebra we can see this almost immediately, and in a way that generalizes substantially.

Pick any plane $\Pi$ containing $\mathbf{x}, \mathbf{y}, \mathbf{z}$. The map on $\Pi$ defined by $$(\mathbf{a}, \mathbf{b}, \mathbf{c}) \mapsto [(\mathbf{a} \times \mathbf{b}) \cdot \mathbf{N}]\, \mathbf{c} + [(\mathbf{b} \times \mathbf{c}) \cdot \mathbf{N}]\, \mathbf{a} + [(\mathbf{c} \times \mathbf{a}) \cdot \mathbf{N}]\, \mathbf{b}$$ is visibly trilinear and totally skew in its arguments, so it is a (vector-valued) $3$-form on a $2$-dimensional vector space and hence is the zero map.

NB this argument doesn't use any properties of $\mathbf{N}$.






Answer by Widawensen (answered Mar 20 at 13:42, edited Mar 21 at 9:01) — 1 vote

By the properties of the triple product (circular shift) we can rearrange the formula:

$$((\mathbf{x}\times\mathbf{y}) \cdot \mathbf{N})\, \mathbf{z} + ((\mathbf{y}\times\mathbf{z}) \cdot \mathbf{N})\, \mathbf{x} + ((\mathbf{z}\times\mathbf{x}) \cdot \mathbf{N})\, \mathbf{y} \\ = ((\mathbf{N}\times\mathbf{x}) \cdot \mathbf{y})\, \mathbf{z} + ((\mathbf{N}\times\mathbf{y}) \cdot \mathbf{z})\, \mathbf{x} + ((\mathbf{N}\times\mathbf{z}) \cdot \mathbf{x})\, \mathbf{y}$$

All the cross product vectors $$\mathbf{v}_1=\mathbf{N}\times\mathbf{x},\quad \mathbf{v}_2=\mathbf{N}\times\mathbf{y},\quad \mathbf{v}_3=\mathbf{N}\times\mathbf{z}$$ lie in the plane of the coplanar vectors $\mathbf{x},\mathbf{y},\mathbf{z}$, and they are the vectors $\mathbf{x},\mathbf{y},\mathbf{z}$ rotated by $\pi/2$ in this plane.

So we can restrict ourselves to this plane and take any vectors with components $\mathbf{x}=[x_1\ \ x_2]^T,\ \mathbf{y}=[y_1\ \ y_2]^T,\ \mathbf{z}=[z_1\ \ z_2]^T$.

Transform them with the rotation matrix $R=\begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}$, calculate the appropriate dot products, and finally check the formula with these assumed general components.

Namely we need to calculate:
$$(y^TRx)\,z+(z^TRy)\,x+(x^TRz)\,y$$
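Carrying out that final calculation symbolically (my addition, not part of the answer), with SymPy and 2D component names of my choosing:

    import sympy as sp

    # 2D vectors in the plane and the quarter-turn rotation R from the answer.
    x1, x2, y1, y2, z1, z2 = sp.symbols('x1 x2 y1 y2 z1 z2')
    x = sp.Matrix([x1, x2])
    y = sp.Matrix([y1, y2])
    z = sp.Matrix([z1, z2])
    R = sp.Matrix([[0, -1], [1, 0]])

    def s(u, v):
        # the scalar u^T R v
        return (u.T * R * v)[0, 0]

    total = s(y, x) * z + s(z, y) * x + s(x, z) * y
    print(total.applyfunc(sp.expand))  # Matrix([[0], [0]])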






Answer — 0 votes

Another approach to the problem uses the formula for the triple product:

$$\mathbf{a}\cdot(\mathbf{b}\times \mathbf{c}) = \det \begin{bmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{bmatrix}$$

Then consider the determinant

$$\begin{vmatrix} n_1 & x_1 & y_1 & z_1 \\ n_2 & x_2 & y_2 & z_2 \\ n_3 & x_3 & y_3 & z_3 \\ n_1 & x_1 & y_1 & z_1 \end{vmatrix}$$

where the columns consist of the components of the vectors $\mathbf{N},\mathbf{x},\mathbf{y},\mathbf{z}$ (the fourth row repeats the first one).

Of course such a determinant equals $0$.

Expanding the determinant along the fourth row we obtain:

$$-n_1\begin{vmatrix} x_1 & y_1 & z_1 \\ x_2 & y_2 & z_2 \\ x_3 & y_3 & z_3 \end{vmatrix} +x_1\begin{vmatrix} n_1 & y_1 & z_1 \\ n_2 & y_2 & z_2 \\ n_3 & y_3 & z_3 \end{vmatrix} -y_1\begin{vmatrix} n_1 & x_1 & z_1 \\ n_2 & x_2 & z_2 \\ n_3 & x_3 & z_3 \end{vmatrix} +z_1\begin{vmatrix} n_1 & x_1 & y_1 \\ n_2 & x_2 & y_2 \\ n_3 & x_3 & y_3 \end{vmatrix}=0$$

from which the formula for the first component of the vector given in the question follows

(the first summand is equal to $0$ as the vectors $\mathbf{x},\mathbf{y},\mathbf{z}$ are coplanar; the columns can be permuted (required for the third summand) if needed to give the appropriate sign in the expression).

Similarly the determinants

$$\begin{vmatrix} n_1 & x_1 & y_1 & z_1 \\ n_2 & x_2 & y_2 & z_2 \\ n_3 & x_3 & y_3 & z_3 \\ n_2 & x_2 & y_2 & z_2 \end{vmatrix} \quad\text{and}\quad \begin{vmatrix} n_1 & x_1 & y_1 & z_1 \\ n_2 & x_2 & y_2 & z_2 \\ n_3 & x_3 & y_3 & z_3 \\ n_3 & x_3 & y_3 & z_3 \end{vmatrix}$$

give the second and the third component of the question vector, equal to $0$.
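As an illustrative check of this approach (my addition, not part of the answer), SymPy confirms both that the fourth-row expansion vanishes identically and that, once the determinant of the coplanar columns is dropped, what remains is the first component of the expression in the question; symbol and helper names are mine:

    import sympy as sp

    n = sp.Matrix(sp.symbols('n1 n2 n3'))
    x = sp.Matrix(sp.symbols('x1 x2 x3'))
    y = sp.Matrix(sp.symbols('y1 y2 y3'))
    z = sp.Matrix(sp.symbols('z1 z2 z3'))

    def det3(a, b, c):
        # 3x3 determinant with columns a, b, c; equals the triple product a . (b x c)
        return sp.Matrix.hstack(a, b, c).det()

    # Expansion of the 4x4 determinant (fourth row repeats the first) along row 4:
    expansion = (-n[0] * det3(x, y, z) + x[0] * det3(n, y, z)
                 - y[0] * det3(n, x, z) + z[0] * det3(n, x, y))
    print(sp.expand(expansion))  # 0, for completely arbitrary vectors

    # Dropping the det3(x, y, z) term (zero when x, y, z are coplanar) leaves the
    # first component of (y x z . N) x + (z x x . N) y + (x x y . N) z:
    first_component = (y.cross(z).dot(n) * x[0] + z.cross(x).dot(n) * y[0]
                       + x.cross(y).dot(n) * z[0])
    print(sp.expand(first_component - (expansion + n[0] * det3(x, y, z))))  # 0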






          Your Answer








          StackExchange.ready(function()
          var channelOptions =
          tags: "".split(" "),
          id: "69"
          ;
          initTagRenderer("".split(" "), "".split(" "), channelOptions);

          StackExchange.using("externalEditor", function()
          // Have to fire editor after snippets, if snippets enabled
          if (StackExchange.settings.snippets.snippetsEnabled)
          StackExchange.using("snippets", function()
          createEditor();
          );

          else
          createEditor();

          );

          function createEditor()
          StackExchange.prepareEditor(
          heartbeatType: 'answer',
          autoActivateHeartbeat: false,
          convertImagesToLinks: true,
          noModals: true,
          showLowRepImageUploadWarning: true,
          reputationToPostImages: 10,
          bindNavPrevention: true,
          postfix: "",
          imageUploader:
          brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
          contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
          allowUrls: true
          ,
          noCode: true, onDemand: true,
          discardSelector: ".discard-answer"
          ,immediatelyShowMarkdownHelp:true
          );



          );













          draft saved

          draft discarded


















          StackExchange.ready(
          function ()
          StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmath.stackexchange.com%2fquestions%2f3155292%2fproving-bf-x-times-y-cdot-n-zy-times-z-cdot-n-xz-times-x-cdot-n-y%23new-answer', 'question_page');

          );

          Post as a guest















          Required, but never shown

























          7 Answers
          7






          active

          oldest

          votes








          7 Answers
          7






          active

          oldest

          votes









          active

          oldest

          votes






          active

          oldest

          votes









          10












          $begingroup$

          Here's an observation: If $Q$ is a rotation matrix, then
          $$
          (Qx) times (Qy) = Q(x times y)
          $$



          You have to prove that, of course, but it's not too tough. Similarly,
          $$
          (Qx) cdot (Qy) = x cdot y
          $$

          and, for a scalar $alpha$, we have
          $$
          Q (alpha x) = alpha (Q x)
          $$



          Now suppose that for some vector $v$, we have
          $$
          (mathbfxtimesmathbfy cdot mathbfN) mathbfz + (mathbfytimesmathbfz cdot mathbfN) mathbfx + (mathbfztimesmathbfx cdot mathbfN) mathbfy=mathbfv.
          $$



          Key idea 1: You can apply the rules above to show that for any rotation matrix $Q$, you can apply $Q$ to all the elements on the left to get $Qv$.



          Key idea 2: You can choose $Q$ so that it takes $N$ to the vector $(0,0,1)$, and puts $x, y,$ and $z$ into the plane consisting of vectors of the form $(a, b, 0)$. And in that plane, it's easy to see that you get $0$, so $Qv = 0$. Hence $v = 0$, and you're done.



          In short: by a change of basis, you can assume that $N$ is the vector $(0,0,1)$ and that the other vectors all lie in the $(a, b, 0)$ plane, and things get easy.






          share|cite|improve this answer









          $endgroup$












          • $begingroup$
            Ah so it is a clever approach with orthogonal matrices. I'm familiar with those identities. But I don't see how $Qv=0$. I can't see how we apply anything other than the third identity $Q(alpha x)=alpha (Qx)$.
            $endgroup$
            – AlephNull
            Mar 20 at 11:55











          • $begingroup$
            Oh I see, you're talking about the elements, not the terms. I understand the solution now.
            $endgroup$
            – AlephNull
            Mar 20 at 13:42







          • 2




            $begingroup$
            By the way, there's a general principle at work here, much loved by physicists, but worth remembering even if that's not your domain: "find the right coordinate system for your problem." (The math version is mostly "choose the right basis/generating set/...")
            $endgroup$
            – John Hughes
            Mar 20 at 16:51















          10












          $begingroup$

          Here's an observation: If $Q$ is a rotation matrix, then
          $$
          (Qx) times (Qy) = Q(x times y)
          $$



          You have to prove that, of course, but it's not too tough. Similarly,
          $$
          (Qx) cdot (Qy) = x cdot y
          $$

          and, for a scalar $alpha$, we have
          $$
          Q (alpha x) = alpha (Q x)
          $$



          Now suppose that for some vector $v$, we have
          $$
          (mathbfxtimesmathbfy cdot mathbfN) mathbfz + (mathbfytimesmathbfz cdot mathbfN) mathbfx + (mathbfztimesmathbfx cdot mathbfN) mathbfy=mathbfv.
          $$



          Key idea 1: You can apply the rules above to show that for any rotation matrix $Q$, you can apply $Q$ to all the elements on the left to get $Qv$.



          Key idea 2: You can choose $Q$ so that it takes $N$ to the vector $(0,0,1)$, and puts $x, y,$ and $z$ into the plane consisting of vectors of the form $(a, b, 0)$. And in that plane, it's easy to see that you get $0$, so $Qv = 0$. Hence $v = 0$, and you're done.



          In short: by a change of basis, you can assume that $N$ is the vector $(0,0,1)$ and that the other vectors all lie in the $(a, b, 0)$ plane, and things get easy.






          share|cite|improve this answer









          $endgroup$












          • $begingroup$
            Ah so it is a clever approach with orthogonal matrices. I'm familiar with those identities. But I don't see how $Qv=0$. I can't see how we apply anything other than the third identity $Q(alpha x)=alpha (Qx)$.
            $endgroup$
            – AlephNull
            Mar 20 at 11:55











          • $begingroup$
            Oh I see, you're talking about the elements, not the terms. I understand the solution now.
            $endgroup$
            – AlephNull
            Mar 20 at 13:42







          • 2




            $begingroup$
            By the way, there's a general principle at work here, much loved by physicists, but worth remembering even if that's not your domain: "find the right coordinate system for your problem." (The math version is mostly "choose the right basis/generating set/...")
            $endgroup$
            – John Hughes
            Mar 20 at 16:51













          10












          10








          10





          $begingroup$

          Here's an observation: If $Q$ is a rotation matrix, then
          $$
          (Qx) times (Qy) = Q(x times y)
          $$



          You have to prove that, of course, but it's not too tough. Similarly,
          $$
          (Qx) cdot (Qy) = x cdot y
          $$

          and, for a scalar $alpha$, we have
          $$
          Q (alpha x) = alpha (Q x)
          $$



          Now suppose that for some vector $v$, we have
          $$
          (mathbfxtimesmathbfy cdot mathbfN) mathbfz + (mathbfytimesmathbfz cdot mathbfN) mathbfx + (mathbfztimesmathbfx cdot mathbfN) mathbfy=mathbfv.
          $$



          Key idea 1: You can apply the rules above to show that for any rotation matrix $Q$, you can apply $Q$ to all the elements on the left to get $Qv$.



          Key idea 2: You can choose $Q$ so that it takes $N$ to the vector $(0,0,1)$, and puts $x, y,$ and $z$ into the plane consisting of vectors of the form $(a, b, 0)$. And in that plane, it's easy to see that you get $0$, so $Qv = 0$. Hence $v = 0$, and you're done.



          In short: by a change of basis, you can assume that $N$ is the vector $(0,0,1)$ and that the other vectors all lie in the $(a, b, 0)$ plane, and things get easy.






          share|cite|improve this answer









          $endgroup$



          Here's an observation: If $Q$ is a rotation matrix, then
          $$
          (Qx) times (Qy) = Q(x times y)
          $$



          You have to prove that, of course, but it's not too tough. Similarly,
          $$
          (Qx) cdot (Qy) = x cdot y
          $$

          and, for a scalar $alpha$, we have
          $$
          Q (alpha x) = alpha (Q x)
          $$



          Now suppose that for some vector $v$, we have
          $$
          (mathbfxtimesmathbfy cdot mathbfN) mathbfz + (mathbfytimesmathbfz cdot mathbfN) mathbfx + (mathbfztimesmathbfx cdot mathbfN) mathbfy=mathbfv.
          $$



          Key idea 1: You can apply the rules above to show that for any rotation matrix $Q$, you can apply $Q$ to all the elements on the left to get $Qv$.



          Key idea 2: You can choose $Q$ so that it takes $N$ to the vector $(0,0,1)$, and puts $x, y,$ and $z$ into the plane consisting of vectors of the form $(a, b, 0)$. And in that plane, it's easy to see that you get $0$, so $Qv = 0$. Hence $v = 0$, and you're done.



          In short: by a change of basis, you can assume that $N$ is the vector $(0,0,1)$ and that the other vectors all lie in the $(a, b, 0)$ plane, and things get easy.







          share|cite|improve this answer












          share|cite|improve this answer



          share|cite|improve this answer










          answered Mar 20 at 11:45









          John HughesJohn Hughes

          65.5k24292




          65.5k24292











          • $begingroup$
            Ah so it is a clever approach with orthogonal matrices. I'm familiar with those identities. But I don't see how $Qv=0$. I can't see how we apply anything other than the third identity $Q(alpha x)=alpha (Qx)$.
            $endgroup$
            – AlephNull
            Mar 20 at 11:55











          • $begingroup$
            Oh I see, you're talking about the elements, not the terms. I understand the solution now.
            $endgroup$
            – AlephNull
            Mar 20 at 13:42







          • 2




            $begingroup$
            By the way, there's a general principle at work here, much loved by physicists, but worth remembering even if that's not your domain: "find the right coordinate system for your problem." (The math version is mostly "choose the right basis/generating set/...")
            $endgroup$
            – John Hughes
            Mar 20 at 16:51
















          • $begingroup$
            Ah so it is a clever approach with orthogonal matrices. I'm familiar with those identities. But I don't see how $Qv=0$. I can't see how we apply anything other than the third identity $Q(alpha x)=alpha (Qx)$.
            $endgroup$
            – AlephNull
            Mar 20 at 11:55











          • $begingroup$
            Oh I see, you're talking about the elements, not the terms. I understand the solution now.
            $endgroup$
            – AlephNull
            Mar 20 at 13:42







          • 2




            $begingroup$
            By the way, there's a general principle at work here, much loved by physicists, but worth remembering even if that's not your domain: "find the right coordinate system for your problem." (The math version is mostly "choose the right basis/generating set/...")
            $endgroup$
            – John Hughes
            Mar 20 at 16:51















          $begingroup$
          Ah so it is a clever approach with orthogonal matrices. I'm familiar with those identities. But I don't see how $Qv=0$. I can't see how we apply anything other than the third identity $Q(alpha x)=alpha (Qx)$.
          $endgroup$
          – AlephNull
          Mar 20 at 11:55





          $begingroup$
          Ah so it is a clever approach with orthogonal matrices. I'm familiar with those identities. But I don't see how $Qv=0$. I can't see how we apply anything other than the third identity $Q(alpha x)=alpha (Qx)$.
          $endgroup$
          – AlephNull
          Mar 20 at 11:55













          $begingroup$
          Oh I see, you're talking about the elements, not the terms. I understand the solution now.
          $endgroup$
          – AlephNull
          Mar 20 at 13:42





          $begingroup$
          Oh I see, you're talking about the elements, not the terms. I understand the solution now.
          $endgroup$
          – AlephNull
          Mar 20 at 13:42





          2




          2




          $begingroup$
          By the way, there's a general principle at work here, much loved by physicists, but worth remembering even if that's not your domain: "find the right coordinate system for your problem." (The math version is mostly "choose the right basis/generating set/...")
          $endgroup$
          – John Hughes
          Mar 20 at 16:51




          $begingroup$
          By the way, there's a general principle at work here, much loved by physicists, but worth remembering even if that's not your domain: "find the right coordinate system for your problem." (The math version is mostly "choose the right basis/generating set/...")
          $endgroup$
          – John Hughes
          Mar 20 at 16:51











          11












          $begingroup$

          If what is required is only to prove the validity of the given identity, there is another approach. Observe that if $x$ and $y$ are linearly dependent, i.e. for some $c$, $x=c y$ or $y=c x$, then the identity holds trivially because $wtimes v =-(vtimes w)$ and $v times v=0$ for all $v,w$. Thus, we may assume $x$ and $y$ are linearly independent and hence $z$ is a linear combination of $x$ and $y$, that is, $z=ax+by$ for some $a,b$. Now, since the given identity is linear in each variable and it holds for both $z=x$ and $z=y$, it is also true for $z=ax+by$. This proves the identity. It can be also noted that $N$ being perpendicular to the plane containing $x,y,z$ plays no role in this proof.






          share|cite|improve this answer











          $endgroup$








          • 2




            $begingroup$
            very nice solution!
            $endgroup$
            – John Hughes
            Mar 20 at 16:54










          • $begingroup$
            Indeed, this is very elegant. So my last remark had some significance!
            $endgroup$
            – AlephNull
            Mar 20 at 17:06










          • $begingroup$
            Thank you both :-)
            $endgroup$
            – Song
            Mar 21 at 7:07















          11












          $begingroup$

          If what is required is only to prove the validity of the given identity, there is another approach. Observe that if $x$ and $y$ are linearly dependent, i.e. for some $c$, $x=c y$ or $y=c x$, then the identity holds trivially because $wtimes v =-(vtimes w)$ and $v times v=0$ for all $v,w$. Thus, we may assume $x$ and $y$ are linearly independent and hence $z$ is a linear combination of $x$ and $y$, that is, $z=ax+by$ for some $a,b$. Now, since the given identity is linear in each variable and it holds for both $z=x$ and $z=y$, it is also true for $z=ax+by$. This proves the identity. It can be also noted that $N$ being perpendicular to the plane containing $x,y,z$ plays no role in this proof.






          share|cite|improve this answer











          $endgroup$








          • 2




            $begingroup$
            very nice solution!
            $endgroup$
            – John Hughes
            Mar 20 at 16:54










          • $begingroup$
            Indeed, this is very elegant. So my last remark had some significance!
            $endgroup$
            – AlephNull
            Mar 20 at 17:06










          • $begingroup$
            Thank you both :-)
            $endgroup$
            – Song
            Mar 21 at 7:07













          11












          11








          11





          $begingroup$

          If what is required is only to prove the validity of the given identity, there is another approach. Observe that if $x$ and $y$ are linearly dependent, i.e. for some $c$, $x=c y$ or $y=c x$, then the identity holds trivially because $wtimes v =-(vtimes w)$ and $v times v=0$ for all $v,w$. Thus, we may assume $x$ and $y$ are linearly independent and hence $z$ is a linear combination of $x$ and $y$, that is, $z=ax+by$ for some $a,b$. Now, since the given identity is linear in each variable and it holds for both $z=x$ and $z=y$, it is also true for $z=ax+by$. This proves the identity. It can be also noted that $N$ being perpendicular to the plane containing $x,y,z$ plays no role in this proof.






          share|cite|improve this answer











          $endgroup$



          If what is required is only to prove the validity of the given identity, there is another approach. Observe that if $x$ and $y$ are linearly dependent, i.e. for some $c$, $x=c y$ or $y=c x$, then the identity holds trivially because $wtimes v =-(vtimes w)$ and $v times v=0$ for all $v,w$. Thus, we may assume $x$ and $y$ are linearly independent and hence $z$ is a linear combination of $x$ and $y$, that is, $z=ax+by$ for some $a,b$. Now, since the given identity is linear in each variable and it holds for both $z=x$ and $z=y$, it is also true for $z=ax+by$. This proves the identity. It can be also noted that $N$ being perpendicular to the plane containing $x,y,z$ plays no role in this proof.







          share|cite|improve this answer














          share|cite|improve this answer



          share|cite|improve this answer








          edited Mar 20 at 16:22

























          answered Mar 20 at 16:15









          SongSong

          18.6k21651




          18.6k21651







          • 2




            $begingroup$
            very nice solution!
            $endgroup$
            – John Hughes
            Mar 20 at 16:54










          • $begingroup$
            Indeed, this is very elegant. So my last remark had some significance!
            $endgroup$
            – AlephNull
            Mar 20 at 17:06










          • $begingroup$
            Thank you both :-)
            $endgroup$
            – Song
            Mar 21 at 7:07












          • 2




            $begingroup$
            very nice solution!
            $endgroup$
            – John Hughes
            Mar 20 at 16:54










          • $begingroup$
            Indeed, this is very elegant. So my last remark had some significance!
            $endgroup$
            – AlephNull
            Mar 20 at 17:06










          • $begingroup$
            Thank you both :-)
            $endgroup$
            – Song
            Mar 21 at 7:07







          2




          2




          $begingroup$
          very nice solution!
          $endgroup$
          – John Hughes
          Mar 20 at 16:54




          $begingroup$
          very nice solution!
          $endgroup$
          – John Hughes
          Mar 20 at 16:54












          $begingroup$
          Indeed, this is very elegant. So my last remark had some significance!
          $endgroup$
          – AlephNull
          Mar 20 at 17:06




          $begingroup$
          Indeed, this is very elegant. So my last remark had some significance!
          $endgroup$
          – AlephNull
          Mar 20 at 17:06












          $begingroup$
          Thank you both :-)
          $endgroup$
          – Song
          Mar 21 at 7:07




          $begingroup$
          Thank you both :-)
          $endgroup$
          – Song
          Mar 21 at 7:07











          5












          $begingroup$

          Writing $x=ahati+bhatj,,y=chati+dhatj,,z=ehati+fhatj,,N=Nhatk$ reduces the sum to $$N((ad-bc)(ehati+fhatj)+(cf-de)(ahati+bhatj)+(be-af)(chati+dhatj)).$$The $hati$ coefficient is $N(ade-bce+acf-ade+bce-acf)=0$. The $hatj$ coefficient can be handled similarly.






          share|cite|improve this answer









          $endgroup$












          • $begingroup$
            I accepted a different answer but +1 because I appreciate that this finishes off the solution in that answer.
            $endgroup$
            – AlephNull
            Mar 20 at 13:45
















          5












          $begingroup$

          Writing $x=ahati+bhatj,,y=chati+dhatj,,z=ehati+fhatj,,N=Nhatk$ reduces the sum to $$N((ad-bc)(ehati+fhatj)+(cf-de)(ahati+bhatj)+(be-af)(chati+dhatj)).$$The $hati$ coefficient is $N(ade-bce+acf-ade+bce-acf)=0$. The $hatj$ coefficient can be handled similarly.






          share|cite|improve this answer









          $endgroup$












          • $begingroup$
            I accepted a different answer but +1 because I appreciate that this finishes off the solution in that answer.
            $endgroup$
            – AlephNull
            Mar 20 at 13:45














          5












          5








          5





          $begingroup$

          Writing $x=ahati+bhatj,,y=chati+dhatj,,z=ehati+fhatj,,N=Nhatk$ reduces the sum to $$N((ad-bc)(ehati+fhatj)+(cf-de)(ahati+bhatj)+(be-af)(chati+dhatj)).$$The $hati$ coefficient is $N(ade-bce+acf-ade+bce-acf)=0$. The $hatj$ coefficient can be handled similarly.






          share|cite|improve this answer









          $endgroup$



          Writing $x=ahati+bhatj,,y=chati+dhatj,,z=ehati+fhatj,,N=Nhatk$ reduces the sum to $$N((ad-bc)(ehati+fhatj)+(cf-de)(ahati+bhatj)+(be-af)(chati+dhatj)).$$The $hati$ coefficient is $N(ade-bce+acf-ade+bce-acf)=0$. The $hatj$ coefficient can be handled similarly.







          share|cite|improve this answer












          share|cite|improve this answer



          share|cite|improve this answer










          answered Mar 20 at 12:45









          J.G.J.G.

          34.2k23252




          34.2k23252











          • $begingroup$
            I accepted a different answer but +1 because I appreciate that this finishes off the solution in that answer.
            $endgroup$
            – AlephNull
            Mar 20 at 13:45

















          • $begingroup$
            I accepted a different answer but +1 because I appreciate that this finishes off the solution in that answer.
            $endgroup$
            – AlephNull
            Mar 20 at 13:45
















          $begingroup$
          I accepted a different answer but +1 because I appreciate that this finishes off the solution in that answer.
          $endgroup$
          – AlephNull
          Mar 20 at 13:45





          $begingroup$
          I accepted a different answer but +1 because I appreciate that this finishes off the solution in that answer.
          $endgroup$
          – AlephNull
          Mar 20 at 13:45












          4












          $begingroup$

          Since $bf x, bf y, bf z$ are coplanar, they are linearly dependent. Since the result to be proved is symmetric in $bf x, bf y, bf z$, withouht loss of generality we can write $bf z = lambda bf x + mu bf y$ for some scalars $lambda, mu$.



          Now, $$beginalign
          & (bf y times bf z cdot bf N); bf x \
          = & (bf y times (lambda bf x + mu bf y) cdot bf N); bf x \
          = & (bf y times lambda bf x cdot bf N); bf x \
          = & (bf y times bf x cdot bf N), (lambda bf x)
          endalign$$

          and similarly $$beginalign
          & (bf z times bf x cdot bf N); bf y \
          = & (bf y times bf x cdot bf N), (mu bf y)
          endalign$$

          So $$beginalign
          & (bf y times bf z cdot bf N); bf x +
          (bf z times bf x cdot bf N); bf y \
          = & (bf y times bf x cdot bf N), (lambda bf x + mu bf y)\
          = & -(bf x times bf y cdot bf N);z
          endalign $$

          and the result follows.






          share|cite|improve this answer









          $endgroup$

















            4












            $begingroup$

            Since $bf x, bf y, bf z$ are coplanar, they are linearly dependent. Since the result to be proved is symmetric in $bf x, bf y, bf z$, withouht loss of generality we can write $bf z = lambda bf x + mu bf y$ for some scalars $lambda, mu$.



            Now, $$beginalign
            & (bf y times bf z cdot bf N); bf x \
            = & (bf y times (lambda bf x + mu bf y) cdot bf N); bf x \
            = & (bf y times lambda bf x cdot bf N); bf x \
            = & (bf y times bf x cdot bf N), (lambda bf x)
            endalign$$

            and similarly $$beginalign
            & (bf z times bf x cdot bf N); bf y \
            = & (bf y times bf x cdot bf N), (mu bf y)
            endalign$$

            So $$beginalign
            & (bf y times bf z cdot bf N); bf x +
            (bf z times bf x cdot bf N); bf y \
            = & (bf y times bf x cdot bf N), (lambda bf x + mu bf y)\
            = & -(bf x times bf y cdot bf N);z
            endalign $$

            and the result follows.






            share|cite|improve this answer









            $endgroup$















              4












              4








              4





              $begingroup$

              Since $bf x, bf y, bf z$ are coplanar, they are linearly dependent. Since the result to be proved is symmetric in $bf x, bf y, bf z$, withouht loss of generality we can write $bf z = lambda bf x + mu bf y$ for some scalars $lambda, mu$.



              Now, $$beginalign
              & (bf y times bf z cdot bf N); bf x \
              = & (bf y times (lambda bf x + mu bf y) cdot bf N); bf x \
              = & (bf y times lambda bf x cdot bf N); bf x \
              = & (bf y times bf x cdot bf N), (lambda bf x)
              endalign$$

              and similarly $$beginalign
              & (bf z times bf x cdot bf N); bf y \
              = & (bf y times bf x cdot bf N), (mu bf y)
              endalign$$

              So $$beginalign
              & (bf y times bf z cdot bf N); bf x +
              (bf z times bf x cdot bf N); bf y \
              = & (bf y times bf x cdot bf N), (lambda bf x + mu bf y)\
              = & -(bf x times bf y cdot bf N);z
              endalign $$

              and the result follows.






              share|cite|improve this answer









              $endgroup$



              Since $bf x, bf y, bf z$ are coplanar, they are linearly dependent. Since the result to be proved is symmetric in $bf x, bf y, bf z$, withouht loss of generality we can write $bf z = lambda bf x + mu bf y$ for some scalars $lambda, mu$.



              Now, $$beginalign
              & (bf y times bf z cdot bf N); bf x \
              = & (bf y times (lambda bf x + mu bf y) cdot bf N); bf x \
              = & (bf y times lambda bf x cdot bf N); bf x \
              = & (bf y times bf x cdot bf N), (lambda bf x)
              endalign$$

              and similarly $$beginalign
              & (bf z times bf x cdot bf N); bf y \
              = & (bf y times bf x cdot bf N), (mu bf y)
              endalign$$

              So $$beginalign
              & (bf y times bf z cdot bf N); bf x +
              (bf z times bf x cdot bf N); bf y \
              = & (bf y times bf x cdot bf N), (lambda bf x + mu bf y)\
              = & -(bf x times bf y cdot bf N);z
              endalign $$

              and the result follows.







              share|cite|improve this answer












              share|cite|improve this answer



              share|cite|improve this answer










              answered Mar 20 at 19:43









              alephzeroalephzero

              74037




              74037





















                  2












                  $begingroup$

                  If you know a little about the exterior algebra we can see this almost immediately, and in a way that generalizes substantially.




                  Pick any plane $Pi$ containing $bf x, bf y, bf z$. The map on $Pi$ defined by $$(bf a, bf b, bf c) mapsto [(bf a times bf b) cdot bf N] bf c + [(bf b times bf c) cdot bf N] bf a + [(bf c times bf a) cdot bf N] bf b$$
                  is visibly trilinear and totally skew in its arguments, so it is a (vector-valued) $3$-form on a $2$-dimensional vector space and hence is the zero map.




                  NB this argument doesn't use any properties of $bf N$.






                  share|cite|improve this answer









                  $endgroup$

















                    2












                    $begingroup$

                    If you know a little about the exterior algebra we can see this almost immediately, and in a way that generalizes substantially.




                    Pick any plane $Pi$ containing $bf x, bf y, bf z$. The map on $Pi$ defined by $$(bf a, bf b, bf c) mapsto [(bf a times bf b) cdot bf N] bf c + [(bf b times bf c) cdot bf N] bf a + [(bf c times bf a) cdot bf N] bf b$$
                    is visibly trilinear and totally skew in its arguments, so it is a (vector-valued) $3$-form on a $2$-dimensional vector space and hence is the zero map.




                    NB this argument doesn't use any properties of $bf N$.






                    share|cite|improve this answer









                    $endgroup$















                      2












                      2








                      2





                      $begingroup$

                      If you know a little about the exterior algebra we can see this almost immediately, and in a way that generalizes substantially.




                      Pick any plane $Pi$ containing $bf x, bf y, bf z$. The map on $Pi$ defined by $$(bf a, bf b, bf c) mapsto [(bf a times bf b) cdot bf N] bf c + [(bf b times bf c) cdot bf N] bf a + [(bf c times bf a) cdot bf N] bf b$$
                      is visibly trilinear and totally skew in its arguments, so it is a (vector-valued) $3$-form on a $2$-dimensional vector space and hence is the zero map.




                      NB this argument doesn't use any properties of $bf N$.






                      share|cite|improve this answer









                      $endgroup$



                      If you know a little about the exterior algebra we can see this almost immediately, and in a way that generalizes substantially.




                      Pick any plane $Pi$ containing $bf x, bf y, bf z$. The map on $Pi$ defined by $$(bf a, bf b, bf c) mapsto [(bf a times bf b) cdot bf N] bf c + [(bf b times bf c) cdot bf N] bf a + [(bf c times bf a) cdot bf N] bf b$$
                      is visibly trilinear and totally skew in its arguments, so it is a (vector-valued) $3$-form on a $2$-dimensional vector space and hence is the zero map.




                      NB this argument doesn't use any properties of $bf N$.







                      share|cite|improve this answer












                      share|cite|improve this answer



                      share|cite|improve this answer










                      answered Mar 21 at 5:02









                      TravisTravis

                      64.7k769152




                      64.7k769152





















                          1












                          $begingroup$

                          By the properties of the triple product ( circluar shift) we can rearrange formula:



                          $ (mathbfxtimesmathbfy) cdot mathbfN) mathbfz + (mathbfytimesmathbfz) cdot mathbfN) mathbfx + (mathbfztimesmathbfx) cdot mathbfN) mathbfy \ =(mathbfNtimesmathbfx) cdot mathbfy) mathbfz + (mathbfNtimesmathbfy) cdot mathbfz) mathbfx + (mathbfNtimesmathbfz) cdot mathbfx) mathbfy $



                          All cross product vectors $$v_1=(mathbfNtimesmathbfx),v_2=(mathbfNtimesmathbfy), v_3=(mathbfNtimesmathbfz)$$

                          lie in the plane of coplanar vectors $mathbfx,mathbfy,mathbfz$ and they are vectors $mathbfx,mathbfy,mathbfz$ rotated by $pi/2$ in this plane.



                          So we can limit themselves to this plane and take any vectors with components $mathbfx=[ x_1 x_2]^T,mathbfy=[ y_1 y_2]^T,mathbfz =[ z_1 z_2]^T$.



                          Transform them with the rotation matrix $R=beginbmatrix 0 & -1 \ 1 & 0 endbmatrix$ , calculate appropriate dot products and finally check the formula with these assumed general components.



                          Namely we need to calculate:
                          $$(y^TRx)z+(z^TRy)x+(x^TRz)y$$






                          share|cite|improve this answer











                          $endgroup$

















                            1












                            $begingroup$

                            By the properties of the triple product ( circluar shift) we can rearrange formula:



                            $ (mathbfxtimesmathbfy) cdot mathbfN) mathbfz + (mathbfytimesmathbfz) cdot mathbfN) mathbfx + (mathbfztimesmathbfx) cdot mathbfN) mathbfy \ =(mathbfNtimesmathbfx) cdot mathbfy) mathbfz + (mathbfNtimesmathbfy) cdot mathbfz) mathbfx + (mathbfNtimesmathbfz) cdot mathbfx) mathbfy $



                            All cross product vectors $$v_1=(mathbfNtimesmathbfx),v_2=(mathbfNtimesmathbfy), v_3=(mathbfNtimesmathbfz)$$

                            lie in the plane of coplanar vectors $mathbfx,mathbfy,mathbfz$ and they are vectors $mathbfx,mathbfy,mathbfz$ rotated by $pi/2$ in this plane.



                            So we can limit themselves to this plane and take any vectors with components $mathbfx=[ x_1 x_2]^T,mathbfy=[ y_1 y_2]^T,mathbfz =[ z_1 z_2]^T$.



                            Transform them with the rotation matrix $R=beginbmatrix 0 & -1 \ 1 & 0 endbmatrix$ , calculate appropriate dot products and finally check the formula with these assumed general components.



                            Namely we need to calculate:
                            $$(y^TRx)z+(z^TRy)x+(x^TRz)y$$






Another approach to the problem uses the determinant formula for the scalar triple product:

$$\mathbf{a}\cdot(\mathbf{b}\times\mathbf{c})=\det\begin{bmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{bmatrix}.$$

Then consider the determinant

$$\begin{vmatrix} n_1 & x_1 & y_1 & z_1 \\ n_2 & x_2 & y_2 & z_2 \\ n_3 & x_3 & y_3 & z_3 \\ n_1 & x_1 & y_1 & z_1 \end{vmatrix}$$

whose columns consist of the components of the vectors $\mathbf{N},\mathbf{x},\mathbf{y},\mathbf{z}$ (the fourth row repeats the first one).

Of course such a determinant equals $0$.
Expanding the determinant along the fourth row we obtain

$$-n_1\begin{vmatrix} x_1 & y_1 & z_1 \\ x_2 & y_2 & z_2 \\ x_3 & y_3 & z_3 \end{vmatrix}
+x_1\begin{vmatrix} n_1 & y_1 & z_1 \\ n_2 & y_2 & z_2 \\ n_3 & y_3 & z_3 \end{vmatrix}
-y_1\begin{vmatrix} n_1 & x_1 & z_1 \\ n_2 & x_2 & z_2 \\ n_3 & x_3 & z_3 \end{vmatrix}
+z_1\begin{vmatrix} n_1 & x_1 & y_1 \\ n_2 & x_2 & y_2 \\ n_3 & x_3 & y_3 \end{vmatrix}=0,$$

from which the formula for the first component of the vector given in the question follows:
the first summand is equal to $0$ because the vectors $\mathbf{x},\mathbf{y},\mathbf{z}$ are coplanar, and the columns can be permuted where needed (as in the third summand) to give the appropriate sign in the expression.

Similarly the determinants

$$\begin{vmatrix} n_1 & x_1 & y_1 & z_1 \\ n_2 & x_2 & y_2 & z_2 \\ n_3 & x_3 & y_3 & z_3 \\ n_2 & x_2 & y_2 & z_2 \end{vmatrix}
\quad\text{and}\quad
\begin{vmatrix} n_1 & x_1 & y_1 & z_1 \\ n_2 & x_2 & y_2 & z_2 \\ n_3 & x_3 & y_3 & z_3 \\ n_3 & x_3 & y_3 & z_3 \end{vmatrix}$$

give the second and the third component of the vector in question, each equal to $0$.
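
Both arguments can be cross-checked symbolically. Below is a minimal SymPy sketch (an illustrative addition, not part of either derivation above; all variable names are my own): it builds coplanar vectors as combinations of two spanning vectors, uses their unnormalised cross product as the normal direction, and confirms that the combination from the question simplifies to the zero vector. Since the expression is linear in $\mathbf{N}$, dropping the normalisation factor $1/\lVert\mathbf{u}\times\mathbf{v}\rVert$ does not affect whether it vanishes.

    # Symbolic check of the identity with SymPy (illustrative sketch).
    from sympy import symbols, Matrix, simplify

    a1, a2, b1, b2, c1, c2 = symbols('a1 a2 b1 b2 c1 c2')  # coefficients in the planar basis
    u = Matrix(symbols('u1 u2 u3'))                        # first spanning vector of the plane
    v = Matrix(symbols('v1 v2 v3'))                        # second spanning vector of the plane

    # Coplanar vectors: each is a linear combination of u and v.
    x = a1*u + a2*v
    y = b1*u + b2*v
    z = c1*u + c2*v

    # Normal direction (unnormalised; the missing scalar factor does not affect vanishing).
    N = u.cross(v)

    expr = (x.cross(y).dot(N))*z + (y.cross(z).dot(N))*x + (z.cross(x).dot(N))*y
    print(simplify(expr))                                  # expected: Matrix([[0], [0], [0]])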





