Practical application of matrices and determinants
I have recently learned about matrices and determinants, and also about their geometric interpretations, i.e., how a matrix represents a linear transformation and how the determinant tells us about changes of area/volume.
My school textbooks tell me that matrices and determinants can be used to solve systems of equations, but I feel that such a broad concept must have more practical applications. My question is: in what various ways are matrices and determinants employed in science or everyday life?
asked Mar 17 at 14:57 by Vaishakh Sreekanth Menon, edited Mar 18 at 8:44 by YuiTo Cheng
Tags: matrices, soft-question, determinant, applications
Matrices are used a lot in machine learning. – Bladewood, Mar 17 at 19:00
With some exaggeration, all of applied mathematics boils down to solving systems of linear equations. – Rodrigo de Azevedo, Mar 17 at 19:19
Solving systems of equations is extremely practical. Every time someone solves a differential equation using the finite element method, or runs a linear regression, or solves an optimization problem using Newton's method, a system of linear equations is solved. There is hardly any engineering or applied math project that doesn't require solving a system of linear equations. – Sasho Nikolov, Mar 17 at 19:37
Matrices are important to computer graphics, but not determinants. – immibis, Mar 17 at 22:47
See this intuitive motivation for matrices. – user21820, Mar 18 at 7:51
8 Answers
My first rough understanding of matrices is that they offer an elegant way to organize and manipulate data. A classical and very concrete example is a discrete Markov chain (don't be frightened by the name). Say you are given the following information: if today is rainy, then tomorrow has a 0.9 probability of being rainy; if today is sunny, then tomorrow has a 0.5 probability of being rainy. You can organize these data into a matrix:
$$A=\begin{pmatrix} 0.9 & 0.5 \\ 0.1 & 0.5 \end{pmatrix}$$
Now compute $A^2=\begin{pmatrix} 0.86 & 0.7 \\ 0.14 & 0.3 \end{pmatrix}$. What do you get? The entry 0.86 is the probability that if today is rainy then the day after tomorrow is still rainy, and 0.7 is the probability that if today is sunny then the day after tomorrow is rainy. And this pattern holds for $A^n$ for arbitrary $n$.
That's the simple point: matrices are a way to calculate elegantly. In my understanding, this aligns with the spirit of mathematics: math arises when people try to solve practical problems, and they find that good definitions and good notation make things a lot easier. The matrix is exactly such a notation.
answered Mar 17 at 15:55 by J. Wang
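To see this numerically, here is a minimal numpy sketch of the weather chain above (illustrative only; the matrix is the one defined in this answer):

```python
import numpy as np

# Columns are "today" (rainy, sunny); rows are "tomorrow" (rainy, sunny).
A = np.array([[0.9, 0.5],
              [0.1, 0.5]])

print(np.linalg.matrix_power(A, 2))   # two-day transitions: [[0.86, 0.70], [0.14, 0.30]]
print(np.linalg.matrix_power(A, 30))  # long run: every column approaches [5/6, 1/6]
```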
Matrices are used widely in computer graphics. If you have the coordinates of an object in 3D space, then scaling, stretching, and rotating the object can all be done by treating the coordinates as vectors and multiplying them by the appropriate matrix. When you want to display that object on screen, the projection down to a 2D image is also a matrix multiplication.
answered Mar 17 at 19:02 by David Richerby
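A small numpy sketch of the idea, using a hypothetical 45° rotation, a stretch, and an orthographic projection (real graphics pipelines use 4×4 homogeneous matrices, but the principle is the same):

```python
import numpy as np

theta = np.pi / 4                          # rotate 45 degrees about the z-axis
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
S = np.diag([2.0, 1.0, 1.0])               # stretch x by a factor of 2
P = np.array([[1.0, 0.0, 0.0],             # orthographic projection onto the xy-plane
              [0.0, 1.0, 0.0]])

points = np.array([[1.0, 0.0, 0.0],        # object vertices, one per row
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0]])
screen = (P @ R @ S @ points.T).T          # stretch, rotate, then project: one matrix chain
print(screen)                              # 2D screen coordinates
```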
Determinants are of great theoretical significance in mathematics, since in general "the determinant of something $= 0$" means something very special is going on, which may be either good news or bad news depending on the situation.
On the other hand, determinants have very little practical use in numerical calculations, since evaluating a determinant of order $n$ "from first principles" involves $n!$ operations, which is prohibitively expensive unless $n$ is very small. Even Cramer's rule, which is often taught in an introductory course on determinants and matrices, is not the cheapest way to solve $n$ linear equations in $n$ variables numerically when $n>2$, which is a pretty serious limitation!
Also, if the typical magnitude of each entry of a matrix of order $n$ is $a$, the determinant is likely to be of magnitude $a^n$, and for large $n$ (say $n > 1000$) that number will usually be too large or too small for efficient computer arithmetic, unless $|a|$ is very close to $1$.
On the other hand, almost every type of numerical calculation involves the same techniques that are used to solve equations, so the practical applications of matrices are more or less "the whole of applied mathematics, science, and engineering". Most applications involve systems of equations that are much too big to create and solve by hand, so it is hard to give realistic simple examples. In real-world numerical applications, a set of $n$ linear equations in $n$ variables would still be "small" from a practical point of view at $n = 100{,}000$, and even $n = 1{,}000{,}000$ is not usually big enough to cause any real problems; the solution would take only a few seconds on a typical personal computer.
answered Mar 17 at 19:19 by alephzero
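To illustrate the overflow point with a deliberately artificial matrix, and to show that solving a system never requires forming the determinant (a sketch):

```python
import numpy as np

n = 1100
A = 2.0 * np.eye(n)                # every entry is modest, but det(A) = 2**1100
print(np.linalg.det(A))            # inf: overflows double precision
sign, logdet = np.linalg.slogdet(A)
print(sign, logdet)                # 1.0 and n*log(2): the usable form in practice

b = np.ones(n)
x = np.linalg.solve(A, b)          # solving Ax = b works fine without any determinant
print(np.allclose(A @ x, b))       # True
```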
Why "even Cramer's rule"? That rule is so obviously inefficient that it's hardly worth mentioning, as every introductory course covers Gaussian elimination, which is clearly much more efficient. – Servaes, Mar 17 at 20:19
Whilst it doesn't make it more efficient, the determinant calculations in Cramer's rule can be done using Gaussian elimination, which means it's at least in the same complexity class, surely? – jacob1729, Mar 17 at 23:10
Here's an application in calculus. The multivariate generalisation of integration by substitution, viz. $x=f(y)\implies dx=f'(y)\,dy$, uses the determinant of a matrix called the Jacobian in place of the $f'$ factor. In particular, the chain rule $dx_i=\sum_j J_{ij}\,dy_j$, $J_{ij}:=\frac{\partial x_i}{\partial y_j}$, for $n$-dimensional vectors $\vec{x},\,\vec{y}$ can be summarised as $d\vec{x}=J\,d\vec{y}$. Then $d^n\vec{x}=|\det J|\,d^n\vec{y}$.
answered Mar 17 at 15:27 by J.G.
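For example, the familiar factor $r$ in $dx\,dy = r\,dr\,d\theta$ for polar coordinates is exactly such a Jacobian determinant; a small sympy check (illustrative sketch):

```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
x = r * sp.cos(theta)
y = r * sp.sin(theta)

J = sp.Matrix([x, y]).jacobian([r, theta])   # matrix of partial derivatives
print(sp.simplify(J.det()))                  # r  ->  dx dy = r dr dtheta
```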
There are plenty of applications of determinants, but I will mention just one from optimization. A totally unimodular matrix is a matrix (not necessarily square) in which every square submatrix has determinant $0$, $1$, or $-1$. It turns out (by Cramer's rule) that if the constraint matrix $A$ of a linear program $\max\{c^\top x : Ax \leq b,\ x \in \mathbb{R}^n_+\}$ is totally unimodular and $b$ is integral, the program is guaranteed to have an integer optimal solution whenever an optimal solution exists. In other words, the polyhedron $P = \{x : Ax \leq b,\ x \geq 0\}$ has integer vertices in $\mathbb{R}^n$. This has major implications in integer programming, since an integer program with a totally unimodular constraint matrix can be solved as a linear program. This is advantageous because a linear program can be solved in polynomial time, whereas no polynomial-time algorithm is known for general integer programs.
answered Mar 17 at 15:36 by JBL
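As a small illustration (a hypothetical 2-worker, 2-job assignment problem, whose incidence constraints are totally unimodular), the plain LP relaxation already returns a 0/1 vertex; a sketch using scipy:

```python
import numpy as np
from scipy.optimize import linprog

# Variables are x00, x01, x10, x11 in [0, 1] (worker i assigned to job j).
A_ub = np.array([[1, 1, 0, 0],   # worker 0 takes at most one job
                 [0, 0, 1, 1],   # worker 1 takes at most one job
                 [1, 0, 1, 0],   # job 0 filled at most once
                 [0, 1, 0, 1]])  # job 1 filled at most once
b_ub = np.ones(4)
values = np.array([3.0, 1.0, 1.0, 2.0])          # assignment values to maximize

res = linprog(-values, A_ub=A_ub, b_ub=b_ub, bounds=(0, 1))
print(res.x)   # [1. 0. 0. 1.]: an integral vertex, no rounding or branch-and-bound needed
```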
Besides the applications already mentioned in the previous answers, consider that matrices are the fundamental basis of finite element analysis, today widely used in every sector of engineering.
Actually, a truss is a physical representation of a matrix: if its stiffness matrix has a null determinant, it means that there can be displacements without external forces, i.e. the truss will collapse.
Also, in the continuum analysis of the deformation of bodies, stress and strain are each represented by matrices (tensors).
The inertia of a body with respect to rotation is a matrix (the inertia tensor).
An electric network is described by a matrix relating voltages and currents, and a null determinant denotes a short somewhere.
And so on ...
answered Mar 17 at 20:18 by G Cab, edited Mar 18 at 16:07
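A tiny sketch of the stiffness-matrix remark, using two springs in series with made-up stiffness values: without any support the stiffness matrix is singular (a rigid-body motion is possible); fixing one node makes it invertible:

```python
import numpy as np

k1, k2 = 100.0, 200.0                 # hypothetical spring stiffnesses
# Global stiffness matrix for nodes 0-1-2 joined by the two springs, no supports.
K = np.array([[ k1,     -k1,      0.0],
              [-k1,  k1 + k2,    -k2],
              [ 0.0,    -k2,      k2]])

print(np.linalg.det(K))               # ~0: the unsupported chain can translate freely
K_fixed = K[1:, 1:]                   # fix node 0: delete its row and column
print(np.linalg.det(K_fixed))         # k1*k2 = 20000: the supported structure is stable
```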
If the determinant of the coefficient matrix is zero, then the system of equations represented by an $n\times n$ matrix set equal to an $n\times 1$ column has no unique solution: it has either none or infinitely many. If the determinant is non-zero, there is exactly one solution, and it can be found using Cramer's Rule. Matrices are also used in Photoshop for various visual tricks; they are used to cast 3D shapes onto a 2D surface; they are used to analyze seismic waves... and in a hundred other applications where data need to be crunched in a simple manner.
answered Mar 17 at 15:20 by poetasis
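A minimal sketch of this check and of Cramer's Rule on a made-up $2\times 2$ system, compared against a library solver:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])

det_A = np.linalg.det(A)
assert abs(det_A) > 1e-12, "zero determinant: no unique solution"

# Cramer's Rule: x_i = det(A with column i replaced by b) / det(A)
x = np.array([np.linalg.det(np.column_stack([b if j == i else A[:, j] for j in range(2)])) / det_A
              for i in range(2)])
print(x)                      # [1. 3.]
print(np.linalg.solve(A, b))  # same answer, and the practical choice for larger systems
```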
In system theory:
- Systems can be represented by matrices, with the columns describing the internal state of the system.
- If the determinant of such a matrix is zero, then one of the states associated with the dynamics is redundant (the states are not independent).
- From certain special matrix operations we arrive at the relative gain array (RGA), which tells us how strongly the inputs and outputs of a system interact with one another (see the sketch below).
These are just a few examples; there are many more.
answered Mar 18 at 10:33 by Raaja
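A short sketch of the RGA computation for a square, invertible steady-state gain matrix $G$ (the numbers below are just an example): the RGA is the elementwise product $G \circ (G^{-1})^{\mathsf T}$.

```python
import numpy as np

G = np.array([[12.8, -18.9],     # example 2x2 steady-state gain matrix
              [ 6.6, -19.4]])

rga = G * np.linalg.inv(G).T     # elementwise product G o (G^{-1})^T
print(rga)                       # pairing guidance for control loops
print(rga.sum(axis=0), rga.sum(axis=1))   # each row and column sums to 1
```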