(8) High School Plane Geometry
High school plane geometry contains some
nice, useful math and an excellent first
lesson in mathematical proofs.
(9) Abstract Algebra
In grade school, we learned that
2 + 3 = 5
Abstract algebra calls the +, addition, an
operation. Multiplication is also an
operation.
We learned that for real numbers a and b
a + b = b + a
So, addition is a commutative operation.
It is also associative: For real
numbers a, b, c we have
a + (b + c) = (a + b) + c
Multiplication is also commutative and
associative.
For addition, 0 is the identity element
since for any number a we have
a + 0 = 0 + a = a
And -a is the inverse of a since
a + (-a) = 0
The same goes for the rest of the usual
properties of addition and multiplication
we learned in grade school.
Well, abstract algebra studies operations
on sets more general than the numbers we
have considered so far. So, there are
groups, rings, fields, etc. The
rationals, reals, and complex numbers are
all examples of fields.
E.g., a group is a nonempty set with
just one operation. The operation is
associative and there is an identity
element and inverses.
Some groups are commutative, and some are
not. Some groups are on a set that is
infinite and some are on a set that is
finite.
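These definitions can be made concrete with a tiny example. Here is a sketch in Python (my choice of language, not the author's), brute-force checking that {0, 1, 2, 3, 4} under addition mod 5 is a finite commutative group:

```python
# Brute-force check of the group axioms for Z_5 = {0,1,2,3,4}
# under addition modulo 5.
n = 5
elements = range(n)
op = lambda a, b: (a + b) % n

# Associativity: a op (b op c) == (a op b) op c for all triples.
assert all(op(a, op(b, c)) == op(op(a, b), c)
           for a in elements for b in elements for c in elements)

# Identity: 0 satisfies a op 0 == 0 op a == a.
assert all(op(a, 0) == op(0, a) == a for a in elements)

# Inverses: for each a, (n - a) % n is its inverse.
assert all(op(a, (n - a) % n) == 0 for a in elements)

# Commutativity: this particular group is commutative.
assert all(op(a, b) == op(b, a) for a in elements for b in elements)

print("Z_5 under addition mod 5 is a finite commutative group")
```

The same brute-force style of check works for any small finite operation table.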
Given a common definition of vectors, the
set of all vectors with vector addition
forms a commutative group.
Similarly for the rings, fields, etc. --
that is, we have sets with operations and
properties.
There are some important applications,
e.g., in error correcting coding.
(10) Calculus
Calculus was invented largely by Newton
(and independently by Leibniz), mostly for
understanding the motions of the planets
etc.
The first part of calculus has to do with
derivatives. So, suppose we have an
object that at each time t is at position
t^2. Then the speed of the object at time
t is 2t and the acceleration is 2. The
acceleration is directly proportional to
the force on the object.
We got 2 and 2t from t^2 by taking
calculus derivatives.
We write
d/dt t^2 = 2t
d/dt 2t = 2
So at time t, consider an increment in
time h. Then at time t + h, the object is
at position (t + h)^2. So in time h, the
object has moved from position t^2 to
position (t + h)^2, that is, has moved
distance
(t + h)^2 - t^2
= t^2 + 2th + h^2 - t^2
= 2th + h^2
Since speed is distance divided by time,
the object has moved at speed
(2th + h^2) / h
= 2t + h
Then as h is made small and moves to 0,
the speed moves to just
2t
So for very small h, the speed is
approximately just 2t, and as h moves to 0
the speed is exactly 2t.
So, 2t is the instantaneous speed at
time t.
So, 2t is the calculus derivative of
t^2.
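The limit argument above is easy to watch numerically; a small Python sketch of the difference quotient at, say, t = 3:

```python
# The difference quotient ((t + h)^2 - t^2) / h at t = 3.
# Algebraically it equals 2t + h, so as h shrinks it
# approaches the derivative 2t = 6.
t = 3.0
for h in [1.0, 0.1, 0.01, 0.001]:
    quotient = ((t + h)**2 - t**2) / h
    print(h, quotient)   # near 2t + h: about 7, 6.1, 6.01, 6.001
```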
To reverse differentiation we can do the
second half of calculus, integration:
start with the 2t and get back to t^2.
Calculus is the most important math in
physics.
Later chapters in a calculus book show how
to find various areas, arc lengths, and
surface areas.
Calculus with vectors is crucial for heat
flow, fluid flow, electricity and
magnetism, and much more.
Einstein's E = mc^2 can be derived from
just some thought experiments and the
first parts of calculus. There is an
amazing video at
https://www.youtube.com/watch?v=KZ8G4VKoSpQ
Calculus is a pillar of civilization.
(11) Linear Algebra
This topic starts with systems of linear
equations in several variables, e.g., for
two equations in three variables:
2x + 5y - 2z = 21
3x - 2y + 2z = 12
The system has none, one, or infinitely
many solutions. To see this, we can use
Gauss elimination. To justify that:
multiplying one of the equations by a
non-zero number does not change the set of
solutions. Neither does adding a multiple
of one of the equations to another. So,
with these two tools, we just rewrite the
these two tools, we just rewrite the
equations so that the set of solutions is
obvious.
To save space here,
I will leave the rest of the
details to the Internet.
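Still, a sketch may help: here is Gauss elimination in Python on a small 2 x 2 system (the numbers are made up so the answer comes out clean), using exactly the two row operations just described:

```python
# Solve  2x + 5y = 12
#        3x - 2y = -1   by Gauss elimination (numbers made up).
A = [[2.0, 5.0, 12.0],
     [3.0, -2.0, -1.0]]

# Row operation 1: scale row 0 by a non-zero number so its
# leading entry is 1; this does not change the solutions.
p = A[0][0]
A[0] = [v / p for v in A[0]]

# Row operation 2: subtract a multiple of row 0 from row 1
# to eliminate x from the second equation.
m = A[1][0]
A[1] = [v - m * u for v, u in zip(A[1], A[0])]

# Back-substitute: the second row now reads 0*x + c*y = d.
y = A[1][2] / A[1][1]
x = A[0][2] - A[0][1] * y
print(x, y)   # prints 1.0 2.0
```

The same two row operations, applied systematically, handle any number of equations and unknowns.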
A solution can be regarded as a vector,
e.g.,
(x, y, z)
A function f is linear provided for x
and y in the domain of f and numbers a and
b we have
f(ax + by) = af(x) + bf(y)
It can be claimed that the two pillars of
the analysis part of math are continuity
and linearity.
Linearity is enormously important,
essentially because it is such a powerful
simplification.
The linear equations of linear algebra are
linear in this sense.
Differentiation and integration in
calculus are linear.
Linearity is a key assumption in physics
and engineering, especially signals and
signal processing.
In linear algebra we take the two
equations in three unknowns and define
matrices so that we can write the
equations as
   [ 2  5 -2 ] [ x ]   [ 21 ]
   [ 3 -2  2 ] [ y ] = [ 12 ]
               [ z ]
So, left to right, we have three matrices.
The first one has 2 rows and three columns
and, thus, is a 2 x 3 (read, 2 by 3)
matrix. The second one is 3 x 1. The
third one is 2 x 1.
We define matrix multiplication so that
multiplying the first and second matrix
gives the same results as in the linear
equations:
2x + 5y - 2z = 21
3x - 2y + 2z = 12
In principle we could do all of linear
algebra with just such linear equations
and without matrices, but the effort would
be very clumsy. Matrix notation,
multiplication, etc. are terrific
simplifications.
The 3 x 1 and 2 x 1 matrices are examples
of vectors.
Matrix multiplication is linear.
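Both claims are easy to check numerically; a sketch in plain Python (no linear algebra library, so the block is self-contained):

```python
# The 2 x 3 matrix of coefficients from the equations above,
# multiplied by a 3 x 1 vector, and a check of linearity.
A = [[2, 5, -2],
     [3, -2, 2]]

def matvec(M, v):
    # Each entry of the result is a row of M dotted with v.
    return [sum(m * x for m, x in zip(row, v)) for row in M]

# (x, y, z) = (6, 1, -2) is one solution of the system:
# 2*6 + 5*1 - 2*(-2) = 21 and 3*6 - 2*1 + 2*(-2) = 12.
print(matvec(A, [6, 1, -2]))   # prints [21, 12]

# Linearity: A(a*u + b*w) equals a*A(u) + b*A(w).
u, w, a, b = [1, 0, 2], [0, 3, -1], 4, -2
combo = [a * ui + b * wi for ui, wi in zip(u, w)]
assert matvec(A, combo) == [a * p + b * q
                            for p, q in zip(matvec(A, u), matvec(A, w))]
```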
The known properties of matrices and their
multiplication and addition and of
associated vectors are amazing, profound,
and enormously powerful.
There is a joke, surprisingly close to
true, that all math, once applied, becomes
linear algebra.
(12) Differential Equations
In the subject differential equations,
we are given some equations satisfied by
some function and its derivatives and
asked to find the function.
The classic application was for the motion
of a planet: Newton's laws of motion
specified the derivatives of the position
of the planet as a function of time, and
the goal was to find the function, that
is, predict where the planet would be.
Science and engineering have many
applications of differential equations.
Some of the models for the spread of a
disease are differential equations.
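As a sketch of how such an equation gets solved numerically, here is Euler's method in Python for the simple equation d/dt y = -y with y(0) = 1, whose exact solution is e^(-t); the particular equation and step count are my choices for illustration:

```python
import math

# Euler's method: starting from y(t0) = y0, repeatedly step
# y by h * f(t, y) for n equal steps of size h = (t_end - t0)/n.
def euler(f, y0, t0, t_end, n):
    h = (t_end - t0) / n
    y, t = y0, t0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

# d/dt y = -y with y(0) = 1; the exact answer at t = 1 is e^(-1).
approx = euler(lambda t, y: -y, 1.0, 0.0, 1.0, 1000)
print(approx, math.exp(-1.0))   # the two values agree to about 3 places
```

Shrinking the step size (raising n) makes the approximation as close as we please, which is the same limit idea as in the derivative.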
We have seen that
d/dt t^2 = 2t
Well, we may have
function
f(s, t) = st^2
We can still write
d/dt f(s, t) = d/dt st^2 = 2st
but usually we say that we are taking the
partial derivative of f(s, t) with
respect to t.
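A partial derivative can be checked with a difference quotient, varying t while holding s fixed; a quick Python sketch for f(s, t) = st^2:

```python
# f(s, t) = s * t^2; the partial derivative with respect to t,
# holding s fixed, is 2 * s * t.
f = lambda s, t: s * t**2

def partial_t(f, s, t, h=1e-6):
    # Central difference quotient in t with s held fixed.
    return (f(s, t + h) - f(s, t - h)) / (2 * h)

s, t = 3.0, 2.0
print(partial_t(f, s, t), 2 * s * t)   # both values near 12.0
```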
(13) Mathematical Analysis
This topic is really part of calculus and
has the fine details on the assumptions
that guarantee that calculus works. The
main important assumption is continuity.
The next most important assumption is that
the function has as its domain a closed
interval or a generalization to what is
called a compact set; a closed
interval is, for real numbers a and b,
{x| a <= x <= b}
Also important are infinite sequences and
series that make approximations as close
as we please to something we want. So, in
this way we define sine, cosine, etc.
This topic is also part of calculus but
concentrates on surfaces, volumes, and
maybe the math of flows of heat, water,
etc.
Commonly this part of calculus makes heavy
use of vectors and matrices.
Also covered can be Fourier series and
integrals.
A musical tone, say, from an organ,
has a fundamental frequency, say, 440
cycles per second, and also overtones at
natural number multiples of that
fundamental frequency. Each of the
overtone frequencies works like an axis
of orthogonal coordinates and is a
terrific way to analyze or synthesize such
a tone.
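A sketch of that analysis in Python: synthesize a tone from a 440 cycle per second fundamental plus one overtone, then recover each amplitude by an inner product with a sinusoid at that frequency; this works because the different harmonics are orthogonal. The amplitudes and sample rate here are made up for illustration.

```python
import math

rate = 44100      # samples per second (a made-up choice)
n = rate          # one second of samples
f0 = 440          # fundamental frequency, cycles per second

# Tone: amplitude 1.0 at 440 plus amplitude 0.5 at the first
# overtone, 880; both amplitudes are invented for illustration.
signal = [1.0 * math.sin(2 * math.pi * f0 * k / rate)
          + 0.5 * math.sin(2 * math.pi * 2 * f0 * k / rate)
          for k in range(n)]

def coefficient(signal, freq):
    # Inner product with a sinusoid at freq; the factor 2/n
    # normalizes so the result is that harmonic's amplitude.
    return (2 / n) * sum(x * math.sin(2 * math.pi * freq * k / rate)
                         for k, x in enumerate(signal))

print(coefficient(signal, 440))    # near 1.0
print(coefficient(signal, 880))    # near 0.5
print(coefficient(signal, 1320))   # near 0.0: that overtone is absent
```

Each inner product reads off one coordinate along one of the orthogonal axes, which is exactly the Fourier series idea.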
The Fourier transform is closely related
and has an orthogonal coordinate axis for
each real number.
Engineering is awash in linear systems
that do not change their properties over
time -- time invariant linear systems.
Such systems merely adjust some Fourier
coefficients. This is a powerful result.
(15) Measure Theory
This topic develops integral calculus with
a lot more generality.
(16) Probability Theory
In probability we imagine that we do
some experimental trials and observe the
results. We have a sample space, our
set of trials.
We regard the set of all trials like a
region with area, and we say that the area
is 1.
Maybe we flip a coin and get Heads. The
set of all trials that yield Heads is an
event, the event Heads.
The event Heads has area, probability
P(Heads)
With a fair coin we have
P(Heads) = 1/2
A random variable X is some outcome of a
trial. Usually X is a number. In this
case we can write
P(X >= 2)
for the probability that random variable X
is >= 2.
Events A and B are independent provided
P(A and B) = P(A)P(B)
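Independence can be checked by exact enumeration; a Python sketch with two dice (the particular events are my own choices):

```python
from fractions import Fraction

# Sample space: all 36 equally likely outcomes of rolling two dice.
outcomes = [(i, j) for i in range(1, 7) for j in range(1, 7)]

def prob(event):
    # Probability = (favorable outcomes) / 36, kept exact.
    return Fraction(sum(1 for o in outcomes if event(o)), len(outcomes))

A = lambda o: o[0] % 2 == 0      # first die is even
B = lambda o: o[1] >= 5          # second die is 5 or 6

# A and B depend on different dice, so they are independent:
assert prob(lambda o: A(o) and B(o)) == prob(A) * prob(B)
print(prob(A), prob(B), prob(lambda o: A(o) and B(o)))   # 1/2 1/3 1/6
```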
In probability we can have algebraic
expressions with random variables, have a
distance between random variables, have
a sequence of random variables converge
to a random variable, etc.
Given a real valued random variable X, for
real number x we can let
F_X(x) = P(X <= x)
Then F_X is the cumulative distribution
of X, and
f_X(x) = d/dx F_X(x)
is the probability density of X.
One of the major results is the central
limit theorem that shows a distribution
converging to the Gaussian bell curve.
Two more results are the weak and strong
laws of large numbers that show
convergence to average values (means,
expectations).
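The law of large numbers can be watched by simulation; a Python sketch with a fair coin (the seed and sample size are arbitrary choices, fixed so the run is reproducible):

```python
import random

random.seed(0)   # fixed seed so the run is reproducible

# Flip a fair coin n times; the fraction of Heads should be
# close to P(Heads) = 1/2, and closer as n grows.
n = 100_000
heads = sum(random.random() < 0.5 for _ in range(n))
estimate = heads / n
print(estimate)   # close to 0.5
```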
One more result, amazing, is the
martingale convergence theorem.
The Radon-Nikodym theorem of measure
theory provides an amazing general theory
of conditional expectation which is one
way to look at the value of information.
In statistics we are given the values of
some random variables and try to infer
the values of something related.
The more serious work in probability and
statistics is done on the foundation of
measure theory.
(17) Optimization
We have a furniture factory and a new
shipment of wood. We have some freedom in
what we do with this wood. An
optimization problem is how to use the
freedom to make the greatest revenue from
the wood.
More generally we have some work to do
with some freedom in how we do the work,
and an optimization problem is how best to
exploit the freedom to get the best
outcome, e.g., the most profit, from the
work.
If the profit is linear and if the work
and freedom are described by linear
equations, then we have the topic of
linear programming.
Linear programming has played a role in
the Nobel prizes in economics.
The main means of solving linear
programming problems is the simplex
algorithm which is a not very large
modification of Gauss elimination.
In practice the simplex algorithm is
shockingly fast; in theory its worst case
performance is exponential.
In the furniture making, we can't sell a
fraction, say, 1/3rd, of a dining room
chair. So, we want our solution to
consist of integers. The simplex
algorithm does not promise us integers.
So we have a problem in integer linear
programming.
In practice from the simplex algorithm we
may get to make 601 + 1/3rd dining room
chairs and round that to 601 or just 600.
No biggie.
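At toy scale, brute-force search over all integer plans already illustrates integer programming; a Python sketch of a furniture problem with made-up numbers:

```python
# Toy integer program, all numbers invented for illustration:
# maximize revenue 40*chairs + 90*tables
# subject to wood:  5*chairs + 20*tables <= 400 board feet
#           labor:  3*chairs +  4*tables <= 120 hours
# with chairs and tables nonnegative integers.
best = (0, 0, 0)   # (revenue, chairs, tables)
for chairs in range(0, 81):       # 5*chairs <= 400 caps chairs at 80
    for tables in range(0, 21):   # 20*tables <= 400 caps tables at 20
        if (5 * chairs + 20 * tables <= 400
                and 3 * chairs + 4 * tables <= 120):
            revenue = 40 * chairs + 90 * tables
            if revenue > best[0]:
                best = (revenue, chairs, tables)
print(best)   # prints (2150, 20, 15)
```

Brute force blows up exponentially as the number of variables grows, which is why the serious methods (simplex plus branching, relaxations, etc.) matter.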
If we insist on integers, then,
surprisingly, even shockingly, we have one
of the early examples of an NP-complete
problem.
In practice there are still ways to
proceed; often we can get the optimal
integer solutions we want easily;
sometimes such solutions are challenging;
sometimes we have to settle for
approximations; often the approximations
can be quite good, like forgetting about
1/3rd of a chair; in theory the ways have
worst case performance that is exponential
and nearly useless.
One integer linear programming problem I
attacked has 600,000 variables and 40,000
constraints. I wrote some software that
in 600 seconds on a slow computer found an
integer solution within 0.025% of
optimality. The solution made heavy use
of the simplex algorithm, non-linear
duality theory, and something called
Lagrangian relaxation.
In addition to optimization, the simplex
algorithm and related math make nice
contributions to geometry, e.g., proving
theorems of the alternative used in
proving the famous Kuhn-Tucker conditions
necessary for optimality.
Since it is possible that the work we are
trying to optimize is complicated, its
mathematical description may also be
complicated and the optimization effort,
difficult.
Optimization problems are common in animal
feeding, cracking crude oil,
transportation problems, and much more.
Game theory can be regarded as a special
case. One proof of the famous Nash result
(featured in the movie about Nash, A
Beautiful Mind) is via linear
programming.
Some problems are to find curves, say, how
to climb, cruise, and descend an airplane
to minimize fuel burned or the best curve
for a missile to attack an enemy fighter
jet. Newton found the brachistochrone
curve solution to the problem of the
frictionless curve that would let a bead
slide down in minimum time.
Relevant fields are calculus of variations
and deterministic optimal control. One
famous result is the Pontryagin maximum
principle.
Some problems have uncertainty; so the
optimization may be to find the solution
with the best expected outcome. For
such problems there is the field of
stochastic optimal control.