A Euclidean space is a vector space, but with a metric defined over it - to be more precise, it's a vector space with some additional properties. If you just want the short version, jump to the summary below.
But What Does It All Mean?!
Let's talk about what a vector space actually is.
A vector space is a collection of objects called vectors that satisfies certain rules.
In high school physics, you may have been taught that a vector is an 'arrow' with length and direction - and while this is correct, they are not the only type of vectors out there.
In fact, any object is a vector if it satisfies the following definitions:
- Vector addition: The sum of any two vectors produces a third vector, i.e. $\mathbf{u} + \mathbf{v} = \mathbf{w}$, where $\mathbf{u}$, $\mathbf{v}$ and $\mathbf{w}$ are all vectors. The sum is never allowed to produce a non-vector.
- Scalar multiplication: Multiplying a vector with a scalar (anything that's not a vector) creates a new vector. e.g. multiplying a vector $\mathbf{v}$ by 5 creates a new vector $5\mathbf{v}$.
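If it helps to see the two operations concretely, here is a minimal Python sketch of my own (the tuple representation and the add/scale names are just illustrative choices, not anything standard):

```python
# A minimal sketch: vectors as Python tuples, with the two operations
# defined component-wise.

def add(u, v):
    """Vector addition: add corresponding components."""
    return tuple(a + b for a, b in zip(u, v))

def scale(c, v):
    """Scalar multiplication: multiply every component by the scalar c."""
    return tuple(c * a for a in v)

print(add((1, 2), (3, 4)))   # (4, 6)
print(scale(5, (1, 2)))      # (5, 10)
```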
Vector addition and scalar multiplication: a vector v (blue) is added to another vector w (red, upper illustration). Below, w is stretched by a factor of 2, yielding the sum v + 2w. This picture and caption are stolen from Wikipedia.
So that's what a vector is. I mentioned, however, that a collection of vectors has to have some associated rules before it can be considered a vector space.
Here are the rules:
- Associativity/Commutativity: The order of addition for any number of vectors in the collection doesn't matter at all in their sum. In other words, $\mathbf{u} + \mathbf{v} = \mathbf{v} + \mathbf{u}$ and $(\mathbf{u} + \mathbf{v}) + \mathbf{w} = \mathbf{u} + (\mathbf{v} + \mathbf{w})$.
- Existence of a zero vector: There is at least one vector in the collection that, when added to any other vector in the collection, doesn't change the value of the second vector. This is not unlike what the number zero does: $x + 0 = x$ for all numbers $x$. Essentially, we require the existence of a vector analogous to zero, called (fittingly) the zero vector $\mathbf{0}$.
- Existence of an inverse: For every vector, there is another vector (called the inverse vector) which, upon addition, results in the zero vector. In other words, for every vector $\mathbf{v}$ there is a $-\mathbf{v}$ such that $\mathbf{v} + (-\mathbf{v}) = \mathbf{0}$.
- Distributivity: Finally, multiplying a scalar over a sum of vectors is the same as multiplying each vector with the scalar and adding them up. In other words, $a(\mathbf{u} + \mathbf{v}) = a\mathbf{u} + a\mathbf{v}$, where $a$ is a scalar and the rest are not.
If you're thoroughly confused, here is an example.
Let's take three objects we define as $\mathbf{a} = (1, 2)$, $\mathbf{b} = (-1, -2)$ and $\mathbf{0} = (0, 0)$. We say they are vectors because they obey the following rule of addition: to add two vectors, we just add up each component of a vector. For example: $(1, 2) + (-1, -2) = (1 + (-1), 2 + (-2)) = (0, 0)$. Multiplying with a scalar always results in a new vector - for example, $3 \times (1, 2) = (3, 6)$.
Now we ask: given these rules, do our three vectors - $\mathbf{a}$, $\mathbf{b}$ and $\mathbf{0}$ - form a vector space?
- $\mathbf{a} + \mathbf{b} = \mathbf{b} + \mathbf{a} = (0, 0)$ no matter which order you add them up in, so this checks out.
- The vector $(0, 0)$ certainly acts like a zero vector. For instance, $(1, 2) + (0, 0) = (1, 2)$ and $(-1, -2) + (0, 0) = (-1, -2)$.
- Is there an inverse for every element? Yes: $(1, 2)$ is the inverse of $(-1, -2)$ (and vice versa) because adding them up gives our zero vector $(0, 0)$. The zero vector is its own inverse.
- The way we've defined multiplication and addition automatically makes the fourth requirement (distributivity) true.
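As a quick sanity check, here is a minimal Python sketch of my own (the add and scale helpers are illustrative names, not from any library) that verifies the four rules for the example vectors:

```python
# A quick, illustrative check of the four vector-space rules for the
# example vectors above.

def add(u, v):
    return tuple(a + b for a, b in zip(u, v))

def scale(c, v):
    return tuple(c * a for a in v)

a, b, zero = (1, 2), (-1, -2), (0, 0)

assert add(a, b) == add(b, a)                                 # order of addition doesn't matter
assert add(a, zero) == a and add(b, zero) == b                # the zero vector changes nothing
assert add(a, b) == zero                                      # a and b are each other's inverses
assert scale(3, add(a, b)) == add(scale(3, a), scale(3, b))   # distributivity
print("All vector-space checks passed.")
```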
The Building Blocks of Complexity
The example that I've given you defines the simplest possible vector space, and is therefore one of the least interesting.
More interesting vector spaces go ahead and add other requirements:
- An inner product space is a vector space that also requires a way to take two vectors and return a scalar. In other words, the vector space must have an inner product defined.
In our example, we could define a dot product of two vectors as $\mathbf{u} \cdot \mathbf{v} = u_1 v_1 + u_2 v_2$ and turn our vector space into an inner product space: $(1, 2) \cdot (-1, -2) = (1)(-1) + (2)(-2) = -5$.
This is not the only possible definition of a dot product - it can be any rule that takes two vectors and returns a scalar (subject to a few consistency requirements) - but it is the most common.
- A normed vector space is an inner product space that also requires that there be a way to compute the 'length' (or norm) of a vector.
A very common way to define the length of a vector is to take the positive square root of its dot product with itself - in other words, the norm of a vector is defined as $\|\mathbf{v}\| = \sqrt{\mathbf{v} \cdot \mathbf{v}}$.
In our example, the norm of $(1, 2)$ is, by this definition, $\sqrt{(1)(1) + (2)(2)} = \sqrt{5}$.
- A metric space goes one step further and asks that, in addition to having a length and an inner product, a vector space should also have a way to compute the distance between vectors, or a metric.
The most famous metric in history is of course the Euclidean metric: $d(\mathbf{u}, \mathbf{v}) = \|\mathbf{u} - \mathbf{v}\| = \sqrt{(u_1 - v_1)^2 + (u_2 - v_2)^2}$. (All three of these operations are worked out in the short code sketch below.)
Unsurprisingly, a Euclidean space is a metric space where the metric is the Euclidean metric.
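To tie the three definitions together, here is a small Python sketch of my own (the names dot, norm and euclidean_distance are illustrative, not from a library) that computes them for the example vectors used above:

```python
# Illustrative sketch: the dot product, the norm it induces, and the
# Euclidean metric, written out for tuple vectors.
import math

def dot(u, v):
    """Inner product: the sum of component-wise products."""
    return sum(a * b for a, b in zip(u, v))

def norm(v):
    """Norm: the positive square root of a vector's dot product with itself."""
    return math.sqrt(dot(v, v))

def euclidean_distance(u, v):
    """Euclidean metric: the norm of the difference of the two vectors."""
    return norm(tuple(a - b for a, b in zip(u, v)))

print(dot((1, 2), (-1, -2)))                 # -5
print(norm((1, 2)))                           # 2.236..., i.e. sqrt(5)
print(euclidean_distance((1, 2), (-1, -2)))   # 4.472..., i.e. sqrt(20)
```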
Summary
So there you have it. A metric space is a vector space with:
- a way to multiply two vectors to obtain a scalar (called the inner product),
- a way to define the length of a vector (called the norm), and
- a way to compute the distance between two vectors (called the metric).
NOTE
Let's look at the big picture: how each space builds on simpler ones, leading all the way up to Hilbert spaces. I don't have a deep knowledge of functional analysis, but I'll give it a try.
So let's start with metric spaces.
Metric spaces / Euclidean space: obeys all the properties of a metric (see Metric (mathematics)).
This space has a distance defined over it, so we can measure distances and ask whether a sequence of points converges to some point 'b' (say). Put simply, all we can really do in this space is decide whether one point converges to another; there is no notion of addition or anything like it. One more thing: these spaces are perfectly capable of handling functions. What I mean is that each point in the metric space can itself be a function - for example, a sequence of polynomials can converge to a linear function - as long as we define the distances between functions so that the metric properties are preserved.
Note: a Euclidean space has a finite number of dimensions, while a Hilbert space can have an infinite number of dimensions. Euclidean spaces are used extensively to solve problems of non-relativistic classical mechanics.
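To make "each point can be a function" concrete, here is a small illustrative Python sketch of my own: it takes the distance between two functions to be their largest gap on [0, 1] (approximated on a grid), and shows a sequence of polynomials getting arbitrarily close to the linear function f(x) = x.

```python
# Illustrative sketch: points in a metric space can themselves be functions.
# The "distance" between two functions is taken to be the largest gap
# between them on [0, 1], approximated on a grid.

def distance(f, g, n_points=1001):
    xs = [i / (n_points - 1) for i in range(n_points)]
    return max(abs(f(x) - g(x)) for x in xs)

def linear(x):
    return x

for n in (1, 10, 100, 1000):
    poly = lambda x, n=n: x + x**2 / n    # a polynomial that flattens toward x
    print(n, distance(poly, linear))       # the gap shrinks: 1.0, 0.1, 0.01, 0.001
```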
Complete metric spaces: we can build on metric spaces and define a notion of completeness. Put simply, whenever a sequence of points bunches up as if it were converging (a Cauchy sequence), its limit point is also in the space - so every such sequence, including its limit, remains in the space. What this really means is that there are no holes in the space.
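A classic way to see such a "hole", sketched below in Python (my own illustration): inside the rational numbers, Newton's iteration for the square root of 2 produces rationals that bunch up ever more tightly (a Cauchy sequence), yet the limit, sqrt(2), is not rational - so the rationals are not complete, and the real numbers are the completion that fills the hole.

```python
# Illustrative sketch: a "hole" in an incomplete space. Every iterate below
# is an exact rational number, and the iterates crowd together, but their
# limit sqrt(2) is not rational - it has fallen out of the space.
from fractions import Fraction

x = Fraction(1)                  # start with the rational number 1
for _ in range(5):
    x = (x + 2 / x) / 2          # Newton step; the result stays rational
    print(x, float(x))           # approaches 1.41421356..., which is irrational
```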
Vector spaces: here we define a space independently, one which gives us the ability to add vectors and to multiply a vector by a scalar. Note that this space has no notion of convergence. Here we define addition, multiplication by a scalar, linear independence, subspaces, the dimension of a subspace, and rank. Note that we don't have any notion of a dot product here; that is simply how the structure is defined, and it is leaner than the linear algebra we normally use, which really lives in a Hilbert space (we are getting there).
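As a small illustration of the purely algebraic bookkeeping a vector space supports (a sketch of my own, assuming NumPy is available - no distance is needed to *define* linear independence or rank, even though NumPy computes the rank numerically):

```python
# Illustrative sketch: addition, scalar multiplication, and rank / linear
# independence - notions that make sense in a plain vector space.
import numpy as np

v1 = np.array([1.0, 0.0, 2.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = v1 + 2 * v2                      # deliberately a linear combination of v1, v2

A = np.stack([v1, v2, v3])
print(np.linalg.matrix_rank(A))       # 2 -> v3 is linearly dependent on v1 and v2
print(3 * v1 - v2)                    # addition and scalar multiplication in action
```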
A vector space is something very formal and axiomatic; 'Euclidean space' does not have a single unified meaning. Usually it refers to something where you have points and lines, can measure angles and distances, and the Euclidean axioms are satisfied. Sometimes it is identified with $\mathbb{R}^2$ resp. $\mathbb{R}^3$ (or $\mathbb{R}^n$), but more as an affine and metric space (you have both points and vectors, not just vectors). So 'Euclidean space' has a softer meaning and usually refers to a richer structure.
Normed space: here we define a norm, i.e. we associate a number (a length) with each vector. This is like combining a metric space and a vector space and seeing what happens. Now we have a notion of convergence, and we can also add, subtract, and multiply by a scalar. :) This is where things start to look very familiar.
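A tiny sketch of that combination (my own illustration): convergence in a normed space just means the norm of the difference goes to zero, so the norm plays the role of the metric.

```python
# Illustrative sketch: "v_n converges to v" in a normed space means the
# norm of the difference shrinks to zero.
import math

def norm(v):
    return math.sqrt(sum(c * c for c in v))

v = (0.0, 2.0)
for n in (1, 10, 100, 1000):
    v_n = (1.0 / n, 2.0 + 1.0 / n)
    diff = tuple(a - b for a, b in zip(v_n, v))
    print(n, norm(diff))              # shrinks toward 0, so v_n -> v
```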
Complete normed space (Banach space): here we add the notion of completeness, just as with a complete metric space. Equivalently, it is like combining a complete metric space and a vector space and seeing what happens.
Hilbert space: a complete inner product space is a Hilbert space. A Hilbert space is a specific type of normed space: not all Banach spaces are Hilbert spaces, but all Hilbert spaces are Banach spaces.
One can define orthogonality, projections, norms, and adjoint operators. We also have the representer theorem, which is useful; one runs into it when using an RKHS (reproducing kernel Hilbert space).
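For example, in the finite-dimensional case (a sketch of my own using NumPy), the inner product immediately gives orthogonality and orthogonal projection:

```python
# Illustrative sketch: once an inner product is available, orthogonality
# and orthogonal projection come almost for free.
import numpy as np

u = np.array([3.0, 4.0])
e = np.array([1.0, 0.0])

proj = (u @ e) / (e @ e) * e           # orthogonal projection of u onto e
residual = u - proj

print(proj)                             # [3. 0.]
print(residual)                         # [0. 4.]
print(residual @ e)                     # 0.0 -> the residual is orthogonal to e
```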
What pattern do we see going from top to bottom? More structure gets added to the spaces, and they start to look more familiar, more useful, and more specific.
Why is Hilbert space useful in quantum mechanics?
Using the formalism of Hilbert space, calculations in QM become easy. It is also possible to visualize a problem clearly using the idea of a Hilbert space. A common example is the harmonic oscillator problem. Here the Schrödinger equation gives an infinite number of solutions with an infinite number of eigenvalues, so the oscillator can reside in any one of those states, and in each state the energy eigenvalue is different. All those (normalized) states span an infinite-dimensional vector space. A physical state can be written as a linear combination of basis vectors with a definite weight factor for each basis state. Now a vector space comes with ready-made machinery for calculations: addition, inner products, and so on. In this "mapping", where a physical problem is mapped onto a complex vector space, one can use the mathematical machinery of complex vector spaces to our advantage.
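To make the "ready-made machinery" concrete, here is a small NumPy sketch of my own that truncates the infinite oscillator basis to its first four states (the coefficients are made up for illustration), normalizes the state, and reads off the energy expectation value from the weights:

```python
# Illustrative sketch: a state written as a linear combination of the first
# few harmonic-oscillator eigenstates |n>, with energies E_n = hbar*omega*(n + 1/2).
# The squared weights give the probabilities of each eigenvalue, and the
# energy expectation value is their weighted average.
import numpy as np

hbar_omega = 1.0                               # work in units where hbar*omega = 1
c = np.array([0.8, 0.5j, 0.2, 0.1 + 0.1j])     # complex weights for |0>, |1>, |2>, |3>
c = c / np.linalg.norm(c)                      # normalize the state

n = np.arange(len(c))
E = hbar_omega * (n + 0.5)                     # oscillator energy eigenvalues
probs = np.abs(c) ** 2                         # probability of measuring each eigenvalue

print(probs.sum())                             # ~1.0, the state is normalized
print(np.sum(probs * E))                       # expectation value of the energy
```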