Now, let's introduce one very important concept: the so-called linear dependence or linear independence of a finite family of functions on a given interval.
Consider a finite set of functions, say n functions f_i(x),
where i runs from 1 through n.
Such a finite set of functions f_i(x) is said to be linearly dependent on an interval I
if there are constants c_i,
not all 0 at the same time,
such that the linear combination of the f_i(x) with these coefficients c_i is identically 0 on I.
In that case, we say this family f_i(x) is linearly dependent on the interval I.
Otherwise, we say that the same family is linearly independent on the interval I.
In other words, the family f_i(x) is linearly independent on
the interval I if, whenever a linear combination of the f_i(x) with some coefficients c_i is identically 0 on I,
all the coefficients c_i must be equal to 0.
In that case we say the f_i(x) are linearly independent on the interval I.
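Written out, the condition for linear dependence reads as follows:

```latex
% Linear dependence of f_1, ..., f_n on I: there exist constants
% c_1, ..., c_n, not all zero, such that
\[
  c_1 f_1(x) + c_2 f_2(x) + \cdots + c_n f_n(x) \equiv 0
  \quad \text{on } I .
\]
% Linear independence: this identity forces c_1 = c_2 = \cdots = c_n = 0.
```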
Let's check this meaning
through some simple examples.
First, suppose we have only two functions,
which I will call f(x) and g(x).
Then my claim is that these two functions,
f and g, are linearly dependent on
an interval I if and only if one is a constant multiple of the other.
You can confirm this very easily.
We have only two functions,
f and g, and suppose they are linearly dependent.
Can you recall the definition?
Such a finite family of functions is linearly dependent
if there are two constants, which I will call a and b,
not both equal to zero at the same time,
whose linear combination a f(x) + b g(x) is identically zero on I.
That is the meaning of linear dependence.
Since not both of them are equal to zero,
at least one of them must be non-zero.
So, for example, suppose a is non-zero.
What happens in this case?
If a is not equal to zero, then you
can write f as -(b/a)
times g. So one of them,
as you can see right here,
is really a constant multiple of the other.
On the other hand,
it might be b that is not equal to zero.
Then, by the same token, g is equal to -(a/b) times f. So again,
one of f and g,
say g, is a constant multiple of the other.
That is my claim.
If you have only two functions,
then they are linearly dependent on the interval I if and only
if one of them is a constant multiple of the other.
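In symbols, the argument just given, together with the converse direction that the claim's "if and only if" needs, reads:

```latex
% Suppose a f(x) + b g(x) \equiv 0 on I with (a, b) \neq (0, 0). Then
\[
  a \neq 0 \;\Rightarrow\; f(x) = -\tfrac{b}{a}\, g(x),
  \qquad
  b \neq 0 \;\Rightarrow\; g(x) = -\tfrac{a}{b}\, f(x),
\]
% so one function is a constant multiple of the other. Conversely, if
% f(x) = c\, g(x) on I, then 1 \cdot f(x) + (-c) \cdot g(x) \equiv 0 is a
% nontrivial combination, so f and g are linearly dependent.
```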
As a very concrete example,
consider the two functions sin 2x and sin x times cos x.
They look like two totally different functions,
but in fact they are linearly dependent on the whole real line (-∞, ∞),
because by the double-angle formula for sine,
sin 2x is the same as 2 sin x cos x.
So the first function is a constant multiple of the other,
and these two functions are linearly dependent on this interval.
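As a quick symbolic check (an illustration, not from the lecture), sympy confirms that the nontrivial combination 1 · sin 2x + (-2) · sin x cos x vanishes identically:

```python
# Verify that sin 2x - 2 sin x cos x is identically zero, so the pair
# {sin 2x, sin x cos x} is linearly dependent with c_1 = 1, c_2 = -2.
import sympy as sp

x = sp.symbols('x')
combo = sp.sin(2*x) - 2*sp.sin(x)*sp.cos(x)
print(sp.simplify(combo))  # prints 0
```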
But I would like to emphasize the following:
linear dependence or linear independence
of a finite family of functions depends not only on the functions themselves,
but also on the interval over which we test it.
For instance, x and |x| are linearly dependent on (0, ∞),
where |x| = x, but linearly independent on the whole real line.
In general, if there are more than two functions, in other words,
if n is greater than or equal to 3,
where n is the number of functions involved,
then it is not so easy to use the definition itself to check
the linear dependence or linear independence of a family of
functions f_1(x) through f_n(x).
That is not an easy task.
However, when they are solutions,
when they are solutions of a given linear homogeneous differential equation, things are different.
Do you recall the differential equation (4)?
In symbols, it is L(y) = 0.
To be precise, what is L(y)? It is

L(y) = a_n(x) y^{(n)} + a_{n-1}(x) y^{(n-1)} + ... + a_1(x) y' + a_0(x) y,

and we set this equal to 0.
We are concerned with the homogeneous case:
this is the homogeneous differential equation L(y) = 0,
equation number (4).
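As a minimal concrete sketch (the particular operator here is an assumption for illustration, not the lecture's equation (4)), take the second-order case L(y) = y'' + y and check two solutions symbolically:

```python
# A concrete homogeneous case for illustration: L(y) = y'' + y, i.e.
# a_2(x) = 1, a_1(x) = 0, a_0(x) = 1 in the general form above.
import sympy as sp

x = sp.symbols('x')

def L(y):
    # Apply the operator to an expression y in the variable x.
    return sp.diff(y, x, 2) + y

# sin x and cos x both satisfy L(y) = 0 identically on the real line.
print(sp.simplify(L(sp.sin(x))))  # prints 0
print(sp.simplify(L(sp.cos(x))))  # prints 0
```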
If a family of functions consists of solutions of this homogeneous equation,
then there is a much easier test to apply,
a much easier way to see whether
that family of functions is linearly independent or not.
To introduce that criterion,
we need to introduce another piece of terminology:
the Wronskian of a family of functions.
So we now consider a finite family of functions f_i(x),
with i running from 1 to n, which
are at least (n - 1) times differentiable on an interval I.
For this family of functions, form the n-by-n matrix
whose first row is f_1, f_2, ..., f_n,
whose second row is
f_1', f_2', ..., f_n',
and so on, until the nth row, which consists of the (n - 1)st derivatives
f_1^{(n-1)}, f_2^{(n-1)}, ..., f_n^{(n-1)}.
Then take its determinant.
Here you have an n-by-n matrix formed
by the functions f_i and their derivatives;
compute its determinant.
We denote it by the symbol W(f_1, ..., f_n)(x)
and call it the Wronskian of the given family of functions f_i(x).
We need this terminology for the criterion.
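Here is a minimal sympy sketch (an illustration, not from the lecture) that builds exactly this matrix of derivatives and takes its determinant, applied to the dependent pair from the earlier example and to the pair sin x, cos x:

```python
# Build the n-by-n matrix whose kth row holds the kth derivatives of the
# functions (row 0 = the functions themselves), then take its determinant:
# that is the Wronskian W(f_1, ..., f_n)(x).
import sympy as sp

x = sp.symbols('x')

def wronskian(funcs):
    n = len(funcs)
    rows = [[sp.diff(f, x, k) for f in funcs] for k in range(n)]
    return sp.simplify(sp.Matrix(rows).det())

# The dependent pair from the example: the Wronskian vanishes identically.
print(wronskian([sp.sin(2*x), sp.sin(x)*sp.cos(x)]))  # prints 0

# sin x and cos x (solutions of y'' + y = 0): the Wronskian is -1, never zero.
print(wronskian([sp.sin(x), sp.cos(x)]))              # prints -1
```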