In this segment, we introduce a means to represent all the pixels in an image as a vector. This gives an alternative way of writing the degradation equation in the restoration problem: it becomes a matrix-vector equation. If the degradation is LSI, then the degradation matrix has a specific form: it is a block circulant matrix. It is straightforward to describe this matrix in the spectral domain by finding its eigenvalues and eigenvectors. We study this property, which allows us to take the degradation equation to the frequency domain, something we of course know we can do based on the material we covered in week three. The reason we devote this lecture to studying this alternative representation of images and the matrix-vector version of the degradation equation is not just to do some math, but because they provide a flexible representation of images and of the systems that process them. They are also useful because we can now easily borrow tools and results from linear algebra and optimization theory in providing solutions to the inverse problems we are interested in.

Before we proceed, I would like to introduce a vector notation for images, and then use this vector notation for the convolution equation. So let's assume we have an image of dimensions N by M. The standard way we have been using so far to represent an image is x(n1, n2), where n1 ranges from 0 to N - 1 and n2 from 0 to M - 1. We would like to represent this two-dimensional array, this image, as a one-dimensional vector, so I'd like to write this image as a vector x. Vectors can be shown in boldface, or sometimes with an underbar or an overbar; it should be clear from the context that a vector is involved. One way to do it is to take each row of the original image (row 0, row 1, up to row N - 1) and turn each row into a column vector: row 0 transpose comes first, then row 1 transpose, all the way down to row N - 1 transpose. The N by M image is thus converted into an NM x 1 vector. This is the process of stacking an image into a vector, and when the stacking is done by rows, as shown here, it is also referred to as lexicographic ordering. So it should be clear from now on that the functional representation x(n1, n2) and the vector representation x can mean the same thing in what follows.
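As a minimal sketch of this lexicographic stacking, here is how it might look in Python with NumPy; the image size and values are purely illustrative, and row-major flattening (NumPy's default) implements exactly the row-by-row ordering described above.

```python
import numpy as np

# A toy 2x3 image x(n1, n2), with n1 = 0..N-1 and n2 = 0..M-1.
x_img = np.array([[1, 2, 3],
                  [4, 5, 6]])

# Lexicographic ordering: stack row 0, then row 1, ..., into an NM x 1 vector.
x_vec = x_img.ravel()          # row-major (C order) flattening
print(x_vec)                   # [1 2 3 4 5 6]

# The inverse operation recovers the image from the vector.
assert np.array_equal(x_vec.reshape(2, 3), x_img)
```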
Let us now consider the one-dimensional convolution of two one-dimensional signals, x(n) convolved with h(n). Let's call the result y(n); we know that y(n) = Σ_k x(k) h(n - k). Let's assume for simplicity that x(n) has support from 0 to N - 1 (N samples, in other words) and that h(n) has support from 0 to L - 1. As we know, the result of the convolution will have support equal to the sum of the limits of the supports, so from 0 to N + L - 2. I would like to take all the values of the signal y and stack them into a vector, [y(0), y(1), ..., y(N + L - 2)]^T. This is an (N + L - 1) x 1 vector. It will be equal to a matrix multiplying the signal x = [x(0), x(1), ..., x(N - 1)]^T. Since x clearly has dimensions N x 1, this matrix has dimensions (N + L - 1) x N. If we follow the steps of the convolution, we take h, flip it around to find h(-k), and then shift it by n. You can see that y(0) = h(0) x(0), and y(1) = h(1) x(0) + h(0) x(1). Each subsequent row of the matrix contains the values of h shifted one position to the right, and the remaining entries, not shown, are all zero. So if I call the left-hand vector y, this matrix H, and the right-hand vector x, I have written the convolution of the two signals in matrix-vector form, y = Hx. We observe that H has a specific form: the values along each diagonal are equal. Such a matrix is referred to as a Toeplitz matrix. We will see next that by modifying the signal and the matrix, we can indeed write a matrix-vector form for the circular convolution as well, and in that case the matrix H will have another form, which we call a circulant matrix.

I would like to perform the circular convolution of x(n) with h(n) so that the result of the circular convolution equals that of the linear convolution. In order for this to be the case, I should carry out the circular convolution with period N + L - 1. Let's call the result y(n) again. This means I should pad x(n) with L - 1 zeros and then carry out the circular convolution. So if I do the same thing here, putting all the values of y into a vector, this will be equal to a square matrix, now multiplying x, where x holds the original values x(0), x(1), ..., x(N - 1), appended with zeros, the number of zeros being L - 1. If we look at the structure of this matrix H, it has h(0) on its main diagonal, h(1) on the diagonal below it, and so on, all the way to h(L - 1). Below that the entries are zero, while in the upper-right corner the wrapped-around values h(L - 1), ..., h(2), h(1) appear, with zeros everywhere else. It is straightforward to verify that the values you obtain by carrying out this matrix-vector multiplication are equal to the y values you obtain from the linear convolution, as shown on the previous slide. So again I can write y = Hx, and H now also has a special structure: if I take any row of H and circularly shift it by one to the right, I obtain the row below it. Circular means that the element that leaves the row due to the shift to the right wraps around and appears at the other end, at the beginning of the row. I can do this for all rows, including the last row: if I circularly shift the last row, I obtain the first row. This matrix is called a circulant matrix, and it has some useful properties that we will talk about next.

We found in the previous slide that if I use a matrix-vector representation of circular convolution, I end up with a circulant matrix. Here is a simple example of a 4 x 4 circulant matrix. The elements of the first row are a1, a2, a3, a4. I generate the second row by circularly shifting the first one: a4 wraps around and comes to the first place, giving a4, a1, a2, a3. Shifting again gives a3, a4, a1, a2, and finally a2, a3, a4, a1:

a1 a2 a3 a4
a4 a1 a2 a3
a3 a4 a1 a2
a2 a3 a4 a1

So clearly, for this 4 x 4 matrix, if I know the elements of its first row I can define the whole matrix, since each row is generated by a circular shift of the row above it. So if I look at a general circulant matrix H, the elements of its first row are h(0) through h(M - 1); I do not write the remaining rows, which are again circular shifts of the first one.
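To make the two constructions concrete, here is a small sketch in Python, assuming NumPy and SciPy are available; the signals x and h are arbitrary examples. It builds the Toeplitz matrix for linear convolution, then the circulant matrix at period N + L - 1, and checks both against np.convolve.

```python
import numpy as np
from scipy.linalg import circulant

x = np.array([1., 2., 3.])           # support 0..N-1, N = 3
h = np.array([1., -1., 2.])          # support 0..L-1, L = 3
N, L = len(x), len(h)

# Linear convolution as a Toeplitz matrix-vector product: an (N+L-1) x N
# matrix with h running down each column, so each diagonal is constant.
H_toep = np.zeros((N + L - 1, N))
for n in range(N):
    H_toep[n:n + L, n] = h
assert np.allclose(H_toep @ x, np.convolve(x, h))

# Circular convolution with period N + L - 1: pad x with L - 1 zeros and
# multiply by the circulant matrix whose first column is the padded h.
P = N + L - 1
x_pad = np.pad(x, (0, P - N))
h_pad = np.pad(h, (0, P - L))
H_circ = circulant(h_pad)   # each row is a right circular shift of the one above
assert np.allclose(H_circ @ x_pad, np.convolve(x, h))
```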
For any circulant matrix, finding its eigenvalues and eigenvectors, which is by and large a difficult problem, is rather straightforward. And this is so for the following reason. Let me denote by w the eigenvectors and by λ the eigenvalues. The eigenvectors of any circulant matrix are known in advance: the k-th eigenvector is the complex exponential w(k) = [1, e^(j2πk/N), e^(j2πk·2/N), ..., e^(j2πk(N-1)/N)]^T. One should recognize right away that these are the basis vectors of the discrete Fourier transform, these complex exponentials. And the set of eigenvalues are simply the discrete Fourier transform values of the first row of the matrix. So again, a very straightforward and important property of circulant matrices. The spectral decomposition of the matrix then follows: I put the eigenvectors as the column vectors of a matrix W, the eigenvalues become the diagonal elements of a diagonal matrix D, whose entries run from λ(0) to λ(N - 1), and the last factor is the inverse matrix W^(-1). So I can write any circulant matrix H as H = W D W^(-1), where again D is the diagonal matrix that has on its diagonal the discrete Fourier transform of the first row of the matrix.

Based on this, let's assume I have the circular convolution of x(n), of period N, with h(n); let's call the result y(n). We know from the study of the discrete Fourier transform that we can take this equation to the discrete frequency domain, where the convolution becomes multiplication: Y(k) = H(k) X(k), where these are the discrete Fourier transforms of the corresponding quantities and this is an N-point DFT, so k goes from 0 to N - 1. Now, we saw that I can take this equation and write it in matrix-vector form, y = Hx, where H is a circulant matrix, and we saw that H can also be written through its spectral decomposition as H = W D W^(-1). Therefore, if I multiply both sides by W^(-1), I obtain W^(-1) y = D W^(-1) x. It can be easily shown that W^(-1) applied to a signal gives the discrete Fourier transform of the signal. So W^(-1) y is the DFT of y; this is a vector, clearly, and it gives me all the values of the DFT of y. D is a diagonal matrix that has as its elements the DFT of h, that is, H(0) to H(N - 1), and W^(-1) x is the DFT of x. So whether we work with the pointwise version of the circular convolution or with its matrix-vector form, we arrive at the same place: taking the DFT directly, in a straightforward way, yields the pointwise relationship in the discrete frequency domain, while considering the spectral decomposition of the matrix H, which is circulant, yields the matrix-vector representation of that same relationship. This is something very important to keep in mind, and we will make use of it multiple times.
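Here is a small numerical check of this diagonalization, again only a sketch using NumPy and SciPy; note that scipy.linalg.circulant defines the matrix by its first column, so under that convention the eigenvalues come out as the DFT of the first column (the sign convention in the complex exponential determines whether one speaks of the first row or first column).

```python
import numpy as np
from scipy.linalg import circulant

N = 8
h = np.zeros(N)
h[:3] = [1., -1., 2.]        # an impulse response, zero-padded to length N
H = circulant(h)             # circulant matrix with first column h

# DFT matrix F such that F @ x == np.fft.fft(x); the columns of its inverse
# are proportional to the complex exponentials, i.e. the eigenvectors of H.
F = np.fft.fft(np.eye(N))
D = np.diag(np.fft.fft(h))   # eigenvalues: the DFT values of h

# Verify H = W D W^(-1) with W = F^(-1).
assert np.allclose(H, np.linalg.inv(F) @ D @ F)

# Consequently the matrix-vector product matches pointwise multiplication
# in the DFT domain: fft(H @ x) == fft(h) * fft(x).
x = np.random.default_rng(0).standard_normal(N)
assert np.allclose(np.fft.fft(H @ x), np.fft.fft(h) * np.fft.fft(x))
```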
Based on what we have discussed so far, if I have a linear system H that accepts f as its input, and noise is added at its output to form the observation y, I can represent this system in the matrix-vector form y = Hf + n shown here, by stacking the images lexicographically into vectors. Now, if the system, in addition to being linear, is also spatially invariant, then the input-output relationship is represented by the by-now-familiar two-dimensional convolution. In this case the matrix H has a special structure: it is a block circulant matrix. This means that it is circulant at the block level: if I take the first row of blocks and shift it circularly, I obtain the second row of blocks, and so on; if I take the last row of blocks and circularly shift it, I obtain the first row of blocks. But each block is itself also a circulant matrix. That is the definition of a block circulant matrix. If, let's say for simplicity, h here has first-quadrant support, and this is the 0th row, this the 1st row of the impulse response, up to the (N - 1)st row, then the 0th row is used to form the sub-matrix H0, the first row the sub-matrix H1, and so on, up to the sub-matrix H(N - 1). So the impulse response I show here is all that is needed to form this block circulant matrix, because for each block I need only its first row; that is all I need, since the matrix is circulant at the block level.

The story with block circulant matrices is the same as with the circulant matrices I mentioned earlier. If H is a block circulant matrix, it can be decomposed as H = W D W^(-1), where W has as its columns the eigenvectors of H, which are the two-dimensional complex exponentials, that is, the basis vectors of the two-dimensional DFT, and D is a diagonal matrix whose elements are the stacked values of the two-dimensional DFT of h(i, j). So if I start with the observation equation y = Hf + n shown here, substitute the decomposition of H, and then pre-multiply both sides by W^(-1), I obtain the following: W^(-1) y represents the stacked values of the two-dimensional DFT of y, and similarly W^(-1) f represents, stacked into a vector, the values of the two-dimensional DFT of f. In other words, this equation becomes Y = DF + N, where Y holds the stacked two-dimensional DFT values of the image y, F holds the stacked two-dimensional DFT values of the image f, N the stacked DFT values of the noise, and D is the diagonal matrix that has on its diagonal the DFT of h. If I look at one element of Y, since D is diagonal, it gives me one element of the discrete Fourier transform of the image y: each entry of the vector Y represents the value Y(u, v) = H(u, v) F(u, v) + N(u, v) at one pair of discrete frequencies (u, v) of the DFT.
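As an illustrative sketch of this block circulant structure, the following Python snippet, under the assumptions of periodic (circular) boundary conditions and row-wise lexicographic stacking, builds the matrix H column by column from a small 2-D point spread function (the helper circ_conv2 and the 4 x 4 sizes are mine, chosen for illustration) and checks that H f agrees with the pointwise product in the 2-D DFT domain.

```python
import numpy as np

def circ_conv2(h, f):
    """2-D circular convolution computed through the 2-D DFT."""
    return np.real(np.fft.ifft2(np.fft.fft2(h) * np.fft.fft2(f)))

N = 4
rng = np.random.default_rng(0)
h = np.zeros((N, N))
h[:2, :2] = rng.standard_normal((2, 2))   # small PSF, zero-padded to N x N
f = rng.standard_normal((N, N))

# Build the block circulant matrix column by column: column k is the circular
# convolution of h with the k-th basis image, stacked lexicographically.
H = np.zeros((N * N, N * N))
for k in range(N * N):
    e = np.zeros(N * N)
    e[k] = 1.0
    H[:, k] = circ_conv2(h, e.reshape(N, N)).ravel()

# The matrix-vector product equals the 2-D circular convolution, and in the
# DFT domain the relation is pointwise: fft2(y) == fft2(h) * fft2(f).
y = (H @ f.ravel()).reshape(N, N)
assert np.allclose(y, circ_conv2(h, f))
assert np.allclose(np.fft.fft2(y), np.fft.fft2(h) * np.fft.fft2(f))
```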
In what follows we are going to need some very basic elements of linear algebra that I would like to mention. Given an N-dimensional column vector x, the L2 norm of the vector squared is simply ||x||^2 = x^T x. It is a scalar, equal to the sum of the x_i squared, Σ_i x_i^2. Applying this to a term like ||y - Hx||^2, the definition of the norm gives (y - Hx)^T (y - Hx), and if I carry out all the cross-multiplications, this equals y^T y - y^T Hx - x^T H^T y + x^T H^T H x. These are all scalars, inner products, and the two middle terms are transposes of each other, therefore they are equal, so this becomes y^T y - 2 x^T H^T y + x^T H^T H x. We will also need the notion of the gradient of a function of a vector, such as this norm. By definition, the gradient with respect to x of ||x||^2 is a vector with entries the partial derivatives of the norm squared, from the partial with respect to x_1 through the partial with respect to x_N. If I use the expression for the norm I just showed, the sum of the x_i squared, then clearly in each of these partial derivatives only one term depends on the particular x_i: the first entry has only the term depending on x_1, giving 2 x_1, and the last gives 2 x_N, so the gradient equals 2x. Applying this definition, you should be able to verify that the gradient with respect to x of x^T y is equal to y, and the gradient with respect to x of the quadratic term x^T H^T H x is equal to 2 H^T H x.
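As a quick numerical sanity check of these gradient identities, here is a sketch that compares the analytic gradient of ||y - Hx||^2, namely -2 H^T y + 2 H^T H x, against centered finite differences; the sizes, random data, and the helper name J are arbitrary choices of mine.

```python
import numpy as np

rng = np.random.default_rng(1)
H = rng.standard_normal((5, 3))
y = rng.standard_normal(5)
x = rng.standard_normal(3)

def J(x):
    """J(x) = ||y - Hx||^2 = y'y - 2 x'H'y + x'H'Hx."""
    r = y - H @ x
    return r @ r

# Analytic gradient from the identities above: grad(x'H'y) = H'y and
# grad(x'H'Hx) = 2 H'H x, so grad J = -2 H'y + 2 H'H x.
grad = -2 * H.T @ y + 2 * H.T @ H @ x

# Centered finite-difference approximation of each partial derivative.
eps = 1e-6
num = np.array([(J(x + eps * np.eye(3)[:, i]) - J(x - eps * np.eye(3)[:, i]))
                / (2 * eps) for i in range(3)])
assert np.allclose(grad, num, atol=1e-4)
```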