Hi everyone. Welcome back to operations research. This is Ling-Chieh Kung. This is the last week of our course, the second course in this series. Now we want to do the course summary, and I will tell you what to do if you plan to go further. Let's recall what we did in this course. In this course, our focus is on algorithms. Basically, we spent most of our time talking about several important algorithms for solving mathematical programs: linear programs, integer programs, nonlinear programs. Let's briefly review what they are. The first thing is the simplex method, one of the most important inventions in the field of operations research. Basically, we mentioned several topics, like the standard form. The idea is very elegant. You don't need to define an algorithm for every kind of linear program; that's too much. Instead, you are able to show that all these different kinds of linear programs can be converted into the standard form, and then you define one algorithm to solve it. That's the standard form. Then when we want to solve it, the idea is to define basic feasible solutions. You know that they are equivalent to extreme points, but extreme points are defined geometrically. If you want to use computers and algorithms to solve a problem, you need an algebraic definition. That's the basic feasible solution. Once we have the definition of basic feasible solutions, then we are almost done, because all we need to do is to search among basic feasible solutions. Then we mentioned the method: repeatedly, iteratively, select a subset of columns and then change the basis, again and again. We may also use the tableau representation to help organize all these calculations. Finally, we also mentioned what to do if you use the simplex method to deal with unbounded or infeasible linear programs.
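To make "searching among basic feasible solutions" concrete, here is a small Python sketch on a made-up standard-form LP (the data is a hypothetical example, not from the lecture). Rather than pivoting the way the simplex method does, it simply brute-forces every basis: pick m columns, solve for the basic variables, keep the solution if it is feasible, and report the best one. The simplex method visits a subset of these same points, moving from one to an adjacent one.

```python
from itertools import combinations

def solve_square(B, b):
    """Solve the square system B x = b by Gaussian elimination with
    partial pivoting; return None if B is (numerically) singular."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(B)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        if abs(M[piv][col]) < 1e-9:
            return None
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                for k in range(col, n + 1):
                    M[r][k] -= f * M[col][k]
    return [M[i][n] / M[i][i] for i in range(n)]

# Hypothetical LP in standard form (slacks already added):
# max 3x1 + 5x2  s.t.  x1 <= 4, 2x2 <= 12, 3x1 + 2x2 <= 18, x >= 0.
A = [[1, 0, 1, 0, 0],
     [0, 2, 0, 1, 0],
     [3, 2, 0, 0, 1]]
b = [4, 12, 18]
c = [3, 5, 0, 0, 0]
m, n = 3, 5

best = None
for cols in combinations(range(n), m):          # choose a candidate basis
    B = [[A[i][j] for j in cols] for i in range(m)]
    xB = solve_square(B, b)
    if xB is None or any(v < -1e-9 for v in xB):
        continue                                # singular or infeasible basis
    x = [0.0] * n                               # nonbasic variables are zero
    for j, v in zip(cols, xB):
        x[j] = v
    z = sum(cj * xj for cj, xj in zip(c, x))
    if best is None or z > best[0]:
        best = (z, x)

print(best[0], best[1][:2])  # optimal value 36 at x1 = 2, x2 = 6
```

Enumerating all bases is hopeless for large problems (there are C(n, m) of them); the point of the simplex method is to visit only improving bases.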
Again, the simplex method is able to identify these cases and then report that the problem is unbounded, that the problem is infeasible, or that there is an optimal solution. The simplex method was published by George Dantzig, the person we mentioned in our first course. That historical event, the publication in 1947, opened the whole field of operations research: all kinds of applications, all kinds of further developments. Even today, many commercial solvers still use some kind of advanced version of the simplex method as one of their modules for solving linear programs. Even though this method is very old, and even though there are newer methods, the simplex method is still widely applied in practice today. It runs along edges, because we know that for any linear program, if an optimal solution exists, there must be a corner optimal solution. Wherever you start, you move along edges until you reach the end. That's the idea of the simplex method. The moving-along-edges idea helped George Dantzig define basic feasible solutions and then the whole process. Interestingly, if you just look at this graph, you may also think: why should I take so many steps along the edges? Why shouldn't I just go inside the feasible region and move towards the optimal solution directly? Well, you are not the only one to think about this. There exist interior point methods, for example Karmarkar's method, and so on. These interior point methods say: let's try to be faster; we don't want to move along edges, which sometimes takes too much time; let's move directly through the interior. We don't have time to talk about these developments in linear programming. We don't want to talk about Karmarkar's method or other methods. But I just want to remind you that indeed there are interior point methods for solving linear programs. If you are interested, maybe you may teach yourself these topics.
We also mentioned the branch-and-bound algorithm for solving integer programs. We first mentioned linear relaxation: once we relax those integer variables, we get a linear program, and hopefully the linear program gives us a feasible IP solution. The thing is that typically it does not. Typically, once we solve the linear relaxation, we are unable to get an integer solution; we get fractional values. If that's the case, then we branch and bound. You have a fractional value; we cut it. We create two independent subproblems. If you get 4.5, we require your variable to be less than or equal to four, or greater than or equal to five. By doing that, we get two subproblems, and at least one of them contains an optimal solution. We keep doing this, solving all kinds of linear relaxations and adding constraints, until eventually we are able to find an optimal solution. We also demonstrated that if you use this idea to solve specific problems, maybe you don't need to run the full simplex method at each node. For example, if you are solving the knapsack problem, its linear relaxation may be easily solved with the ratio method, so you don't really need to call the simplex method or a solver to do it; you may do it in a faster way for special cases. Branch and bound iteratively splits the feasible region without removing an optimal solution; that's also a typical strategy for solving difficult problems. You have a big problem and you don't know how to solve it, right? Never mind: you solve two subproblems, each one smaller and easier to solve; all you need to make sure is that at least one of the subproblems contains the original optimal solution. There exist more advanced methods that, again, we don't have time to teach you, like the cutting plane method, branch and cut, branch and price, and many others. If you take some advanced courses, maybe you will see these methods.
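As a sketch of both ideas, the ratio method for the knapsack linear relaxation and branching on take-or-skip decisions, here is a small Python example. The item values, weights, and capacity are made-up data for illustration only.

```python
def lp_bound(items, cap):
    """Linear relaxation of the knapsack problem, solved by the ratio
    method: take items in decreasing value/weight order, and take a
    fraction of the first item that does not fit."""
    bound = 0.0
    for value, weight in sorted(items, key=lambda t: t[0] / t[1], reverse=True):
        if weight <= cap:
            cap -= weight
            bound += value
        else:
            bound += value * cap / weight  # fractional part of one item
            break
    return bound

def knapsack_bb(items, cap):
    best = 0
    def branch(i, cap, value):
        nonlocal best
        if i == len(items):
            best = max(best, value)
            return
        # Bound: prune this node if even the relaxation of the
        # remaining items cannot beat the best solution found so far.
        if value + lp_bound(items[i:], cap) <= best:
            return
        item_value, item_weight = items[i]
        if item_weight <= cap:                  # subproblem 1: take item i
            branch(i + 1, cap - item_weight, value + item_value)
        branch(i + 1, cap, value)               # subproblem 2: skip item i
    branch(0, cap, 0)
    return best

# Hypothetical data: (value, weight) pairs, capacity 50.
items = [(60, 10), (100, 20), (120, 30)]
print(knapsack_bb(items, 50))  # prints 220: take the last two items
```

Each node splits the feasible region into "take" and "skip", so no integer solution is lost, and the ratio-method bound lets us discard subproblems without solving an LP at every node.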
They share similar ideas with branch and bound, but they try to improve it to make it faster. We also saw that, compared to linear programming, we now have a very concrete idea about why integer programming is harder to solve. The reason is that your linear relaxation may not give you an integer solution; you branch and bound, one problem becomes two problems, two problems become four problems, and so on. That's why you typically should avoid integer variables. If you are solving a problem where you want to decide your production quantity, your shipping quantity, the number of persons you want to hire, or the number of products you want to sell, then you are dealing with quantities, and for most of the time we suggest you model them as continuous variables. If you require your production quantity or whatever to be integer, typically that's going to increase the computation time hugely. If that's the case, you cannot get any solution, and your model becomes useless. We use integer variables only when they are needed, only when we are selecting among multiple options: whether you want to build a facility here or not, zero or one; you have no other choice, you must have a binary variable. Whether you want to turn on the machine and pay that setup cost: zero or one. Whether you want to satisfy this constraint or that constraint: zero or one. In my personal experience, typically we only have binary variables in an integer program; it's actually rare to have general integer variables. We need binary variables because they are useful and important; they enlarge the set of problems that we may model from linear programming to integer programming. But we don't want to force those quantities to be integers. That's not the reason for using integer programming. We also learned about gradient descent and Newton's method, for nonlinear programs. We told you their basic ideas.
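To illustrate why a yes/no decision forces a binary variable, here is a minimal sketch in Python of the open-or-close facility decision. The fixed costs and service costs are hypothetical; and since the instance has only three facilities, we can enumerate all binary vectors y by brute force, whereas a real instance would go to an integer programming solver.

```python
from itertools import product

# Hypothetical data for a tiny facility location instance.
fixed_cost = [80, 60, 70]          # cost of opening each facility
serve_cost = [                     # serve_cost[i][j]: facility i serves customer j
    [10, 30, 25, 40],
    [25, 10, 35, 20],
    [30, 25, 10, 15],
]
n_facilities, n_customers = 3, 4

best = None
for y in product([0, 1], repeat=n_facilities):   # y[i] = 1 iff facility i is opened
    opened = [i for i in range(n_facilities) if y[i]]
    if not opened:
        continue                                 # every customer must be served
    cost = sum(fixed_cost[i] for i in opened)    # pay the fixed cost only if y[i] = 1
    # Each customer is served by the cheapest opened facility.
    cost += sum(min(serve_cost[i][j] for i in opened) for j in range(n_customers))
    if best is None or cost < best:
        best = cost

print(best)  # prints 150: opening a single well-placed facility is cheapest here
```

The decision "pay the fixed cost or not" has no fractional middle ground, which is exactly why a continuous variable cannot model it and a binary y is unavoidable.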
One interesting thing is that you may see they are naturally interior point methods. With gradient descent, you are here, you find an improving direction, you move toward it and stop somewhere, and then you move again. You always move within your feasible region. You don't move along boundaries; you move through the interior. The reason is obvious: for a nonlinear program, your optimal solution may not lie on the boundary, so there is no reason to search only among boundary points. Even though they are interior point methods, it is not so easy to obtain an optimal solution, because nonlinear problems are just hard. The speed of convergence is still a critical issue, not to mention that in this course we only talked about how to do gradient descent and Newton's method for unconstrained problems. If you have constraints, then you need to further develop your algorithms to avoid violating those constraints. There are several different kinds of methods. Some methods still do the search without going beyond the feasible region. Some algorithms actually allow you to violate the constraints from time to time; they only require that at the end you reach a solution that is feasible. We don't have time to talk about all of them. Maybe you want to take some further courses to learn these advanced methods. We also spent one week giving you a case study. We did not just want to give you some exposure to real problems, though that's important; it's even more important that we learned the idea of heuristic algorithms in that particular case. It's a real problem: a company formulated an integer program to help them decide whether to open or close some existing facilities. That problem is real, that problem makes sense, and integer programming is a good way to do it. But if you have too many facilities to consider, if you have too many customers to serve, that may not be reasonable.
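As a reminder of the two unconstrained methods, here is a tiny Python sketch minimizing f(x) = x² + eˣ, a made-up smooth convex function. Gradient descent moves opposite the derivative with a fixed step size; Newton's method also uses the second derivative, which is why it needs far fewer iterations here.

```python
import math

def f_prime(x):   # f(x) = x**2 + exp(x), so f'(x) = 2x + e^x
    return 2 * x + math.exp(x)

def f_second(x):  # f''(x) = 2 + e^x > 0, so f is convex
    return 2 + math.exp(x)

# Gradient descent: repeatedly step against the gradient with a
# fixed step size (chosen small enough to converge on this f).
x = 0.0
for _ in range(200):
    x -= 0.1 * f_prime(x)
gd_x = x

# Newton's method: rescale the step by the curvature; convergence is
# quadratic near the minimizer.
x = 0.0
for _ in range(20):
    x -= f_prime(x) / f_second(x)
newton_x = x

print(gd_x, newton_x)  # both approach the root of f'(x) = 0, about -0.3517
```

Note that every iterate stays strictly inside the (here unconstrained) domain; nothing in either update rule ever walks along a boundary, which is the sense in which these are naturally interior methods.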
We may need to design our own methods, heuristics, and we may need to test whether their performance is good or not. Practical considerations regarding computation and regarding management should also be taken into account. A very quick summary is that all these algorithms are very powerful, and that's why we want to teach them to you, because they may help you solve all kinds of problems. Whatever number of variables and constraints you have, you are able to solve the problem; it's just that maybe you'll need a lot of time. When time complexity is really an issue, when your problem is really too difficult, when the scale is really too large, we need to develop specific algorithms for specific problems. Typically those are heuristics, but sometimes they are still exact algorithms. Some researchers work in a different way. They say, "I just want an optimal solution. I just want to solve general problems with general algorithms." They work on Karmarkar's algorithm, branch and cut, branch and price, cutting planes, or other things that we have no time to teach you. In practice, both are important. From time to time, you need to decide whether you want an exact optimal solution or a close-to-optimal feasible solution. You need to learn all of them to have concrete ideas in mind, so that when you go into practice, or when you do research, you can choose the right research topics and research directions.