Now that we've learned about structured locks, let's look at the more general case of unstructured locks. Consider a linked-list example: suppose we have a linked list with nodes N1, N2, N3, N4, and we want a computation where multiple threads work on this list, first with the pair N1, N2, then with N2, N3, then with N3, N4, and so on. How do we set this up? We know we'll need locks, and in the case of unstructured locks we have to allocate lock objects for each node, call them L1, L2, L3, L4. We don't get these automatically as with the synchronized construct and structured locks; we have to allocate them ourselves. But once they're allocated, we can perform explicit calls to lock. A thread can lock L1 and then lock L2; at that point it holds both locks and can work with nodes N1 and N2. It can then unlock L1, because it no longer needs node N1, and acquire the lock L3, so it can work with N2 and N3. Next it unlocks L2 and acquires L4, and so on.

This pattern is called hand-over-hand locking, because at any point a thread holds locks on two adjacent nodes, but the nesting of lock and unlock is very different from the synchronized case. Look at the interval from the lock of L2 to the unlock of L2, and the interval from the lock of L3 to the unlock of L3: L3 is acquired before L2 is released, and released only after L2 has been released. These two intervals in the sequence of code are not nested. With structured locks, all the synchronization had to be nested; with unstructured locks, we make explicit calls to lock and unlock, and we can create new, more general patterns like hand-over-hand locking, sketched in the code below.

There's another extension to unstructured locks which can be very helpful. Sometimes a thread doesn't just want to wait indefinitely at a lock; it may want to do something else if the lock is unavailable. For this there's an operation called tryLock. If we call tryLock on L1, we get a return value, let's say a Boolean called success, and then we can write: if success, do all the protected work; else, do something else. This gives us even more flexibility, because by calling tryLock the thread has the option to do other pieces of work when a lock is unavailable, whereas with structured locking the thread is blocked at the synchronized statement waiting to acquire that lock.
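To make the hand-over-hand pattern concrete, here is a minimal sketch in Java, assuming a hypothetical Node class that carries its own ReentrantLock and a placeholder process method standing in for the real work on a pair of adjacent nodes; it is an illustration of the idea rather than code from the lecture.

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

class Node {
    int value;
    Node next;
    final Lock lock = new ReentrantLock(); // one lock object per node: L1, L2, L3, ...
}

class HandOverHand {
    // Walk the list, working on adjacent pairs (N1,N2), (N2,N3), ...,
    // never holding more than two node locks at a time.
    static void traverse(Node head) {
        if (head == null || head.next == null) return;
        Node prev = head;
        prev.lock.lock();            // lock L1
        Node curr = head.next;
        curr.lock.lock();            // lock L2
        try {
            while (true) {
                process(prev, curr); // work with the pair, e.g. N1 and N2
                Node next = curr.next;
                if (next == null) break;
                prev.lock.unlock();  // unlock L1: this node is no longer needed
                next.lock.lock();    // lock L3 while still holding L2
                prev = curr;
                curr = next;
            }
        } finally {
            prev.lock.unlock();      // release the last two locks still held
            curr.lock.unlock();
        }
    }

    static void process(Node a, Node b) {
        // placeholder for the real work on two adjacent nodes
    }
}
```

Notice that the lock/unlock calls on consecutive nodes overlap rather than nest, which is exactly the pattern that a synchronized block cannot express.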
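Here is a small sketch of the tryLock idea using java.util.concurrent.locks.ReentrantLock; the sharedCounter field and the doSomethingElse method are hypothetical stand-ins for the protected work and for the alternative work the thread can do when the lock is unavailable.

```java
import java.util.concurrent.locks.ReentrantLock;

class TryLockExample {
    private final ReentrantLock l1 = new ReentrantLock();
    private int sharedCounter = 0;       // state protected by l1

    void doWork() {
        boolean success = l1.tryLock();  // returns immediately instead of blocking
        if (success) {
            try {
                sharedCounter++;         // the work that requires the lock
            } finally {
                l1.unlock();             // always release what we acquired
            }
        } else {
            doSomethingElse();           // lock unavailable: do other useful work
        }
    }

    void doSomethingElse() {
        // placeholder for work that does not need the lock
    }
}
```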
Finally, there is another refinement you can get with unstructured locks, and that is to differentiate readers from writers. There is something called a read-write lock. Let's use a simple data structure, say an array A, and suppose we want to do two kinds of operations on it: a search, which scans the elements A[i] to see whether any of them equals some value x, and an update, which assigns a value y to the element at index i, so A[i] = y. Now, we cannot let multiple threads perform searches and updates in parallel without synchronization. With what we've learned so far, we would probably use a synchronized construct on the array A, or unstructured locks with calls to lock and unlock. But there's an observation here: the search operation only performs reads, because the equality test only reads the value of A[i], whereas the update performs a write.

So there's a further extension to unstructured locks where we allocate a read-write lock L to protect the array A. In search, where we're only reading the array, we acquire the read lock on L at the beginning and release it when we're done. In update, we acquire the write lock on the same lock L and release it when we're done. So what's the difference? With a regular lock and unlock, only one thread could enter search at a time. The read lock has a special semantics that recognizes the fact that multiple threads can read the same value without changing the outcome: multiple threads are allowed to hold the read lock at the same time, but only one thread is allowed to hold the write lock, and while one thread holds the write lock no other thread can hold the read lock. So the two modes are: one or more threads hold the read lock, or at most one thread holds the write lock.

So now we see that with unstructured locks we go a major step beyond structured locks. There may be certain kinds of concurrent programming where you need this extra power, such as hand-over-hand locking, or tryLock when the thread wants to do other operations, or read-write locks. You can try out these primitives for more sophisticated forms of concurrent programming; a sketch of the read-write-lock version of search and update follows.
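Here is a minimal sketch of search and update protected by a java.util.concurrent.locks.ReentrantReadWriteLock; the RWArray class name and its fields are hypothetical names used only for illustration.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

class RWArray {
    private final int[] a;
    private final ReentrantReadWriteLock l = new ReentrantReadWriteLock(); // read-write lock protecting A

    RWArray(int size) { a = new int[size]; }

    // search only reads the array, so it takes the read lock:
    // many threads may hold it at the same time.
    boolean search(int x) {
        l.readLock().lock();
        try {
            for (int v : a) {
                if (v == x) return true;
            }
            return false;
        } finally {
            l.readLock().unlock();
        }
    }

    // update writes to the array, so it takes the write lock:
    // at most one thread holds it, and it excludes all readers.
    void update(int i, int y) {
        l.writeLock().lock();
        try {
            a[i] = y;
        } finally {
            l.writeLock().unlock();
        }
    }
}
```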