In today's lecture, we're going to focus on symmetric multiprocessors and memory systems, and on using shared memory to share data. There are other ways to share data, and we'll touch on those in two lectures, when we talk about messaging in more detail.
I bring up symmetric multiprocessors because they're the simplest model to reason about. All the processors are up here, and memory is down here. Let's assume there are no caches in the system to begin with, everyone is equidistant from memory, and it's one big memory. You have multiple processors, which could either be running multiple threads of one program or running multiple programs.
A couple of other interesting things: these concurrency challenges don't come up just from having multiple processors. There are other things that communicate with memory, or communicate via memory. We have all these different I/O controllers down here, network cards, disk controllers, graphics cards, and they also want to read and write memory. So in your memory system, even on a uniprocessor, there are multiple agents that want to read and write memory simultaneously. Okay, so let's talk about synchronization.
The dining philosophers problem is an example of a synchronization challenge. We're going to talk about two major synchronization ideas, but there are more than are shown on the slide, mainly broadcast models and other things like that. For right now, when I say synchronization, I mean some way to synchronize or arbitrate communication in a restricted fashion.
So we're going to have two here. The first is producer-consumer, the figure on the right. A producer, as the name implies, produces some values, and a consumer consumes the values. This is the most basic form: one producer, one consumer.
You could also think about having one producer and multiple consumers, multiple producers and multiple consumers, or multiple producers and one consumer; a lot of the same ideas hold. So that's one model that people like to use.
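As a concrete sketch of the one-producer, one-consumer case, here is a minimal C version built on a shared circular buffer; the buffer size, the names, and the use of C11 atomics are my own choices for illustration, not something taken from the slides.

```c
#include <stdatomic.h>
#include <stdbool.h>

#define QSIZE 16u                     /* capacity; power of two so index wraparound stays consistent */

/* Shared state: one producer writes at 'tail', one consumer reads at 'head'. */
static int buffer[QSIZE];
static atomic_uint head = 0;          /* index of next item to consume */
static atomic_uint tail = 0;          /* index of next slot to fill    */

/* Producer side: returns false if the queue is currently full. */
bool produce(int value)
{
    unsigned t = atomic_load(&tail);
    if (t - atomic_load(&head) == QSIZE)
        return false;                 /* full: consumer has not caught up yet */
    buffer[t % QSIZE] = value;        /* write the data first ...             */
    atomic_store(&tail, t + 1);       /* ... then publish it to the consumer  */
    return true;
}

/* Consumer side: returns false if the queue is currently empty. */
bool consume(int *out)
{
    unsigned h = atomic_load(&head);
    if (atomic_load(&tail) == h)
        return false;                 /* empty: nothing produced yet          */
    *out = buffer[h % QSIZE];         /* read the data first ...              */
    atomic_store(&head, h + 1);       /* ... then hand the slot back          */
    return true;
}
```

The key point is the ordering: the producer fills the slot before advancing tail, and the consumer reads the slot before advancing head, so the indices themselves act as the synchronization between the two sides.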
Another model that people like to use is that there's some shared resource, and you want to make sure that no more than, let's say, one processor is trying to access that shared resource. This resource could be a disk, a graphics card, or it could actually be a location in memory. We're going to call this mutual exclusion, and the reason it's called mutual exclusion is that access is exclusive.
In the most basic form, only one processor, or one entity, can go access the resource at a time.
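As a rough sketch of what that looks like in code, here is one way to build mutual exclusion in C with a test-and-set spinlock made from a C11 atomic_flag; the lock and the shared counter it protects are hypothetical, just for illustration.

```c
#include <stdatomic.h>

/* One shared lock protecting one shared resource (here, a counter). */
static atomic_flag lock = ATOMIC_FLAG_INIT;
static long shared_counter = 0;        /* the "resource" in this sketch */

void increment_shared_counter(void)
{
    /* Acquire: spin until test-and-set finds the flag clear. */
    while (atomic_flag_test_and_set(&lock))
        ;                              /* busy-wait (spin) */

    /* Critical section: at most one processor executes this at a time. */
    shared_counter++;

    /* Release: clear the flag so another processor may enter. */
    atomic_flag_clear(&lock);
}
```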
A more general form allows some number of processors to access the resource at one time. That's the more general semaphore question, which we'll be talking about a little bit later.
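As a hedged preview of that more general form, here is a counting semaphore using the POSIX semaphore API, initialized so that up to a fixed number of threads can hold the resource at once; the limit of 3 and the function names are arbitrary choices for this example.

```c
#include <semaphore.h>

#define MAX_USERS 3                  /* at most 3 entities in at once; arbitrary */

static sem_t resource_sem;

void init_resource(void)
{
    /* Second argument 0 = shared between threads of this process. */
    sem_init(&resource_sem, 0, MAX_USERS);
}

void use_resource(void)
{
    sem_wait(&resource_sem);         /* blocks once MAX_USERS holders exist */
    /* ... access the shared resource here ... */
    sem_post(&resource_sem);         /* give the slot back */
}
```

Note that a mutual-exclusion lock is just the special case where the semaphore is initialized to 1.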