0:00
In this next lesson I want to talk about the alternative that I think
is raised by what we've been discussing here in these modules on vision.
And that's the alternative that the brain is really an engine of reflex
associations.
And what do I mean by that?
What is a reflex in the first place?
Well, you're probably all familiar with a reflex.
In the idiom of modern discussion, it's an automatic response
that's simple, straightforward, and always the same.
And the reflex in physiology that's most often used as the model
0:49
of this comes from the work of Charles Sherrington.
Again, we discussed Sherrington before in the context of flicker fusion.
But Sherrington's main work was on reflexes and
the reflex that is best understood and, as I said,
used as a conventional analogy in thinking about reflexes, is the knee-jerk reflex.
The physician taps your patellar tendon with a reflex hammer
in a neurological exam, and this stretches the sensors that are in the muscle.
These are sensory receptors that are less obvious than the eye, or
the ear, and so on.
But they're quite legitimate sense organs that respond to mechanical stimulation.
The stimulation sends an action potential to neurons in the spinal cord.
These neurons fire, they connect with motor neurons that go back to the muscle,
1:47
and cause it to contract, and
the muscles that oppose this are at the same time inhibited.
All this happens in a perfectly automatic way.
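The arc just described can be sketched as toy code. This is only an illustration of the input-to-output mapping, not physiology: the function name, muscle labels, and threshold value are all assumptions invented for the sketch.

```python
def stretch_reflex(tendon_stretch: float, threshold: float = 0.5) -> dict:
    """Map a patellar-tendon stretch to muscle commands, reflex-style."""
    sensor_fires = tendon_stretch > threshold          # muscle stretch receptor
    if sensor_fires:
        # Spinal circuit: excite the stretched muscle, inhibit its opponent.
        return {"quadriceps": "contract", "hamstring": "inhibit"}
    return {"quadriceps": "rest", "hamstring": "rest"}

print(stretch_reflex(0.8))   # {'quadriceps': 'contract', 'hamstring': 'inhibit'}
```

The point of the sketch is the automaticity: one input, one fixed output, no deliberation in between.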
But this appearance of simplicity belies what is really going on,
and what's really going on is much more complicated.
There are ascending pathways that carry the information generated by
the tap of the patellar tendon to cortical structures.
There's information coming from the cortex that modifies the reflex.
Every neurologist knows that if you get a weak reflex in a patient you're examining,
and you're doubtful whether the reflex is intact, you ask the patient to
tense all of their muscles by doing this, and the reflex is improved thereby.
The knee-jerk reflex can also be conditioned.
There's a wonderful demonstration of this, done by school children as
a science project, showing that if you tap the patellar tendon
over and over again, so that the knee jerks many times in succession,
and then pretend to tap it, but stop short of actually stretching the tendon and
its sensory receptors, the knee will still extend.
And that's just an example of classical conditioning.
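That conditioning effect, association strengthening with repeated pairing until the cue alone triggers the response, can be sketched as code. The learning rule, rate, and threshold here are illustrative assumptions (a simple gap-closing update in the spirit of standard conditioning models), not a claim about the actual neural mechanism.

```python
def condition(pairings: int, rate: float = 0.2) -> float:
    """Associative strength after repeated sight-of-hammer + tendon-tap pairings."""
    strength = 0.0
    for _ in range(pairings):
        strength += rate * (1.0 - strength)   # each pairing closes part of the gap
    return strength

THRESHOLD = 0.9   # assumed strength at which the sight alone evokes the kick

print(condition(3) > THRESHOLD)    # False: a few pairings aren't enough
print(condition(20) > THRESHOLD)   # True: after many pairings, the feint alone works
```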
3:10
So reflexes are not the simple thing that people have taken them to be;
they're neurologically much more complex.
But they do make the point that, for very good reasons,
with experience over a lifetime and, as we've discussed before,
over evolutionary time doing the heavy lifting,
you can generate an association between an input and
an output more or less automatically, and that association serves a good purpose.
The purpose served by this reflex may not be obvious to you,
but it is, if you think about it, straightforward.
If you're walking along and your toe stubs a root and
you are about to trip, the extension of the leg that's generated in
this reflex manner keeps you from falling down.
That is obviously biologically useful, even lifesaving,
and certainly one of the contributors to our longevity as a species.
So what is the alternative idea that the brain is not acting
as a computer in the normal sense of this phrase, but
is solving a problem by making associations that are useful?
Well, there's a whole other way of thinking about neural networks,
other than thinking of them as executors of algorithms.
I should have said this before, but let me emphasize here
that algorithms are logical sequences of operations that can be expressed,
and are expressed in software,
as a series of rule-based steps that lead you to the solution of some problem.
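A textbook instance of an algorithm in exactly this sense is Euclid's method for the greatest common divisor: one fixed rule, applied step by step, guaranteed to reach the answer.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: apply one fixed rule until it terminates."""
    while b != 0:
        a, b = b, a % b    # rule: replace (a, b) with (b, a mod b)
    return a

print(gcd(48, 18))   # 6
```

Every step is specified in advance; nothing is learned from experience. That is the contrast with the empirical approach described next.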
There's a very different way of thinking about computation
that is unsupervised and simply operates empirically.
And this is an idea about the way that computers
might work that dates from the 1940s and the work of McCulloch and
Pitts, two workers who were later associated with MIT.
And they established the idea of artificial neural
networks: a system of connections that simply operates
by taking information that is input to the network
(and this is a very simple network, diagrammed here)
and, through connections established empirically, just by trial and
error, eventually producing successful behavior after enough learning,
evolutionary or lifetime learning,
with that success fed back into the input.
Through this loop, an artificial network would gradually
learn how to solve basically any problem that you put to it.
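A minimal sketch in this spirit: a single McCulloch-Pitts-style threshold unit whose connection strengths are found purely by trial and error, with success fed back to decide whether each guessed wiring is kept. The task (computing logical OR), the parameter ranges, and the fitness measure are illustrative assumptions, not the original 1940s formulation.

```python
import random

random.seed(0)

CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # target: OR

def unit(x, w, theta):
    """McCulloch-Pitts threshold neuron: fire iff the weighted sum reaches theta."""
    return 1 if w[0] * x[0] + w[1] * x[1] >= theta else 0

def fitness(w, theta):
    """How many of the four cases the unit gets right."""
    return sum(unit(x, w, theta) == y for x, y in CASES)

best = ((0.0, 0.0), 1.0)                      # a deliberately bad starting guess
while fitness(*best) < len(CASES):            # loop until the behavior succeeds
    guess = ((random.uniform(-1, 1), random.uniform(-1, 1)),
             random.uniform(-1, 1))
    if fitness(*guess) > fitness(*best):      # success fed back: keep the better wiring
        best = guess

print(fitness(*best))   # 4: the unit now gets every case right
```

No rule-based derivation is involved: the wiring that works is simply the one that survived the trial-and-error loop.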
Computer scientists haven't really liked that idea so
much, although artificial neural networks have had
their ups and downs over the last 70 or 80 years.
But they are widely recognized as a possible solution to problems
that a computer, or a biological agent, encounters.
And I think that they come much closer to the kind of work that the brain is doing,
and present a much better analogy than does the sort of ordinary
algorithmically driven computer that's executing a program.
So what does the network look like?
Well, as I said, this is quite a simple network,
but it's just a bunch of neurons that are connected,
and there is, of course, a large variety of ways of connecting them.
And much work has been done on this over the last 60 or 70 years to
demonstrate that yes, artificial networks can solve a variety of problems.
And this idea was inspired by neurons in the first place;
that's where McCulloch and Pitts took their idea from.
You can make these networks as complicated as you'd like.
7:58
There are, of course, some problems that can't be solved by any form of computation
but that's another matter.
So how would one go about exploring the idea that the evolution
of a neural network is really the way that the brain is working?
Well, that's an enormous challenge for the future, but
let me give you just a general idea of how that might happen.
Already, and for the last few decades, genetic algorithms have been
available off the shelf, or generated by the researchers involved.
And these genetic algorithms are simply a way of mimicking,
in a computer network, an artificial neural network,
the process of evolution, using the concept
of heritable genetic change: mutation, change in a gene.
Whether the genes persist or not in
the ongoing simulation of evolution
is determined by the degree to which they establish, by one criterion or
another, the network's fitness.
So this is a way in computation of mimicking the way
evolution leads to heritable changes.
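The scheme just described can be sketched in miniature: heritable parameters ("genes") mutate, and whether a gene persists is determined by the fitness it confers. The task here, evolving a single number toward a target value, is an illustrative assumption standing in for whatever fitness criterion a real network would be judged by; the population size, mutation scale, and generation count are likewise invented for the sketch.

```python
import random

random.seed(1)
TARGET = 0.75    # stands in for the fitness criterion the network must satisfy

def fitness(gene: float) -> float:
    """Closer to the target value = fitter."""
    return -abs(gene - TARGET)

population = [random.uniform(0, 1) for _ in range(20)]   # initial random genes
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                # only the fitter genes persist
    children = [g + random.gauss(0, 0.05)      # heritable change: mutation
                for g in survivors]
    population = survivors + children

best = max(population, key=fitness)
print(abs(best - TARGET) < 0.01)
```

Nothing in the loop "knows" the answer; selection on fitness plus random heritable change is enough to home in on it, which is the point of using such algorithms to evolve networks.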
And it can in principle be used to demonstrate that yes,
if you could establish a neural network of some degree of complexity,
and get it to evolve using a genetic algorithm,
asking it to solve some biologically plausible problem,
let's say a visual problem,
you could begin to understand how a visual system, and
how the brain more generally, might operate in computational terms.
Not standard computational terms, but computational terms that are much more
related to the way biology seems to operate.
This is of course, a challenge for the future.
But I think one can imagine, a futurist could imagine, that
10:09
once one established a sufficiently complex neural network and
a sufficiently complex environment,
you could in principle mimic what happened over evolutionary time,
as the networks that are our extraordinarily complex brains really did
evolve to solve, empirically, the problems that we've been talking about in vision,
and now more generally, in the context of how brains might work broadly.
So this is a very long way from where we are today of course.
But as I say, it's something worth thinking about,
because with the remarkable speed of advances in computation,
it's probably not going to be too many decades before people try this and
begin to achieve some degree of success.