Welcome everyone. In this video, we'll be going over the Google Assistant project, describing it and showing its execution. First, a brief overview of what we're going to do: hardware requirements, then a description and instructions, and after that the execution of the project and a demo.

Starting with the hardware requirements: we obviously need a DragonBoard, and we need internet access, because we have to reach Google's API. We also need a keyboard and mouse to interact with the DragonBoard, a monitor to see what's happening, and speakers. You can take the audio from the monitor's HDMI output or its audio-out jack, or use the monitor's built-in speakers if it has them; this is so we can hear Google Assistant talk to us. The last thing we need is a microphone. In this case, we're using our USB webcam as the microphone because it has one built in. Next are the instructions, which Andrew will take care of.

The instructions are provided in the readings, and we want to give credit to the people who wrote them: Radhika Paralkar of 96Boards (excuse me if I pronounced that incorrectly) and Google; the code itself is made by Google. Some important things to know about the instructions are the following. Every time you restart, you have to rerun "source venv/bin/activate", because everything has to run inside the virtual environment; if you don't do that, some of the instructions won't work. So make sure you do this every single time you restart. Another thing to note: remember the path to your .json files. The instructions ask you to download the key, and the key is a .json file, so make sure you remember where it is.
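Since forgetting to re-activate the virtual environment is the easiest mistake to make here, a quick way to check from Python whether you're inside one is to compare sys.prefix with sys.base_prefix. This is a general Python check, not something from the 96Boards instructions:

```python
import sys

def in_venv() -> bool:
    # Inside a virtual environment, sys.prefix points at the venv
    # directory, while sys.base_prefix points at the base interpreter;
    # outside a venv the two are equal.
    return sys.prefix != sys.base_prefix

print(in_venv())
```

If this prints False at the start of a session, run "source venv/bin/activate" before anything else.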
Another thing: make sure to change the audio settings in PulseAudio Volume Control; I'll show you how to do this later. Basically, you want to change the input so that the webcam is the input device, and change the output so that HDMI is the output. Also, we had trouble with some of the dependencies; we deleted a few lines of the code and it started to work. So if you get the same error, where the script doesn't know which libraries it's importing, you can apply the same fix I did, and I'll show it on the DragonBoard as well.

Let's get into our DragonBoard to see which steps you need to do before we run this program. The first thing we want to do is go to Sound & Video and then PulseAudio Volume Control. This might be hard to see, but under Output Devices you want to select the HDMI output, and you want the view set to show all output devices. Under Input Devices, you also want it to show all input devices, and you want to make sure the webcam's checkbox is checked. As you can see, it's not checked for the webcam, so just click on it and exit PulseAudio.

Then let's go into our terminal to see some of the things that need fixing there as well. Open QTerminal and go to the project directory; the instructions tell you to name your directory New Project, so that's what we called it as well. If you ls, you'll see all the files here, along with the compiled files. Then open pushtotalk.py — sorry, I'll open it in vim so you can see it. My bad. Near the top, there's a block of code that says "try, and from that, import": it checks whether the package-style import works, and if it doesn't, it falls back and does the imports the plain way anyway.
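The import block being described looks roughly like this sketch. The package and module names here are hypothetical stand-ins, not the SDK's actual helpers, and the fallback imports json only so the sketch runs anywhere; the real pushtotalk.py wraps its helper imports in the same try/except shape:

```python
# Sketch of the try/except import pattern described above.
try:
    # Package-style import: works when the script runs as part of a
    # package ("newproject" is a hypothetical package name).
    from newproject import audio_helpers
except ImportError:
    # Fallback used when the file is run directly. The fix described
    # in this video is to delete the try/except wrapper and keep only
    # the fallback imports, unindented to the left margin.
    import json as audio_helpers  # stand-in module so this sketch runs

print(audio_helpers.__name__)
```

In other words, the cleaned-up version of the file is simply the body of the except branch moved to the top level.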
So what you can do is delete that entire try/except part and unindent the remaining import lines so they sit at the far left-hand side of your Python file; then it should work fine. This is only if you're having trouble with the imports.

Now let's run the program to see what it actually does: python pushtotalk.py. Oh — I forgot the step where you have to run this inside the virtual environment, like I said. So go back and run source venv/bin/activate; as you can see on the left-hand side of the prompt, it now shows venv. Then cd into our New Project directory again and run the program once more. As you can see, there are no errors, and it's waiting for a request. So let's press Enter and just say, "Hello?" — "Hey, I'm having the craziest day. I just learned that bananas are curved because they grow towards the sun." Wait, what? Really? As you can see, it responded in some manner. "I found a few places. The first one is Bibigo Fresh Korean Kitchen at 2525 La Jolla Village Drive in San Diego." Apparently I can't say anything, because this thing keeps interrupting me, but you get the idea: if you say something, it will respond in some manner, and sometimes it won't hear you correctly — in my case because I was talking to you rather than to it. Make sure you run this and implement it in your own programs. In the next video, we'll show you how the code works.