Data Analysis Using PySpark

4.4 (95 ratings)
Offered by Coursera Project Network
4,345 already enrolled
In this Guided Project, you will:

Learn how to set up Google Colab for distributed data processing

Learn to apply different queries to your dataset to extract useful information

Learn how to visualize this information using Matplotlib

1.5 hours
Intermediate
No download needed
Split-screen video
English
Desktop only

One of the important topics every data analyst should be familiar with is distributed data processing. As a data analyst, you should be able to apply different queries to your dataset to extract useful information from it. But what if your data is so big that working with it on your local machine is not practical? That is where distributed data processing and Spark come in handy. In this project, we will work with the pyspark module in Python, using the Google Colab environment, to apply queries to a dataset related to the Last.fm website, an online music service where users can listen to different songs. The dataset contains two CSV files, listening.csv and genre.csv. We will also learn how to visualize our query results using Matplotlib.

Skills you will develop

Google Colab, Data Analysis, Python Programming, PySpark SQL

Learn step-by-step

In a video that plays in a split screen alongside your workspace, your instructor will guide you through each step:

  1. Prepare Google Colab for distributed data processing

  2. Mount our Google Drive into the Google Colab environment

  3. Import the first file of our dataset (1 GB) into a pySpark dataframe

  4. Apply some queries to extract useful information from our data

  5. Import the second file of our dataset (3 MB) into a pySpark dataframe

  6. Join the two dataframes and prepare them for more advanced queries

  7. Learn to visualize our query results using Matplotlib

How Guided Projects work

Your workspace is a cloud desktop right in your browser; no download required

In a split-screen video, your instructor guides you step-by-step

Reviews

Top reviews from DATA ANALYSIS USING PYSPARK


Frequently asked questions


Still have questions? Visit the Learner Help Center.