
Date: 2021-05-09 10:47

Analyzing 10Gb of Yelp Reviews Data

For this project, you will be tasked with provisioning a Spark Cluster on AWS EMR for loading and running some analysis on Yelp’s Reviews and Businesses dataset (about 10gb) from Kaggle. You will run your analysis via Jupyter Notebook and the expected output artifact is a .ipynb file.

Requirements
  Artifacts
    Notebook File
    PDF File
    README
  S3 Bucket
  Submission
Assignment
  Part I: Installation and Initial Setup
  Part II: Analyzing Categories
  Part III: Do Yelp Reviews Skew Negative?
  Part IV: Should the Elite be Trusted? (Or, some other analysis of your choice)

Requirements

This project is very simple: you are to provision a Spark cluster on AWS EMR, connect it to a Jupyter Notebook, and then run a series of queries (in Python, with the DataFrame API or Spark SQL) that answer a few simple questions about the available Yelp data.

In doing so, you are demonstrating your ability to configure and provision infrastructure using the AWS Elastic MapReduce ecosystem. You are also demonstrating your understanding of how to leverage transformations and actions (in Spark terminology) with PySpark to perform basic data analysis on information sources that are too large to manage in memory.

This project is due TUESDAY APRIL 27TH, MIDNIGHT.

PLEASE READ THE FOLLOWING INSTRUCTIONS CAREFULLY. I will likely be automating some or most of the grading process and therefore your output artifact must match the spec I define below.

Artifacts

You are to submit a zip file containing your project work, structured as shown below. Expected zip file structure:

project02
+-- Analysis.ipynb
+-- Analysis.pdf
+-- assets
|   +-- cluster_configuration.png
|   +-- notebook_configuration.png
+-- README.md

Note: I’m ok with the images submitted as jpegs as well.

Notebook File

The .ipynb file contains your analysis and the outputs of the code you wrote to arrive at your results. This is very important, as it is the sole method of validating that you actually ran an EMR cluster successfully.

You must name your Notebook file Analysis.ipynb. PLEASE PLACE THIS IN YOUR PROJECT ROOT FOLDER.


PDF File

From your EMR notebook, if you download your file as HTML, you can open it in the browser and save it as a PDF. Name this file Analysis.pdf and include it in the zip artifact.

README

The README, in markdown, should contain a brief blurb describing the project and the technology leveraged to conduct your analysis. This ought to be brief and informational, in case folks in the future want to recreate your results.

ALSO, your README must contain screenshots of your EMR cluster configuration and Notebook configuration. Here are mine, shared below as a reference.

Example markdown code:

![cluster_image](assets/cluster_configuration.png)

PS: if you wanted to “test” your readme, you can download a readme viewer like this one

Cluster Configuration



Notebook Configuration


S3 Bucket

You must read your Yelp data from S3. To do so, you can use my publicly available bucket, which contains all the data needed for your analysis. Your Analysis.ipynb file must demonstrate that the data is being read from S3; this is largely as simple as loading your DataFrame like so:
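The original loading snippet did not survive in this copy, so here is a minimal sketch. The bucket and file names below are placeholders, not the real ones (use the bucket URL shared by the instructor), and `spark` refers to the session the EMR notebook kernel creates for you:

```python
# Placeholder bucket name -- substitute the instructor's public bucket.
BUCKET = "s3://my-public-yelp-bucket"

def dataset_path(bucket: str, name: str) -> str:
    """Build the S3 path for one of the Yelp JSON datasets."""
    return f"{bucket}/{name}.json"

# In the EMR notebook, `spark` is predefined by the kernel, so loading
# the business dataset is a single call (file name is hypothetical):
# business_df = spark.read.json(dataset_path(BUCKET, "business"))
```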

Assignment

The actual analysis is broken into four parts: three that are guided and one that is freeform. I have published a sample GitHub repo demonstrating this project.

Note that the output of the code is provided to give you structure as you write your analysis. For Parts I, II & III, you must fill in the blanks (implement the code however you want) to get the output provided in the file. (Mainly the columns and aggregations; I don't care about the exact rows.)

For Parts III and IV, you have more flexibility to take the analysis further however you see fit. Below, I expound a bit more about each part of analysis.

Part I: Installation and Initial Setup

In this portion, you will import the necessary dependencies (pandas and matplotlib; seaborn is optional) and load your dataset as a PySpark DataFrame.
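A minimal first cell might look like the following, under the assumption that pandas and matplotlib are already installed on the cluster:

```python
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # headless backend; harmless inside the EMR notebook
import matplotlib.pyplot as plt
# import seaborn as sns  # optional, per the assignment

# On EMR, the notebook kernel provides a ready-made `spark` session,
# so there is no need to construct a SparkSession by hand.
```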

Part II:  Analyzing Categories

For this part, you will take a stab at denormalizing the categories associated with each business (there may be more than one, presented as a comma-separated string of identifiers) and then run some basic analysis on the result.

Part III: Do Yelp Reviews Skew Negative?

For this next part, you will attempt to answer the question: are the (written) reviews generally more pessimistic or more optimistic compared to the overall business rating? There are some required questions you must answer (see the analysis.ipynb file); these are the bare minimum, so feel free to have fun with it and take your analysis as far as you'd like. Any additional work you do will be counted for up to 5 points of extra credit on your project grade, capped at 105.

Part IV: Should the Elite be Trusted? (Or, some other analysis of your choice)

For this final part you may choose to either answer the question posed or explore the data in some other manner of your own choosing. The only requirements are:

● You must leverage the users dataset provided

● You must have at least one data visualization as part of your analysis


RUBRIC

Overall/Part 1 (6 points)

README.md markdown file is available directly in the project02 root folder (1 pt)

README contains screenshots of your EMR cluster configuration AND Notebook configuration (1 pt)

Analysis.ipynb and Analysis.pdf are both available directly in the project02 root folder (1 pt)

Any necessary packages are loaded into the environment (pandas, matplotlib, seaborn, etc.) before any analysis is started (1 pt)

Generally, the analysis structure in Analysis.ipynb follows the same structure available here (similar headings and subheadings) (1 pt)

Blank fields in the Analysis.ipynb sample file provided are answered (1 pt)

Part 2 (8 points)

*business.json dataset is loaded from the S3 bucket and saved as a Spark DF (1 pt)

A Spark DF is derived associating business_id with categories; each business_id is represented multiple times, once per category, in this DF (3 pts)

A Spark DF is derived from the previous DF representing the number of businesses per category. (OK if the output is not exactly like the one shown in my Analysis.ipynb; I care more about the table columns and that you are able to roll up counts.) (2 pts)

Bar chart of “top” categories (from the previous DF) is plotted and rendered. (You can use either seaborn or matplotlib.) (2 pts)

Part 3 (8 points)

*review.json dataset is loaded from the S3 bucket and saved as a Spark DF (1 pt)

A Spark DF is derived containing a business_id column and an average-stars-per-business aggregate column from the user reviews DF (2 pts)

The Spark DF from the previous row is joined with the original business data DF (from Part 2) on the business_id field (2 pts)

The skew dimension as defined in Analysis.ipynb is computed for each business on the previous DF, and a histogram is plotted with this data (3 pts)

Part 4 (8 points)

*user.json dataset is loaded from the S3 bucket and saved as a Spark DF (1 pt)

This DF is joined on either the *review.json or *business.json dataset (4 pts)

One dataviz is rendered in the analysis (3 pts)

