Reference no: EM132294041
ASSESSMENT TASK ONE - DATA EXPLORATION: WINE RATING DATA
Task Description
We provide one IPython notebook SIT742Task1.ipynb, together with two data files in the data subfolder:
wine.json: This JSON file contains the wine ratings and reviews from WineEnthusiast.
stopwords.txt: This text file contains the most common English stop words.
You are required to develop a data exploration report using IPython notebook to complete the following two sub-tasks.
Numeric and Categorical Value Analysis
For a data scientist, the first and most crucial task after obtaining a dataset is to develop a good understanding of the data he or she is dealing with. This includes examining the data attributes (or equivalently, data fields), seeing what they look like, identifying the data type of each field, and, from this information, determining suitable numerical/visual descriptions.
The first task is to read the JSON file into a Pandas DataFrame and delete the rows that contain invalid values in the "points" and "price" attributes.
Then, you need to answer the following two questions in your IPython notebook based on this dataset:
(1) What are the 10 varieties of wine that receive the highest number of reviews?
(2) Which varieties of wine have an average price of less than 20 and an average points value of at least 90? Assume there are no duplicate reviews in the data, i.e., each row represents a unique wine.
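The steps above can be sketched in pandas as follows. The file path, column names and the toy data are assumptions for illustration; in the notebook you would load the real file with pd.read_json("data/wine.json").

```python
import pandas as pd

# Toy stand-in for data/wine.json; the column names are assumptions.
df = pd.DataFrame({
    "variety": ["Shiraz", "Shiraz", "Merlot", "Riesling", "Riesling", "Riesling"],
    "points":  [91, 88, None, 92, 90, 93],
    "price":   [18.0, 25.0, 15.0, 19.0, None, 16.0],
})

# Delete rows with invalid (missing) "points" or "price" values.
df = df.dropna(subset=["points", "price"])

# (1) The 10 varieties with the highest number of reviews.
top10 = df["variety"].value_counts().head(10)

# (2) Varieties with average price < 20 and average points >= 90.
stats = df.groupby("variety").agg(AvgPrice=("price", "mean"),
                                  AvgPoints=("points", "mean"))
answer = stats[(stats["AvgPrice"] < 20) & (stats["AvgPoints"] >= 90)].index.tolist()
```

On the toy data only "Riesling" survives both filters; value_counts already sorts by review count, so head(10) gives question (1) directly.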
In addition, you need to group all reviews by country, generate a statistics table, and save it as a CSV file named "statisticByState.csv". The table must have four columns:
Country - listing the unique country name.
Variety - listing the variety receiving the most reviews in that country.
AvgPoint - listing the average points (rounded to 2 decimal places) of wine in that country.
AvgPrice - listing the average price (rounded to 2 decimal places) of wine in that country.
Based on this table, which country/countries would you recommend Hotel TULIP to source wine from? Please state your reasons.
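One way to build the four-column table is a pair of groupby aggregations. The toy frame below is a stand-in for the cleaned wine DataFrame; its column names are assumptions.

```python
import pandas as pd

# Toy stand-in for the cleaned wine DataFrame (assumed columns).
df = pd.DataFrame({
    "country": ["US", "US", "US", "France", "France"],
    "variety": ["Merlot", "Merlot", "Shiraz", "Gamay", "Gamay"],
    "points":  [90, 88, 92, 91, 89],
    "price":   [20.0, 22.0, 30.0, 15.0, 17.0],
})

# Most-reviewed variety per country.
top_variety = (df.groupby("country")["variety"]
                 .agg(lambda s: s.value_counts().idxmax()))

# Average points and price per country, rounded to 2 decimal places.
stats = df.groupby("country").agg(AvgPoint=("points", "mean"),
                                  AvgPrice=("price", "mean")).round(2)

table = (stats.assign(Variety=top_variety)
              .reset_index()
              .rename(columns={"country": "Country"})
              [["Country", "Variety", "AvgPoint", "AvgPrice"]])

table.to_csv("statisticByState.csv", index=False)
```

The two aggregations share the country index, so assign aligns them automatically before the columns are reordered and written out.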
Text analysis
In this task, you are required to write Python code to extract keywords from the "description" column of the JSON data; these keywords will be used to redesign the wine menu for Hotel TULIP.
You need to generate two txt files:
HighFreq.txt: This file contains the frequent unigrams that appear in more than 5000 reviews (one row in the dataframe is one review).
Shirazkey.txt: This file contains the key unigrams with a tf-idf score higher than 0.4. To reduce the runtime, first extract the descriptions for the variety "Shiraz", and then calculate the tf-idf score for the unigrams in those descriptions only.
In both txt files, all unigrams are sorted alphabetically and saved line by line without duplicates. Before you calculate the unigram frequencies or tf-idf scores, you need to remove the stop words from all descriptions using the provided "stopwords.txt" or a built-in function in Python.
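The counting logic can be sketched with the standard library alone. The review texts and the tiny stop-word set are stand-ins, the review threshold is lowered from 5000 to 1 to suit the toy data, and the tf-idf variant used (tf = raw count in a review, idf = log(N/df), keeping each unigram's maximum score) is one common definition, not necessarily the one the marking scheme expects.

```python
import math
import re
from collections import Counter

stopwords = {"the", "a", "and", "of", "with", "is"}  # stand-in for stopwords.txt

reviews = [
    "Ripe dark fruit with pepper and spice",
    "Pepper and plum notes, a soft finish",
    "Dark plum, pepper, firm tannins",
]

def tokens(text):
    # Lowercase, keep alphabetic unigrams, drop stop words.
    return [w for w in re.findall(r"[a-z]+", text.lower()) if w not in stopwords]

# Document frequency: in how many reviews does each unigram appear?
doc_freq = Counter()
for r in reviews:
    doc_freq.update(set(tokens(r)))

# Unigrams appearing in more than 1 review (the task uses a 5000-review cut-off).
high_freq = sorted(w for w, c in doc_freq.items() if c > 1)

# tf-idf: tf = count in one review, idf = log(N / df); keep the max per unigram.
n = len(reviews)
tfidf = {}
for r in reviews:
    tf = Counter(tokens(r))
    for w, c in tf.items():
        tfidf[w] = max(tfidf.get(w, 0.0), c * math.log(n / doc_freq[w]))

keywords = sorted(w for w, s in tfidf.items() if s > 0.4)
```

Note that "pepper" occurs in every review, so its idf (and tf-idf) is zero and it is excluded from the keyword list, while the sorted() calls produce the required alphabetical, duplicate-free output.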
*ASSESSMENT TASK TWO - DATA ANALYTICS: BANK MARKETING
Task Description
We provide one IPython notebook SIT742Task2.ipynb, together with a csv file bank.csv in the data subfolder. You are required to analyse this dataset using IPython notebook with the Spark packages you have learnt in SIT742, including spark.sql and pyspark.ml.
Table 2.1: Attribute information of the dataset
Attribute | Meaning
age | age of the customer
job | type of job
marital | marital status
education | education level
default | has credit in default?
balance | the balance of the customer
housing | has housing loan?
loan | has personal loan?
contact | contact communication type
day | last contact day of the week
month | last contact month of year
duration | last contact duration, in seconds
campaign | number of contacts performed during this campaign
pdays | number of days that passed by after a previous campaign
previous | number of contacts performed before this campaign
poutcome | outcome of the previous marketing campaign
deposit | has the client subscribed to a term deposit?
IPython Notebook
To systematically investigate this dataset, your IPython notebook should follow these six basic procedures:
(1) Import the csv file, "bank.csv", as a Spark dataframe, name it df, then check and explore its individual attributes.
(2) Select important attributes from df as a new dataframe df2 for further investigation. You are required to select 13 important attributes from df: 'age', 'job', 'marital', 'education', 'default', 'balance', 'housing', 'loan', 'campaign', 'pdays', 'previous', 'poutcome' and 'deposit'.
(3) Remove all invalid rows in the dataframe df2 using spark.sql. A row is considered invalid if at least one of its attributes contains 'unknown'. For the attribute 'poutcome', the only valid values are 'failure' and 'success'.
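The filtering logic of step (3) is plain SQL. Since a live Spark session is not assumed here, the sketch below runs the same WHERE clause through sqlite3 on a three-attribute toy table (the real df2 has 13 attributes); in the notebook you would register df2 as a temp view and pass the query to spark.sql(...).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE df2 (job TEXT, education TEXT, poutcome TEXT)")
conn.executemany("INSERT INTO df2 VALUES (?, ?, ?)", [
    ("admin",    "primary",   "success"),
    ("unknown",  "tertiary",  "failure"),   # invalid: job is 'unknown'
    ("services", "secondary", "other"),     # invalid: poutcome not failure/success
    ("manager",  "tertiary",  "success"),
])

# Keep only rows with no 'unknown' attribute and a valid poutcome.
rows = conn.execute("""
    SELECT * FROM df2
    WHERE job <> 'unknown' AND education <> 'unknown'
      AND poutcome IN ('failure', 'success')
""").fetchall()
```

In the real notebook the WHERE clause would list the inequality check for every string attribute of df2, not just the three shown.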
(4) Convert all categorical attributes in df2 to numerical attributes using one-hot encoding, then apply min-max normalisation to each attribute.
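The two transformations of step (4) can be illustrated in pandas on a toy frame; in the Spark pipeline the corresponding tools are StringIndexer/OneHotEncoder and MinMaxScaler from pyspark.ml.feature. The column names below are assumptions.

```python
import pandas as pd

# Toy frame mixing one categorical and one numeric attribute (stand-in for df2).
df2 = pd.DataFrame({"marital": ["married", "single", "divorced", "single"],
                    "balance": [100, 300, 200, 500]})

# One-hot encoding: each category becomes its own 0/1 column.
encoded = pd.get_dummies(df2, columns=["marital"], dtype=float)

# Min-max normalisation: (x - min) / (max - min), column by column.
normalised = (encoded - encoded.min()) / (encoded.max() - encoded.min())
```

After one-hot encoding, every column is already in [0, 1] except the numeric ones, which min-max scaling maps onto the same range so that no attribute dominates the distance-based methods in step (5).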
(5) Perform unsupervised learning on df2, including k-means and PCA. For k-means, you can use the whole of df2 as both the training and testing data, and evaluate the clustering result using accuracy. For PCA, you can generate a scatter plot of the first two components to investigate the data distribution.
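To make the clustering step concrete, here is a minimal one-dimensional k-means written from scratch on toy numbers; the notebook itself would use pyspark.ml.clustering.KMeans (and pyspark.ml.feature.PCA) on the normalised df2, but the alternate-assign-and-update loop is the same algorithm.

```python
def kmeans_1d(xs, k=2, iters=20):
    # Deterministic init: pick k values spread across the sorted data.
    centers = sorted(xs)[:: max(1, len(xs) // k)][:k]
    for _ in range(iters):
        # Assignment step: each point joins its nearest center's cluster.
        clusters = [[] for _ in range(k)]
        for x in xs:
            i = min(range(k), key=lambda j: abs(x - centers[j]))
            clusters[i].append(x)
        # Update step: move each center to the mean of its cluster.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    labels = [min(range(k), key=lambda j: abs(x - centers[j])) for x in xs]
    return centers, labels

centers, labels = kmeans_1d([1.0, 1.2, 0.9, 8.0, 8.3, 7.9], k=2)
```

On this toy input the two clear groups around 1 and 8 are recovered; evaluating "accuracy" then means matching cluster labels against the true deposit labels under the best label permutation.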
(6) Perform supervised learning on df2, including Logistic Regression, Decision Tree and Naive Bayes. For the three classification methods, you can use 70% of df2 as the training data and the remaining 30% as the testing data, and evaluate their prediction performance using accuracy.
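The evaluation protocol of step (6), independent of any particular classifier, looks like this in plain Python; in the notebook, df.randomSplit([0.7, 0.3]) and MulticlassClassificationEvaluator from pyspark.ml play these roles. The majority-class "model" below is only a placeholder baseline, not one of the three required classifiers.

```python
import random

# Toy (feature, label) pairs standing in for the rows of df2.
data = [(x, int(x > 5)) for x in range(10)]

# 70/30 train/test split after a seeded shuffle.
random.seed(42)
random.shuffle(data)
cut = int(0.7 * len(data))
train, test = data[:cut], data[cut:]

# Placeholder "model": predict the majority class seen in training.
majority = max({0, 1}, key=lambda c: sum(1 for _, y in train if y == c))

# Accuracy = fraction of test rows predicted correctly.
accuracy = sum(1 for _, y in test if y == majority) / len(test)
```

Any real classifier beats this baseline only if it exceeds the majority-class accuracy, which is a useful sanity check when comparing the three methods' scores.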
Case Study Report
Based on your IPython notebook results, you are required to write a case study report of 500-1000 words, which should include the following information:
(1) The data attribute distribution
(2) The methods/algorithms you used for data wrangling and processing
(3) The performance of both unsupervised and supervised learning on the data
(4) The important features which affect the objective ('yes' in 'deposit') [Hint: you can refer to the coefficients generated by the Logistic Regression]
(5) Discuss the possible reasons for obtaining these analysis results and how to improve them
(6) Describe the group activities, such as the task distribution among group members and what you learnt during this project.
*Note: Only need ASSESSMENT TASK TWO
Attachment:- Modern Data Science.rar