Reference no: EM133765428, Length: 1,500 words
Software Practice for Big Data Analytics - Master of Data Analytics (MDA)
Assessment Title - Perform Data Analytics on Real-World Problems Using Amazon Web Services.
Purpose of the assessment - Select the tools in the chosen software stack to design and program the big data analytics platform;
Relate the concept and use of visualisation to big data analytics;
Develop and appraise big data platforms for predictive analytics in complex real-world domains.
Description
In this group assignment, you will explore various aspects of big data analysis and manipulation using the Hadoop ecosystem, focusing on Pig Latin and Hive Query Language (HiveQL). The primary objective is to gain practical experience in processing large-scale data sets and understanding how data trends change over time. You will work with two distinct data sets: patent data covering patent applications and grants, and sales data covering product sale transactions.
For the first part of the assignment, you will use patent data from the United States (US) and abroad to analyze the total patent applications filed and granted each year. This will involve uploading the patent file, creating directories on a Hadoop cluster, and running Pig Latin programs to count the total number of patent applications filed and granted yearly. This analysis will help you develop analytics reports on patent data spanning several decades.
In the second part of the assignment, you will work with sales data using HiveQL commands. The primary goal here is to analyze product sale transactions by performing various data operations: uploading files to HDFS, joining multiple data sets, grouping data, and writing queries that list customer, product and supplier details. You will also filter products by price and calculate the average price of all products.
By the end of this assignment, you will have acquired hands-on experience in leveraging the Hadoop ecosystem to analyze and manipulate large-scale data sets efficiently. This will serve as a foundation for further exploration in the field of big data analysis and processing.
Your Tasks
To complete Assignment 2, which comprises two main parts, your team will follow the steps outlined in the two questions below to perform data processing and analysis tasks using the Hadoop ecosystem, Pig Latin, and HiveQL. The primary focus is working with data sets related to patent data and sales data, allowing for hands-on experience in managing and processing large-scale information efficiently.
Part I: Download the US_patent.csv data from the Assignment 2 folder on Moodle. It is a comma-separated values (CSV) file listing US patents applied for and granted between 1965 and 2020. The column "Total Patent Applications" presents the total applications filed each year, and "Total Patent Grants" contains the total applications granted.
For Part I, use Pig Latin commands and Tableau to perform the following tasks:
Upload the file to HDFS.
Create a directory on the cluster named Patent.
Load the US_patent.csv file into the new directory.
Write a Pig program to find the total number of patents applied for each year.
Write a Pig program to find the total number of patents granted each year (a sketch of these steps appears after this list).
Observe and compare the application rate and grant rate between 1965 and 2020.
Using Tableau, visualize the results in a suitable manner; choose the format that you find most appropriate.
In 350 words, summarize your understanding of the changing trends and acceptance rates of patent applications.
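The following is a minimal sketch of the HDFS setup and the two Pig programs. The relative HDFS paths and the three-column layout of US_patent.csv (year, total applications, total grants) are assumptions based on the brief; check the actual file header and adjust the schema accordingly.

    # From a terminal on the cluster: create the Patent directory and upload the file.
    hdfs dfs -mkdir Patent
    hdfs dfs -put US_patent.csv Patent

    -- patent_counts.pig: count applications and grants per year.
    raw = LOAD 'Patent/US_patent.csv' USING PigStorage(',')
          AS (year:chararray, applications:long, grants:long);
    -- Drop the header row: keep only records whose year field is a four-digit number.
    data = FILTER raw BY year MATCHES '[0-9]{4}';
    -- Group by year and sum, so the counts hold even if a year spans several rows.
    by_year = GROUP data BY year;
    applied = FOREACH by_year GENERATE group AS year, SUM(data.applications) AS total_applied;
    granted = FOREACH by_year GENERATE group AS year, SUM(data.grants) AS total_granted;
    -- Write the results back to HDFS so they can be exported to Tableau.
    STORE applied INTO 'Patent/applied_by_year' USING PigStorage(',');
    STORE granted INTO 'Patent/granted_by_year' USING PigStorage(',');

The stored outputs can be pulled down with hdfs dfs -get and opened in Tableau as text files for the visualization task.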
Part II: [50 Marks] Download the saledata.zip data from the Assignment 2 folder on Moodle. This file is compressed; once you unzip it, you will find five tab-delimited text files:
customer (which contains information about customer ID, name and zip code)
product (which contains information about product ID, name, price, supplier ID and product category)
product sales (which contains information about product ID, the corresponding sale transaction ID and the number of items sold)
sales (which contains information about sale ID, customer ID, store ID and transaction year)
supplier (which contains information about supplier ID and supplier name)
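Each file maps naturally onto one Hive table. The DDL below is a minimal sketch of one possible schema; the column names and types are assumptions, since the brief describes the fields only informally.

    -- Create the database and one table per file; '\t' matches the tab-delimited format.
    CREATE DATABASE IF NOT EXISTS sales_db;
    USE sales_db;
    CREATE TABLE customer (customer_id INT, customer_name STRING, zip_code STRING)
        ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';
    CREATE TABLE product (product_id INT, product_name STRING, product_price DOUBLE,
        supplier_id INT, product_category STRING)
        ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';
    CREATE TABLE product_sales (product_id INT, sale_id INT, items_sold INT)
        ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';
    CREATE TABLE sales (sale_id INT, customer_id INT, store_id INT, sale_year INT)
        ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';
    CREATE TABLE supplier (supplier_id INT, supplier_name STRING)
        ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';
    -- Load each uploaded file from the Sales directory; note that LOAD DATA INPATH
    -- moves the file. The file name below is an assumption; repeat for the other tables.
    LOAD DATA INPATH 'Sales/customer.txt' INTO TABLE customer;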
For Part II, use HiveQL commands to perform the following operations:
Upload the five files to HDFS.
Create a directory on the cluster named Sales.
Create a database named sales_db and create tables into which the above files can be loaded (see the DDL sketch above).
Display the supplier ID and supplier name for all suppliers.
Display the customer name and zip code for all customers.
Display the product ID, product name, product price, and supplier name for all products.
Filter and display the product ID, product name, and product price for products priced at $500 or lower.
Filter and display the customer name and sale year for sales in which a customer bought more than two products.
Find the top product (product ID and product name) ranked by product price.
Calculate the average price of all products. (A HiveQL sketch covering several of these operations follows this list.)
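The sketch below covers several of these operations in HiveQL, using the assumed table and column names from the DDL above. Reading "more than two products" as the total number of items in a sale is also an assumption; adjust the HAVING clause if a distinct-product count is intended.

    -- Product details joined with supplier names.
    SELECT p.product_id, p.product_name, p.product_price, s.supplier_name
    FROM product p JOIN supplier s ON p.supplier_id = s.supplier_id;

    -- Products priced at $500 or lower.
    SELECT product_id, product_name, product_price
    FROM product WHERE product_price <= 500;

    -- Customers who bought more than two products in a single sale.
    SELECT c.customer_name, sa.sale_year
    FROM sales sa
    JOIN customer c ON sa.customer_id = c.customer_id
    JOIN product_sales ps ON sa.sale_id = ps.sale_id
    GROUP BY sa.sale_id, c.customer_name, sa.sale_year
    HAVING SUM(ps.items_sold) > 2;

    -- Top product by price.
    SELECT product_id, product_name
    FROM product ORDER BY product_price DESC LIMIT 1;

    -- Average price of all products.
    SELECT AVG(product_price) AS avg_price FROM product;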
In 350 words, summarize your findings.