K-nearest neighbor for text classification, Computer Engineering

Assignment Help:

Assignment 2: K-nearest neighbor for text classification.

The goal of text classification is to identify the topic of a piece of text (a news article, a blog post, etc.). Text classification has obvious utility in the age of information overload, and it has become a popular turf for applying machine learning algorithms. In this project, you will have the opportunity to implement the k-nearest neighbor algorithm and apply it to text classification on the well-known Reuters news collection.

1. Download the dataset from my website; it was created from the original collection and contains a training file, a test file, the list of topics, and the format of the train/test files.

2. Implement the k-nearest neighbor algorithm for text classification. Your goal is to predict the topic of each news article in the test set. Try the following distance or similarity measures with their corresponding representations (a minimal sketch of each measure appears after this list).

a. Hamming distance: each document is represented as a Boolean vector, where each bit indicates whether the corresponding word appears in the document.

b. Euclidean distance: each document is represented as a numeric vector, where each entry is the number of times the corresponding word appears in the document (it could be zero).

c. Cosine similarity with TF-IDF weights (a popular measure in information retrieval): each document is represented by a numeric vector as in (b), except that each entry is now the TF-IDF weight for the corresponding word (as defined below). The similarity between two documents is the dot product of their vectors divided by the product of their norms.

3. Let w be a word, d be a document, and N(d,w) be the number of occurrences of w in d (i.e., the entry in the vector in (b)). TF stands for term frequency: TF(d,w) = N(d,w)/W(d), where W(d) is the total number of words in d. IDF stands for inverse document frequency: IDF(w) = log(D/C(w)), where D is the total number of documents and C(w) is the number of documents that contain the word w; the base of the logarithm is irrelevant, so you can use e or 2. The TF-IDF weight for w in d is TF(d,w)*IDF(w); this is the entry you should put in the vector in (c). TF-IDF is a clever heuristic that takes into account the "information content" each word conveys, so that frequent words like "the" are discounted and document-specific words are amplified. You can find more details online or in a standard IR textbook; a small worked sketch of the weighting also follows this list.

4. You should try k = 1, k = 3, and k = 5 with each of the representations above. Notice that with a distance measure, the k nearest neighbors are the ones with the smallest distance from the test point, whereas with a similarity measure, they are the ones with the highest similarity scores; the prediction sketch below handles both cases.
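The following is a minimal sketch of the three representations and their distance/similarity measures. It assumes each document has already been tokenized into a list of lowercase words and that the vocabulary is built from the training set; the function names (build_vocabulary, boolean_vector, count_vector, and so on) are illustrative, not part of the provided dataset format.

import math
from collections import Counter

def build_vocabulary(tokenized_docs):
    # Map each word seen in the training documents to a fixed vector index.
    vocab = {}
    for doc in tokenized_docs:
        for word in doc:
            vocab.setdefault(word, len(vocab))
    return vocab

def boolean_vector(doc, vocab):
    # Representation (a): 1 if the word occurs in the document, 0 otherwise.
    present = set(doc)
    return [1 if word in present else 0 for word in vocab]

def count_vector(doc, vocab):
    # Representation (b): number of occurrences of each vocabulary word.
    counts = Counter(doc)
    return [counts.get(word, 0) for word in vocab]

def hamming_distance(u, v):
    # Number of positions at which two Boolean vectors differ.
    return sum(1 for a, b in zip(u, v) if a != b)

def euclidean_distance(u, v):
    # Straight-line distance between two numeric vectors.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def cosine_similarity(u, v):
    # Dot product divided by the product of the norms (0 if either vector is all zeros).
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0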
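The TF-IDF weighting from step 3 can be sketched as follows, under the same assumption that documents are lists of tokens; log base 2 is used here only because the base does not matter.

import math
from collections import Counter

def document_frequencies(tokenized_docs):
    # C(w): the number of training documents that contain each word w.
    df = Counter()
    for doc in tokenized_docs:
        df.update(set(doc))
    return df

def tf_idf_vector(doc, vocab, df, num_docs):
    # TF(d,w) = N(d,w)/W(d), IDF(w) = log(D/C(w)), weight = TF * IDF.
    counts = Counter(doc)
    total_words = len(doc)
    vector = []
    for word in vocab:
        tf = counts.get(word, 0) / total_words if total_words else 0.0
        idf = math.log2(num_docs / df[word]) if df.get(word) else 0.0
        vector.append(tf * idf)
    return vector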
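Finally, a sketch of the prediction step for k = 1, 3, and 5: rank the training documents by distance (ascending) or similarity (descending), then take a majority vote over the topics of the top k. The helper reuses the measures from the sketches above, and the argument names are illustrative.

from collections import Counter

def knn_predict(test_vector, train_vectors, train_topics, k, measure, larger_is_closer):
    # Predict the topic of one test document by majority vote over its k nearest neighbors.
    # measure: one of hamming_distance, euclidean_distance, or cosine_similarity.
    # larger_is_closer: False for the two distances, True for cosine similarity.
    scores = [measure(test_vector, train_vector) for train_vector in train_vectors]
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=larger_is_closer)
    top_k_topics = [train_topics[i] for i in order[:k]]
    return Counter(top_k_topics).most_common(1)[0][0]

# Example call (cosine similarity with TF-IDF vectors, k = 3):
# predicted_topic = knn_predict(test_vec, train_vecs, train_topics, k=3,
#                               measure=cosine_similarity, larger_is_closer=True)

Ties in the vote are broken by whichever topic appears first among the k neighbors; you may prefer a different tie-breaking rule.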

 

 

