Assignment 2: K-nearest neighbor for text classification.

The goal of text classification is to identify the topic of a piece of text (a news article, a blog post, etc.). Text classification has obvious utility in the age of information overload, and it has become a popular proving ground for machine learning algorithms. In this project, you will implement k-nearest neighbor and apply it to text classification on the well-known Reuters news collection.

1. Download the dataset from my website. It was created from the original collection and contains a training file, a test file, the list of topics, and a description of the train/test file format.

2. Implement the k-nearest neighbor algorithm for text classification. Your goal is to predict the topic of each news article in the test set. Try the following distance or similarity measures with their corresponding representations (minimal sketches of all three appear after step 4).

a. Hamming distance: each document is represented as a Boolean vector, where each bit indicates whether the corresponding word appears in the document.

b. Euclidean distance: each document is represented as a numeric vector, where each number is how many times the corresponding word appears in the document (it may be zero).

c. Cosine similarity with TF-IDF weights (a popular measure in information retrieval): each document is represented by a numeric vector as in (b), except that each number is now the TF-IDF weight of the corresponding word (as defined below). The similarity between two documents is the dot product of their vectors divided by the product of their norms.

3. Let w be a word, d be a document, and N(d,w) the number of occurrences of w in d (i.e., the count stored in the vector in (b)). TF stands for term frequency: TF(d,w) = N(d,w)/W(d), where W(d) is the total number of words in d. IDF stands for inverse document frequency: IDF(w) = log(D/C(w)), where D is the total number of documents and C(w) is the number of documents that contain the word w; the base of the logarithm does not matter, so you may use e or 2. The TF-IDF weight of w in d is TF(d,w)*IDF(w); this is the number you should put in the vector in (c). TF-IDF is a heuristic that accounts for the "information content" each word conveys, so that frequent words such as "the" are discounted and document-specific words are amplified. You can find more details online or in any standard IR textbook. (A sketch of this computation also appears after step 4.)

4. Try k = 1, k = 3, and k = 5 with each of the representations above. Note that with a distance measure, the k nearest neighbors are the training documents with the smallest distance to the test point, whereas with a similarity measure, they are the ones with the highest similarity scores. (See the prediction sketch below.)
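
For concreteness, here is a minimal sketch of the three representations and their distance/similarity measures from step 2. Python is an assumed choice (the assignment does not prescribe a language), and the tokenized-document input, the vocabulary helper, and all function names are illustrative, not part of the dataset format.

import math
from collections import Counter

def build_vocab(documents):
    # Assign every word seen in the training corpus a fixed vector index.
    vocab = {}
    for doc in documents:
        for word in doc:
            vocab.setdefault(word, len(vocab))
    return vocab

def boolean_vector(doc, vocab):
    # (a) One bit per vocabulary word: 1 if the word appears in doc.
    vec = [0] * len(vocab)
    for word in set(doc):
        if word in vocab:
            vec[vocab[word]] = 1
    return vec

def count_vector(doc, vocab):
    # (b) One count per vocabulary word: N(d, w), possibly zero.
    vec = [0] * len(vocab)
    for word, n in Counter(doc).items():
        if word in vocab:
            vec[vocab[word]] = n
    return vec

def hamming_distance(u, v):
    # (a) Number of positions at which two Boolean vectors differ.
    return sum(1 for a, b in zip(u, v) if a != b)

def euclidean_distance(u, v):
    # (b) Square root of the sum of squared coordinate differences.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def cosine_similarity(u, v):
    # (c) Dot product divided by the product of the two norms.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

Here each document is assumed to be a list of word tokens; the vocabulary would be built from the training file, and words that appear only in a test document are simply ignored.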
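
The TF-IDF weighting defined in step 3 can be sketched in the same way. It reuses the tokenized-document assumption and the vocabulary above; the base-2 logarithm is an arbitrary choice, since the assignment allows e or 2.

import math
from collections import Counter

def idf_table(documents, vocab):
    # IDF(w) = log(D / C(w)), where D is the number of documents and
    # C(w) is the number of documents that contain w.
    D = len(documents)
    doc_freq = Counter()
    for doc in documents:
        doc_freq.update(set(doc))
    return {w: math.log2(D / doc_freq[w]) for w in vocab if doc_freq[w] > 0}

def tfidf_vector(doc, vocab, idf):
    # TF(d, w) = N(d, w) / W(d); the stored weight is TF(d, w) * IDF(w).
    W_d = len(doc) or 1  # W(d): total words in d (guard against empty documents)
    vec = [0.0] * len(vocab)
    for word, n in Counter(doc).items():
        if word in vocab and word in idf:
            vec[vocab[word]] = (n / W_d) * idf[word]
    return vec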
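
Finally, a sketch of the prediction step itself: score a test document against every training document, keep the k nearest (or most similar) ones, and take a majority vote over their topics. The larger_is_better flag and breaking ties by raw vote count are illustrative choices, not requirements of the assignment.

from collections import Counter

def knn_predict(test_vec, train_vecs, train_topics, k, score, larger_is_better):
    # Score the test document against every training document.
    scored = [(score(test_vec, v), topic)
              for v, topic in zip(train_vecs, train_topics)]
    # With a similarity (cosine), keep the highest scores;
    # with a distance (Hamming, Euclidean), keep the lowest.
    scored.sort(key=lambda pair: pair[0], reverse=larger_is_better)
    votes = Counter(topic for _, topic in scored[:k])
    return votes.most_common(1)[0][0]

# Example usage with cosine similarity and k = 3 (assumes the helpers sketched above):
# predicted = knn_predict(tfidf_vector(test_doc, vocab, idf),
#                         train_tfidf_vecs, train_topics,
#                         k=3, score=cosine_similarity, larger_is_better=True)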
