K-nearest neighbor for text classification

Assignment Help:

Assignment 2: K-nearest neighbor for text classification.

The goal of text classification is to identify the topic of a piece of text (a news article, a blog post, etc.). Text classification has obvious utility in the age of information overload, and it has become popular turf for applying machine learning algorithms. In this project, you will have the opportunity to implement k-nearest neighbor and apply it to text classification on the well-known Reuters news collection.

1. Download the dataset from my website. It was created from the original collection and contains a training file, a test file, the list of topics, and the format of the train/test files.

2. Implement the k-nearest neighbor algorithm for text classification. Your goal is to predict the topic of each news article in the test set. Try the following distance and similarity measures, each with its corresponding representation.

a. Hamming distance: each document is represented as a Boolean vector, where each bit indicates whether the corresponding word appears in the document (a sketch follows step 4 below).

b. Euclidean distance: each document is represented as a numeric vector, where each entry is the number of times the corresponding word appears in the document (it could be zero); see the sketches after step 4.

c. Cosine similarity with TF-IDF weights (a popular metric in information retrieval): each document is represented by a numeric vector as in (b), except that each entry is the TF-IDF weight of the corresponding word (defined in step 3). The similarity between two documents is the dot product of their vectors divided by the product of their norms; a sketch follows step 4.

3. Let w be a word, d be a document, and N(d,w) the number of occurrences of w in d (i.e., the entry in the vector in (b)). TF stands for term frequency, and TF(d,w) = N(d,w)/W(d), where W(d) is the total number of words in d. IDF stands for inverse document frequency, and IDF(w) = log(D/C(w)), where D is the total number of documents and C(w) is the number of documents that contain the word w; the base of the logarithm is irrelevant, so you can use e or 2. The TF-IDF weight of w in d is TF(d,w)*IDF(w); this is the number you should put in the vector in (c). TF-IDF is a clever heuristic that accounts for the "information content" each word conveys, so that frequent words like "the" are discounted and document-specific words are amplified. You can find more details online or in a standard IR textbook.

4. You should try k = 1, k = 3, and k = 5 with each of the representations above. Note that with a distance measure, the k nearest neighbors are the training documents with the smallest distance from the test point, whereas with a similarity measure, they are the ones with the highest similarity scores. Illustrative sketches of each representation and of the prediction step follow.
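
The sketches below show one possible shape for these pieces; they are not prescribed by the assignment, and names such as bool_vector, vocab, and doc_tokens are hypothetical. Each sketch assumes a document has already been tokenized into a list of words, with vocab the list of distinct words in the training set. First, the Boolean representation and Hamming distance for (a):

    def bool_vector(doc_tokens, vocab):
        """Boolean vector: one entry per vocabulary word, True if the word occurs."""
        present = set(doc_tokens)
        return [w in present for w in vocab]

    def hamming(u, v):
        """Hamming distance: the number of positions where two vectors disagree."""
        return sum(a != b for a, b in zip(u, v))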
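
Under the same assumptions, the count representation and Euclidean distance for (b):

    import math
    from collections import Counter

    def count_vector(doc_tokens, vocab):
        """Count vector: how many times each vocabulary word occurs (possibly zero)."""
        counts = Counter(doc_tokens)
        return [counts[w] for w in vocab]

    def euclidean(u, v):
        """Euclidean distance between two count vectors."""
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))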
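
For (c), a sketch of the TF-IDF weighting from step 3 together with cosine similarity. Here doc_freq (a map from each word w to C(w)) and num_docs (D) are assumed to have been computed over the training set beforehand; as a further assumption, words with C(w) = 0 are simply given zero weight:

    import math
    from collections import Counter

    def tfidf_vector(doc_tokens, vocab, doc_freq, num_docs):
        """TF-IDF vector per step 3: the entry for word w is TF(d,w) * IDF(w)."""
        counts = Counter(doc_tokens)
        total = len(doc_tokens)  # W(d), the total number of words in d
        vec = []
        for w in vocab:
            tf = counts[w] / total if total else 0.0
            # IDF(w) = log(D / C(w)); unseen words get zero weight (assumption)
            idf = math.log(num_docs / doc_freq[w]) if doc_freq.get(w) else 0.0
            vec.append(tf * idf)
        return vec

    def cosine(u, v):
        """Cosine similarity: dot product divided by the product of the norms."""
        dot = sum(a * b for a, b in zip(u, v))
        norm_u = math.sqrt(sum(a * a for a in u))
        norm_v = math.sqrt(sum(b * b for b in v))
        return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0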
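
Finally, a sketch of the prediction step itself, covering both cases in step 4: pass larger_is_better=False for the distance measures (Hamming, Euclidean) and larger_is_better=True for cosine similarity. Ties in the vote are broken arbitrarily here:

    from collections import Counter

    def knn_predict(test_vec, train_vecs, train_labels, k, score, larger_is_better):
        """Score the test document against every training document, keep the
        k best, and majority-vote over their topics."""
        scored = [(score(test_vec, vec), label)
                  for vec, label in zip(train_vecs, train_labels)]
        # Smallest distances first, or largest similarities first
        scored.sort(key=lambda pair: pair[0], reverse=larger_is_better)
        top_k = [label for _, label in scored[:k]]
        return Counter(top_k).most_common(1)[0][0]

    # Example: predict with k = 3 under cosine similarity
    # topic = knn_predict(x, train_vecs, train_labels, k=3,
    #                     score=cosine, larger_is_better=True)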
