Homophone disambiguation: unigram and bigram models

The BBC hosts a homophone quiz on its website; your task for this lab is to develop an automatic method for completing the quiz, with the aim being to get as high a score as possible. Since this is the fourth lab, and you have no doubt become language processing experts, we have chosen the advanced quiz!

The questions are as follows (the blank indicates the position of the disputed word, and the words appearing in brackets at the end are the possible options).

1. I don't know ___ to go out or not (weather/whether)

2. Houses were being built on this ___ (site/sight)

3. We went ___ the door to get inside (through/threw)

4. I really want a ___ car (new/knew)

5. They all had a ___ of the cake (piece/peace)

6. She had to go to ___ to prove she was innocent (caught/court)

7. We were only ___ to visit at certain times (allowed/aloud)

8. We had to ___ a car while we were on holiday (hire/higher)

9. Tip the jug and ___ lots of cream on the strawberries (poor/pour/paw)

10. She went back to ___ she had locked the door (check/cheque)

For the purposes of this exercise, you will use a simple language model to estimate the probability that each of the candidate words is correct. To do this you will need to compute the frequency of each word in a large corpus. You will also need counts of all bigrams (two-word sequences) in the corpus. Using nltk, this is straightforward. For this lab we will use the entire Brown corpus to obtain these counts.
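The counting step can be sketched as follows. This is a minimal illustration using a tiny toy word list in place of the real corpus; in the lab itself you would obtain the word sequence from nltk.corpus.brown.words() (after downloading the Brown corpus with nltk).

```python
from collections import Counter

# Toy stand-in for the corpus; for the lab, replace this with
# nltk.corpus.brown.words() once the Brown corpus is downloaded.
corpus = "we went through the door to get inside the house".split()

# Unigram counts: how often each word occurs.
unigram_counts = Counter(corpus)

# Bigram counts: how often each two-word sequence occurs.
bigram_counts = Counter(zip(corpus, corpus[1:]))

print(unigram_counts["the"])           # 2
print(bigram_counts[("the", "door")])  # 1
```

Pairing the word list with itself shifted by one (via zip) enumerates every adjacent pair, so no extra loop is needed to collect the bigrams.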

1 Unigram Model

The first part of this lab is to use a very simple model to select the word which goes in the blank: simply pick the candidate that is most frequent in the corpus (using the unigram counts described above). You should write a Python program to read in the sample sentences provided.

Your program should then output for every sentence the candidate word it thinks should go in the blank.
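The selection rule above can be sketched in a few lines. The function name unigram_choice and the counts below are illustrative only, not real Brown-corpus figures:

```python
from collections import Counter

def unigram_choice(candidates, unigram_counts):
    # Return the candidate with the highest corpus frequency;
    # a word unseen in the corpus gets a count of 0 from the Counter.
    return max(candidates, key=lambda w: unigram_counts[w])

# Made-up counts for illustration, not real Brown-corpus figures.
counts = Counter({"through": 500, "threw": 40, "new": 1200, "knew": 300})
print(unigram_choice(["through", "threw"], counts))  # through
print(unigram_choice(["new", "knew"], counts))       # new
```

Note that this model ignores the sentence entirely: it gives the same answer for a homophone pair no matter what the surrounding words are.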

2 Bigram Model

The second method you should attempt is to make use of the bigram counts to determine which of the potential candidates makes the whole sentence more probable (i.e. you should develop a basic language model). By the chain rule, the probability of a sequence of words w1, w2, w3, ..., wn is given by:

$P(w_1, w_2, \ldots, w_n) = \prod_{i=1}^{n} P(w_i \mid w_1, \ldots, w_{i-1})$

When using a bigram language model, we approximate each conditional probability using only the previous word:

$P(w_i \mid w_1, \ldots, w_{i-1}) \approx P(w_i \mid w_{i-1}), \qquad \text{so} \qquad P(w_1, \ldots, w_n) \approx P(w_1) \prod_{i=2}^{n} P(w_i \mid w_{i-1})$

You should think about the entire calculation you need to make, and which parts of it are common to all possible choices in the blank space for the homophone disambiguation task.

We estimate the bigram probabilities in the equation above using counts from a large corpus.

The standard way to estimate bigram probabilities is:

$P(w_i \mid w_{i-1}) = \frac{C(w_{i-1}, w_i)}{C(w_{i-1})}$

where $C(w_{i-1}, w_i)$ is the number of times the bigram occurs in the corpus and $C(w_{i-1})$ is the number of times the first word occurs.
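Putting this estimate into code, and exploiting the observation above that only the two bigrams touching the blank differ between candidates, might look like the following sketch (the function names and the counts are illustrative, not part of the handout):

```python
from collections import Counter

def bigram_prob(w1, w2, bigram_counts, unigram_counts):
    # Maximum-likelihood estimate P(w2 | w1) = C(w1, w2) / C(w1).
    if unigram_counts[w1] == 0:
        return 0.0
    return bigram_counts[(w1, w2)] / unigram_counts[w1]

def bigram_choice(prev_word, next_word, candidates,
                  bigram_counts, unigram_counts):
    # Every other bigram in the sentence is identical across candidates,
    # so comparing P(cand | prev) * P(next | cand) is enough.
    return max(candidates,
               key=lambda c:
                   bigram_prob(prev_word, c, bigram_counts, unigram_counts)
                   * bigram_prob(c, next_word, bigram_counts, unigram_counts))

# Made-up counts for illustration, not real Brown-corpus figures.
uni = Counter({"went": 50, "through": 30, "threw": 10, "the": 500})
bi = Counter({("went", "through"): 8, ("through", "the"): 20,
              ("threw", "the"): 3})
print(bigram_choice("went", "the", ["through", "threw"], bi, uni))  # through
```

Here "threw" loses because the bigram ("went", "threw") never occurs, so its whole product is zero; this is exactly the weakness that smoothing (section 3) addresses.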

3 Smoothing

Results for the task can be improved using smoothing. Implement the "plus-one bigram smoothing" that was described in lecture. The bigram probabilities are estimated as:

$P(w_i \mid w_{i-1}) = \frac{C(w_{i-1}, w_i) + 1}{C(w_{i-1}) + V}$

where V is the number of distinct words in the training corpus (i.e. the number of word types).
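The smoothed estimate is a one-line change to the unsmoothed version (again a sketch with illustrative names and made-up counts):

```python
from collections import Counter

def smoothed_bigram_prob(w1, w2, bigram_counts, unigram_counts, V):
    # Plus-one (Laplace) smoothing: (C(w1, w2) + 1) / (C(w1) + V).
    # An unseen bigram now gets a small non-zero probability instead of 0,
    # so no candidate's sentence score collapses to zero.
    return (bigram_counts[(w1, w2)] + 1) / (unigram_counts[w1] + V)

# Made-up counts; V is the number of distinct word types in the corpus.
uni = Counter({"went": 50})
bi = Counter({("went", "through"): 8})
V = 1000
print(smoothed_bigram_prob("went", "through", bi, uni, V))  # (8+1)/(50+1000)
print(smoothed_bigram_prob("went", "threw", bi, uni, V))    # (0+1)/(50+1000)
```

Note that the denominator no longer needs a zero check: it is at least V even for a word that never occurs in the corpus.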

4 Hand-in

Hand in four files:

1. A Python program called lab4a.py that reads a file of sentences (in the format of the supplied test file) on standard input and writes one word per line on standard output. The word on the k-th line should be the homophone, from the options at the end of the k-th input sentence, that the unigram model (section 1 above) predicts as most probable to fill the blank in that sentence.

2. A Python program called lab4b.py which is the same as lab4a.py, except that the words proposed should be the homophones deemed most probable by the bigram model (section 2 above).

3. A Python program called lab4c.py which is the same as lab4b.py, except that the words proposed should be the homophones deemed most probable by the bigram model with plus-one smoothing (section 3 above).

4. A brief report (maximum 1 side of A4; half a side is fine) called lab4-report (.doc or .pdf) that:

- Describes how your programs work and reports the result for each.

- Discusses why you get the results you get.

Posted Date: 3/1/2013 5:57:49 AM | Location : United States






