Class MarkovChain
public class MarkovChain extends Object implements Iterator<String>
TRAINING:
An example: Suppose we train the MarkovChain on these two Strings that represent tweets: "a table" and "A banana? A banana!"
We first "clean up" the tweets and parse them into individual sentences to use as training data. This process will remove punctuation and put all words into lower case, yielding these three sentences (written using OCaml list notation):
[ ["a"; "table"]; ["a"; "banana"]; ["a"; "banana"] ]
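The cleanup step can be sketched as follows (illustrative only; the assignment provides its own parsing utilities, and this regex-based approach is an assumption):

```java
import java.util.*;

// Sketch of the cleanup step: split a tweet into sentences, strip
// punctuation, and lowercase every word. The regexes here are an
// assumption, not the assignment's actual parsing code.
class CleanupSketch {
    static List<List<String>> sentences(String tweet) {
        List<List<String>> result = new ArrayList<>();
        for (String raw : tweet.split("[.!?]+")) {   // sentence boundaries
            String cleaned = raw.toLowerCase().replaceAll("[^a-z ]", " ").trim();
            if (cleaned.isEmpty()) {
                continue;
            }
            result.add(Arrays.asList(cleaned.split("\\s+")));
        }
        return result;
    }
}
```

With this sketch, `sentences("A banana? A banana!")` yields the two `["a"; "banana"]` sentences from the example above.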
The MarkovChain that results from this training data maps each observed string to a ProbabilityDistribution that is based on the recorded occurrences of bigrams (adjacent words) in the data:
- "a" maps to "table":1, "banana":2
- "table" maps to null:1
- "banana" maps to null:2
"a" is followed by "table" one time and "banana" twice, "table" is the end of a sentence once, and "banana" is the end of a sentence twice. NOTE: we use null to mark the end of a sentence.
The MarkovChain also records a ProbabilityDistribution that contains the frequencies with which words start any sentence. In this case, that startWords data will just say that "a" started 3 sentences.
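The counts above can be reproduced with plain nested maps standing in for ProbabilityDistribution (a sketch under that assumption, not the assignment's actual classes):

```java
import java.util.*;

// Sketch: count bigrams and start words for the example training data,
// using plain maps as a stand-in for ProbabilityDistribution. A null
// successor key marks the end of a sentence, as described above.
class BigramCounts {
    final Map<String, Map<String, Integer>> chain = new HashMap<>();
    final Map<String, Integer> startWords = new HashMap<>();

    void train(List<String> sentence) {
        if (sentence.isEmpty()) {
            return;
        }
        startWords.merge(sentence.get(0), 1, Integer::sum);
        for (int i = 0; i < sentence.size(); i++) {
            // successor is null for the last word of the sentence
            String next = (i + 1 < sentence.size()) ? sentence.get(i + 1) : null;
            chain.computeIfAbsent(sentence.get(i), k -> new HashMap<>())
                    .merge(next, 1, Integer::sum);
        }
    }
}
```

Training this sketch on the three example sentences leaves `chain.get("a")` holding table:1 and banana:2, and `startWords` holding a:3, matching the counts above.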
GENERATING A TWEET:
Once we have trained the Markov model, we can use it to generate a tweet. Given a desired length of tweet (in characters), we repeatedly generate sentences until the tweet is long enough.
To generate a sentence, we treat the MarkovChain as an iterator that maintains state about the current word (i.e. the one that will be generated by next()).
- the reset() method picks (at random) one of the startWords to be the current word. We use reset() to start a new sentence.
- the next() method picks (at random) a successor of the current word according to the current word's probability distribution. That successor will be the new "current" word after the current one is returned by next().
In the example above, reset() sets the current word to "a" (the only choice offered by startWords). Then:
- next() yields "a" (the start word) with probability 3/3.
- next() yields "table" with probability 1/3 and "banana" with probability 2/3.
After that, the iterator is finished (the current word will be null), since both "table" and "banana" appeared only at the end of sentences.
The random choices are determined by a NumberGenerator.
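The reset()/next() walk can be sketched with a deterministic chooser standing in for the NumberGenerator (an illustration of the iteration contract, not the class's real implementation):

```java
import java.util.*;

// Sketch of a walk: reset(start) makes `current` the word next() will
// return; hasNext() is true while `current` is non-null; next() returns
// `current` and advances. The NumberGenerator is replaced here by a
// chooser that always takes the first listed successor, so the walk is
// deterministic for illustration.
class WalkSketch {
    static List<String> sentence(Map<String, List<String>> successors, String start) {
        List<String> words = new ArrayList<>();
        String current = start;                      // reset(start)
        while (current != null) {                    // hasNext()
            words.add(current);                      // next() returns current...
            List<String> options = successors.get(current);
            current = (options == null || options.isEmpty())
                    ? null                           // end of sentence
                    : options.get(0);                // ...then advances
        }
        return words;
    }
}
```

With successors {"a": ["table"]} and "table" terminal, the walk visits "a" then "table" and stops; a null start yields an empty walk, mirroring the contract that hasNext() is false after reset(null).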
Field Summary
- (package private) Map<String,ProbabilityDistribution<String>> chain
- private NumberGenerator ng
- (package private) ProbabilityDistribution<String> startWords
Constructor Summary
- MarkovChain(): No need to write any constructors.
- MarkovChain(NumberGenerator ng): No need to write any constructors.
Method Summary
- (package private) void addBigram(String first, String second): Adds a bigram to the Markov Chain dictionary.
- void fixDistribution(List<String> words): Modifies all ProbabilityDistributions to output words in the order specified.
- void fixDistribution(List<String> words, boolean recordStart): Modifies all ProbabilityDistributions to output words in the order specified.
- (package private) ProbabilityDistribution<String> get(String token): Returns the ProbabilityDistribution for a given token.
- boolean hasNext(): Checks whether there is another word to retrieve from the Markov Chain, based on the current word of our walk.
- String next(): Returns either the most recent word passed to reset(), or a successor picked from the probability distribution of the word most recently returned.
- void reset(): DO NOT EDIT THIS METHOD.
- void reset(String start): Given a starting String, sets up the Iterator functionality such that the Markov Chain will begin a walk at start.
- String toString(): Use this method to print out Markov chains with words and probability distributions.
- void train(Iterator<String> sentence): Adds a sentence's training data to the MarkovChain frequency information.

Methods inherited from class java.lang.Object:
clone, equals, finalize, getClass, hashCode, notify, notifyAll, wait, wait, wait

Methods inherited from interface java.util.Iterator:
forEachRemaining, remove
Field Details
- ng: private NumberGenerator
- chain: (package private) Map<String,ProbabilityDistribution<String>>
- startWords: (package private) ProbabilityDistribution<String>
Constructor Details

MarkovChain
public MarkovChain()
No need to write any constructors. They are provided for you.

MarkovChain
public MarkovChain(NumberGenerator ng)
No need to write any constructors. They are provided for you.
Parameters:
ng - A (non-null) NumberGenerator used to walk through the MarkovChain
Method Details

addBigram
Adds a bigram to the Markov Chain dictionary. Note that the dictionary is a field called chain of type final Map<String, ProbabilityDistribution<String>>.
Parameters:
first - The first word of the Bigram (should not be null)
second - The second word of the Bigram
Throws:
IllegalArgumentException - if the first parameter is null.
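A possible shape for this method, using a minimal counting stand-in for ProbabilityDistribution (a sketch; `record` is a hypothetical method name, and the real class's API may differ):

```java
import java.util.*;

// Minimal counting stand-in for the course's ProbabilityDistribution.
// `record` is a hypothetical method name, not the assignment's real API.
class CountingPD<T> {
    final Map<T, Integer> counts = new HashMap<>();

    void record(T item) {
        counts.merge(item, 1, Integer::sum);
    }
}

class AddBigramSketch {
    final Map<String, CountingPD<String>> chain = new HashMap<>();

    void addBigram(String first, String second) {
        if (first == null) {
            throw new IllegalArgumentException("first word of a bigram must be non-null");
        }
        // `second` may be null: a (word, null) bigram marks a sentence ending
        chain.computeIfAbsent(first, k -> new CountingPD<>()).record(second);
    }
}
```

The null check mirrors the documented IllegalArgumentException, while a null second word is deliberately allowed so sentence endings can be recorded.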
train
Adds a sentence's training data to the MarkovChain frequency information. This method is meant to be called multiple times: each call provides an Iterator that represents one sentence. If we were to train a Markov Chain with four unique sentences, we would convert each sentence into an iterator and call train() four times, once on each of the iterators.
Look at the homework instructions for more details on bigrams. You should use addBigram() in this method.
Once you reach the last word of a sentence, add a bigram of that word and null. This will teach the Markov Chain that that word can be used to end a sentence.
Do nothing if the sentence is empty.
Parameters:
sentence - an iterator representing one sentence of training data
Throws:
IllegalArgumentException - if the sentence Iterator is null
get
Returns the ProbabilityDistribution for a given token. Returns null if none exists.
Parameters:
token - the token for which the ProbabilityDistribution is sought
Returns:
a ProbabilityDistribution or null
reset
Given a starting String, sets up the Iterator functionality such that: (1) the Markov Chain will begin a walk at start; (2) the first call to next() made after calling reset(start) will return start. If start is null, then hasNext() should return false. start need not actually be a part of the chain (but it should have no successor).
Parameters:
start - the element that will be the first word in a walk on the Markov Chain.
reset
public void reset()
DO NOT EDIT THIS METHOD. WE COMPLETED IT FOR YOU. Sets up the Iterator functionality with a random start word such that the MarkovChain will now move along a walk beginning with that start word.
The first call to next() after calling reset() will return the random start word selected by this call to reset().
hasNext
public boolean hasNext()
This method should check if there is another word to retrieve from the Markov Chain, based on the current word of our walk. Your solution should be very short.
next
Returns either:
1. the most recent word passed to reset(), if reset() was previously called, or
2. a successor picked from the probability distribution associated with the word most recently returned by next().
Specified by:
next in interface Iterator<String>
Returns:
the next word in the MarkovChain (chosen at random via the number generator if it is a successor)
Throws:
NoSuchElementException - if there are no more words on the walk through the chain.
fixDistribution
Modifies all ProbabilityDistributions to output words in the order specified.
Parameters:
words - an ordered list of words that the distributions should generate
Throws:
IllegalArgumentException - if fewer than one word is provided
fixDistribution
Modifies all ProbabilityDistributions to output words in the order specified.
Parameters:
words - an ordered list of words that the distributions should generate
recordStart - whether or not to pick the first word from `startWords`
Throws:
IllegalArgumentException - if fewer than one word is provided
toString
Use this method to print out Markov chains with words and probability distributions.