Add the contribution of a new training sequence to the emissions

Arguments:

o emission_counts -- A dictionary of the current counts for the emissions

o training_seq -- The training sequence we are working with

o forward_vars -- Probabilities calculated using the forward algorithm.

o backward_vars -- Probabilities calculated using the backward algorithm.

o training_seq_prob -- The probability of the current sequence.

This calculates E_{k}(b) (the estimated emission probability for emission letter b from state k) using formula 3.21 in Durbin et al.

Definition at line 299 of file Trainer.py.

    def update_emissions(self, emission_counts, training_seq, forward_vars,
                         backward_vars, training_seq_prob):
        """Add the contribution of a new training sequence to the emissions

        Arguments:

        o emission_counts -- A dictionary of the current counts for the
        emissions

        o training_seq -- The training sequence we are working with

        o forward_vars -- Probabilities calculated using the forward algorithm.

        o backward_vars -- Probabilities calculated using the backward
        algorithm.

        o training_seq_prob -- The probability of the current sequence.

        This calculates E_{k}(b) (the estimated emission probability for
        emission letter b from state k) using formula 3.21 in Durbin et al.
        """
        # loop over the possible combinations of state path letters
        for k in training_seq.states.alphabet.letters:
            # now loop over all of the possible emissions
            for b in training_seq.emissions.alphabet.letters:
                expected_times = 0
                # finally loop over the entire training sequence
                for i in range(len(training_seq.emissions)):
                    # only count the forward and backward probability if the
                    # emission at the position is the same as b
                    if training_seq.emissions[i] == b:
                        # f_{k}(i) b_{k}(i)
                        expected_times += (forward_vars[(k, i)] *
                                           backward_vars[(k, i)])
                # add to E_{k}(b)
                emission_counts[(k, b)] += (float(expected_times) /
                                            training_seq_prob)
        return emission_counts


class KnownStateTrainer(AbstractTrainer):
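The E_{k}(b) update above can be exercised in isolation with toy forward/backward tables. The sketch below is a standalone approximation, not Biopython's API: it replaces the training-sequence objects with plain lists and dicts (`states`, `letters`, `emissions` are illustrative parameters), but the loop structure mirrors formula 3.21, summing f_{k}(i) b_{k}(i) over positions i that emit letter b and dividing by the sequence probability.

```python
# Minimal sketch of the E_k(b) update (formula 3.21 in Durbin et al.),
# using plain dicts/lists in place of Biopython's training-sequence
# objects. All names here are illustrative assumptions, not Biopython's API.

def update_emissions(emission_counts, states, letters, emissions,
                     forward_vars, backward_vars, seq_prob):
    """Add one sequence's expected emission counts to emission_counts."""
    for k in states:              # each hidden state k
        for b in letters:         # each emission letter b
            # sum f_k(i) * b_k(i) over positions i where letter b is emitted
            expected_times = sum(
                forward_vars[(k, i)] * backward_vars[(k, i)]
                for i, e in enumerate(emissions)
                if e == b
            )
            # normalize by the probability of the whole sequence
            emission_counts[(k, b)] += expected_times / seq_prob
    return emission_counts


# Toy example: one state "s", letters "a" and "b", observed sequence "ab".
fv = {("s", 0): 0.5, ("s", 1): 0.25}   # forward variables f_s(i)
bv = {("s", 0): 0.5, ("s", 1): 1.0}    # backward variables b_s(i)
counts = update_emissions({("s", "a"): 0.0, ("s", "b"): 0.0},
                          ["s"], ["a", "b"], "ab", fv, bv, 0.25)
print(counts)  # {('s', 'a'): 1.0, ('s', 'b'): 1.0}
```

Position 0 emits "a" and contributes 0.5 * 0.5 / 0.25 = 1.0 to E_{s}(a); position 1 emits "b" and contributes 0.25 * 1.0 / 0.25 = 1.0 to E_{s}(b).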
