One way to get around the problem of local maxima, and related problems such as ridges and plateaux, in hill climbing is to allow the agent to go downhill to some extent. In simulated annealing - named by analogy with cooling a liquid until it freezes - the agent chooses to consider a random move. If the move improves the evaluation function, it is always carried out. If the move does not improve the evaluation function, the agent carries out the move with some probability between 0 and 1. The probability decreases as the move gets worse in terms of the evaluation function, so really bad moves are rarely carried out. This strategy can often nudge a search out of a local maximum, and the search can then continue towards the global maximum.
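The idea can be sketched in code. The following is a minimal illustration, not a definitive implementation: the function names, the toy evaluation function, and the exponential cooling schedule are all assumptions chosen for the example. Worse moves are accepted with probability exp(delta / T), which shrinks both as the move gets worse (delta more negative) and as the temperature T falls.

```python
import math
import random

def simulated_annealing(evaluate, neighbour, start,
                        t0=10.0, cooling=0.95, steps=1000):
    """Maximise `evaluate`, occasionally accepting downhill moves.

    A worse move (delta < 0) is accepted with probability exp(delta / T),
    so really bad moves are rarely taken, and acceptance becomes rarer
    still as the temperature T is lowered.
    """
    current = start
    t = t0
    for _ in range(steps):
        candidate = neighbour(current)
        delta = evaluate(candidate) - evaluate(current)
        # Improvements are always taken; worse moves only probabilistically.
        if delta > 0 or random.random() < math.exp(delta / t):
            current = candidate
        t = max(t * cooling, 1e-9)  # cool down, avoiding division by zero
    return current

# Toy landscape (an assumption for the example): a global maximum at
# x = 3, with small sinusoidal bumps that create local maxima.
def f(x):
    return -(x - 3) ** 2 + 0.5 * math.sin(10 * x)

random.seed(0)
best = simulated_annealing(f, lambda x: x + random.uniform(-0.5, 0.5),
                           start=-5.0)
```

Starting from x = -5.0, plain hill climbing could get trapped on one of the sinusoidal bumps, but the occasional accepted downhill move lets the search escape and settle near the global maximum at x = 3.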