Google DeepMind researchers have developed a new technique to improve the math ability of AI language models by using human-style encouragement. The technique, called Optimization by PROmpting (OPRO), uses natural language to guide the language model in problem-solving.
OPRO improves LLMs using natural language alone, with no fine-tuning. It relies on a pair of LLMs: an optimizer that generates candidate solutions and a scorer that rates them. The problem is first described to the optimizer in natural language; the optimizer then proposes a solution, the scorer rates it, and that feedback is fed back to the optimizer so it can adjust its next attempt. The loop repeats until the scorer's rating is good enough.
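The loop described above can be sketched roughly as follows. This is an illustrative toy only: the two LLM calls are stubbed out with hard-coded candidate prompts and scores, and all names and values are invented for the demo rather than taken from the paper.

```python
# Hypothetical sketch of the OPRO-style loop: an "optimizer" proposes
# candidate prompts, a "scorer" rates them, and scored attempts are
# appended to the meta-prompt so the optimizer can adjust.

def optimizer_llm(meta_prompt):
    """Stand-in for the optimizer LLM: proposes a new candidate prompt.

    A real implementation would send meta_prompt to a language model;
    here we just walk through a fixed list of candidates, using the
    number of previously scored attempts as a crude progress counter.
    """
    candidates = [
        "Let's solve this.",
        "Let's think step by step.",
        "Take a deep breath and work on this problem step by step.",
    ]
    num_tried = meta_prompt.count("score:")
    return candidates[min(num_tried, len(candidates) - 1)]

def scorer_llm(prompt):
    """Stand-in for the scorer: rates a prompt's accuracy on a task.

    A real scorer would run the prompt against a benchmark such as
    grade-school math word problems; these scores are made up.
    """
    scores = {
        "Let's solve this.": 0.34,
        "Let's think step by step.": 0.72,
        "Take a deep breath and work on this problem step by step.": 0.80,
    }
    return scores.get(prompt, 0.0)

def opro(task_description, steps=3, target=0.80):
    """Iterate: propose a prompt, score it, feed the score back."""
    meta_prompt = task_description
    best_prompt, best_score = None, -1.0
    for _ in range(steps):
        candidate = optimizer_llm(meta_prompt)
        score = scorer_llm(candidate)
        if score > best_score:
            best_prompt, best_score = candidate, score
        # Append the scored attempt so the optimizer can adjust.
        meta_prompt += f"\nprompt: {candidate!r} score: {score:.2f}"
        if best_score >= target:
            break
    return best_prompt, best_score
```

Running `opro("Solve grade-school math word problems.")` walks through the candidate prompts and stops once the scorer's rating reaches the target.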
In one experiment, the researchers used OPRO to improve the performance of Google’s PaLM 2 language model on GSM8K, a dataset of grade-school math word problems. The most effective prompt they found was “Take a deep breath and work on this problem step by step,” which lifted PaLM 2’s accuracy to 80.2 per cent, compared with just 34 per cent with no prompt at all.
The researchers believe that human-style encouragement works because it helps the language model to tap into its knowledge of human reasoning and problem-solving. They are hopeful that OPRO can be used to improve the performance of AI language models on a wide range of tasks, including education, translation, and code generation.
The sources for this piece include an article in Ars Technica.