Inspired by a math teacher’s penchant for scrutinizing the problem-solving process rather than focusing solely on the final result, OpenAI has developed a novel method of model training. Termed ‘process supervision,’ this technique rewards each correct step of reasoning, deviating from traditional ‘outcome supervision,’ which rewards the model only for a correct final answer.
This training approach aims to yield a model with reduced hallucinations and enhanced alignment, two attributes OpenAI considers critical for building aligned AGI. However, the question remains: Can these innovative training methodologies pave the way towards AGI?
OpenAI proposes two distinct ways to train models to curtail hallucinations: process supervision and outcome supervision. The company claims the former approach, involving stepwise feedback, has substantially improved mathematical reasoning.
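The difference between the two schemes can be illustrated with a minimal sketch. This is not OpenAI’s implementation; the function names, labels, and reward values are hypothetical, chosen only to show where the feedback signal attaches in each scheme.

```python
# Hypothetical sketch contrasting outcome supervision and process supervision.
# `step_labels` marks whether each reasoning step is judged correct;
# `final_correct` marks whether the final answer is correct.

def outcome_rewards(step_labels, final_correct):
    """Outcome supervision: one reward for the whole solution,
    attached at the final step, based only on the final answer."""
    return [0.0] * (len(step_labels) - 1) + [1.0 if final_correct else 0.0]

def process_rewards(step_labels):
    """Process supervision: a reward for every step judged correct,
    regardless of whether the final answer happens to be right."""
    return [1.0 if ok else 0.0 for ok in step_labels]

# A solution whose middle step is flawed but whose final answer is
# (luckily) correct: outcome supervision rewards it fully, while
# process supervision penalizes the flawed step.
print(outcome_rewards([True, False, True], final_correct=True))  # [0.0, 0.0, 1.0]
print(process_rewards([True, False, True]))                      # [1.0, 0.0, 1.0]
```

The example makes the claimed benefit concrete: a model can reach a right answer through wrong reasoning, and only the stepwise signal exposes that.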
OpenAI’s emphasis on reducing hallucinations reflects a broader industry trend. NVIDIA, for instance, recently launched NeMo Guardrails, an open-source toolkit designed to improve the accuracy and security of applications built on large language models.
Given the persistent problem of misinformation and bias in chatbot outputs, much of the industry’s effort is concentrated on reducing such inaccuracies. According to OpenAI, its new training method helps rein in flawed chatbot reasoning by providing feedback at each step of the process rather than only at the end.
A Step Towards AGI?
OpenAI’s stated plan to build an aligned AGI points to its long-term strategy. Sam Altman, the company’s CEO, has frequently highlighted AGI’s potential while acknowledging its dangers. A recent tweet of his suggested that AGI could accelerate the rate of societal change, even as he signed a statement alongside other AI luminaries calling for safeguards against the potential existential threat posed by AI.
However, Altman’s ambiguous stance on AGI, evident in his refusal to sign an open letter calling for a pause on advanced AI development, raises questions about OpenAI’s ultimate direction.