OpenAI improves GPT-4 with CriticGPT
OpenAI introduces CriticGPT, a GPT-4-based model that critiques ChatGPT’s outputs, helping trainers give better feedback; assisted trainers outperform unassisted ones 60% of the time.
OpenAI has launched CriticGPT, a new model based on GPT-4, designed to identify and critique errors in ChatGPT’s outputs. The tool aims to enhance human trainers’ effectiveness by assisting them in providing feedback on the chatbot’s performance.
According to OpenAI, trainers assisted by CriticGPT outperform those working without assistance 60% of the time when reviewing ChatGPT’s code, while producing fewer hallucinated error reports. Challenges remain, however, especially with complex tasks and with errors scattered across many parts of an answer.
Like ChatGPT, CriticGPT was trained with reinforcement learning from human feedback, in this case on ChatGPT code answers into which trainers had deliberately inserted errors. In evaluations, CriticGPT’s critiques were preferred over ChatGPT’s in 63% of cases involving naturally occurring bugs, in part because it produced fewer nitpicks and fewer hallucinated problems.
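To make that setup concrete, here is a minimal, hypothetical sketch of the “insert a bug, compare critiques” loop described above. All function names, data, and the random stand-in judge are illustrative assumptions, not OpenAI’s actual pipeline, which has not been published in code form.

```python
# Hypothetical sketch: tamper with a code answer, collect critiques from two
# models, and measure how often raters prefer one model's critique.
import random
from dataclasses import dataclass


@dataclass
class Example:
    code: str          # a ChatGPT code answer
    inserted_bug: str  # description of the bug a trainer deliberately added


def insert_bug(code: str) -> Example:
    """Trainers tamper with an answer by inserting a known error.

    Faked here with an off-by-one edit; in practice trainers write
    realistic, subtle bugs by hand.
    """
    tampered = code.replace("range(n)", "range(n - 1)")
    return Example(code=tampered, inserted_bug="loop skips the last element")


def critique(model_name: str, example: Example) -> str:
    """Placeholder for querying a critique model (e.g. CriticGPT vs. ChatGPT)."""
    return f"[{model_name}] possible issue: {example.inserted_bug}"


def preference_win_rate(examples, model_a: str, model_b: str, judge) -> float:
    """Fraction of examples where the judge prefers model_a's critique."""
    wins = sum(
        judge(critique(model_a, ex), critique(model_b, ex)) for ex in examples
    )
    return wins / len(examples)


if __name__ == "__main__":
    examples = [insert_bug("total = sum(x[i] for i in range(n))") for _ in range(200)]
    # Stand-in for a human rater: prefers critique A ~63% of the time,
    # mirroring the reported preference rate purely for illustration.
    judge = lambda a, b: random.random() < 0.63
    rate = preference_win_rate(examples, "CriticGPT", "ChatGPT", judge)
    print(f"win rate: {rate:.2f}")
```

In the real workflow the judge is a human trainer, and the win rate is the kind of head-to-head preference statistic OpenAI reports.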
OpenAI plans to develop CriticGPT further and to integrate critique models like it into its feedback pipeline, improving the human-generated training signal for GPT-4. The initiative underscores the continuing role of human oversight in refining AI systems even as they become more capable.