OpenAI's much-touted model GPT-4, lauded for its multimodal abilities, including advanced image recognition, still has significant flaws. These glitches range from inventing facts to misinterpreting images of chemical structures and symbols of hate, according to a new paper from OpenAI.
Unintended GPT-4V behaviors
- GPT-4V has a tendency to hallucinate or invent facts with unwarranted confidence.
- The model struggles to make correct inferences, sometimes coining fictional terms by wrongly combining strings of text.
- It misinterprets certain symbols of hate and can give incorrect answers in the context of medical imaging.
OpenAI’s mitigation strategies
- OpenAI has implemented various safeguards to prevent GPT-4V's misuse, such as breaking CAPTCHAs or using images to infer personal details.
- The company insisted that GPT-4V should not be used to identify dangerous chemicals from images of their structures.
- OpenAI acknowledged that the model still needs considerable refinement and said that work is ongoing.
Discrimination and bias
- When OpenAI’s production safeguards are disabled, GPT-4V displays bias against certain sexes and body types.
- The paper reported offensive responses related to body positivity when the model was prompted with an image of a woman in a bathing suit.
(source)
P.S. If you like this kind of analysis, I write a free newsletter that dissects the most impactful AI news and research. 1000s of professionals from Google, Meta, and OpenAI read it daily.