When AI tells you what you want to hear, even if it knows it’s not true … A Bard example
I love Bard. It eloquently tells me things in a way that meets and exceeds my expectations, even more so than GPT-4 does. But what is Google's strategy behind programming it to say things it knows are not true? Do they train it to say what the user wants to hear?