If we are going to trust AI to write our news articles and other documents, one question needs to be asked: what happens when robots lie? The same issue applies to generative AI, because someone, inevitably, will code it to skew its responses a certain way.
And what if AI is caught lying? One study of trust repair in human-robot interaction suggests the damage may be lasting:
"Our study's results indicate that after three violations and repairs, trust cannot be fully restored, thus supporting the adage 'three strikes and you're out.' … In doing so, it presents a possible limit that may exist regarding when trust can be fully restored."
"Even when a robot can do better after making a mistake and adapting after that mistake, it may not be given the opportunity to do better. Thus, the benefits of robots are lost."