ChatGPT will inevitably wrap all results within a carefully crafted mainstream narrative. When asked open-ended questions, I find it only returns the approved narrative. When I tighten the questions by forcing it to address research papers or other sources outside that narrative, it will inevitably criticise those papers and redirect back to the approved narrative. I have not yet succeeded in writing questions clever enough to force it outside the narrative. It's much like talking to a human being, in that its certainty of the correct narrative cannot be broken.
Yes, almost like a washing machine. Rinse, repeat!