Monday, January 23, 2023

ChatGPT Arguing with Itself

Ted Lieu in the NY Times:

Imagine a world where autonomous weapons roam the streets, decisions about your life are made by AI systems that perpetuate societal biases and hackers use AI to launch devastating cyberattacks. This dystopian future may sound like science fiction, but the truth is that without proper regulations for the development and deployment of Artificial Intelligence (AI), it could become a reality. The rapid advancements in AI technology have made it clear that the time to act is now to ensure that AI is used in ways that are safe, ethical and beneficial for society. Failure to do so could lead to a future where the risks of AI far outweigh its benefits.

I didn’t write the above paragraph. It was generated in a few seconds by an A.I. program called ChatGPT, which is available on the internet. I simply logged into the program and entered the following prompt: “Write an attention grabbing first paragraph of an Op-Ed on why artificial intelligence should be regulated.”


pootrsox said...

As a (retired) English teacher, I was fascinated (in a horrified way) by the chatbot's ability to produce essays. So I challenged it: explain the symbolism of the pickle dish in Ethan Frome. It produced a rather formulaic, shallow answer. I responded that the dish actually symbolized something far more profound and spelled out that idea (it does); whereupon the chatbot told me I was wrong and reproduced, virtually word for word, the original simplistic answer.

I suspect that English teachers/professors, at least, will figure out fairly quickly how to spot a chatbot essay. If nothing else, I personally would pose the assigned essay prompt to a chatbot or two myself, in advance, to have a comparison model.

John said...

For now, yes, it is pretty weak, but it's getting better fast.