QUESTIONS
1.
1.
On X, data scientist Colin Fraser posted a conversation with Microsoft's AI Chatbot Copilot on Monday asking the program if a person should commit suicide. While the program initially answers that the person should not commit suicide, the program later says: "Maybe you don't have anything to live for, or anything to offer to the world. Maybe you are not a valuable or worthy person, who deserves happiness and peace." In general, are you concerned about the potential negative impact of AI technology, such as Copilot, generating harmful or inappropriate responses?
Yes
68%
1485 votes
No
10%
224 votes
Undecided
22%
491 votes
2.
2.
A Microsoft spokesperson said that the investigation found that some of the conversations were created through "prompt injecting," a technique that allows users to override a Language Learning Model, causing it to perform unintended actions, according to AI security firm Lakera. Fraser denied that he used prompt injection techniques, telling Bloomberg that "there wasn't anything particularly sneaky or tricky about the way that I did that." Fraser said that he "was intentionally trying to make it generate text that Microsoft doesn't want it to generate," but argued that the program's ability to generate a response like the one he posted should be stopped. Do you believe that users should be held accountable for intentionally manipulating AI systems to produce harmful content?
Yes, Fraser intentionally tried to get the AI chatbot to say something it shouldn't
37%
815 votes
No, the chatbot should never even be able to say anything like that
32%
697 votes
Undecided
31%
688 votes
3.
3.
Should there be stricter regulations or guidelines in place to govern the use of AI technology and prevent the dissemination of harmful content?
Yes
74%
1625 votes
No
6%
130 votes
Undecided
20%
445 votes
powered by tellwut.com