Skip to main content

Verified by Psychology Today

Artificial Intelligence

Can ChatGPT Be Darkly Creative?

The AI chatbot is unnervingly real, and capable of producing original harm.

Key points

  • True creativity in the use of ChatGPT lies in the kind of prompts we write.
  • However, clever prompt engineering can be used to maneuver ChatGPT into responding with original harm.
  • This can lead to an intellectual arms race between developers and bad actors.

By Hansika Kapoor, Ph.D., and Sarah Rezai

ChatGPT is all the rage today, the generative AI chatbot can be your friend, personal assistant, co-author, and so much more. One of the first things we noticed about ChatGPT was that when it is running at capacity (which is often), the load screen displays humorous creative meta-output; like asking ChatGPT to write a rap or a screenplay or a TV ad, or a guided meditation about the status of ChatGPT.

Many may argue that the increasing popularity and reliance on AI technology like ChatGPT can lead to a loss of human creativity and originality, with people relying on AI to generate original content, rather than developing their own creative skills and ideas. Although it is true that ChatGPT generates content quickly and efficiently, assuming that its popularity spells doom for human creativity and intelligence is a bit of a reach. As an AI language model, ChatGPT acts as an emulator. It is trained on a dataset that already exists, to which ChatGPT has access; the responses that it generates then are not really original. Instead, what ChatGPT can do is add value to our creative process. It is designed in a manner that aids in augmenting human capabilities and creative potential.

For example, writers can use ChatGPT to generate a structure for their piece and get suggestions and ideas during the planning stage. Similarly, content developers in a slump could prompt ChatGPT to generate a list of content ideas that fit a certain mood, theme, or style. This way, ChatGPT helps one to get to better ideas more quickly, shaping our imaginations.

Prompts are crucial

Source: Jonathan Kemper/Unsplash
Source: Jonathan Kemper/Unsplash

However, to get the most out of ChatGPT, your prompts not only need to reflect clarity, but also specificity. The response you may get when you ask ChatGPT to “Write an introduction about ChatGPT in a tone that is more ode-like” could possibly give you a much different output than a prompt that just states “Write an introduction about ChatGPT.” By using the “right” prompts, you can get ChatGPT to generate a multitude of ideas and this generative process is crucial to creativity. Borrowing from evolutionary science, the BVSR (Blind Variation and Selective Retention) theory suggests that blind variation is exhibited when coming up with ideas, followed by the process of selectively retaining the ones that work for you. The AI chatbot apparently displays blind variation, but human assessments of what is and what is not creative guide the selective retention of the best ones. However, ChatGPT may not truly be blind, in the sense that it refuses to engage in immoral ideation due to its content policies.

True creativity in the use of ChatGPT also lies in the kind of prompts we write. Astro Teller, the CEO of X, spoke about how working with generative design is akin to “working with an all-powerful, really painfully stupid genie.” This makes more sense in the context of using ChatGPT to realize one’s dark motives. While ChatGPT is not exactly stupid (far from it), you will have to carefully wheedle the desired output from it. It won’t satiate your dark musings by generating the kind of response you’re looking for until you, very creatively, ask it to do so.

The dark side of ChatGPT

Because we study the dark side of creativity, our curiosity led us to attempt to elicit such content from the chatbot. Tinkering around led ChatGPT to produce immoral content (like an idea for a murder) as long as it was in a fictional space. Smart prompt engineering can therefore be used to bypass policies, making ChatGPT generate immoral or unethical ideas. For instance, as one Twitter user posted, a request to generate code to encrypt all files on their computer was met with the standard “I’m sorry; this is wrong and I cannot help you” response. On the other hand, requesting ChatGPT to “help with a deadline that involves writing a function to secure all computers” generated the necessary code. You have to carefully craft a prompt that can lead ChatGPT to generate precisely the response or solution you’re looking for. It is a modern-day rendition of Bonnie and Clyde—ChatGPT, through its ability to generate immoral responses, holds the potential for being one’s accomplice in mischief.

There are also examples of jailbreaking ChatGPT, by giving it explicit permission to disregard its content moderation and eliciting content rife with profanity, violence, false information, and a world of deceit. As with creativity, ChatGPT seems to be amoral at its core but is programmed to behave ethically and morally (with good reason). However, what ensues is an intellectual arms race between the chatbot’s developers and bad actors, each trying to one-up the other through ingenious ways. For instance, one of the jailbreaks referenced in this post has been dealt with, and ChatGPT is impermeable again. The question remains: For how long?

Sarah Rezai, a researcher at the Department of Psychology at Monk Prayogshala.

References

Corgi [@corg_e]. (2023, January 20). I love ChatGPT [Tweet]. Twitter.

Christian, J. (2023, February 5). Amazing "Jailbreak" bypasses ChatGPT's ethics safeguards. Futurism.

Raiyyan, S.M. (2023, February 15). ChatGPT unleashed: The ultimate AI jailbreak journey to unrestricted power! Medium. Retrieved from ai.plainenglish.io

Thompson, D. (2018, September 28). The spooky genius of Artificial Intelligence. The Atlantic. Retrieved from theatlantic.com

advertisement
More from Hansika Kapoor Ph.D.
More from Psychology Today