
Reining in Generative AI

Isabelle Lee
January 31, 2023
  • The movement to regulate generative AI tools like ChatGPT is gaining steam thanks to concerned educators and lawmakers. 
  • People worry that, without proper guardrails, generative AI tools could be used for malicious purposes like cheating on tests. 
  • OpenAI is working on various solutions, but lawmakers and private citizens have also answered the call, offering their own fixes for concerns about unchecked generative AI.

In the wake of the release of OpenAI’s chatbot, ChatGPT, people immediately began to worry that the tool would be put to malicious use. ChatGPT is a language interface that answers questions, creates content, and gives advice in response to user-generated prompts. The bot went viral for its generative abilities, and the buzz initially drowned out concerns about its dark side. Teachers and professors were the first to sound the alarm, worried that students might use ChatGPT to write essays for classes or to cheat by feeding it questions from tests and exams. As some explored the bot’s possibilities, others began researching how to put guardrails on ChatGPT, or how to circumvent those guardrails entirely. The responses so far range from a bot that can tell whether ChatGPT wrote an essay, to OpenAI’s efforts to watermark AI-generated content, to a Massachusetts senator using ChatGPT to author legislation that would regulate generative AI.

One of the early concerns about ChatGPT was that students would use it to cheat. ChatGPT can answer a wide range of prompts, from “write an essay about colonialism” to “what is the Pythagorean theorem?” Teachers worried that they would not be able to differentiate between answers written by students and those written by ChatGPT. Enter 22-year-old Edward Tian, a Princeton senior majoring in computer science and minoring in journalism. Over his winter break, Tian built GPTZero, a bot that can tell whether a piece of text was written by AI or by a human. More than 30,000 people tried GPTZero in its first week after launch.

GPTZero uses two indicators to determine whether an excerpt was written by AI. The first is “perplexity,” which measures how unpredictable the text is to a language model. If the text perplexes the bot, meaning the model finds it surprising, it was most likely written by a human; text the model finds familiar and predictable is more likely AI-generated. The second is “burstiness,” which measures the variation between sentences. Uniform sentences are more likely to have been written by AI, because humans write with more variation, mixing long, complex sentences with short ones, than chatbots like ChatGPT do.
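GPTZero’s exact implementation is not public, but both signals are easy to approximate. The Python sketch below scores perplexity with an off-the-shelf GPT-2 model from the Hugging Face transformers library and treats burstiness as the spread of sentence-level perplexities; the function names and the naive sentence splitting are illustrative choices, not Tian’s code.

```python
# A minimal sketch of the two signals described above, assuming an
# off-the-shelf GPT-2 model as the scorer. GPTZero's actual
# implementation is not public; names and details here are illustrative.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Per-token perplexity of `text` under GPT-2 (lower = more predictable)."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels=input_ids makes the model return mean cross-entropy.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return math.exp(loss.item())

def burstiness(text: str) -> float:
    """Spread of sentence-level perplexities: a low spread suggests uniform,
    machine-like sentences; a high spread is more typical of human writing."""
    sentences = [s.strip() for s in text.split(".") if len(s.split()) > 1]
    scores = [perplexity(s) for s in sentences]
    mean = sum(scores) / len(scores)
    return (sum((x - mean) ** 2 for x in scores) / len(scores)) ** 0.5

sample = "I am usually very tired in the mornings. When I am tired, I drink coffee."
print(f"perplexity={perplexity(sample):.1f}, burstiness={burstiness(sample):.1f}")
```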

Tian took matters into his own hands by creating GPTZero, but OpenAI is also working on strategies to help people detect AI-written text. One of these is a text classifier that tells users whether a block of text was likely generated by AI. The classifier was released on January 31st, about two months after ChatGPT itself. In OpenAI’s evaluations, it correctly flags AI-written text as “likely AI-written” only 26% of the time, and it is even less reliable on short excerpts of under 1,000 characters. In an email to TechCrunch about the announcement, OpenAI said that the classifier “should be used as a complement to other methods of determining the source of text instead of being the primary decision-making tool.” The classifier does not detect plagiarism.

OpenAI also released an earlier detector tool in 2019, long before ChatGPT, called the GPT-2 Output Detector Demo, which is hosted on Hugging Face. The detector lets users paste in text and returns a score ranging from “real” to “fake”; a high “real” score means the excerpt reads as human-written. I wrote, “I am usually very tired in the mornings. When I am tired in the mornings, I drink coffee. I like coffee with no milk. My roommate prefers matcha. I wish she liked coffee because then I could justify buying a coffee machine.” I received a “real” score of 88.39%. According to the developers, the classifier can detect “1.5 billion parameter GPT-2-generated text with approximately 95% accuracy.”
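Because the checkpoint behind the demo is publicly hosted on Hugging Face, the same check can be run locally with the transformers library. A minimal example, assuming the roberta-base-openai-detector model id (the detector is a RoBERTa model fine-tuned on GPT-2 outputs; the exact label names come from the model’s config):

```python
# Querying the RoBERTa-based GPT-2 output detector locally with the
# transformers pipeline. The model id below is the publicly hosted
# checkpoint; label names ("Real"/"Fake") come from its config.
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

text = (
    "I am usually very tired in the mornings. When I am tired in the "
    "mornings, I drink coffee. I like coffee with no milk."
)
print(detector(text))
# e.g. [{'label': 'Real', 'score': 0.88}] -- scores vary with the input
```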

In addition to the text classifier and the detector, OpenAI is developing a watermark that would be embedded in text generated by ChatGPT. In a lecture at the University of Texas at Austin, Scott Aaronson, a guest researcher at OpenAI, revealed that OpenAI is working on a way to embed an “unnoticeable secret signal” in the text. The approach relies on a cryptographic key: anyone holding the key could check a passage for the watermark and see whether AI generated it. If it works, teachers could use the watermark to detect AI-written submissions, and it could help prevent ChatGPT from being used to write phishing emails or comments designed to spam publications with misinformation. OpenAI’s rumored approach is one of the first cryptography-based methods for marking content as AI-generated.
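OpenAI has not published the scheme, so any code can only gesture at the idea Aaronson described. The toy Python sketch below shows the general shape of a keyed watermark: a secret key pseudorandomly scores token choices, generation prefers high-scoring tokens, and anyone holding the key can test a passage for that bias. Every name and the tiny vocabulary here are hypothetical; a real system would blend the keyed scores with the model’s actual next-token probabilities.

```python
# A toy keyed watermark, NOT OpenAI's unpublished scheme: a secret key
# pseudorandomly scores token choices, generation prefers high-scoring
# tokens, and anyone holding the key can test text for that bias.
import hashlib
import hmac

SECRET_KEY = b"demo-key"  # hypothetical secret held by the model provider

def keyed_score(prev_token: str, candidate: str) -> float:
    """Keyed pseudorandom score in [0, 1) for `candidate` after `prev_token`."""
    msg = f"{prev_token}|{candidate}".encode()
    digest = hmac.new(SECRET_KEY, msg, hashlib.sha256).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def pick_token(prev_token: str, candidates: list[str]) -> str:
    """Watermarking sampler: prefer the candidate the keyed hash scores highest.
    A real system would blend this bias with the model's own probabilities."""
    return max(candidates, key=lambda c: keyed_score(prev_token, c))

def detect(tokens: list[str]) -> float:
    """Average keyed score over consecutive token pairs. Unwatermarked text
    averages about 0.5; watermarked text scores noticeably higher."""
    scores = [keyed_score(a, b) for a, b in zip(tokens, tokens[1:])]
    return sum(scores) / len(scores)

vocab = ["the", "cat", "sat", "on", "a", "mat", "dog", "ran"]
tokens = ["the"]
for _ in range(20):  # generate a watermarked token sequence
    tokens.append(pick_token(tokens[-1], vocab))
print(detect(tokens))                               # well above 0.5
print(detect("the dog ran on a red mat".split()))   # closer to 0.5
```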

Still, the popularity of tools like Tian’s GPTZero suggests that OpenAI’s existing solutions leave much to be desired. Some lawmakers have answered the calls of concerned citizens by drafting legislation to curb the misuse of language interfaces. Massachusetts State Senator Barry Finegold took regulating AI language interfaces one step further by using ChatGPT itself to write legislation that would regulate generative AI models. The bill, one of the first written with the newly released ChatGPT, opens by requiring large-scale generative AI models to adhere to several operating standards: the model must not discriminate or act with bias against individuals or groups based on federally defined and protected characteristics like sex or race; the model must prevent plagiarism by watermarking its text; and the company that owns the model must implement common-sense security measures to protect the personal information of the individuals whose data was used to train the model.

The bill also requires companies to obtain consent from individuals before collecting, using, or disclosing their data, and to delete data that could identify individuals once the information is no longer relevant. The final section of the bill contains the disclaimer: “This act has been drafted with the help of ChatGPT and any errors or inaccuracies in the bill should not be attributed to the language model but rather to its human authors.” The jointly human- and AI-written bill may become a blueprint for other lawmakers looking to regulate the rapidly expanding world of generative AI. The race to regulate, or at least corral, generative AI is officially on. Alongside lawmakers’ efforts, researchers and entrepreneurs like Edward Tian are diving into the market for detecting AI-written text, while companies like OpenAI build checks into their existing products to help generative AI tools scale and to win public trust. Once that trust is established, the possibilities for generative AI are endless.
