“We need to talk,” thought Brett Vogelsinger. A few moments earlier, a student had asked him for feedback on an essay, and one paragraph caught his attention. Vogelsinger realized the student had not written the piece himself. He had used ChatGPT.
ChatGPT, a free artificial intelligence tool that OpenAI made available late last year, can respond to simple prompts and generate essays and stories. It can also write code.
In less than a week, it reached more than one million users. Microsoft planned to invest $10 billion in OpenAI by early 2023, and the company was valued at $29 billion, more than twice what it was worth in 2021.
No wonder other tech firms have been racing to release competing tools. Anthropic, an AI company created by former OpenAI staffers, is testing a chatbot named Claude. Google released Bard in early February, and the Chinese search company Baidu released Ernie Bot.
Many people use ChatGPT for fun or out of curiosity. I asked the program to invent a ridiculous excuse for not doing my homework, in the form of a medieval declaration. It replied in moments: “Hark! Your servant was attacked by a band of leprechauns, who stole my quill and paper, preventing me from completing my homework.”
Students can also use it to cheat. ChatGPT is the start of a new wave of AI tools poised to disrupt education.
Stanford University’s student newspaper surveyed students and found that 17 percent had used ChatGPT on assignments or exams by the end of 2022. Some admitted to submitting the chatbot’s writing as their own. For now, these students, and perhaps others, are getting away with it, because ChatGPT writes well.
Vogelsinger says the program can write better than many middle school students. He might never have known his student used it, except for one detail: “He had copied and pasted the prompt.”
Vogelsinger didn’t consider it cheating, because the essay was still in progress. Instead, he saw an opportunity. Now the student and the AI are working together, with ChatGPT helping the student build his research and writing skills.
“[We’re] color-coding,” Vogelsinger says. The student’s writing is in green; ChatGPT’s is in blue. Vogelsinger is helping the student choose and expand on a few of the sentences the AI provided, and he has opened the same kind of collaboration to other students. Some use the tool often; others don’t. Vogelsinger thinks it has helped them get their ideas started and stay focused.
That story has a happy ending. But many educators at schools and universities are still struggling to figure out how to deal with ChatGPT and other AI tools.
At the beginning of January, New York City Public Schools banned ChatGPT on its devices and networks. Teachers worried that students who used ChatGPT wouldn’t learn problem-solving and critical-thinking skills, and that the tool’s answers might not be safe or accurate. Many other school systems around the United States have imposed similar bans.
According to the Stanford Daily, Keith Schwarz, a computer science professor at Stanford, has “switched to pencil-and-paper exams” so that students can’t use ChatGPT.
Still, ChatGPT and its cousins could greatly benefit learners. Like calculators for math or Google for facts, AI can speed up the writing process, letting anyone produce well-formed sentences and paragraphs. What could that mean for the way we learn and teach?
ChatGPT: The good, the bad and the weird
ChatGPT is a hit with its users. Avani Rao, a sophomore at a high school in California, says, “It seems so much more real than I imagined a robot to be.” She hasn’t asked the bot to help with her homework. Instead, she has used it for fun, making it say silly or creative things. For instance, she asked it to explain addition in the voice of an evil villain.
ChatGPT could be a boon for people learning a second or third language, or for those who have difficulty composing sentences. And because it generates new, original material, its text isn’t technically plagiarism.
Students could also use ChatGPT like a tutor, to improve their writing and grammar or to explain difficult subjects. Vogelsinger recalls one student who excitedly told him how clearly ChatGPT had explained a concept from science class.
Educators could use ChatGPT to create lesson plans, assessments or activities, and even personalize them to meet the needs of specific students.
Xiaoming Zhai, an expert on science education at the University of Georgia in Athens, tested ChatGPT to see whether it could write an academic paper. He found the tool made it easy to summarize knowledge and produce good writing. “It is really amazing,” he says.
But there are serious problems. ChatGPT and similar tools can get things wrong. They do not pull information from databases of facts. Instead, they are trained to generate new text that sounds human, mixing and matching language without understanding it. That can lead to obvious mistakes.
The news website CNET came under fire earlier this year for using AI to produce dozens of error-riddled articles. Google made a mistake in an early advertisement for its Bard chatbot, which claimed incorrectly that the James Webb Space Telescope took the first image of an exoplanet. And in a Twitter conversation, ChatGPT said the fastest marine mammal was the peregrine falcon. The falcon, of course, is a bird and doesn’t live in the ocean.
Casey Fiesler, an expert on the ethics of technology at the University of Colorado Boulder, warns that ChatGPT “is definitely wrong” at times. “There are errors and bad information.” She has made several TikTok videos about ChatGPT’s pitfalls.
In one video, Fiesler revealed that if asked for sources, ChatGPT will make them up. Zhai, who views the tool as a personal assistant, discovered the same thing. When he asked for sources, ChatGPT gave him references that looked right, but they didn’t exist.
Definition of biodiversity

Dante A., Grade 10, Clark Magnet High School, Calif.:
Biodiversity is the diversity of species and ecosystems in a specific region or across the entire planet. It includes the genes, species and ecosystems of the natural world, as well as the relationships among them.

ChatGPT:
Biodiversity is the diversity of organisms and ecosystems that live on Earth. This diversity of species, genetics and ecosystems is essential for maintaining the natural balance and sustaining life on Earth.
ChatGPT: How it works
The mistakes are less surprising once you understand how ChatGPT works. “It doesn’t reason. It has no ideas. It doesn’t think,” says Emily M. Bender, a computational linguist at the University of Washington in Seattle.
ChatGPT was developed using two types of machine learning. The first is a large language model built on an artificial neural network, a computing architecture loosely based on how neurons interact in the brain. A neural network finds statistical patterns in vast amounts of data.
By churning through vast amounts of text, a language model learns to predict which word is likely to come next in a phrase or sentence. In effect, it maps words and phrases into a many-dimensional space that captures their relationships to one another, placing words that often appear together, such as peanut butter and jelly, close to each other.
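The core idea of next-word prediction can be illustrated with a toy sketch. The example below is a hypothetical frequency counter, vastly simpler than the neural networks described here; it only shows the statistical intuition of learning which word tends to follow which:

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny
# training text, then predict the most common follower. Real language
# models use neural networks trained on billions of words; this only
# illustrates the statistical idea of "predicting the next word."
text = ("the cat sat on the mat . the cat ate . "
        "the dog sat on the rug .").split()

followers = defaultdict(Counter)
for current, nxt in zip(text, text[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word`."""
    counts = followers[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))   # "cat" follows "the" most often here
print(predict_next("sat"))   # "on" follows "sat" in this text
```

With more text, such counts become probabilities over whole phrases, which is what lets a model generate fluent-sounding sentences without understanding them.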
A neural network’s size is measured in parameters, the internal values that get tweaked as the model learns. When OpenAI released GPT-3 in 2020, it was the largest language model of its time, with 175 billion parameters. The model was trained on text from the web, as well as digitized books and academic journal articles. Sasha Luccioni, a Montreal-based researcher at Hugging Face, a company that develops AI tools, says the training text included essays, transcripts and exams.
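To get a feel for what a parameter count means, here is a hypothetical sketch (the layer sizes are made up for illustration): a fully connected layer mapping n inputs to m outputs has n×m weights plus m biases, and the totals add up quickly.

```python
# Hypothetical illustration of counting a network's parameters.
# A fully connected layer from n_in inputs to n_out outputs has
# n_in * n_out weights plus n_out bias values.
def layer_params(n_in, n_out):
    return n_in * n_out + n_out

# An invented tiny network: 512 -> 256 -> 64 -> 10 units.
sizes = [512, 256, 64, 10]
total = sum(layer_params(a, b) for a, b in zip(sizes, sizes[1:]))
print(total)  # 148426 parameters for this toy network
```

Even this toy network has roughly 150,000 parameters; GPT-3’s 175 billion is more than a million times larger.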
OpenAI enhanced GPT-3 to create GPT-3.5, and in early 2022 the company released a fine-tuned version of GPT-3.5 called InstructGPT. For this, OpenAI added a second type of machine learning: reinforcement learning with human feedback, which involves people in the training process. Workers review the AI’s output, and the model is rewarded for responses the reviewers rate highly. This human feedback helps reduce inappropriate, hurtful or biased responses. ChatGPT is powered by this fine-tuned language model. Since March, paying users have received answers powered by GPT-4, an even larger language model.
In developing ChatGPT, OpenAI added extra safety rules: the tool will not respond to certain sensitive questions or provide harmful information. But that raises another question: Whose values are programmed into the bot, including what it is and isn’t allowed to say?
OpenAI does not provide specific details about how it created and trained ChatGPT, and it has not released its code or training data. That disappoints Luccioni, because it means the tool can’t benefit from the perspective of the broader AI community. She says, “I would like to understand how it works in order to improve it.”
When asked for comment, OpenAI responded with a statement from an unnamed spokesperson: “We made ChatGPT available as a research preview to learn from real-world use, which is critical in developing and deploying safe AI systems. We are constantly incorporating feedback and have learned a lot from our users.” The company patches the tool to fix issues as they arise.
In other words, ChatGPT is not a finished product. OpenAI needs data from real-world use, and its users are the test subjects. As Bender notes: “You’re working for OpenAI for free.”