ChatGPT has already proven itself to be much more than just another chatbot like the customer-service bots we see in online stores. Since its launch, the OpenAI-developed bot has handled unusual user requests, from writing a children’s storybook to replying to matches on dating apps. However, the software’s capabilities can also become a digital security issue, as some hackers are already using ChatGPT to facilitate cybercrime. Other problems involve plagiarism in academic work and the copyright of texts and songs created on the platform.
Despite the controversy, the technology market has high expectations for the chatbot. Experts believe, for example, that a solution based on artificial intelligence could eventually replace the Google search engine. Microsoft co-founder Bill Gates sees potential in ChatGPT and is impressed with its level of innovation. Proof of this is that last Monday (23), Microsoft announced it was extending its partnership with OpenAI. According to The New York Times, the investment could reach 10 billion dollars. Here are six notable things that have already been done with ChatGPT.
Write a Children’s Book
Some authors have adopted ChatGPT as a support tool for producing stories, especially when it comes to creating characters, scenes and dialogue. Although the AI is not capable of creating complex and original stories, the software can help writers who have run out of ideas sketch out a fantasy plot. Of course, the author will still need to review, revise and add depth to the draft written by ChatGPT.
In December 2022, designer Ammaar Reshi used ChatGPT to write the children’s story “Alice and Sparkle”. The story follows young Alice and her robot friend Sparkle as they learn about the world of technology. Once the text was ready, Reshi used Midjourney, a tool capable of producing illustrations from text descriptions, to illustrate the story. According to him, nothing was spent on producing the book.
“Alice and Sparkle” was listed on Amazon 72 hours after it was completed, but was taken down soon after amid controversy over the authorship of the images generated by the illustration software.
Write Malicious Code
Cybercriminals can use ChatGPT’s intelligence to write malicious code, or at least that is what Check Point Research suggests. According to researchers at the cybersecurity firm, the technology can produce malicious lines of code and even phishing emails from simple prompts. The program can even be used to clone existing malware.
In December 2022, Check Point researchers infiltrated underground dark web forums and identified posts by hackers who were already using ChatGPT for cybercrime. Among the uses spotted was the creation of malware designed to steal sensitive user data.
According to Check Point expert Sergey Shykevich, ChatGPT has the potential to dramatically change the cyberthreat landscape, because with the program anyone with minimal resources and little programming knowledge can create malicious code.
Write an Exam Essay
Since its launch, ChatGPT has drawn attention for producing text that closely mimics human writing, and for doing so quickly. But if the robot sat the Enem (Brazil’s National High School Exam), would it score a perfect 1,000 on the essay? That is what the team at the G1 news portal tried to find out.
In December last year, a reporter put the artificial intelligence to the test, asking it to write an essay on the theme of a recent edition of the exam: “Challenges in valuing traditional communities and peoples in Brazil.” ChatGPT completed the task in 50 seconds. The team then sent the essay to be graded by two teachers, Marina Rocha and Flavia Consolato, who gave it a score of 680 out of 1,000.
Although the graders pointed out problems in the text, such as inconsistency, lack of outside references and repetition, the errors have an explanation. They happen because the AI is trained on content available on the Internet, including text that breaks standard writing rules or contains incorrect information, a limitation that ChatGPT itself readily acknowledges.
Write Songs in a Specific Artist’s Style
ChatGPT can write songs in the style of specific singers. In January this year, Australian singer Nick Cave drew attention when, on his website The Red Hand Files, he shared his views on a song the chatbot had written “in the style of Nick Cave”. The singer made it clear that he does not share the enthusiasm for AI and even seems unsettled by the technology.
ChatGPT can also create songs that follow the style of famous Brazilian artists such as Tim Maia, Roberto Carlos and Alcione. To pull this off, the AI analyzes the features and style of the artist’s writing it has been given and generates text in a similar style. Specific works by the composer can also be fed into the prompt so that the program produces lyrics resembling them.
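For readers curious about what such a request looks like when scripted rather than typed into the chat window, here is a minimal sketch using the OpenAI Python SDK. The model name, prompts and placeholder excerpt are illustrative assumptions, not details from the experiments described above.

```python
# Minimal sketch: asking a chat model for lyrics "in the style of" an artist.
# Assumes the OpenAI Python SDK (openai >= 1.0) is installed and the
# OPENAI_API_KEY environment variable is set; model and prompts are illustrative.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

# Excerpts of the artist's lyrics could be pasted here as a style reference.
style_examples = "..."

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # any chat-capable model would do
    messages=[
        {
            "role": "system",
            "content": "You are a songwriter who imitates the style of a given artist.",
        },
        {
            "role": "user",
            "content": (
                "Using these excerpts as a style reference:\n"
                f"{style_examples}\n\n"
                "Write a short verse about longing, in the style of Tim Maia."
            ),
        },
    ],
)

# Print the generated verse.
print(response.choices[0].message.content)
```

Swapping in excerpts from another composer, as the article describes, only requires changing the reference text and the artist named in the prompt.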
Reply to Dating App Matches
Since it can simulate human interactions, ChatGPT can also be used to reply to messages on dating apps. Jordan Parker, associate editor at news site Business Insider, came up with the idea of asking the chatbot to recommend responses to messages on the dating app Hinge.
Anyone who thinks ChatGPT underperformed on the Enem has not seen the software try to flirt. According to Parker, the replies suggested by the chatbot were so embarrassing that none of the matches kept the conversation going. “Some of the responses were so unbearably cheesy that they made me cringe,” Parker said.
To give an idea: one of the journalist’s matches said that their irrational fear was flying. Parker then asked ChatGPT to come up with a funny reply. “No problem,” the chatbot wrote, “I’d be more than happy to hold your hand and provide moral support during the turbulence. And if the plane crashes, at least we’ll be together in a romantic explosion of glory!”
Despite being able to provide convincing answers to a wide variety of questions, the AI lacks social skills and therefore has no idea what may or may not be embarrassing.
Plagiarize Academic Papers
In December 2022, Darren Hick, a philosophy professor at Furman University, discovered that one of his students had used ChatGPT to write an academic paper. For him, what gave the plagiarism away was that the text, although syntactically coherent, did not make sense. “The paper confidently and fluently described Hume’s views on the paradox of horror, but those views were thoroughly wrong,” he said.
Although most academic institutions use plagiarism-detection software to flag non-original work, texts written by ChatGPT can easily slip past these tools. That is because the content is not copied directly from the Internet but generated by the AI from its training data. To catch it, one has to look for the clues the program leaves behind, such as outdated information or the absence of outside references.