Mon 17 Jul 2023

ChatGPT: Users beware!

What is ChatGPT?

ChatGPT is a chatbot developed by OpenAI and built on GPT-4 (at the time of writing), a type of large language model. It is trained using machine learning on vast amounts of text and can answer questions, provide information and generate content in a range of formats (including poetry, computer code, legal drafts etc) through a conversation with the user.

Trained on massive amounts of human-created text, it looks for statistical regularities in that data. It learns which words and phrases are associated with each other and so can predict the next word in a sentence and how sentences are formed. The result is that it mimics and replicates human language. Despite this, there are important limitations that ought to be borne in mind when using the chatbot.
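To make the idea of “statistical regularities” concrete, the short Python sketch below builds a toy next-word predictor from bigram counts over a tiny sample text. The sample text and function names are illustrative inventions; real language models such as GPT-4 use neural networks trained on billions of words, but the underlying principle of learning word associations from data is similar in spirit.

```python
from collections import Counter, defaultdict

# Toy illustration only: real language models use neural networks trained
# on billions of words, not simple bigram counts over a made-up corpus.
sample_text = (
    "the court held that the claim failed "
    "the court held that the appeal succeeded"
)

# Count how often each word follows each other word (bigram statistics).
follow_counts = defaultdict(Counter)
words = sample_text.split()
for current_word, next_word in zip(words, words[1:]):
    follow_counts[current_word][next_word] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    if word not in follow_counts:
        return None
    return follow_counts[word].most_common(1)[0][0]

print(predict_next("court"))  # -> "held" (always followed "court" above)
print(predict_next("the"))    # -> "court" (its most frequent follower)
```

Note that the prediction is purely statistical: the sketch has no notion of whether a claim actually failed or an appeal actually succeeded, which foreshadows the limitations discussed next.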

The Limitations

Misinformation/nonsensical answers: One of the biggest drawbacks of tools like ChatGPT is the inaccuracy of the content and sources they produce. The implications of this in a real-world case are explored below. Why does this occur? Because ChatGPT is built on a large language model drawing on a multiplicity of internet sources, it does not read and analyse the text it draws on but rather recognises patterns in the language. This carries a risk that it may summarise the wrong source or provide inaccurate summaries, depending on the topic and the availability of data, thereby fabricating details and conclusions. Equally, text can be taken out of context or used in ways that were not intended.

Given the vast amount of data it can access, ChatGPT may struggle to condense information, particularly where prompts are too broad. ChatGPT is also sensitive to how questions are phrased: if a user slightly tweaks their prompt, it can produce a different answer, even though the question is the same and merely phrased in a different manner. Furthermore, when it retrieves information, it does not provide references or citations. This raises potential copyright infringement issues, since its responses are based on human-generated text. It will also “believe” any information it is given which, in turn, may result in further inaccurate answers and misinformation being produced.
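To illustrate the point about prompt sensitivity, the sketch below sends the same underlying question phrased in two different ways and prints both responses for comparison. It assumes the openai Python library as it stood in mid-2023 (the ChatCompletion interface) and uses a placeholder API key; the model name and library interface are assumptions and may since have changed.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; supply your own key

# The same underlying question, phrased two different ways.
prompts = [
    "What are the limitations of ChatGPT?",
    "In what ways can ChatGPT's answers be unreliable?",
]

for prompt in prompts:
    # ChatCompletion interface as exposed by the openai library in mid-2023;
    # later versions of the library use a different client object.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # reduce randomness so differences mostly reflect phrasing
    )
    print(prompt)
    print(response.choices[0].message["content"], "\n")
```

Even with the randomness setting (temperature) at its minimum, the two phrasings can steer the model towards noticeably different answers.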

OpenAI itself acknowledges these shortcomings on the ChatGPT homepage, noting that the chatbot “sometimes writes plausible-sounding but incorrect or nonsensical answers.” Users should therefore be alive to the unreliable nature of the content and sources of the answers produced and should ensure the sources are always fact-checked.

When asked how it could prevent generating misinformation, ChatGPT stated that it would:

  • Diversify and enhance its dataset to broaden its perspective, provide reliable sources and fact-check information.
  • Perform human reviews to address instances of misinformation.
  • Integrate a fact-checking mechanism within the chatbot’s responses in real time.
  • Create a feedback system to report inaccurate or misleading information created by ChatGPT.
  • Develop techniques so that ChatGPT can understand context and clarify ambiguities in user intent.
  • Allow users to state the type and depth of information required so ChatGPT can generate responses aligning with the user’s intent.
  • Encourage users to evaluate the responses provided and fact-check them against reliable sources.
  • Collaborate with academic institutions, fact-checkers, and the general research community to improve and refine the methodologies used to identify misinformation.

As the technology develops and these measures are put into practice, the misinformation produced in answers generated by ChatGPT is likely to decrease.

Lack of common sense/emotional intelligence: ChatGPT can also produce misinformation because it lacks human common sense. It cannot replicate the true emotional intelligence that humans possess, although it may appear empathetic, and it does not detect emotional cues or respond suitably to complicated emotional situations.

Limited knowledge: The chatbot only holds information up to the date on which its training data was last updated. It is currently based on GPT-4 for paid subscribers and GPT-3.5 for free users, and it has limited or no knowledge of events or developments after the date of its latest update. This narrows the pool of data available to it, and its responses can and will be outdated. Furthermore, free users, being on an older model, will be more susceptible to outdated references and responses.

Real-world example of the challenges presented by the use of ChatGPT

In a recent US personal injury case against the airline Avianca, two New York lawyers were sanctioned for submitting legal arguments which cited six fictitious cases that ChatGPT had produced. US District Judge P. Kevin Castel in Manhattan fined the lawyers and the law firm for which they worked $5,000. The case exemplifies the perils of using ChatGPT without, at the very least, fact-checking the responses generated. The lawyers disagreed with the court's contention that they had acted in bad faith and argued that it was a mere mistake. Whilst the Judge declared that there is nothing fundamentally improper about using AI to assist in one's case, ethics rules dictate that lawyers must ensure their filings are accurate.

In this case it is not entirely clear which factor, or accumulation of factors, caused ChatGPT to produce the fabricated case references. Nonetheless, it should always be used with caution, particularly where the stakes are high.

This article was co-written by Arina Yazdi, Trainee Solicitor. 
