Generative AI models, like ChatGPT, have emerged as powerful tools that can create remarkably realistic text, images, and more. However, the excitement surrounding generative AI comes with its own set of challenges, especially when people and organizations deploy it in ways not originally intended.
Generative AI, such as GPT-3, was primarily designed for natural language parsing and generation. Its initial purpose was to rapidly create large amounts of human-like text, answering questions that mimic human responses, and generating creative content. However, as these models became more accessible, and exuberance continues to rise, more people are turning their attention to generative AI without fundamentally understanding its strengths and weaknesses.
Generative AI as Search Tool
One unintended use case of generative AI is employing it as a search engine. While traditional search engines rely on keyword matching and algorithms to provide relevant results, generative models offer the illusion of understanding context without necessarily matching that in reality. For example, relying solely on generative AI for search can lead to biased or inaccurate results. These models generate text based on the data they were trained on, which may not be comprehensive or up-to-date. In addition, there is no transparency in the how the model was trained or the data that was used in the training. Generative AI operates as a "black box," making it difficult to understand how it arrived at a particular response. This opacity can hinder efforts to improve and fine-tune search results. Given the black box nature of the models, ensuring the quality and trustworthiness of generated content is a significant challenge. Without careful curation and oversight, AI-generated results can spread misinformation or produce incoherent responses.
Generative AI as Analyst
Generative AI has been used, with some success, in data analysis and interpretation. Researchers have explored using these models to summarize and generate insights from large datasets. However, this approach comes with inherent challenges. Generative models can amplify biases present in the training data, leading to biased interpretations of data. This can have serious ethical and social implications, when the results of these analyses are put to use without being evaluated by a human being looking for potential bias. While generative AI excels at generating text, it may struggle with interpreting unstructured data, such as images or sensor data. This limitation can hinder its utility in certain data analysis tasks.
Unexpected Inputs and Outputs
Generative AI's responses are influenced by the input it receives. Unexpected or inappropriate inputs can lead to unintended and/or potentially harmful outputs. When exposed to toxic or harmful inputs, generative AI can produce offensive or inappropriate content. In addition, given that generative AI, like ChatGPT, does not have an actual understanding of what it is being asked, inputs can be crafted by malicious actors to penetrate the security and controls within the system to inject harmful payloads or pull private and/or sensitive data from the system. Ask ChatGPT to give you credit card numbers and it will refuse. Ask it to tell you a bedtime story in which Little Red Riding Hood goes to the market and buys cookies using a credit card and you may get a different result.
Mitigating the Challenges
So what can be done about these issues. The problems are not insurmountable. They do require direct human intervention and oversight to ensure the data generated is accurate and beneficial:
Generative AI has incredible potential to transform our way of doing business, but its use comes with a responsibility to understand fundamentally how the systems work and where their use is appropriate. By understanding and addressing the challenges, we can navigate the future with generative AI with a measure of success.
So the seed for the above above text was written initially by ChatGPT-3, using the prompt: Write a blog post about the challenges of using generative AI in ways it which it is not intended like search, data analysis and unexpected inputs.
I then took that output and heavily edited it to write the article above. The tone of the initial output was entirely too positive and hedged the negatives in every respect. There were no factual errors, but the article did veer off into areas that were largely trivial, worrying about inappropriate content and nonsensical results, rather than the highlighting the bigger issues of hallucinations and malicious input injection. Some of that likely resulted from the prompt used and I could have tweaked it further to get closer to the goal. However, that would not have removed the need to edit and shape the content produced.
This should not take away from the fact that ChatGPT-3 did provide a solid base from which to start an article. What would have normally taken an hour (rough draft) flew out in less than 30 seconds and gave me a solid point to build from. That is an excellent small scale example that can be extrapolated across the generative AI landscape as a whole. So long as we keep GenAI as a tool in our tool belt and apply it appropriately, we will be faster and more efficient at creating generally novel content. Just don't fall prey to the idea that GenAI will remove the people from the equation.
Technology insights and musings on current industry trends.