Responsible AI starts with democratizing AI knowledge

The average user of AI lacks an adequate understanding of the tools they increasingly depend on for decision-making and work. We need to change that.

Since May 2023, the White House, in collaboration with leading AI companies, has been steering towards a comprehensive framework for responsible AI development. While the finalization of this framework is pending, the industry’s attempts at self-regulation are accelerating, primarily to address growing AI security concerns.

The shift towards embedding trust and safety into AI tools and models is important progress. However, the real challenge lies in ensuring that these critical discussions don’t happen behind closed doors. For AI to evolve responsibly and inclusively, democratizing AI knowledge is imperative.

The rapid advancement of AI has led to an explosion of widely accessible tools, fundamentally altering how end users interact with technology. Chatbots, for instance, have woven themselves into the fabric of our daily routines. A striking 47% of Americans now turn to AI tools like ChatGPT for stock recommendations, while 42% of students rely on them for academic purposes.

The widespread adoption of AI highlights an urgent need to address a growing issue—the reliance on AI tools without a fundamental understanding of the large language models (LLMs) they’re built upon. 

Chatbot hallucinations: Minor errors or misinformation?

A significant concern arising from this lack of understanding is the prevalence of “chatbot hallucinations,” instances in which LLMs inadvertently disseminate false information. Because these models are trained on vast swaths of data, they can generate erroneous responses when fed incorrect data, whether that data was planted deliberately or swept up through indiscriminate internet scraping.

In a society increasingly reliant on technology for information, the ability of AI to generate seemingly credible but false data surpasses the average user’s capacity to process and verify it. The biggest risk here is the unquestioning acceptance of AI-generated information, potentially leading to ill-informed decisions affecting personal, professional, and educational realms.

The challenge, then, is twofold. First, users must be equipped to identify AI misinformation. Second, they must develop habits for verifying AI-generated content. This is not just a matter of enhancing business security. The societal and political ramifications of unchecked AI-generated misinformation are profound and far-reaching.
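
For a technically minded reader, that verification habit can even be made mechanical. The sketch below, in Python, shows one minimal, hypothetical approach: ask the chatbot to cite its sources, then confirm that every cited URL actually resolves before relying on the answer. The ask_model function is a placeholder for whatever chatbot or LLM API you use, and a resolving link is no guarantee the claim is true; the check only catches fabricated references.

```python
import re
import requests

def ask_model(prompt: str) -> str:
    # Hypothetical placeholder: wire this up to the chatbot or LLM API you use.
    raise NotImplementedError("plug in your own model call")

def extract_urls(text: str) -> list[str]:
    # Pull anything that looks like a link out of the model's answer.
    return re.findall(r"https?://\S+", text)

def cited_sources_resolve(answer: str) -> bool:
    # Treat an answer with no sources, or with dead links, as unverified.
    urls = extract_urls(answer)
    if not urls:
        return False
    for url in urls:
        try:
            resp = requests.head(url, allow_redirects=True, timeout=5)
            if resp.status_code >= 400:
                return False
        except requests.RequestException:
            return False
    return True

if __name__ == "__main__":
    answer = ask_model("Summarize the 2023 White House AI commitments and cite your sources.")
    print(answer)
    print("Cited sources resolve:", cited_sources_resolve(answer))
```

A passing check is a floor, not a ceiling: the cited pages still need to be read and weighed against the claim, which is exactly the habit the average user has yet to build.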

A call for open collaboration

In response to these challenges, organizations like the Frontier Model Forum, established by industry leaders Anthropic, Google, Microsoft, and OpenAI, help lay the groundwork for trust and safety in AI tools and models. Yet, for AI to flourish sustainably and responsibly, a broader approach will be necessary. This means extending collaboration beyond corporate walls to include public and open-source communities.

Such inclusivity not only enhances the trustworthiness of AI models but also mirrors the success seen in open-source communities, where a diverse range of perspectives is instrumental in identifying security threats and vulnerabilities.

Knowledge builds trust and safety

An essential aspect of democratizing AI knowledge lies in educating end users about AI’s inner workings. Providing insights into data sourcing, model training, and the inherent limitations of these tools is vital. Such foundational knowledge not only builds trust but also empowers people to use AI more productively and securely. In business contexts, this understanding can transform AI tools from mere efficiency enhancers to drivers of informed decision-making.

Preserving and promoting a culture of inquiry and skepticism is equally critical, especially as AI becomes more pervasive. In educational settings, where AI is reshaping the learning landscape, fostering an understanding of how to use AI correctly is paramount. Educators and students alike need to view AI not as the sole arbiter of truth but as a tool that augments human capabilities in ideation, question formulation, and research.

AI undoubtedly is a powerful equalizer capable of elevating performance across various fields. However, “falling asleep at the wheel,” relying on AI without a proper understanding of its mechanics, leads to complacency that undermines both productivity and quality.

The swift integration of AI into consumer markets has outpaced the guidance and instruction that should accompany it, unveiling a stark reality: The average user lacks adequate education on the tools they increasingly depend on for decision-making and work. Ensuring the safe and secure advancement and use of AI in business, education, and our personal lives hinges on the widespread democratization of AI knowledge.

Only through collective effort and shared understanding can we navigate the challenges and harness the full potential of AI technologies.

Peter Wang is chief AI and innovation officer and co-founder of Anaconda.

Generative AI Insights provides a venue for technology leaders—including vendors and other outside contributors—to explore and discuss the challenges and opportunities of generative artificial intelligence. The selection is wide-ranging, from technology deep dives to case studies to expert opinion, but also subjective, based on our judgment of which topics and treatments will best serve InfoWorld’s technically sophisticated audience. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Contact doug_dineley@foundryco.com.

Copyright © 2024 IDG Communications, Inc.