How to Build Trust and Limit the Spread of Misinformation by LLMs

Grow Your Perspective Weekly: Misinformation and Information Synthesis with Generative AI

   ⏳ Reading Time: 5min 24sec 

💭 Why is Trustworthiness in AI systems important?

Misinformation has historically spread through word of mouth. With the rise of Large Language Models (LLMs), however, the potential scale and speed of its dissemination have reached unprecedented levels. As these models become intertwined with our daily lives – offering suggestions, automating tasks, or even influencing decisions – it's crucial to address their inadvertent role in spreading false information. This article delves into the technical, business, and societal facets of this complex issue:

Questioning the Fabric of Reality

AI-generated content can spread virally on platforms like TikTok and Instagram, and even through broader news channels, making it increasingly difficult to discern the integrity of information. Generative AI’s capacity to mimic and create content with alarming proficiency blurs the line between fact and fabrication, challenging the fabric of societal consensus on truth, especially on contested or subjective topics. And while the giant tech companies are deploying resources to track content generated on their platforms, open-source platforms have no such capability. Much as users once downloaded music from LimeWire rather than paying for it on iTunes, many users can bypass those controls simply by integrating open-source tools.
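To make "tracking generated content" concrete, here is a minimal sketch of the statistical watermarking idea behind some platform-side detection proposals (a toy variant of published "green-list" schemes): the generator secretly biases its sampling toward a keyed subset of tokens, and a detector tests whether a text contains suspiciously many of them. The whitespace tokenization, demo key, and 50/50 split below are illustrative assumptions, not any vendor's actual mechanism.

```python
import hashlib
import math

def is_green(prev_token: str, token: str, key: str = "demo-key") -> bool:
    """Toy keyed partition: hash (key, previous token, token) and call
    roughly half of the hash space 'green'. Real schemes seed a PRNG per
    position and partition the model's vocabulary instead."""
    digest = hashlib.sha256(f"{key}|{prev_token}|{token}".encode()).digest()
    return digest[0] < 128  # ~50% of pairs land in the green list

def watermark_z_score(text: str, key: str = "demo-key") -> float:
    """z-score of the observed green fraction against the 50% expected in
    unwatermarked text. Large positive values suggest a sampler that was
    secretly biased toward green tokens."""
    tokens = text.split()
    if len(tokens) < 2:
        return 0.0
    n = len(tokens) - 1
    greens = sum(is_green(a, b, key) for a, b in zip(tokens, tokens[1:]))
    return (greens - 0.5 * n) / math.sqrt(0.25 * n)

print(round(watermark_z_score("the quick brown fox jumps over the lazy dog"), 2))
```

The catch is exactly the one the paragraph above points at: detection only works if generation was biased in the first place. Anyone running an open-source model locally can skip the biased sampler entirely, so there is no watermark to find.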

Hear this out: while enterprise use cases such as automating workflows with LLMs are very powerful, if everyone adopts similar capabilities at scale, the defensibility of most enterprises decreases. As AI democratizes automation globally, competition will be fierce, and, as always, the industry will consolidate at the expense of startups.


Who controls that reality?

Last week, two giants of AI (Yann LeCun and Andrew Ng) stated that major tech companies are spreading fear of AI to gain control over the industry.

Andrew:
"There are large tech companies that would rather not have to try to compete with open source, so they're creating fear of AI leading to human extinction". "It's been a weapon for lobbyists to argue for legislation that would be very damaging to the open-source community."

Yann:
"Altman, Hassabis, and Amodei are the ones doing massive corporate lobbying at the moment. They are the ones who are attempting to perform a regulatory capture of the AI industry. You, Geoff, and Yoshua are giving ammunition to those lobbying for a ban on open AI R&D.

If your fear-mongering campaigns succeed, they will inevitably result in what you and I would identify as a catastrophe: a small number of companies will control AI.

Most of our academic colleagues are massively in favor of open AI R&D. Very few believe in the doomsday scenarios you have promoted.
You, Yoshua, Geoff, and Stuart are the singular but vocal exceptions."


“These companies already control the largest cluster of AI processors, the best models, the most advanced quantum computing and the overwhelming majority of robotics capacity and IP.”

 - Mustafa Suleyman, The Coming Wave: Technology, Power, and the Twenty-first Century's Greatest Dilemma

📜 A recently published paper argues that a trustworthy general AI should possess a range of capabilities to ensure its reliability and effectiveness. These include:

  1. Sufficient speed

  2. Linguistic and embodied understanding

  3. Ability to explain and reason about knowledge

  4. Ability to consider the source and trustworthiness of information (sketched in code after this list)

  5. Deductive reasoning skills

  6. Ability to draw analogies from acquired knowledge

  7. Skill in leveraging knowledge and reasoning abilities

  8. Access to and comprehension of relevant facts
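Capability 4 is concrete enough to sketch. The snippet below is a minimal, hypothetical illustration of weighting claims by source trustworthiness before accepting them; the trust scores are assumed to be given, and assigning them reliably is itself an open problem.

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    claim: str
    source: str
    trust: float  # assumed given: 0.0 (untrusted) to 1.0 (fully trusted)

def accepted_claims(evidence: list[Evidence], threshold: float = 0.5) -> set[str]:
    """Accept a claim only when the average trust of the sources
    asserting it clears the threshold."""
    by_claim: dict[str, list[float]] = {}
    for e in evidence:
        by_claim.setdefault(e.claim, []).append(e.trust)
    return {claim for claim, scores in by_claim.items()
            if sum(scores) / len(scores) >= threshold}

pool = [
    Evidence("X causes Y", "anonymous-forum-post", trust=0.2),
    Evidence("X causes Y", "peer-reviewed-study", trust=0.9),
    Evidence("Y causes Z", "anonymous-forum-post", trust=0.2),
]
print(accepted_claims(pool))  # {'X causes Y'}: avg 0.55 passes, 0.2 does not
```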

🌏 Societal Impacts:

  1. Influencing Elections: Misinformation has the potential to skew public perceptions and influence elections. As LLMs can generate vast amounts of content quickly, they could be weaponized to spread false narratives.

  2. Daily News Consumption: LLMs could inadvertently generate or propagate false news stories, affecting public opinion.

  3. Legal Ramifications: Misinformation can lead to legal consequences, especially when decisions are based on content that later turns out to be hallucinated.

    Example: A recent case saw a lawyer base their argument on information sourced from ChatGPT. It later transpired that the critical inputs were hallucinations, leading to a misrepresentation of facts in court. (A simple verification pattern is sketched after this list.)

  4. Protecting IP and Trademarks: Getting around LLM guardrails with prompting is easy. Profile images, custom personas, voices, and more can be generated on demand, making it easy to copy any persona in the world and build fake profiles on the back of real people and their hard work.
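The court case in item 3 suggests a simple mitigation pattern: verify every citation an LLM produces against a trusted index before relying on it. A minimal sketch, where KNOWN_CASES and the example citations are hypothetical stand-ins for a real legal-database lookup:

```python
# Hypothetical trusted index; in practice this would query a real
# court-records or legal-research database.
KNOWN_CASES = {
    "Marbury v. Madison, 5 U.S. 137 (1803)",
    "Brown v. Board of Education, 347 U.S. 483 (1954)",
}

def flag_unverified(citations: list[str]) -> list[str]:
    """Return citations that cannot be confirmed against the trusted
    index -- hallucination candidates needing human review before filing."""
    return [c for c in citations if c not in KNOWN_CASES]

llm_output = [
    "Marbury v. Madison, 5 U.S. 137 (1803)",
    "Smith v. Example Corp., 123 F.3d 456 (1999)",  # hypothetical citation, fails lookup
]
print(flag_unverified(llm_output))  # ['Smith v. Example Corp., 123 F.3d 456 (1999)']
```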

💸 With that said, here is what's new in the world of AI and automation:

  • Mistral, a Wannabe OpenAI of Europe, Seeks $300 Million

    By Kate Clark and Stephanie Palazzolo · Monday, Oct. 30, 2023

    Mistral, an artificial intelligence startup founded by former Meta Platforms and Alphabet researchers, plans to raise an additional $300 million from investors just four months after raising $113 million in a seed round led by Lightspeed Venture Partners, according to two people familiar with the discussions.

    The round is expected to value the Paris-based startup, which is developing a large open-source language model and has framed itself as the “OpenAI of Europe,” at over $1 billion before the investment, according to another person. It’s unclear which VC firms Mistral has spoken to about investing. Andreessen Horowitz, one of the most active investors in generative AI, is currently seeking an investment in an open-source LLM developer, according to a person familiar with the matter.

  • UK government to introduce 'Gov.uk Chat' AI chatbot.

    UK Prime Minister Rishi Sunak is collaborating with OpenAI to launch an AI chatbot named “Gov.uk Chat”. The chatbot will assist the British public with legal queries, tax payments, pension access, and information on student loans.

    Alex’s take: As long as AI continues to hallucinate, prescriptive legal advice will be too hazardous. A single directive idea is risky. A few suggestive ideas can be helpful. It’s critical for users to sense-check ideas put forward by the AI—even more so when relating to legal action.

  • How AI could wipe out the $68 billion SEO industry.

    For the past 25 years, websites have used search engine optimization (SEO) to convince search engines like Google to rank their content as highly as possible. This helps drive traffic to their sites. But AI is now disrupting the game.

    Alex’s take: Answers on demand mean less incentive to browse search listings. This will render websites’ efforts to improve their SEO scores, along with the work of the consultants and marketers behind them, useless. I can’t help but feel this will be a reality within the next 5 years.


Key Insights from "Biden lays down the law on AI."

  • Comprehensive AI Governance: President Biden’s executive order introduces a robust framework for AI development and usage, emphasizing safety, privacy, and ethical considerations. This much-anticipated move comes as a response to the rapid proliferation of generative AI technologies and their associated risks.

  • Addressing Bias and Privacy: The order notably aims to tackle issues of bias in AI systems, such as those seen in automated hiring tools, and puts forth measures to protect Americans’ privacy rights. It also requires leading genAI developers to be transparent with the government regarding safety test results.

  • National and Global AI Standards: NIST is tasked with establishing standards for safe AI, reinforcing the government's commitment to mitigating risks. Furthermore, this initiative aligns with global efforts, as G7 nations adopt a set of AI safety principles, signaling an international movement towards standardized AI governance.

  • AI in National Security and Commerce: The order outlines roles for the National Security Council and the Department of Commerce, including the development of AI for cybersecurity and content authentication. It sets a precedent for authenticating government communications and influences the private sector's approach to AI transparency. (A minimal signing sketch follows this list.)

  • Concerns and Critiques: While the executive order is a significant step forward, experts like Avivah Litan of Gartner Research and Adnan Masood of UST point out its limitations, particularly around definitions, scope, and enforcement. They highlight the need for detailed implementation and compliance mechanisms.

  • Bioengineering and AI: A critical aspect of the order is the establishment of standards to prevent AI from being used to create harmful biological agents. This measure aims to safeguard against potential biotechnological threats to public health.

  • Market Expectations and Industry Impact: The order is set to shape market expectations for AI, with a focus on responsible development through testing and transparency. It could influence small businesses and entrepreneurs in the AI space by providing technical assistance and resources.

  • Immigration Policies for AI Talent: In a move to strengthen the AI workforce, the order includes provisions to streamline immigration for highly skilled individuals with expertise in critical areas, facilitating their ability to contribute to US innovation in AI.

  • Government Leadership in AI Training: The US government aims to set an example by hiring AI governance professionals and providing AI training across agencies, ensuring that AI safety measures are developed with a deep understanding of the technology.

  • Bipartisan Potential: The executive order’s focus on AI regulation may provide a rare opportunity for bipartisan cooperation, positioning the US for leadership on a critical topic for the current century.
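On the content-authentication point above: one plausible building block for authenticating government communications is an ordinary digital signature, sketched here with the widely used `cryptography` package. This illustrates the general technique, not the mechanism the order prescribes; key distribution and rotation are out of scope.

```python
# Requires: pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()  # held only by the publisher
public_key = private_key.public_key()       # distributed to verifiers

message = b"Official statement: ..."
signature = private_key.sign(message)

try:
    public_key.verify(signature, message)   # raises if a single byte changed
    print("authentic")
except InvalidSignature:
    print("forged or altered")
```

Verification fails on any alteration of the message or signature, which is what makes signatures useful for proving a communication really came from the stated source.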


Creator & Host, Alp Uguray

Alp Uguray is a technologist and advisor, a 5x recipient of the UiPath Most Valuable Professional (MVP) award, and a globally recognized expert on intelligent automation, artificial intelligence (AI), RPA, process mining, and enterprise digital transformation.

Alp is a Sales Engineer at Ashling Partners.
