
Microsoft’s New Tool Targets AI Hallucinations: Caution Advised

AI is a notorious liar, but Microsoft now says it has a fix for that. Understandably, that’s going to raise some eyebrows — and there’s reason to be skeptical.

Microsoft today revealed Correction, a service that attempts to automatically revise AI-generated text that’s factually wrong. Correction first flags text that may be erroneous — say, a summary of a company’s quarterly earnings call that possibly has misattributed quotes — then fact-checks it by comparing the text with a source of truth (e.g. uploaded transcripts).

Correction, available as part of Microsoft’s Azure AI Content Safety API (in preview for now), can be used with any text-generating AI model, including Meta’s Llama and OpenAI’s GPT-4o.
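For developers, the workflow described here amounts to a single API call: send the model's output along with the grounding material, and get back flagged spans plus a proposed revision. The sketch below is a rough illustration only; the endpoint path, API version, request fields, and the correction flag are assumptions based on Microsoft's public description of the preview, not a confirmed contract.

```python
import requests

# Hypothetical values: the endpoint path, api-version, and field names below
# are assumptions for illustration; consult the Azure AI Content Safety docs
# for the actual preview contract.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
API_KEY = "<your-content-safety-key>"

def check_and_correct(generated_text: str, grounding_sources: list[str]) -> dict:
    """Ask the service to flag ungrounded claims in `generated_text` and,
    if supported, return a revision aligned with the grounding sources."""
    url = f"{ENDPOINT}/contentsafety/text:detectGroundedness"
    payload = {
        "domain": "Generic",                     # assumed field: content domain
        "task": "Summarization",                 # assumed field: type of output
        "text": generated_text,                  # the AI-generated text to check
        "groundingSources": grounding_sources,   # e.g. the uploaded transcript
        "correction": True,                      # assumed flag enabling rewrites
    }
    resp = requests.post(
        url,
        params={"api-version": "2024-09-15-preview"},  # assumed preview version
        headers={"Ocp-Apim-Subscription-Key": API_KEY},
        json=payload,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()  # expected: ungrounded spans plus a corrected text

summary = "Revenue grew 40% last quarter, the CFO said."
transcript = ["CFO: Revenue grew 14% year over year this quarter ..."]
print(check_and_correct(summary, transcript))
```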

“Correction is powered by a new process of utilizing small language models and large language models to align outputs with grounding documents,” a Microsoft spokesperson told TechCrunch. “We hope this new feature supports builders and users of generative AI in fields such as medicine, where application developers determine the accuracy of responses to be of significant importance.”

Google introduced a similar feature this summer in Vertex AI, its AI development platform, to let customers “ground” models by using data from third-party providers, their own datasets, or Google Search.

But experts caution that these grounding approaches don’t address the root cause of hallucinations.

“Trying to eliminate hallucinations from generative AI is like trying to eliminate hydrogen from water,” said Os Keyes, a PhD candidate at the University of Washington who studies the ethical impact of emerging tech. “It’s an essential component of how the technology works.”

Text-generating models hallucinate because they don’t actually “know” anything. They’re statistical systems that identify patterns in a series of words and predict which words come next based on the countless examples they are trained on.

It follows that a model’s responses aren’t answers, but merely predictions of how a question would be answered were it present in the training set. As a consequence, models tend to play fast and loose with the truth. One study found that OpenAI’s ChatGPT gets medical questions wrong half the time.
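That distinction between a likely answer and a true one is easy to see in miniature. The toy next-word predictor below is illustrative only; real models use neural networks over subword tokens, but the failure mode is the same: it completes a sentence with whatever its "training data" made most common, regardless of whether that is correct.

```python
from collections import Counter, defaultdict

# Toy illustration: a word-level "model" that only knows which word most
# often followed another word in its training text.
training_text = (
    "the capital of france is paris . "
    "the capital of australia is sydney . "    # a common misconception in the data
    "the capital of australia is canberra . "
    "the capital of australia is sydney . "
)

counts = defaultdict(Counter)
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    counts[prev][nxt] += 1

def predict_next(word: str) -> str:
    # Returns whatever followed `word` most often in training -- right or wrong.
    return counts[word].most_common(1)[0][0]

prompt = "the capital of australia is".split()
print(" ".join(prompt + [predict_next(prompt[-1])]))  # -> "... sydney"
```

Scale that up to billions of parameters and web-scale data and the errors get subtler, but the mechanism, prediction rather than knowledge, is unchanged.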

Microsoft’s solution is a pair of cross-referencing, copy-editor-esque meta models designed to highlight and rewrite hallucinations.

A classifier model looks for possibly incorrect, fabricated, or irrelevant snippets of AI-generated text (hallucinations). If it detects hallucinations, the classifier ropes in a second model, a language model, that tries to correct for the hallucinations in accordance with specified “grounding documents.”

Image Credits: Microsoft
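Structurally, that is a detect-then-rewrite loop. The sketch below shows how such a two-stage pipeline might be wired together; the interfaces, the Span type, and the stubbed stage functions are hypothetical stand-ins, not Microsoft's implementation.

```python
from dataclasses import dataclass

# Hypothetical interfaces standing in for the two models the article describes.
# Nothing below reflects Microsoft's actual architecture.

@dataclass
class Span:
    start: int
    end: int
    text: str

def classify_ungrounded(text: str, sources: list[str]) -> list[Span]:
    """Stage 1 (classifier): return spans of `text` that the grounding
    sources do not support. Stubbed out here."""
    raise NotImplementedError("placeholder for the classifier model")

def rewrite_span(span: Span, sources: list[str]) -> str:
    """Stage 2 (language model): rewrite one flagged span so it is
    consistent with the grounding sources. Stubbed out here."""
    raise NotImplementedError("placeholder for the rewriting model")

def correct(text: str, sources: list[str]) -> str:
    """Flag possibly ungrounded spans, then replace each with a rewrite
    aligned to the sources, working right-to-left to keep offsets valid."""
    spans = classify_ungrounded(text, sources)
    for span in sorted(spans, key=lambda s: s.start, reverse=True):
        text = text[:span.start] + rewrite_span(span, sources) + text[span.end:]
    return text
```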

“Correction can significantly enhance the reliability and trustworthiness of AI-generated content by helping application developers reduce user dissatisfaction and potential reputational risks,” the Microsoft spokesperson said. “It is important to note that groundedness detection does not solve for ‘accuracy,’ but helps to align generative AI outputs with grounding documents.”

Keyes has doubts about this.

“It might reduce some problems,” they said, “but it’s also going to generate new ones. After all, Correction’s hallucination detection library is also presumably capable of hallucinating.”

Asked for a backgrounder on the Correction models, the spokesperson pointed to a recent paper from a Microsoft research team describing the models’ pre-production architectures. But the paper omits key details, like which datasets were used to train the models.

Mike Cook, a research fellow at Queen Mary University specializing in AI, argued that even if Correction works as advertised, it threatens to compound the trust and explainability issues around AI. The service might catch some errors, but it could also lull users into a false sense of security — into thinking models are being truthful more often than is actually the case.

“Microsoft, like OpenAI and Google, have created this issue where models are being relied upon in scenarios where they are frequently wrong,” he said. “What Microsoft is doing now is repeating the mistake at a higher level. Let’s say this takes us from 90% safety to 99% safety — the issue was never really in that 9%. It’s always going to be in the 1% of mistakes we’re not yet detecting.”

Cook added that there’s also a cynical business angle to how Microsoft is bundling Correction. The feature is free on its own, but the “groundedness detection” required to detect hallucinations for Correction to revise is only free up to 5,000 “text records” per month. It costs 38 cents per 1,000 text records after that.
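Taking the quoted pricing at face value (free for the first 5,000 text records per month, then 38 cents per 1,000), the back-of-the-envelope math looks like this; billing granularity and rounding are assumptions.

```python
def monthly_groundedness_cost(text_records: int,
                              free_allowance: int = 5_000,
                              price_per_1000: float = 0.38) -> float:
    """Estimate the monthly groundedness-detection cost from the pricing
    cited in the article. Billing granularity and rounding are assumed."""
    billable = max(0, text_records - free_allowance)
    return billable / 1_000 * price_per_1000

# e.g. 250,000 text records in a month:
print(f"${monthly_groundedness_cost(250_000):.2f}")  # -> $93.10
```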

Microsoft is certainly under pressure to prove to customers — and shareholders — that its AI is worth the investment.

In Q2 alone, the tech giant ploughed nearly $19 billion into capital expenditures and equipment, mostly related to AI. But the company has yet to see significant revenue from AI. A Wall Street analyst this week downgraded the company’s stock, citing doubts about its long-term AI strategy.

According to a piece in The Information, many early adopters have paused deployments of Microsoft’s flagship generative AI platform, Microsoft 365 Copilot, due to performance and cost concerns. For one client using Copilot for Microsoft Teams meetings, the AI reportedly invented attendees and implied that calls were about subjects that were never actually discussed.

Accuracy and the potential for hallucinations are now among businesses’ biggest concerns when piloting AI tools, according to a KPMG poll.

“If this were a normal product lifecycle, generative AI would still be in academic R&D, and being worked on to improve it and understand its strengths and weaknesses,” Cook said. “Instead, we’ve deployed it into a dozen industries. Microsoft and others have loaded everyone onto their exciting new rocket ship, and are deciding to build the landing gear and the parachutes while on the way to their destination.”
