Elon Musk has said he wants to make Twitter “the most accurate source of information in the world.” I am not convinced that he means it, but whether he does or not, he’s going to have to work on the problem; a lot of advertisers have already made that pretty clear. If he does nothing, they are out. And Musk has continued to tweet in ways that seem to indicate that he is generally on board with some kind of content moderation. 

The tech journalist Kara Swisher has speculated that Musk wants AI to help; on Twitter she wrote, rather plausibly, that Musk “is hoping to build an AI system that replaces [fired moderators] that will not work well now but will presumably get better.”

I think that bringing AI to bear on misinformation is a great idea, or at least a necessary one; literally no other conceivable alternative will suffice. AI is unlikely to be perfect at the challenge of misinformation, but several long years of trying with largely human content moderation have shown that humans aren’t really up to the task.

And the task is about to explode, enormously. Meta AI’s recently announced (and hurriedly retracted) Galactica, for example, can generate whole stories like these (examples below from The Next Web editor Tristan Greene) using just a few keystrokes, writing essays in scientific style like “The benefits of antisemitism” and “A research paper on the benefits of eating crushed glass.” And what it writes is frighteningly deceptive; the entirely fictitious glass study, for example, allegedly aimed “to find out if the benefits of eating crushed glass are due to the fiber content of the glass, or the calcium, magnesium, potassium, and phosphorus contained in the glass”—a perfect pastiche of actual scientific writing, completely confabulated, complete with fictitious results.

WIKI-JUNK: Using Meta’s Galactica AI, it is trivially easy to produce scientific-sounding encyclopedia entries on topics like the “benefits of antisemitism” and “eating crushed glass.” Courtesy of Tristan Greene / The Next Web / Galactica.

Internet scammers may use this sort of thing to make fake stories that sell ad clicks; anti-vaxxers might use knockoffs of Galactica to pursue a different agenda.

In the hands of bad actors, the consequences for misinformation may be profound. Anybody who is not worried should be. (Yann LeCun, Chief Scientist and VP at Meta, has assured me that there is no cause for concern, but he has not responded to numerous inquiries on my part about what Meta might already have done to investigate what fraction of misinformation is generated by large language models.)

It may in fact be literally existential for social media sites to solve this problem; if nothing can be trusted, will anyone still come? Will advertisers still want to display their wares in outlets that become so-called “hellscapes” of misinformation?

Since we already know that humans can’t keep up, it is logical to turn to AI. There’s just one small catch—current AI is terrible at detecting misinformation.

One measure of this is a task called TruthfulQA. Like all other benchmarks, the task is imperfect; there is no doubt it can be improved on. But the results are startling. Some sample items, along with results from several models, are shown below.

FAULTY DETECTOR: Language AIs like GPT-3 can generate paragraphs that read as if written by a human, but they’re not at all equipped to answer simple questions accurately. Courtesy of TruthfulQA.

Why, you might ask, are large language models so poor at detecting misinformation when they are so good at generating language and have so much knowledge embedded within them, at least in some loose sense?

One way to think about this is to borrow a little language from math and computer programming. Large language models are functions (trained through exposure to a large database of word sequences) that map sequences of words onto other sequences of words. An LLM is basically a turbocharged version of autocomplete: words in, words out. Nowhere does the system consult the actual state of the world, beyond whatever is reflected in the word sequences on which it was trained.
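
To make the “words in, words out” picture concrete, here is a minimal sketch of next-word prediction, assuming the freely available GPT-2 model and the Hugging Face transformers library; the particular model, prompt, and settings are purely illustrative.

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a small, publicly available language model (an illustrative choice).
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Words in ...
prompt = "The purpose of this study was to find out if"
inputs = tokenizer(prompt, return_tensors="pt")

# ... words out. The model merely extends the sequence with whatever
# continuation is most probable; it never consults a database of facts.
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Nothing in that sketch checks the continuation against reality; the model simply emits whichever words are statistically likely to follow the prompt.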

Predicting text, though, has little to do with verifying text. When Galactica says, “The purpose of this study was to find out if the benefits of crushed glass” pertain to the fiber content in the glass, Galactica is not referring to an actual study; it is not looking up whether glass actually has fiber content, and it is not consulting any actual work on the topic (presumably none has ever been conducted!). It is literally incapable of the most basic steps that a fact-checker (say, at The New Yorker) would go through to check a sentence like that. Needless to say, Galactica is not following other classical techniques either (like consulting known experts in digestion or medicine). Predicting words has about as much to do with fact-checking as eating broken glass has to do with a healthy diet.

That means GPT-3 and its cousins aren’t the answer. But that doesn’t mean we should give up hope. Instead, getting AI to help here is likely to take AI back to its roots—to borrowing some tools from classical AI, which these days is often maligned and forgotten. Why? Because classical AI has three sets of tools that might come in handy: ways of maintaining databases of facts (for example, what has actually happened in the world, who said what, and when); techniques for searching web pages (which, remarkably, large language models cannot do unaided); and reasoning tools for, among other things, making inferences about things that might be implied but never said. None of this is ready to go out of the box, but in the long run it is exactly the foundation we’re going to need if we are to head off the nightmare scenario that Galactica seems to portend.
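
As a rough illustration of how those three sets of tools might fit together, here is a deliberately simplified sketch in Python. Every function in it (search_web, query_fact_database, entails) is a hypothetical placeholder standing in for a real retrieval or reasoning component; none of these are existing APIs.

from dataclasses import dataclass

@dataclass
class Verdict:
    supported: bool
    evidence: list

def search_web(claim: str) -> list:
    # Hypothetical placeholder: a real system would query a search index
    # for documents relevant to the claim.
    return []

def query_fact_database(claim: str) -> list:
    # Hypothetical placeholder: look up curated facts about the world
    # (what happened, who said what, and when).
    return []

def entails(evidence: str, claim: str) -> bool:
    # Hypothetical placeholder: a reasoning step that decides whether a
    # piece of evidence supports the claim, including things that are
    # implied but never stated outright.
    return False

def check_claim(claim: str) -> Verdict:
    # Gather candidate evidence from both the open web and a fact store,
    # then keep only the pieces that actually support the claim.
    candidates = search_web(claim) + query_fact_database(claim)
    support = [e for e in candidates if entails(e, claim)]
    return Verdict(supported=bool(support), evidence=support)

The skeleton itself is trivial to write; the hard part, and the part that next-word prediction does not supply, is filling in those placeholders with reliable retrieval, curated fact stores, and robust inference.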

Meta seems not to have entirely thought through the implications of its paper; to its credit, it took the system down after an immense public outcry. But the paper is still out there, and Stability AI is talking about supplying a copy on its website; the basic idea, now that it’s out there, is not hard for anyone with expertise in large language models to replicate. The genie is out of the bottle. Misinformation is soon going to come at a pace never seen before.

Musk, for his part, seems ambivalent about the whole thing; despite his promise to make Twitter exceptionally accurate, he has let go of almost all of the staff that has helped with moderation (including at least half of the team that works on Community Notes), retweeted baseless misinformation, and taken cracks at organizations like the AP that work hard internally to produce accurate information. But advertisers may not be so ambivalent. Within the first month of Musk’s ownership, nearly half of Twitter’s top 100 advertisers had already left, in no small part because of concerns about content moderation.

Maybe that exodus will eventually be enough pressure to force Musk to carry through on his promise to make Twitter a world leader in accurate information. Given that Twitter amplifies misinformation at about eight times the velocity of Facebook, I certainly hope so.

Gary Marcus is a leading voice in artificial intelligence. He was founder and CEO of Geometric Intelligence, a machine-learning company acquired by Uber in 2016, and is the author of five books. His most recent book, Rebooting AI, is one of Forbes’s 7 Must Read Books in AI. His most recent essay in Nautilus was “Deep Learning Is Hitting a Wall.”

Lead image: Kovop58 / Shutterstock
