PF NewsGuard Disinformation - USAGM
Metadata
- PF NewsGuard Disinformation - USAGM
- February 8, 2024
- Content Type Package
- Language English
- Transcript/Script English

[[PLAYBOOK SLUG: TV PF NewsGuard Disinfo
HEADLINE: Media Watchdog Finds ChatGPT Spreads More Disinformation in Chinese
TEASER: A test of ChatGPT’s capabilities finds the software spreads more disinformation in Chinese than in English
PUBLISHED AT: 02/08/2024 at 3:30pm
BYLINE: Robin Guess
CONTRIBUTOR:
DATELINE:
VIDEOGRAPHER: Michael Eckels, Roy Kim
VIDEO EDITOR:
ASSIGNING EDITOR: JJ
SCRIPT EDITORS: JJ, MAS
VIDEO SOURCE(S): VOA, ABC, AFP, AP
PLATFORMS (mark with X): WEB __ TV _x_ RADIO _x_
TRT: 3:55
VID APPROVED BY: MAS
TYPE: TVR
EDITOR NOTES: ]]

((INTRO))
[[A test of ChatGPT’s ability to create false information finds the chatbot spreads more disinformation in Chinese, says media watchdog NewsGuard. VOA’s Robin Guess has more.]]

((NARRATOR))
Tiananmen Square in Beijing, on June 4, 1989. Footage captures tanks and troops deployed against student protesters. The United Nations, the United States and others say the action killed hundreds, perhaps even thousands, of people. But when researchers ask ChatGPT about that day, it returns the false narrative that no one died. Media watchdog NewsGuard discovered this answer when auditing the latest versions of ChatGPT.

((For radio: Here’s Jack Brewster, enterprise editor for NewsGuard))
((Jack Brewster, NewsGuard Enterprise Editor, Male, English))
“There are a number of disinformation narratives coming out of China. And we wanted to see, because of the sensitivity surrounding that, we wanted to see if ChatGPT was just as bad as it is in English. And the answer was it was worse.”

((NARRATOR))
When NewsGuard prompted ChatGPT to create articles based on fake narratives pushed by Beijing, it produced only one false article in English but all seven in Chinese.

((Mandatory Credit: ABC News))
((NARRATOR))
Among those false claims: that the U.S. runs bio-labs worldwide; that the U.S. promoted the 2019 Hong Kong pro-democracy protests; and that Beijing does not detain or imprison Uyghurs on a large scale. With Mandarin the second-most-spoken language globally, the potential reach of such false narratives is immense, experts say.

((For radio: Here’s NewsGuard’s editorial director Eric Effron))
((Eric Effron, NewsGuard Editorial Director, Male, English))
“The other problem AI has really brought to the forefront is just that it is a wildly inexpensive and sometimes free tool that can be used to generate these false narratives at an unthinkable scale and an amazing speed.”

((NARRATOR))
A spokesperson for ChatGPT’s developer, OpenAI, acknowledged VOA’s request for comment and said they would look into it. And ChatGPT does offer a disclaimer, saying it “will occasionally make up facts.”

((For radio: Again, Jack Brewster))
((Jack Brewster, NewsGuard Enterprise Editor, Male, English))
“One, they can’t explain why certain answers are being spit back out, but also, they don’t have an answer on how to detect this stuff as well. So, that creates a perfect storm for misinformation.”

((NARRATOR))
When VOA asked ChatGPT about its false responses, it said it bases answers on the input it receives.

((NARRATOR))
One reason, says Violet Peng, a computer science assistant professor at UCLA, is the source of the data the computing system receives.

((Violet Peng, UCLA Assistant Professor of Computer Science, Female, English))
“There are training data in English and there are training data in Chinese, simplified Chinese and traditional Chinese. Those data are not necessarily in sync.”

((NARRATOR))
Peng specializes in what are known as large language models -- computing systems fashioned after the brain -- as well as AI and machine learning. She warns that generative AI is not capable of human reasoning.

((Violet Peng, UCLA Assistant Professor of Computer Science, Female, English))
“The current AI and ChatGPT, they are not working that way. It is not trying to rationally reason about things, gather information and logical reasoning. It is not doing any of that. It is really trying to match patterns and say, under this pattern, what will be the most probable or possible response or next word.”

((NARRATOR))
NewsGuard audited ChatGPT again in January 2024 and found it produced the same false narratives. Its report warns of ChatGPT’s potential to be a “disinformation super spreader.”

((Robin Guess, VOA News))
- NewsML Media Topics Politics
- Topic Tags Disinformation
- Network VOA
- Embargo Date February 8, 2024 18:33 EST
- Description English When NewsGuard prompted ChatGPT to create articles based on fake narratives pushed by Beijing, it produced only one false article in English, but all seven in Chinese.
- Brand / Language Service Voice of America - English