
Saturday, December 16, 2023

AI and Deliberative Democracy

From Helene Landemore at the International Monetary Fund:

We now have the chance to scale and improve such deliberative processes exponentially so that citizens’ voices, in all their richness and diversity, can make a difference. Taiwan Province of China exemplifies this transition.

Following the 2014 Sunflower Revolution there, which brought tech-savvy politicians to power, an online open-source platform called pol.is was introduced. This platform allows people to express elaborate opinions about any topic, from Uber regulation to COVID policies, and vote on the opinions submitted by others. It also uses these votes to map the opinion landscape, helping contributors understand which proposals would garner consensus while clearly identifying minority and dissenting opinions and even groups of lobbyists with an obvious party line. This helps people understand each other better and reduces polarization. Politicians then use the resulting information to shape public policy responses that take into account all viewpoints.

Over the past few months pol.is has evolved to integrate machine learning with some of its functions to render the experience of the platform more deliberative. Contributors to the platform can now engage with a large language model, or LLM (a type of AI), that speaks on behalf of different opinion clusters and helps individuals figure out the position of their allies, opponents, and everyone in between. This makes the experience on the platform more truly deliberative and further helps depolarization. Today, this tool is frequently used to consult with residents, engaging 12 million people, or nearly half the population.

Corporations, which face their own governance challenges, also see the potential of large-scale AI-augmented consultations. After launching its more classically technocratic Oversight Board, staffed with lawyers and experts to make decisions on content, Meta (formerly Facebook) began experimenting in 2022 with Meta Community Forums—where randomly selected groups of users from several countries could deliberate on climate content regulation. An even more ambitious effort, in December 2022, involved 6,000 users from 32 countries in 19 languages to discuss cyberbullying in the metaverse over several days. Deliberations in the Meta experiment were facilitated on a proprietary Stanford University platform by (still basic) AI, which assigned speaking times, helped the group decide on topics, and advised on when to put them aside.

For now there is no evidence that AI facilitators do a better job than humans, but that may soon change. And when it does, the AI facilitators will have the distinct advantage of being much cheaper, which matters if we are ever to scale deep deliberative processes among humans (rather than between humans and LLM impersonators, as in the Taiwanese experience) from 6,000 to millions of people.
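The opinion-mapping step Landemore describes is, at bottom, a clustering problem: each participant's agree/disagree/pass votes form a row in a matrix, and voters with similar rows are grouped into opinion clusters. The sketch below is a minimal illustration of that idea using dimensionality reduction plus k-means, which is broadly how pol.is is reported to work; the vote data, cluster count, and code are illustrative, not pol.is's actual implementation.

    # Illustrative sketch of pol.is-style opinion mapping (not pol.is code).
    # Rows are participants, columns are statements; entries are
    # +1 (agree), -1 (disagree), 0 (pass or not yet seen).
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    votes = np.array([
        [ 1,  1, -1,  0],
        [ 1,  1, -1, -1],
        [-1, -1,  1,  1],
        [-1,  0,  1,  1],
        [ 1,  1,  0,  1],
    ])

    # Project voters into a low-dimensional "opinion space" ...
    coords = PCA(n_components=2).fit_transform(votes)

    # ... then group them into opinion clusters.
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(coords)

    # Statements that every cluster tends to agree with are consensus
    # candidates; statements with opposite signs across clusters mark
    # the dividing lines between groups.
    for s in range(votes.shape[1]):
        support = [votes[labels == c, s].mean() for c in sorted(set(labels))]
        print(f"statement {s}: per-cluster support {np.round(support, 2)}")

The LLM layer Landemore mentions sits on top of exactly this structure, summarizing and speaking on behalf of each cluster.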

Wednesday, December 6, 2023

AI Fellows on the Hill


Brendan Bordelon at Politico:
Top tech companies with major stakes in artificial intelligence are channeling money through a venerable science nonprofit to help fund fellows working on AI policy in key Senate offices, adding to the roster of government staffers across Washington whose salaries are being paid by tech billionaires and others with direct interests in AI regulation.

The new “rapid response cohort” of congressional AI fellows is run by the American Association for the Advancement of Science, a Washington-based nonprofit, with substantial support from Microsoft, OpenAI, Google, IBM and Nvidia, according to the AAAS. It comes on top of the network of AI fellows funded by Open Philanthropy, a group financed by billionaire Facebook co-founder Dustin Moskovitz.

The six rapid response fellows, including five with PhDs and two who held prior positions at big tech firms, operate from the offices of two of Senate Majority Leader Chuck Schumer’s top three lieutenants on AI legislation — Sens. Martin Heinrich (D-N.M.) and Mike Rounds (R-S.D.) — as well as the Senate Banking Committee and the offices of Sens. Ron Wyden (D-Ore.), Bill Cassidy (R-La.) and Mark Kelly (D-Ariz.).

Alongside the Open Philanthropy fellows — and hundreds of outside-funded fellows throughout the government, including many with links to the tech industry — the six AI staffers in the industry-funded rapid response cohort are helping shape how key players in Congress approach the debate over when and how to regulate AI, at a time when many Americans are deeply skeptical of the industry.

The apparent conflict of tech-funded figures working inside the Capitol Hill offices at the forefront of AI policy worries some tech experts, who fear Congress could be distracted from rules that would protect the public from biased, discriminatory or inaccurate AI systems.

Monday, December 4, 2023

AI and Political Ads

From "The New Political Ad Machine: Policy Frameworks for Political Ads in an Age of AI," a report from the Center on Technology Policy at the University of North Carolina at Chapel Hill:

While there is limited empirical research on GAI in political ads, our reading of the literature on online misinformation, political ads, and bias in AI models offers five important insights into the potential harm of GAI in political ads:

  • First, research suggests that the persuasive power of both political ads and online misinformation is often overstated. Political ads likely have more of an effect on behavior – such as voter turnout and fundraising – than on persuasion.
  • Second, political ads likely have the greatest impact in smaller, down-ballot races where there is less advertising, oversight, or familiarity with candidates.
  • Third, GAI content has the potential to replicate bias, including racial, gender, and national biases.
  • Fourth, research on political disclaimers suggests that watermarks and disclaimers are unlikely to significantly curb risks.
  • Fifth, significant holes in the research remain.

These insights from the literature help to formulate recommendations for policymakers that can mitigate the potential harm of GAI without unduly constraining its potential benefits. Research suggests that policy should focus more on preventing abuse in smaller, down-ballot races and on mitigating bias than on banning deceptive GAI content or requiring disclaimers or watermarks. Although the research points in this direction, holes in the literature remain. The result is that we should approach its insights from a position of curiosity, rather than certainty, and conduct additional research into the impact of GAI on the electoral process.

Building on our assessment of the academic literature, we offer ten recommendations for policymakers seeking to limit the potential risks of GAI in political ads. These recommendations fall into two categories: First, public policy should target electoral harms rather than technologies. Second, public policy should promote learning about GAI so that we can govern it more effectively over time.

Thursday, July 20, 2023

Fake Trump Photos, Fake Trump Voice



The fake images of King and Trump together were created using artificial intelligence software, though it’s not clear precisely which program was used. AI generator tools like DALL-E, Stable Diffusion and Midjourney allow anyone to create a photo-realistic image simply by using a text prompt and describing the scene they’d like to see created. Companies with large photo libraries have filed suit against various image generators this year, including a lawsuit from Getty Images against Stability AI filed in February.
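For a sense of how low the barrier is, here is a minimal sketch of the text-prompt-to-image workflow using the open-source diffusers library with a Stable Diffusion checkpoint; the model ID and prompt are illustrative, and hosted tools like DALL-E and Midjourney wrap the same basic idea behind a web interface.

    # Illustrative text-to-image sketch using Hugging Face's diffusers
    # library and an open Stable Diffusion checkpoint (model ID and prompt
    # are examples only; a GPU is assumed for reasonable speed).
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    image = pipe("a photorealistic press photo of a crowded city council meeting").images[0]
    image.save("generated.png")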

Louis Jacobson and Loreben Tuquero at Poynter:
Never Back Down, a political action committee supporting Florida Gov. Ron DeSantis for the Republican presidential nomination, used former President Donald Trump’s own words against him in a new ad.

Candidates do that all of the time. In this case, however, the ad-makers pushed the boundaries by manipulating audio to read out loud an attack against Iowa Gov. Kim Reynolds in Trump’s voice.

The message spoken in the ad accurately reflects what Trump wrote on Truth Social, but he did not speak those words himself.

The ad criticized Trump for “attacking” Reynolds, a popular fellow Republican from one of the most important early states in the presidential primary calendar.

The post on Trump’s Truth Social platform said, “I opened up the Governor position for Kim Reynolds, & when she fell behind, I ENDORSED her, did big Rallies, & she won. Now, she wants to remain ‘NEUTRAL.’ I don’t invite her to events!”

A viewer wouldn’t know that Trump didn’t say this out loud: Never Back Down took Trump’s words and used artificial intelligence to create audio of a Trump-like voice reading them.


Wednesday, May 31, 2023

AI-Generated Research and Hallucinations

Artificial intelligence is an increasingly important topic in politics, policy, and law.

Benjamin Weiser at NYT:
The lawsuit began like so many others: A man named Roberto Mata sued the airline Avianca, saying he was injured when a metal serving cart struck his knee during a flight to Kennedy International Airport in New York.

When Avianca asked a Manhattan federal judge to toss out the case, Mr. Mata’s lawyers vehemently objected, submitting a 10-page brief that cited more than half a dozen relevant court decisions. There was Martinez v. Delta Air Lines, Zicherman v. Korean Air Lines and, of course, Varghese v. China Southern Airlines, with its learned discussion of federal law and “the tolling effect of the automatic stay on a statute of limitations.”

There was just one hitch: No one — not the airline’s lawyers, not even the judge himself — could find the decisions or the quotations cited and summarized in the brief.

That was because ChatGPT had invented everything.

The lawyer who created the brief, Steven A. Schwartz of the firm Levidow, Levidow & Oberman, threw himself on the mercy of the court on Thursday, saying in an affidavit that he had used the artificial intelligence program to do his legal research — “a source that has revealed itself to be unreliable.”

This case was not unique. Gerrit De Vynck explains at WP:

Recently, researchers asked two versions of OpenAI’s ChatGPT artificial intelligence chatbot where Massachusetts Institute of Technology professor Tomás Lozano-Pérez was born.

One bot said Spain and the other said Cuba. Once the system told the bots to debate the answers, the one that said Spain quickly apologized and agreed with the one with the correct answer, Cuba.

The finding, in a paper released by a team of MIT researchers last week, is the latest potential breakthrough in helping chatbots to arrive at the correct answer. The researchers proposed using different chatbots to produce multiple answers to the same question and then letting them debate each other until one answer won out. The researchers found using this “society of minds” method made them more factual.

“Language models are trained to predict the next word,” said Yilun Du, a researcher at MIT who was previously a research fellow at OpenAI, and one of the paper’s authors. “They are not trained to tell people they don’t know what they’re doing.” The result is bots that act like precocious people-pleasers, making up answers instead of admitting they simply don’t know.

The researchers’ creative approach is just the latest attempt to solve for one of the most pressing concerns in the exploding field of AI. Despite the incredible leaps in capabilities that “generative” chatbots like OpenAI’s ChatGPT, Microsoft’s Bing and Google’s Bard have demonstrated in the last six months, they still have a major fatal flaw: they make stuff up all the time.

Figuring out how to prevent or fix what the field is calling “hallucinations” has become an obsession among many tech workers, researchers and AI skeptics alike. The issue is mentioned in dozens of academic papers posted to the online database Arxiv and Big Tech CEOs like Google’s Sundar Pichai have addressed it repeatedly. As the tech gets pushed out to millions of people and integrated into critical fields including medicine and law, understanding hallucinations and finding ways to mitigate them has become even more crucial.
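The "society of minds" procedure the MIT team describes is straightforward: sample an answer from each of several model instances, show each one its peers' answers, ask it to reconsider, and repeat for a few rounds before taking a majority vote. Here is a minimal sketch of that loop; the ask_model function below is a toy stand-in that simulates agents with fixed beliefs who defer to a peer majority, whereas a real system would call a chat-model API instead.

    # Minimal sketch of multi-agent "debate" for factuality. The toy
    # ask_model() simulates agents instead of calling a real chat model.
    from collections import Counter

    BELIEFS = {0: "Spain", 1: "Cuba", 2: "Cuba"}  # illustrative starting answers

    def ask_model(agent_id: int, question: str, peer_answers: list[str]) -> str:
        # Stand-in for a chat-model call: keep the agent's own answer
        # unless a clear majority of peers says otherwise.
        own = BELIEFS[agent_id]
        if peer_answers:
            top, count = Counter(peer_answers).most_common(1)[0]
            if count > len(peer_answers) / 2:
                return top
        return own

    def debate(question: str, n_agents: int = 3, rounds: int = 2) -> str:
        answers = [ask_model(i, question, []) for i in range(n_agents)]
        for _ in range(rounds):
            answers = [
                ask_model(i, question, [a for j, a in enumerate(answers) if j != i])
                for i in range(n_agents)
            ]
        # Majority vote over the final round's answers.
        return Counter(answers).most_common(1)[0][0]

    print(debate("Where was Tomas Lozano-Perez born?"))  # prints "Cuba"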

Thursday, May 25, 2023

AI and Politics


Emily A. Vogels at Pew:
About six-in-ten U.S. adults (58%) are familiar with ChatGPT, though relatively few have tried it themselves, according to a Pew Research Center survey conducted in March. Among those who have tried ChatGPT, a majority report it has been at least somewhat useful.

ChatGPT is an open-access online chatbot that allows users to ask questions and request content. The versatility and human-like quality of its responses have captured the attention of the media, the tech industry and some members of the public. ChatGPT surpassed 100 million monthly users within two months of its public launch in late November 2022, setting a world record as the fastest-growing web application. Due to these factors, the Center chose to ask Americans about ChatGPT specifically rather than chatbots or large language models (LLMs) more broadly.

Jim Saksa at Roll Call:

AI is already being used in politics. After President Joe Biden announced his reelection campaign, the Republican National Committee released an AI-generated video that envisioned a dystopian future wrought by his four more years in office. In the Chicago mayoral primary earlier this year, a Twitter account posing as a local news outlet posted a deepfake video impersonating candidate Paul Vallas on the eve of the election. And campaigns have used machine-learning models to guide their ad buys on social media platforms like Facebook for years now.

Right now, though, it’s the potential to use large language models like OpenAI’s ChatGPT to update voter files, perform data analysis and program automated functions that excites political operatives the most. While well-funded Senate or gubernatorial races can afford to hire data scientists to crunch numbers, smaller campaigns rarely have that luxury, said Colin Strother, a Democratic political consultant based in Texas. AI will change that.

“I’m excited about some of the brute work that would be really great to do, but — unless you’re on a big-time campaign, with a ton of money and a ton of staff — you can’t afford to do,” Strother said.
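To make the "brute work" Strother mentions concrete, here is a hypothetical sketch of using a chat-model API to normalize messy voter-file rows into a consistent schema; the model name, prompt, and field layout are assumptions for illustration, not a description of any campaign's actual tooling.

    # Hypothetical sketch: using a chat model to standardize voter-file
    # rows. Model name, prompt, and schema are illustrative assumptions.
    import json
    from openai import OpenAI  # pip install openai

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    PROMPT = (
        "Normalize this voter-file row into JSON with the keys "
        "first_name, last_name, street, city, state, zip. "
        "Return only the JSON object.\n\nRow: {row}"
    )

    def normalize_row(row: str) -> dict:
        resp = client.chat.completions.create(
            model="gpt-3.5-turbo",  # assumed model name; any chat model would do
            messages=[{"role": "user", "content": PROMPT.format(row=row)}],
        )
        return json.loads(resp.choices[0].message.content)

    print(normalize_row("SMITH, JOHN Q, 123 n main st apt 4, des moines IA 50309"))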