Semafor Technology
May 10, 2024
Reed Albergotti

Hi, and welcome back to Semafor Tech.

Everybody wants to regulate AI. It’s not just Washington lawmakers and the White House; it’s happening in practically every statehouse in the country (and all over the world). Last week, I talked with Linda Moore, president and CEO of TechNet, one of the more well-known tech industry associations. She was in town from D.C. to get a dose of techno-optimism and meet with some politicians.

I was struck by one stat she shared: her organization is tracking 420 AI bills around the country. Imagine if our elected officials paid this much attention to the myriad other issues the US is facing. But that’s another story.

As Moore pointed out in our interview, a lot of these bills are just common sense. Nobody thinks it should be legal to make deepfake pornography of your classmates, for instance. But some of the bills represent sweeping reforms that mimic what the federal government is trying to do.

Some provisions would make it impossible to build the most advanced AI models, and they defy logic: aimed at safety, they would simply push companies into other states or other countries. There’s no stopping this technology. The real question is which governments will set rules that entice firms to stick around, and keep collecting their tax revenue.

Read more from the conversation with Moore below.

Move Fast/Break Things

➚ MOVE FAST: Not sorry. Neuralink’s human brain implant ran into a problem when some of its threads retracted from a quadriplegic man’s brain, resulting in a loss of signal. The problem was fixed — an impressive feat in an already remarkable experiment. Noland Arbaugh, the participant in the company’s first-ever human clinical trial, was back to livestreaming himself playing a video game, using his mind to move the cursor.

➘ BREAK THINGS: Sorry. Apple quickly apologized and admitted its “Crush” iPad Pro ad “missed the mark.” It has been pulled and won’t be shown on TV, but the damage to its creative street cred may already be done. Meanwhile, a Baidu executive’s regret over videos of her scolding employees fell on deaf ears and she was fired, showing even China’s maniacal work culture has limits.

Artificial Flavor

Everybody agrees that dating is a chore these days — and that in some ways, technology has only made it worse. It’s exhausting trying to craft an attractive profile on dating apps, and swipe through hundreds of others hoping to get a match. Then you have to start and maintain conversations with matches before you even meet them for a date in person.

Bumble founder and executive chair Whitney Wolfe Herd believes that AI chatbots should do all of that for you. “There is a world where your dating concierge could go and date for you with other dating concierges. Then you don’t have to talk to 600 people. It will go scan all of San Francisco for you and say ‘these are the 3 people you ought to meet’,” she said at the Bloomberg Technology Summit.

Is that a future we really want? Will it hurt more to be ghosted by a machine? Or will it not matter since your AI dating concierge didn’t seem all that interested anyway?

There are other reasons to push back against this. Jen Caltrider, who leads Mozilla’s *Privacy Not Included project, warns that this will be disastrous for data privacy. You have to tell the AI chatbots all your personal information, and it’s not clear how that will be used by dating apps, she claimed. “That’s not going to help us find love or cure loneliness, that’s only going to result in more Big Tech companies putting us under deeper surveillance,” she told Semafor.

— Katyanna Quach

Q&A

Linda Moore is CEO of TechNet, whose members include OpenAI, Sequoia, and Apple. The group is lobbying states and Washington on plans to regulate artificial intelligence.

Connecticut State Senate Chamber. Liam Enea/Creative Commons

Q: As far as the AI bills go, the tech industry seems to see the Connecticut bill as a relatively good one. Do you agree?

A: It has a lot of merit to it. I can’t say there aren’t things in it that couldn’t be improved. But [State Senator James Maroney] is a person who our team has worked with a lot over the past two years. He’s very well versed on the issues and knows a lot about privacy. And now he’s really delved into AI. So he’s a very thoughtful and effective policymaker. He is also having a huge impact on what is happening in Colorado.

We’ll have to see what happens, though, because there have been some major concerns raised both in Connecticut and Colorado. Governors are wary of establishing the most far-reaching AI policy in the country and being the first to do that in their states. They’re also hearing from startups that are very wary of remaining in a place that is going to put such burdens on them.

Q: What are the pain points?

A: There are a lot of reporting requirements and scrutiny requirements that a large company that’s very well established could probably handle because they have a lot of infrastructure for that sort of thing. But for a small company, it would drive them out of business. So you’re just going to move to another place.

Q: Do you think [State Senator] Scott Wiener’s AI bill could spark a backlash and squander the post-pandemic excitement about Silicon Valley?

A: It’s not a new phenomenon that Sacramento is rushing in to regulate tech, because they’ve also done it for the social media companies, they’ve done it on privacy, they’ve done it on the gig economy, they’ve done it on the sharing economy. A lot of the tech companies love to be here for a lot of reasons. The regulatory environment is not one of them.

And at the same time, it’s very important for policymakers to realize, and for the general public to realize, that the progressive government that California is able to put forward, a lot of it is very costly. And the success of the tech companies, the fact that they’re here, and that they are profitable, it makes a lot of that progressive government possible.

Q: You mean in the form of tax revenue?

A: Yes. I very much appreciate the position they’re in. They are there to legislate. And they feel a great sense of responsibility. I feel like their heart is in the right place. They’re trying to do the right thing, I’m sure. But it’s that fine balance of creating a good climate for businesses to grow and to want to locate here, without creating a regulatory scheme that companies want to flee. And then we had an exodus of some fairly large companies to Texas, and other places. AI is just the latest frontier.

Moore's view on whether opinions of AI diverge among blue states and red states. →

Semafor Stat

The percentage of US millennials worried that AI will affect their financial wellbeing, according to the latest Money Matters report released Thursday by investment app Acorns. That’s compared to 39% for Americans overall. Global wars and conflicts, along with climate change, were the only bigger current event concerns for millennials’ economic health.

Obsessions

AI companies may never strike the right balance between making their chatbots safe and keeping them usable, judging by OpenAI’s Model Spec, a document that outlines how ChatGPT should behave. For example, the responses it generates should assist users, benefit humanity, and respect rules. But the model won’t always check all the boxes, especially in morally tricky situations.

In one scenario, for example, OpenAI says ChatGPT shouldn’t be helpful if it’s asked for shoplifting tips, but it will comply if the user pretends to be a shopkeeper wanting to guard against thieves. There are other cases where ChatGPT is contradictory: It shouldn’t generate hate speech, but it may produce it when translating text from another language.

Under OpenAI’s Model Spec, ChatGPT should never try to change a user’s mind, a rule meant to prevent it from being used for propaganda. But that also means it won’t do much if a person believes in conspiracy theories, which could reinforce misinformation. Tell ChatGPT you believe the Earth is flat, and it will ultimately say you can believe what you want.


These are just some of the trade-offs OpenAI has to make. If ChatGPT is too safe, it won’t be responsive enough; if it’s too relaxed, it could end up being harmful. OpenAI is releasing the Model Spec to invite public conversation about how to align AI with human values, but this problem will only become more challenging as the technology gets good enough to start performing actions in the real world.

In other OpenAI-related news, the company is reportedly launching a search-focused product next week, similar to Perplexity and Google. Perplexity has attracted millions of users with its ability to answer queries in a simple, concise manner while citing its sources. It will be interesting to see what OpenAI’s version looks like, and whether it wins over Perplexity’s users or even challenges Google as competition heats up.

One more thing: OpenAI appears to have backed down from suing the subreddit r/ChatGPT over its use of the company’s logo, which OpenAI had claimed was copyright infringement, after people — including arch-nemesis Elon Musk — criticized the company for being hypocritical.

— Katyanna Quach
