September 18, 2024
semafor

Technology

 
Reed Albergotti

Hi, and welcome back to Semafor Tech.

When I was in Abu Dhabi last week, I kept thinking about tech regulation back in the US and how it looks from far away, in a place that is fighting tooth and nail just to get a chance to compete in the field of AI.

The US is technology spoiled. Since the Cold War, the country has been the leader in innovation and reaped the economic benefits that come with that title.

But over the last eight years, America’s view of the industry has become complicated, which, to some extent, informs the regulations policymakers are proposing today. The calculus on new AI regulation is at least partly shaped by how they perceive the impact of social media. That’s probably true in Europe, too.

But that is not how much of the world views the industry. In the Gulf, technology is a tool that promises economic transformation and a future after oil. In essence, they don’t have the luxury of a complicated relationship with the tech industry. Harnessing innovation is existential.

It’s not that there is no regulation there. There certainly is, and the Emirates and Saudis also think about the downside risks of AI. But they can’t afford to stifle it, or send the wrong signal about supporting it.

And, as you’ll read in the article below, some entrepreneurs in the Gulf region have radical ideas about how to encourage it.

Move Fast/Break Things
Chalinee Thirasupa/File Photo/Reuters

➚ MOVE FAST: Getting ahead. Microsoft, BlackRock and Abu Dhabi’s MGX are launching a $30 billion fund to build AI infrastructure, The Financial Times reported and Semafor confirmed. It’s just one small piece of the massive resources needed to keep improving AI models, but it shows forethought. And it highlights the urgency that big tech companies feel on AI, as they compete with one another and with China.

➘ BREAK THINGS: Long overdue. Meta made major changes to the way it handles teen Instagram accounts, making them private by default to prevent predators from targeting young users. But for a company that coined the term “move fast and break things,” the announcement Tuesday, coming years after being heavily criticized for how it handles teen accounts, reminded people that it can move slowly when fixing things.

Artificial Flavor
NeuThroné

There are sunglasses that harness the power of AI, like the Ray-Ban Meta Smart Glasses. Now, you can buy sunglasses designed to block AI from replicating your identity or turning you into a deepfake.

Visionaries sunglasses, which I haven’t personally tested, don’t use cutting-edge electronics to throw off AI algorithms. Instead, maker NeuThroné says a simple image on the side of the glasses is enough to keep AI from using photos online to replicate your face. The catch: you have to be wearing them when the photos are taken.

The company hopes that it will find an early market with young online creators, many of whom have had their identities repurposed by AI without their consent. (Wednesday actress Jenna Ortega said she received explicit photos of herself as a child that were created with AI.)

For people like me, it’s too late. I’ve been putting photos of myself online since my 20s. But these glasses highlight an interesting question for young people who haven’t yet plastered their social media with selfies, or for my own kids, whom I’ve kept off social media until they’re old enough to decide for themselves.

Reed Albergotti

A radical idea to make the UAE an AI innovation hub

Devarya Ruparelia/Unsplash

THE SCENE

ABU DHABI — Courtney Powell, chief operating officer of venture capital firm 500 Global, recalled an idea that a fellow investor recently floated: What if the United Arab Emirates created a “special economic zone” in which copyright doesn’t exist?

The lack of a copyright law would allow AI companies to train powerful models without worrying about lawsuits from book publishers, musicians and others who claim their work was ripped off by the technology.

Powell, who was visiting from her home in Riyadh and has helped seed a growing ecosystem of startups there, wasn’t endorsing the idea. And there’s no indication that the leaders of the UAE would entertain the concept.

The suggestion, though, highlights an increasingly common perception among some in the tech industry that the United States is coming down too hard on new developments like crypto and AI, and risks hurting innovation.

Meanwhile, Powell said the Gulf region has become an increasingly attractive place for startup founders from all over the world, from South Korea to Latin America to Russia. There were 79 venture deals in Saudi Arabia and the UAE in 2015, according to PitchBook. That number reached 402 in 2022 before dropping slightly last year to 337.

As the UAE and Saudi Arabia build out their data centers capable of training and running powerful AI models, the region could be seen as an attractive refuge for smaller startups that are overburdened by the uncertainty of regulation in places like the US and the EU, she said.

To be sure, the UAE has its own regulations, but the centralized power of the Emirati leadership has made the rules more straightforward and easy to adjust when necessary.

The country has also created several special economic zones, or “sandboxes,” where startups can experiment freely with new technologies, from autonomous vehicles to healthcare to crypto.

Reed’s view on why the US can afford to be tougher on AI rules.  →

Live Journalism

September 24, 2024 | New York City | Request Invitation

President Lazarus Chakwera of Malawi and Dr. Monique Nsanzabaganwa, deputy chairperson of the African Union Commission, will join the stage at The Next 3 Billion summit — the premier US convening dedicated to unlocking one of the biggest social and economic opportunities of our time: connecting the unconnected.

What We’re Tracking
Marco Bello/Reuters

California Gov. Gavin Newsom hinted Tuesday that he’s leaning toward vetoing the state’s controversial AI bill, SB 1047. He didn’t come right out and say it, but he said he was concerned about a possible chilling effect. “We dominate this space, and I don’t want to lose that,” he said, speaking at Dreamforce, the annual San Francisco conference/soirée put on by Salesforce.

Last month, I wrote about why Newsom was likely to lean this way. The symbolism of this bill is now bigger than the legislation, which has been somewhat defanged, anyway. And it’s even bigger than California. Democrats need to be seen as pro-technology again, or they’ll lose an important source of fundraising support.

On Tuesday, Newsom signed a handful of bills into law that deal with more narrow issues around AI, including one that gave actors more control over their likenesses. That was a smart move, considering the Screen Actors Guild was a supporter of SB 1047.

Obsessions
Sam Balye/Unsplash

Racial bias has been known to creep into artificial intelligence algorithms. Now teachers are bringing that bias into the classroom as they police students’ use of generative AI tools like ChatGPT to complete homework, according to a new study by children’s safety nonprofit Common Sense Media.

The group found that Black teenagers in the US are about twice as likely as their white and Latino peers to have teachers incorrectly flag their schoolwork as AI-generated. Common Sense Media surveyed 1,045 13- to 18-year-olds and their parents from March 15 through April 20.

“This suggests that software to detect AI, as well as teachers’ use of it, may be exacerbating existing discipline disparities among historically marginalized groups,” said the report, which was released Wednesday. “In the United States, Black students face the highest rate of disciplinary measures in both public and private schools — despite being no more likely to misbehave — which contributes to negative impacts, such as lower academic performance.”

The findings aren’t surprising, Robert Topinka, a senior lecturer in media and cultural studies at Birkbeck, University of London, told Semafor in an interview.

Part of the problem is that AI detection software is “wholly unreliable,” he said. It’s trained to flag generic and formulaic phrasing using pattern matching, but can’t distinguish between ChatGPT and spelling and grammar checkers like Grammarly, which means students risk being penalized even for using approved tools.
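As a rough illustration of why surface-level pattern matching is so unreliable, consider a toy detector that scores text by counting “formulaic” phrases. Everything here is hypothetical (real detectors use statistical models, not a phrase watchlist), but the failure mode is the same: polished, conventional human prose trips the detector, while sloppy writing sails through.

```python
# Toy illustration of a naive phrase-based "AI detector".
# Hypothetical sketch -- real detection tools are more sophisticated,
# but they share this weakness: formulaic human writing looks "AI-like".

FORMULAIC_PHRASES = [
    "in conclusion",
    "it is important to note",
    "furthermore",
    "on the other hand",
]

def formulaic_score(text: str) -> float:
    """Return the fraction of watchlist phrases found in the text."""
    lowered = text.lower()
    hits = sum(phrase in lowered for phrase in FORMULAIC_PHRASES)
    return hits / len(FORMULAIC_PHRASES)

# A carefully written (entirely human) essay scores high...
student_essay = (
    "In conclusion, it is important to note that the experiment "
    "supports the hypothesis. Furthermore, the data was consistent."
)
# ...while a casual human note scores low.
casual_note = "ngl the experiment kinda worked, data looked fine"

print(formulaic_score(student_essay))  # high: flagged as "AI-generated"
print(formulaic_score(casual_note))    # low: passes undetected
```

The student who follows the five-paragraph-essay template, or runs their draft through Grammarly, ends up writing exactly the kind of text such a detector is tuned to flag.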

— Mizy Clifton
