The Scene
Marietje Schaake, author of the new book The Tech Coup, is a former Dutch member of the European Parliament, where she focused on tech policy. She is now a fellow at Stanford’s Cyber Policy Center and the Institute for Human-Centered AI.
She spoke to Semafor about the book and how she sees the tech debate from both sides of the Atlantic.
Q&A
Reed Albergotti: We’re talking on Zoom from halfway across the world, which to me is still kind of mind-boggling. How do you square all of the benefits of technology with the downsides? Do you worry about coming off as “anti-technology”?
Marietje Schaake: As I write in the book, this is not a book against technology. It’s a book for democracy. I mean, who could be against technology? It’s brought so many amazing things and it still holds so much promise. But what I am against, and what I think is the core message of the book, is this unchecked corporate power that dominates the whole technology ecosystem.
It’s about big tech. It’s about small tech. It’s sometimes about very specific anti-democratic technology, like spyware, or about the monopolistic, ever-growing companies that may have started as cool startups challenging the incumbents but are firmly the incumbents now, pushing out competitors and really consolidating power, including when it comes to government decisions. I wanted to shed light on that problem.
If you look at Facebook after the 2016 election, there was a lot of outcry that forced the company to change. Do you think there are any good aspects to concentration of power, like having a single company to pressure?
I used to use the hypothetical example of a tech CEO with a very firm political agenda who would leverage all his money and the platform he ran for that agenda. People said to me, literally, “You watch too many movies like this.” And now we have Elon Musk.
In the example of Facebook, it’s pretty tragic that it took years and years of warnings, not only from experts who cautioned against the bad outcomes of these algorithmic settings, but also the many, many painful lessons learned around the world, in Myanmar and Kenya.
Only when the shit hit the fan in the United States did they care about reputational harms and start changing their behavior.
COVID was actually a big turning point, with the lies about vaccines that led to a public health crisis.
There’s a theory that when you try to silence disinformation on these platforms, it almost has the opposite of the intended effect. What kind of research did you dig up for the book?
When I looked at this years ago, I wanted to see how a platform like Amazon dealt with information about measles at the time. I typed “measles” into Amazon and the first thing I got was a book, Melanie’s Marvelous Measles, which celebrated having measles and the body resisting it. This is not so much about silencing speech. This is about the algorithmic settings that allow the gamification of this kind of content, that allow the virality of things [that] really are morally objectionable.
It shouldn’t be companies that have so much discretion in deciding how this information gets curated for hundreds of millions of people, with no way for an academic, a journalist, or a civil society leader to look into it.
Is the solution government regulation?
I think transparency, although it may have to be forced through regulation, will lead to a variety of potential outcomes. For example, with more transparency, we may learn that there’s an overestimation of disinformation’s impact on public health. We may learn different aspects of the business model that lead to different causes and effects.
Governments can also use their purchasing power in procurement. They can create markets by deciding to dedicate public resources not so much to these already wealthy tech companies, but to more public-interest solutions, alternatives that better serve the public.
We should have higher standards on cybersecurity, like a three-strikes system for when companies are negligent or fail to do what they say they will do. We learn so much about these incidents through their failures, and there is some kind of blind trust in these companies. I just think it’s no longer working.
I focus on data centers, where companies actually deceive the public when they want to build them and want energy contracts and water contracts. I don’t think it’s too much to say, “We’re going to require these companies to at least tell us who they are.”
What really disrupts these big companies is new startups with new technology. Do you think governments should start spending more to foster innovation?
Investing more on the part of the government in an R&D ecosystem is always a good idea. But what is missing, in relation to technology in particular, is making these investments conditional on some core public-interest features: for example, that the public learns about these technologies, that there are better accountability mechanisms, and that there is more coordination between local, state, and federal governments.
In the case of many European countries, there are so many more creative and, I think, public-interest-serving ways that governments can spend. So yes, spending can help, but let’s look at how.
You’ve spent time in the US, in the heart of Silicon Valley. Can you compare that culture to the European viewpoint?
One of my key insights from having spent time in Silicon Valley is how much of it is about money rather than actual innovation.
In Europe, people want the success that Silicon Valley has, but not the social inequality, the fallout for society. The idea is that you need some buffer for society’s least wealthy in terms of opportunities. There’s not a great word for it in English, but we have a nice word to describe how you can be rich in opportunities.
Gaining access to capital remains a problem in the EU, which is unfortunate. When I look through the “values” lens, I think many in the United States believe that Europeans are adopting these laws because they want to go after American companies.
More often, the deep anchoring in the need to protect people from abuse of power, by both companies and governments, is much more historically informed. Data protection rules were really put in place because of the Second World War, when information about people, Jews, was weaponized against them. When I served in the European Parliament, there were lots of people who grew up in the Soviet bloc, who had, as activists, dissidents, and journalists, been profiled by the Stasi.
What we’re unfortunately seeing unfold is that, again, in the United States, some of these cautionary tales, which are not difficult to find, have to hit home before people take them seriously.
In a recent podcast, Mark Zuckerberg said that he and Facebook accepted responsibility for some things that were really more political and outside of their control. And that seems like a point worth exploring.
We cannot blame everything on corporations of course. But the point I’m making in The Tech Coup is these tech companies are making incredibly important political decisions. They are political players. Silicon Valley is a hub of politics. It’s just not treated that way.
If you look at the discrimination that persists in so many AI applications, there are projections of all kinds of disasters down the road. Who do you think is going to pay for the fallout?
With the new crop of AI models, the scale is getting so huge. Doesn’t it take really large, wealthy companies to build them? Who else would do it?
The question is: this technology exists, but do we know enough about it? Does it make sense to say that just because a new discovery has been made, it should be pushed out to the market as soon as possible? That’s the dynamic now, even though everyone acknowledges there’s unpredictability in AI. So this is all one live experiment without many guardrails.
You talk about the tech coup, but it seems like there’s an opportunity for governments to take back the reins, because they have a lot of leverage in the development of AI at these massive scales, which really requires government involvement.
I am recommending that democratic governments reassert themselves, but also that we make sure there are checks on them, because we’ve seen the worst instincts when it comes to governments and technology.
So there are many reasons for more democratic control. I personally think the government asking companies what to do next is an illustration of the tech coup.