The Scene
Sneha Revanur has become one of the leading Gen Z voices calling for AI guardrails, a push that has taken her to the White House, state legislatures, and the cover of Time magazine.
She is the founder and president of Encode Justice, a group she established at 15 to mobilize high school and college students to ensure AI is aligned with human values. The organization, funded by groups connected to billionaire eBay founder Pierre Omidyar and others, recently called on global policymakers to pass rules by 2030 protecting the livelihoods and rights of young people as the technology develops.
We talked with Revanur, 19, and Adam Billen, 22, Encode's director of policy and a student at American University, about the group's ambitious AI 2030 agenda. The manifesto outlines Encode's policy goals as it works alongside governments and companies to develop the technology in a way that benefits young people.
The View From Sneha Revanur and Adam Billen
Q: What are some of the top AI concerns for young people?
Billen: The most present thing for people, the one that feels very direct, is deepfakes and disinformation online. Anyone who's actively using social media is encountering that now. Young people are also thinking more broadly about these tools in the workplace and in schools, and about some of the larger risks in the future, like autonomous weapons. I've talked to a lot of people my age who are worried about that, especially seeing Russia's use of autonomous drones in Ukraine recently. And then there's economics and labor. People are very concerned about their career paths going into the future, and whether what they're studying now will even be relevant in 10 or 15 years.
Revanur: For a young person navigating the digital world, there's a whole host of things you have to worry about that previous generations didn't. We have seen young people turning to chatbots when they should be turning to friends and family, or mental health professionals. That's obviously very concerning, because sometimes these chatbots aren't equipped to navigate mental health emergencies. I really worry that this will impact the fabric of our society and lead to a collapse of the bonds that really sustain us.
Q: How do you figure out what young people want or are concerned about? It’s such a large general group.
Revanur: Obviously, there are so many young people all around the world who have varied experiences. For example, there are countries where we're first talking about access to the internet as a more fundamental barrier, before we even enter those conversations about AI. So we're definitely sensitive to those varying social, cultural, and political contexts, but we do our best to get a sense of what young people are feeling on the ground through our workshop program. We have a huge public awareness wing of the organization, and we actually run workshops directly in high school and college classrooms. And we always want to ask: How are you feeling about these technologies? How are you using them?
Q: There’s a big focus on how addictive and manipulative social media is right now for Gen Z. How does AI play into this?
Billen: People are starting to recognize that the basic algorithms on these platforms are at the crux of a lot of what is driving their toxic patterns of attention and their associations with themselves and their friends. It's driving eating disorders and the spread of CSAM [child sexual abuse material]. All of these issues are being driven partly because the fundamental profit mechanism of these companies is to push whatever gets clicks and keeps people on the platform.
Revanur: I would say that it's really important to shift the blame from individual users to these larger companies, which could honestly make very minor design choices that wouldn't impact their bottom line all too much but would have a dramatic impact on user experience.
Billen: I personally have paid a lot of money for an app on my phone that blocks me from using those apps for more than a certain amount of time each day, and between certain hours. My phone is in black and white all the time. It's taken me years just to figure those things out.
Q: Do you think that companies are doing enough to let people opt out of using their algorithms or tools? What should they be doing to give us more choices?
Revanur: They’re not, and we’re asking for them to let us opt out in our AI 2030 agenda.
Billen: Yeah, absolutely. Two key examples would be Snapchat and Instagram. Snapchat's AI bot is glued to your home screen, the top thing whenever you open the app, and there's no way to easily opt out of that. And on Instagram, we've seen that the Meta AI search is extraordinarily annoying. Sometimes you search for things and it pops up with the Llama chatbot screen instead of just searching for what you actually want.
Q: Are you working with companies to make it easier for people to opt out of using AI? Do you think that they’re willing to be cooperative?
Revanur: We've had some conversations with companies like Meta and OpenAI. There has been some willingness to engage with civil society. But at the same time, it's important to remember that a lot of these companies will engage superficially with groups behind closed doors and then try to broadcast publicly that they're cooperating with them. They aren't really walking the walk.
Q: So how do you move Encode Justice’s agenda forward and actually make it actionable?
Revanur: We have presented the agenda as an itemized list of things we want leaders to enact by 2030, but it's also, in many ways, a to-do list for us: what we're going to be pushing for in the US and internationally. We hope to continue working with legislators at the state, federal, and international levels to get these things passed. We also believe this could be a really important turning point for changing the general public narrative around AI. Earlier this week, we actually got a sign-on from one of my favorite actors, Joseph Gordon-Levitt, who supported AI 2030, and I think it's really exciting to see this work broadening beyond a small circle of AI experts and AI leaders.
Q: Are there any policies on the agenda that you think are more achievable than others?
Billen: An example would be the language in Biden's executive order around watermarking for AI-generated outputs. That has started to become accepted as relatively common sense. There is also some movement at the federal level in the US around the use of AI-generated content in political advertising, with Senator Klobuchar's election bill.
Q: What are some of the most ambitious items on the agenda that will be more challenging?
Revanur: One of the central calls of the entire statement is obviously the global AI body, bringing together actors like the US, the EU, China, India, and other top players in AI. We have to understand that it will take years to create any sort of central authority, and there will also be plenty of international pushback.
Q: At the end of the agenda, there’s a line where you talk about going against the blurring of human and machine. Can you give me some examples of the harmful ways you think it’s blurring? And what values from humanity would you like to see preserved for future generations?
Billen: If you're interacting with a chatbot, such as a customer service representative, you should know that you're interacting with a chatbot. We don't want to live in a world where you're talking to someone on the phone and you have no idea whether it's a human or a machine. It's going to take real work to make these machines actually reflect human values given the current technology. We don't want to see a world where they're built entirely on the premise of appeasing us, especially when chatbots interact with young people or are used in romantic relationships.
Revanur: We want a world where trust and community and connection and creativity and critical thinking are not just preserved, but revitalized. That future is possible with AI, but it's not the one we're headed toward right now. Those are the core values that make human society resilient and strong, and that's what I want to keep fighting for.