
User safety laws should not be too prescriptive: Antigone Davis, Meta

In February, the National Center for Missing and Exploited Children (NCMEC) in the US announced a platform called Take It Down, which aims to stop intimate images of young people being shared online. Today, Big Tech firm Meta announced that this platform will be available in Hindi and other Indian languages soon, bringing it to India as well. The move is part of Meta’s efforts to raise safety standards online, something that governments have been pressuring Big Tech firms to do for years now. In an interview, Antigone Davis, VP, Global Head of Safety, Meta, said that the company welcomes regulations, but warned that making them too prescriptive may not be the right idea. Edited excerpts:

How do you look at user safety today, now that regulations are emerging?

We share the same interest as policy makers when it comes to safety. We want people to be safe on our platform, we want them to be able to connect on our platform, and we think there need to be fair industry standards, so that we're all on the same page and industry has clear guidance on what's expected of it.

I think it's important that in that guidance we ensure that people still have access to these technologies, that they're still competitive, that they're still creative, and that people can still make connections. But I believe that, in collaboration with policy makers, we can land in the right space.

And we really do welcome those standards.

Big Tech has mostly asked for uniformity in regulations across the world. Does that affect how you design safety standards?

Well, we certainly want to have as much uniformity as we can. We're building our platform at that scale, so we want to build standards at scale.

That said, different countries are different, and we recognize that there will be some differences that reflect that. But I think this is an area where we are communicating and collaborating, and we can reach something that's close to uniform globally.

I’ll give you an example: age verification, and knowing the age of users so that we can provide age-appropriate experiences. It's a vexing problem for all of industry.

But it is something that we have taken seriously, and we have put in place technology to help us identify age. We also know that policy makers around the world uniformly, for the most part, think it's important for companies to understand age and to provide age-appropriate experiences.

So we're seeing conversations right now around parental consent and age verification, including in India, and we're seeing those same conversations in the United States and in Europe. I think trying to find a way in which we can deliver age-appropriate experiences, and do that globally, is imperative for our company, and I think trying to set a standard that works globally is really important.

There’s some conversation around using IDs as a way to verify. There's some value in that, and some countries, like India, have national ID systems. But even with those ID systems, there are many people who don't have IDs and who won't have access if they can only present an ID. Also, IDs force industry to take in even more information than is needed to verify age.

That doesn't mean it shouldn’t be one option, but there are other options, for example, technology that uses the face to estimate age. That's highly accurate and doesn't require taking in other information.

In order to do that, we have to engage with policy makers to get to that consistency.

How are you gauging the effectiveness of current safety standards, like age verification?

So for example, we have a way for people to set up a notification that tells them they've been on for a certain period of time and that they should take a break. Early testing showed that when people turned on those notifications, 90 percent responded right after, so they are effective.

We have something called nudging, where we will nudge people to other content if we see they’ve been on one kind of content for a long period of time. I think we studied that for about a one-week period, and found that one out of five people who saw that nudge did move to other content.

So we don't want to just build tools that sound good; we want to make sure that they work. Another example of testing was in the context of our parental supervision tools, finding the right balance so that teens would be willing to let their parents use them and not try to go around them.

We work with experts to figure out what that right balance is. 

How do you look at content safety, in terms of what people should and shouldn’t see?

Well, we have our community standards, and we try to balance people's ability to express themselves with ensuring that people are safe on our platform.

In addition, we also have tools, some of them in the background, that we use to find content that might be violating (standards) and remove it from the platform. We also have borderline content, content that doesn't necessarily violate our policies but, in the context of young people, might be more problematic.

Sometimes that content at the edges can be problematic, particularly for teens. We won't recommend it. We will age-gate it out for teen users.

Can you give us examples of these tools that work in the background?

Yeah, so going back to the age issue. Even without actually verifying age, we use background technology to try to identify people who might be lying about their age, and we remove them if they're under the age of 13.

So maybe someone posts "happy 12th birthday". That's a signal that the person is not 13 or above, and we can use that signal to require the person to verify their age. If they are unable to verify their age, then we will take action against that account.

So those are the kinds of signals that we use; we train and create classifiers to identify violating content.

How does safety change in the context of video? Do your technologies change?

I don’t know if the standards change, but certainly the technologies change. If you were to look at some of the ways that we're trying to address safety in the metaverse, for example, it's different. 

It's because of the complexities that are there. We actually have moderators who can come into an experience, and who can be brought into an experience by someone who is using the platform. That's very different, but the metaverse calls for it because it's a dynamic space. We don't have that in the same way in a space that's primarily text-based or photo-based.

How are you balancing the disclosure of proprietary information to policy makers, which may be required to build policy around platforms?

Yeah, I think more and more we’re seeing a push for a better understanding of our technologies. We've seen some legislation that has asked for risk assessments. And I think that in many ways our company has tried to be proactive in providing some information around what we do, and in providing ways to measure and provide accountability.

We're trying to build those bridges, so that we can provide the kind of transparency that enables people to hold us to account, that enables people to measure our progress.

You're right. You have to strike a balance that allows companies to protect what's proprietary, but, as we've shown, there are ways to give enough information to enable policy makers to understand these things.

I think the other danger is that trying to understand today doesn't necessarily mean that the technology would be (the same) tomorrow. So to some degree, trying to build out legislative solutions that focus on processes, without being too prescriptive is probably the best way to ensure that we develop legislation and standards that have a lifespan.

Have the IT Rules and current regulations at all affected how you build safety mechanisms? Any tweaks you had to make?

I think we’ve not waited for regulations. We’ve heard from policy makers, well before they started regulating, what their concerns were. And we've worked to build solutions. 

Because it takes a long time to create legislation and regulation. In the meantime, we feel that we have a commitment to safety that we want to ensure for our users. So I don't know that we have any specific changes in particular, but we have been listening to policy makers for a very long period of time and trying to meet their concerns.

