Technology
Facebook shows how AI bans the only thing people actually want more of on Facebook: Weed
Mike Schroepfer knows a dank nug when he sees one.
The chief technology officer at Facebook took the stage on the second day of the social media company’s annual developer conference in San Jose to talk about machine learning, marijuana, and making sure you don’t get your grubby little hands on any of that sticky icky. That’s right, the greatest minds in the Valley are spending their most productive years making sure you can’t buy THC Rice Krispies treats via their “private social platform.”
Of course, this being F8, Schroepfer was making a larger point about how the service’s artificial intelligence tools have improved in their ability to identify and remove content that violates Facebook’s policies. Content, he explained, like violence, nudity, harassment, and the listing of drugs for sale.
But how are Facebook’s systems to distinguish between an ad listing the kindest of buds and, say, broccoli tempura? Schroepfer showed the crowd two photos and asked, with a show of hands, who could tell them apart. Almost everyone got it right.
The chief technology officer helpfully laid out the three digital pillars of Facebook’s effort to keep you from scoring drugs on its platform: keyword matching, computer vision, and “nearest neighbor manifold expansion.” These tools, when combined, allow Facebook to ban the one thing people actually want more of from Facebook. Yes, we’re talking about weed.
We’re now getting a detailed walkthrough of how Facebook’s automated systems detect violating content with computer vision even when the images aren’t super obvious #F8 pic.twitter.com/6yARBKpWcR
— Karissa Bell (@karissabe) May 1, 2019
Keyword matching works by looking for words in posts like “marijuana” or “drugs.” Computer vision, as the name suggests, works to identify what an image actually depicts. Nearest neighbor manifold expansion, on the other hand, really drives the buzzkill bummer home.
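To give a sense of the first pillar: keyword matching is, at its simplest, just checking posts against a list of flagged terms. Here is a minimal sketch in Python, assuming a made-up term list — this is an illustration of the general technique, not Facebook's actual system or vocabulary.

```python
import re

# Hypothetical flagged terms -- not Facebook's real word list.
BANNED_TERMS = {"marijuana", "weed", "drugs"}

def flags_keywords(post_text: str) -> bool:
    """Return True if the post contains any flagged term as a whole word."""
    words = set(re.findall(r"[a-z]+", post_text.lower()))
    return bool(words & BANNED_TERMS)
```

Real systems are far more elaborate (misspellings, slang, emoji substitutions), which is exactly why keyword matching alone isn't enough and the other two pillars exist.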
Schroepfer gave an example of a photo of weed packaging, and said that Facebook’s AI is able to identify these unfamiliar images as containing a federally illegal drug because it can figure out what they most resemble.
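The "figure out what it most resembles" step can be pictured as nearest-neighbor search in an embedding space: a new image is compared against labeled examples and inherits the label of its closest match. Below is a toy Python sketch of that idea — the vectors, labels, and example names are invented for illustration, and real image embeddings have thousands of dimensions, not three.

```python
import math

# Toy labeled embeddings -- made up for illustration only.
LABELED = {
    "known_weed_packaging": ([0.9, 0.1, 0.3], "drug"),
    "known_broccoli_photo": ([0.1, 0.8, 0.2], "food"),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def nearest_label(embedding):
    """Label an unfamiliar image by its most similar known example."""
    best = max(LABELED.values(), key=lambda item: cosine(embedding, item[0]))
    return best[1]
```

An unfamiliar photo of new weed packaging would land near existing drug-listing embeddings, so it gets flagged even though the system has never seen that exact image before.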
Obviously, Facebook still has a lot of work to do. A slide shown on stage reminded the audience that, as of Q3 2018, Facebook’s automated systems proactively caught only 14.9 percent of the harassment removed from the site. The rest was user reported.
So we got a really thorough explanation about how Facebook can police very obvious policy violations (drugs, nudity, etc.), but I don’t think we’ve heard anything yet on how this can apply to more complicated issues, like misinformation #F8
— Karissa Bell (@karissabe) May 1, 2019
Of course, there are far worse things than weed on Facebook. Like hate speech and misinformation meant to sway elections and attack minority groups.
Maybe, and it’s just a suggestion, focus more on that and less on making sure we can’t buy cheap weed via your digital town square? Your newly mellowed out users will thank you.