By Dasia Olivares

Safer Internet and the Role of Big Tech


February 8th is Safer Internet Day, which has us wondering what the tech industry’s biggest players are doing to protect users and report crimes. Social networking and messaging apps are used by billions of people around the world. In fact, 80.7% of the total population of the Philippines was on social media as of January 2021. Internet users in the Philippines grew by 4.2 million from 2020 to 2021, and with that growth came an increasing number of child exploitation crimes. Child sexual abuse hasn’t just moved from dirty back alleys to hide on the dark web; a growing number of these atrocities are found on the surface of the internet, in the very apps we use every day.

It may be that when Big Tech created these platforms with the user experience in mind, it forgot to prioritize safety and prepare for dangers. Without enough safeguards, we are seeing a dramatic uptick in online trafficking, particularly since the pandemic lockdowns. The Philippines’ tiplines report that internet-based abuse materials have increased by more than 260 percent. It is hard to narrow down the leading factors in this increase, but it is clear that overseas demand for child sexual abuse material, expanding internet access, and technologies that make image sharing and payments easy and relatively anonymous are all part of the problem.


Response from world leadership

Governments are taking note. The Philippines is considering an expansion of the Anti-Terrorism Act of 2020 to allow social media regulations that encourage companies to ban and remove criminal content posted by users to their platforms. The UK has been working on the Online Safety Bill to protect its citizens with a “new legal framework for identifying and removing illegal and harmful content from the internet.” And in the U.S., members of Congress have been calling on Big Tech representatives to account for the known dangers on their platforms and do more to earn their Section 230 protections.

It’s becoming common to hear about Section 230, the part of the Communications Decency Act that protects internet companies from being held responsible for content posted by their users. These protections allow tech companies to do what they do best, creating platforms that connect people and promote a free flow of ideas. Currently, these protections come without any oversight or accountability. Social networking companies argue that without Section 230 protections, they could no longer operate because the liability could become too great.

The concern is that without any regulations or “best practices,” social networking companies may not be doing enough to prioritize the identification, removal, and reporting of accounts that promote trafficking and cybersex crimes, especially when it involves children. We know they are monitoring for misinformation in politics and public health. Leaders and advocates would like to see them do more to probe for egregious public harms.


How much responsibility should “Big Tech” have?

Social media started as a way to connect and has become an important space to do business, shop, learn, research, discuss, and collaborate. Facebook and Instagram, Google, Twitter, and similar Big Tech companies continue to grow around the world, but their user protections are out of step with that growth. Growth came before important safety measures. Snapchat introduced new protections in January 2022, but they only reduced the ways adults can connect with children on the platform. Instagram recently announced new protections for kids, too, but those won’t take effect until sometime in spring 2022.

With more users comes a greater volume of user activity and uploaded content. Monitoring the content and activity of 4.5 billion users worldwide is a task that has far outgrown what people alone can manage. Even Big Tech’s workforce isn’t big enough to handle it, though it may be intelligent enough to build tools that can. The same savvy algorithms that predict behavior, interests, and connections are exactly the pattern recognition technologies that could weed out trafficking practices and systems. Incorporating artificial intelligence (AI) could speed the scanning of the vast number of posts, surface harmful content, and save law enforcement thousands of work hours, giving cybersex crime teams more time to collect evidence and stop more perpetrators.
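For readers curious what that kind of automated screening might look like, here is a minimal, purely illustrative sketch in Python. The keyword patterns, weights, and review threshold are hypothetical placeholders, not any platform’s actual system; real platforms rely on trained classifiers and image hash-matching against known abuse-material databases rather than simple keyword rules. The sketch only shows the triage flow: score each post, then queue high-risk posts for human review and reporting.

```python
import re
from dataclasses import dataclass
from typing import List

@dataclass
class Post:
    post_id: str
    author_id: str
    text: str

# Hypothetical keyword patterns and weights stand in for a trained classifier.
RISK_PATTERNS = [
    (re.compile(r"\bprivate show\b", re.IGNORECASE), 0.4),
    (re.compile(r"\bpay per view\b", re.IGNORECASE), 0.3),
    (re.compile(r"\bunder ?age\b", re.IGNORECASE), 0.6),
]

REVIEW_THRESHOLD = 0.5  # assumed cut-off for escalating a post to human review


def risk_score(post: Post) -> float:
    """Sum the weights of every pattern found in the post text, capped at 1.0."""
    return min(1.0, sum(weight for pattern, weight in RISK_PATTERNS
                        if pattern.search(post.text)))


def triage(posts: List[Post]) -> List[Post]:
    """Return only the posts that a human moderation team should review."""
    return [p for p in posts if risk_score(p) >= REVIEW_THRESHOLD]


if __name__ == "__main__":
    sample = [
        Post("1", "a", "Join our cooking livestream tonight!"),
        Post("2", "b", "Underage private show, pay per view"),
    ]
    for flagged in triage(sample):
        # In a real pipeline this step would file a report with moderators or
        # a national tipline rather than just printing to the console.
        print(f"Flag post {flagged.post_id} by user {flagged.author_id} for review")
```

The point of the sketch is the division of labor it implies: automation narrows billions of posts down to a reviewable queue, and trained people make the final call and file the reports.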

It is encouraging to see that emerging companies are thinking about safety before growth. Social platforms like Yubo began by building safety into the code of their leading-edge technology because they expected the majority of their users to be underage. SafeToNet is using AI in its keyboard technology to identify harmful content coming in and going out. A new product it is developing can even scan images and videos for threats in real time and block dangerous content. SafeToNet’s partnership with device maker Samsung is part of an initiative to hard-wire safety into every device, not only protecting children but also thwarting anyone who might solicit or create abuse materials to share online.


Consumers and CtL: How can we help?

We are gradually learning what it means to be good citizens on the Internet. We may be adopting new technologies and apps before we fully understand how to be safe while using them. Until we have an internet space that is safer for everyone, consumers and advocates can:

  • Invest in technology that has built-in safety features. Showing the tech industry the value of safety could be the most effective way to encourage it to prioritize protections and technologies that fight internet crimes against children in every part of the world.

  • Petition technology companies, ISPs, telephone companies, banks, and hotels to collaborate on AI that identifies, disrupts, and reports trafficking material and activities.

  • Tell our representatives that we support legislation that defines best practices for social networks and holds them accountable for the timely identification and removal of reported trafficking content.

  • Educate people about vulnerabilities, the need for protections, and how to recognize and report suspicious activity.

Whatever the final answer, stopping child exploitation crimes on the internet will depend on the collaborative efforts of the industry, governments, and the people they serve.

Do you have ideas that would help each of us do something to stop the proliferation of trafficking and abuse on the internet? How can we support Internet safety as an organization and as consumers?
