Forbes January 19, 2025
As many parents know from experience, today’s kids spend a lot of time online. While concerns about screen time are often focused on teens—unsurprisingly, as nearly half of teen participants in a 2024 Pew Research Center survey reported being online “almost constantly”—young children are also being exposed to social media, with Statista finding YouTube to be the channel of choice for viewers ages 2 to 12.
Parents and guardians seeking allies in ensuring their children aren’t exposed to bad actors and harmful content are naturally turning to the social media platforms themselves for help—but what actions can these services take to protect their youngest users? While there are multiple ethical, legal and privacy issues to confront, change begins with a conversation about possibilities. Below, members of Forbes Technology Council share their expertise, detailing tech-forward and practical ways social media platforms could become parents’ partners in protecting children.
In my view, the most effective way to protect children on social media would be to have AI score every message, blocking low-scoring messages outright or requiring a second confirmation step before they're delivered. Kids or parents could set the threshold and be alerted to take action. The most important part is blocking any questionable message immediately: Security has to take priority over getting the message across. - Osmany Barrinat, SecureNet MSP
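To make the thresholding idea concrete, here is a minimal Python sketch of score-based message routing. The scoring function is a keyword placeholder standing in for a real moderation model, and all names and thresholds are illustrative assumptions rather than any platform's actual implementation:

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    DELIVER = "deliver"
    HOLD_FOR_CONFIRMATION = "hold"   # e.g., a second confirmation step for a parent
    BLOCK = "block"

@dataclass
class Message:
    sender_id: str
    recipient_id: str
    text: str

def safety_score(message: Message) -> float:
    """Placeholder scorer: returns 0.0 (unsafe) to 1.0 (safe).

    A real platform would call a trained moderation model here.
    """
    flagged_phrases = {"meet alone", "send a photo", "keep this secret"}
    return 0.0 if any(p in message.text.lower() for p in flagged_phrases) else 1.0

def route_message(message: Message, block_below: float = 0.2,
                  confirm_below: float = 0.6) -> Action:
    """Block outright below one threshold; hold for confirmation below another."""
    score = safety_score(message)
    if score < block_below:
        return Action.BLOCK                  # never delivered
    if score < confirm_below:
        return Action.HOLD_FOR_CONFIRMATION  # parent or child must approve first
    return Action.DELIVER

print(route_message(Message("a1", "kid7", "Keep this secret, OK?")))  # Action.BLOCK
```

The key design choice, per Barrinat's point, is that a low-scoring message is held or blocked before delivery rather than delivered and reviewed after the fact.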
Social media platforms should send a summary of each child’s activities to their parents, highlighting suspicious activity and prompting them to take any necessary action. Platforms could also provide a system for parents to flag inappropriate interactions and prevent them from occurring again. - Tarun Eldho Alias, Neem Inc.
Social media platforms can prevent interactions between children and harmful accounts by implementing stricter age verification during account creation—possibilities include using biometric data or ID checks. They can also introduce automatic content filters, limiting kids’ exposure to inappropriate material and interactions and ensuring that underage users have a safer, more controlled experience online. - Madhava Rao Kunchala
Social media platforms can protect children using AI-driven age verification and monitoring to flag harmful accounts. If red flags like grooming or other suspicious behaviors are detected, platforms can restrict interactions, alert moderators or notify guardians. Combined with parental controls, education and reporting tools, this would create a safer, more secure environment for young users. - Diwakar Dwivedi, Circular Edge
Social platforms could implement “safe zones” for kids—AI-moderated communities where only verified child accounts can interact. Paired with real-time monitoring, these zones would use AI to flag unusual behavior and give kids gentle prompts to exit uncomfortable situations. This empowers children to recognize red flags while keeping parents informed, proactively creating a safer online environment. - Milavkumar (Milav) Shah, Amazon
A practical solution could be combining authentication with AI agents defining user ontologies. Think of “ontology” as a live history of a particular user tied to their authenticated cellphone. Phones provide personal verification, while ontology-based AI analyzes behaviors and contexts to identify risky interactions, creating a layered, adaptive approach to safeguarding children online. - Doug Shannon, PSI CRO
Social media companies could be much more proactive about educating parents. Many parents aren’t aware of things like limitations on users under 13 or clear on what parental controls are available. Further, if the platforms developed educational content on how to teach kids about being safe online, they could use their existing targeting capabilities to deliver it directly to parents. - Dave Rosen
Social media platforms can adopt dynamic trust scoring systems that assess user behavior over time, flagging accounts with abnormal engagement patterns, such as excessive friend requests or messaging activity targeting minors. By integrating such systems with mandatory educational prompts for both parents and young users, platforms could enhance awareness while preventing harmful interactions. - Sarah Choudhary, Ice Innovations
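One way such a dynamic trust score could evolve is sketched below: each friend request to a minor beyond a rolling-window allowance erodes an account's trust multiplicatively, and accounts that fall below a threshold are queued for review. The window size, decay factor and thresholds are illustrative assumptions:

```python
import time
from collections import defaultdict, deque

# Illustrative parameters; a production system would tune these empirically.
WINDOW_SECONDS = 3600        # look at the last hour of activity
MAX_MINOR_REQUESTS = 5       # friend requests to minors allowed per window
FLAG_THRESHOLD = 0.4         # trust below this triggers review

class TrustTracker:
    """Tracks a rolling trust score per account based on engagement patterns."""

    def __init__(self):
        self.trust = defaultdict(lambda: 1.0)      # every account starts fully trusted
        self.minor_requests = defaultdict(deque)   # request timestamps per account

    def record_friend_request(self, account_id: str, target_is_minor: bool,
                              now: float | None = None) -> None:
        now = now or time.time()
        if not target_is_minor:
            return
        q = self.minor_requests[account_id]
        q.append(now)
        # Drop events that have aged out of the rolling window.
        while q and now - q[0] > WINDOW_SECONDS:
            q.popleft()
        # Each request beyond the allowance erodes trust multiplicatively.
        if len(q) > MAX_MINOR_REQUESTS:
            self.trust[account_id] *= 0.8

    def needs_review(self, account_id: str) -> bool:
        return self.trust[account_id] < FLAG_THRESHOLD

tracker = TrustTracker()
for _ in range(10):
    tracker.record_friend_request("acct42", target_is_minor=True)
print(tracker.needs_review("acct42"))  # True once trust decays below 0.4
```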
Social platforms could implement multistep “parental check” gateways. For example, when an unverified adult account attempts to contact a child, the platform would automatically notify the parent’s verified profile or device. Only after explicit parental approval—via a secure code or mobile prompt—would the child see the message. This simple measure adds a human safeguard before any risky interaction occurs. - Mark Mahle, NetActuate, Inc.
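A rough sketch of that gateway flow is below, assuming a hypothetical notification callback to the parent's verified device; the code-based approval step mirrors the "secure code or mobile prompt" Mahle describes, and all names are placeholders:

```python
import secrets
from dataclasses import dataclass, field

@dataclass
class PendingContact:
    """A message from an unverified adult, held until a parent approves it."""
    adult_id: str
    child_id: str
    text: str
    approval_code: str = field(default_factory=lambda: secrets.token_hex(3))
    approved: bool = False

class ParentalGateway:
    def __init__(self, notify_parent):
        self.notify_parent = notify_parent   # callback to the parent's device
        self.pending: dict[str, PendingContact] = {}

    def intercept(self, adult_id: str, child_id: str, text: str) -> None:
        """Hold the message and push an approval prompt to the parent."""
        contact = PendingContact(adult_id, child_id, text)
        self.pending[contact.approval_code] = contact
        self.notify_parent(child_id, contact.approval_code)

    def approve(self, code: str) -> PendingContact | None:
        """Release the message only if the parent supplies the correct code."""
        contact = self.pending.pop(code, None)
        if contact:
            contact.approved = True
        return contact   # None means the code was wrong or already used

gateway = ParentalGateway(notify_parent=lambda child, code:
                          print(f"Approval code for {child}'s parent: {code}"))
gateway.intercept("adult99", "kid7", "Hi there!")
```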
Platforms could develop separate rules for kids’ accounts. A thorough approach would include making all minor accounts private by default and limiting direct messaging to approved contacts only. Restricting adults’ ability to search for or contact minors’ accounts, and keeping minors’ accounts out of public search results and suggestions, would further enhance protections. - Ramasankar Molleti, Options Clearing Corporation (OCC)
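Expressed as a policy object, those defaults might look like the hypothetical sketch below, where a verified age selects the locked-down profile automatically at account creation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AccountPolicy:
    private_by_default: bool
    dm_from_approved_contacts_only: bool
    searchable_by_adults: bool
    shown_in_public_suggestions: bool

# Illustrative defaults: minors get the locked-down profile automatically.
ADULT_POLICY = AccountPolicy(
    private_by_default=False,
    dm_from_approved_contacts_only=False,
    searchable_by_adults=True,
    shown_in_public_suggestions=True,
)

MINOR_POLICY = AccountPolicy(
    private_by_default=True,
    dm_from_approved_contacts_only=True,
    searchable_by_adults=False,
    shown_in_public_suggestions=False,
)

def policy_for(age: int) -> AccountPolicy:
    """Select the policy at account creation based on verified age."""
    return MINOR_POLICY if age < 18 else ADULT_POLICY

print(policy_for(12).searchable_by_adults)  # False
```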
Social media platforms could introduce time-limited child accounts that require periodic reverification by a guardian. These accounts would have default restrictions, such as blocking private messages and enabling guardian alerts for unusual activity. This approach ensures ongoing oversight while limiting long-term exposure to harmful interactions. - Jagadish Gokavarapu, Wissen Infotech
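A minimal sketch of that reverification clock follows, assuming a hypothetical guardian-ID check; the 90-day interval and names are illustrative:

```python
from datetime import datetime, timedelta, timezone

REVERIFY_EVERY = timedelta(days=90)   # illustrative reverification interval

class ChildAccount:
    """A child account that locks itself until a guardian reverifies it."""

    def __init__(self, child_id: str, guardian_id: str):
        self.child_id = child_id
        self.guardian_id = guardian_id
        self.last_verified = datetime.now(timezone.utc)

    def is_active(self) -> bool:
        """Expired accounts are suspended, not deleted, pending reverification."""
        return datetime.now(timezone.utc) - self.last_verified < REVERIFY_EVERY

    def guardian_reverify(self, guardian_id: str) -> bool:
        """Only the registered guardian can restart the clock."""
        if guardian_id != self.guardian_id:
            return False
        self.last_verified = datetime.now(timezone.utc)
        return True

account = ChildAccount("kid7", "parent3")
print(account.is_active())                    # True right after verification
account.last_verified -= timedelta(days=120)  # simulate a lapsed account
print(account.is_active())                    # False until the guardian acts
```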
Social media platforms should implement AI-driven behavioral analysis to detect predatory patterns, such as repeated unsolicited contact or attempts to move conversations to private channels. Coupled with verified age-based restrictions, real-time alerts for parents or guardians, and immediate intervention mechanisms, this approach ensures proactive protection while respecting user privacy. - Rishit Lakhani, Nile
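As an illustration, here is a small rule-based sketch of two of the patterns Lakhani names: attempts to move a conversation off-platform and repeated unanswered contact. A production system would pair heuristics like these with a trained model; all patterns and limits below are assumptions:

```python
import re
from collections import defaultdict

# Illustrative heuristic for off-platform invitations and phone numbers.
OFF_PLATFORM_PATTERN = re.compile(
    r"\b(whatsapp|telegram|snap(chat)?|text me at|\d{3}[-.\s]\d{3}[-.\s]\d{4})\b",
    re.IGNORECASE,
)
UNSOLICITED_LIMIT = 3   # unanswered messages before the sender is flagged

class PredatoryPatternDetector:
    def __init__(self):
        self.unanswered = defaultdict(int)   # (sender, recipient) -> count

    def on_message(self, sender: str, recipient: str, text: str,
                   recipient_is_minor: bool) -> list[str]:
        """Return any alerts this message should raise."""
        alerts = []
        if not recipient_is_minor:
            return alerts
        if OFF_PLATFORM_PATTERN.search(text):
            alerts.append("attempt to move conversation off-platform")
        self.unanswered[(sender, recipient)] += 1
        if self.unanswered[(sender, recipient)] > UNSOLICITED_LIMIT:
            alerts.append("repeated unsolicited contact")
        return alerts

    def on_reply(self, sender: str, recipient: str) -> None:
        """A reply resets the unsolicited-contact counter for that pair."""
        self.unanswered[(recipient, sender)] = 0

detector = PredatoryPatternDetector()
print(detector.on_message("a9", "kid7", "add me on telegram", True))
```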
To protect minors, social media platforms must designate kid-specific accounts with stricter engagement algorithms to filter harmful content and interactions. Addressing minors posing as adults is crucial; platforms should enhance impersonation detection using AI to analyze behavioral patterns, language and activity, ensuring safe spaces and preventing minors from bypassing safety filters. - Mohan Kumar, AtoB
A radical approach would be to never let children interact with real people. Virtual, AI-driven accounts based on real accounts would be sufficient for most children and would be guaranteed to be safe, since all of the content could be created with clean prompts. Real humans introduce risk, but created content can be reviewed before being used within the platform. - Luke Wallace, Bottle Rocket
Platforms should consider “digital playgrounds”—isolated social networks within the main platform that only connect verified children from the same school or educational institution. Such a system would partner with schools to create verified student networks, using school email domains and teacher verification for account creation. In this way, kids’ interactions would be limited to their peers within the same educational institution. - Achraf Golli, Quizard
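The admission check for such a network could be as simple as the sketch below, which gates entry on a verified school email domain plus teacher sign-off; the domains here are placeholders:

```python
# Illustrative allow-list; in practice this would come from school partnerships.
VERIFIED_SCHOOL_DOMAINS = {"students.lincolnelementary.example.edu",
                           "pupils.oakridge.example.org"}

def extract_domain(email: str) -> str | None:
    """Return the domain of a well-formed email address, else None."""
    parts = email.strip().lower().rsplit("@", 1)
    return parts[1] if len(parts) == 2 and parts[1] else None

def can_join_playground(email: str, teacher_approved: bool) -> bool:
    """Admission requires both a verified school domain and teacher sign-off."""
    domain = extract_domain(email)
    return domain in VERIFIED_SCHOOL_DOMAINS and teacher_approved

print(can_join_playground("sam@students.lincolnelementary.example.edu", True))  # True
print(can_join_playground("sam@gmail.com", True))                               # False
```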
Platforms could use blockchain to create immutable digital identities for users, ensuring minors’ profiles are accurately age-verified. Blockchain solutions like IBM’s ensure transparency and accountability. Harmful accounts can’t bypass age restrictions, creating a tamper-proof, scalable system. - Srikanta Datta, Coupang
There’s no “silver bullet” that can protect children from harmful content on social media. However, the potential positive impact of a collective effort—where the government, society (that is, parents) and businesses (that is, social media companies) all play a part—is immense. Such an inclusive approach could combine technology, community engagement and education to create a safer environment for future generations. - Ernest Toh, Equinix