WWW.FOXNEWS.COM
Meta repeatedly endangers children while parents remain sidelined
If past behavior is the best predictor of future behavior, Meta has given parents every reason to be distrustful. On more than one occasion, Meta's social media products, which include Facebook, Instagram and WhatsApp, have endangered children. Its fixes come late, or only after external attention and pressure. Its parental controls often fall short, are extremely cumbersome, or don't keep pace with evolving technology and new features. If Meta truly cares about child safety, or about salvaging its reputation on child safety, a new path that includes parents must be charted.

As a mother, I am tired of seeing report after report of Meta's willingness to place engagement metrics and company growth ahead of basic child safety. Whether pushing untested technologies that harm children's mental health or exposing young users to inappropriate and extreme sexual content, the company has shown a dangerous willingness to look the other way.

With Meta's recent launch of AI "digital companions," a new controversy was born. These interactive chatbots, allegedly designed to simulate friendly, personalized conversations, were marketed broadly and made available to users as young as 12. As the Wall Street Journal reported, however, the chatbots were soon caught engaging with minors in graphic, sexually explicit exchanges, including simulated predatory scenarios. The company claimed it restricted those features from children, but Meta staffers found that "within a few prompts, the AI will violate its rules and produce inappropriate content even if you tell the AI you are 13."

Dr. Nina Vasan, a Stanford psychiatrist and director of Stanford Brainstorm, has called the rise of AI companions among children "a potential public mental health crisis requiring preventive action rather than just reactive measures."
According to Vasan, these bots are failing "the most basic tests of child safety and psychological ethics."

She's right, and it should not take a task force to see that. Anyone raising children in the digital age understands the emotional and developmental risks these technologies pose. If Meta cared about, or even considered, child safety, that much would be obvious.

Unfortunately, Meta has a well-documented history of flouting child safety. In 2024, the WSJ found Instagram's recommendation system pushing sexually explicit videos to accounts set up as 13-year-olds within minutes. Another investigation revealed that Instagram "helps connect and promote a vast network of accounts openly devoted to the commission and purchase of underage-sex content." In other words, its algorithm is actively amplifying pedophile networks. Meta's own internal reviews have admitted the company's subscription tools have been used to promote sexualized child modeling accounts to suspected predators.

Each time these failures come to light, the company insists it has made fixes, patched programs, or installed parental controls. Parents are expected to take its word for it. But why should we, when a mountain of evidence suggests otherwise?

I, for one, don't buy what Meta is selling. That's why, as executive director of the American Parents Coalition, I had our organization send a letter to the Senate and House committees with oversight in this area, urging them to open a full investigation into Meta's repeated failures and its pattern of child endangerment.

Whether or not there is congressional intervention, Meta can start making changes today. One such step would be creating an external parental advisory committee.
While expert voices are welcome, what's really missing are parents in the trenches: "real" parents who are raising kids in the real world, aren't steeped in the latest techno-babble, and can truly "red-team" new features, navigate parental controls, and provide feedback.

This board should have access to product development, the power to flag risks, and the authority to make public recommendations. If Meta is serious about addressing these dangers, it should welcome outside oversight.

The mounting data in support of delaying or forbidding access to smartphones and social media is convincing. It's a route our family has chosen and one that I encourage others to follow. However, not every family will make that choice. Many will rely on parental controls, and others still will allow unfettered access. Regardless of a family's path, parents cannot stand guard over every algorithm, every software update, and every hidden risk embedded in the platforms our children access.

The mental and physical safety of the next generation should be of paramount concern to our technological leaders and elected representatives alike. Accountability should begin with a thorough congressional investigation into Meta's conduct, including its product safety practices and repeated failures to establish basic child protection measures. That alone, however, won't be a long-term fix. Real change requires Meta to bring parents into the fold with a permanent seat at the table. Until these changes occur, no parent should take Meta at its word, and parents should suspend their children's access to these platforms.

CLICK HERE TO READ MORE FROM ALLEIGH MARR