All discussions filtered by tag "content moderation"

Pinterest Addresses AI Content Issues

Pinterest introduces AI labels to help users identify and avoid AI-generated content on its platform.

Meta's Oversight Board Questions Policies

Meta's Oversight Board demands clarity on new hate speech policies and assesses their impact on vulnerable groups.

Meta Loosens Content Moderation Rules

Meta ends its fact-checking program and relaxes its content moderation rules, allowing greater personal expression across its platforms.

Meta Replaces Fact-Checking with Community Notes

Meta replaces its fact-checking program with a community-driven system to prioritize free speech and reduce content moderation errors.

Meta Ditches Fact Checkers, Promotes Freedom

Meta is eliminating fact-checkers and acknowledging that harmful content will rise, while promoting user-generated Community Notes for moderation.

X's Transparency Report Post-Musk Takeover

X's transparency report reveals a significant increase in reports of hateful conduct since Elon Musk's takeover.

X's Transparency Report Under Musk

X's first transparency report since Elon Musk's takeover reveals significant changes in content moderation and reporting metrics.

Platformer's Fourth Year Reflections

Platformer reflects on its fourth year since leaving Substack, highlighting subscriber growth and new revenue strategies while maintaining editorial integrity.

Bluesky Introduces Video Support Feature

Bluesky introduces video support with 60-second uploads, sharpening its competition with rivals such as X, Instagram Threads, and Mastodon.