In a controversial move, X (formerly Twitter) has updated its terms of service, effective November 15. The new terms have sparked concern and debate among users, as the platform now allows AI models to be trained on any content posted to it. The update has alarmed creative professionals, privacy advocates, and everyday users alike, many of whom fear that their intellectual property and personal information could be used to develop AI systems without their explicit consent.
Major Update: X’s Terms of Service Now Permit AI Training on User Content
The change, buried deep within the updated terms of service, has drawn attention to a particular clause:
“By submitting, posting, or displaying Content on or through the Services, you grant us a worldwide, non-exclusive, royalty-free license to make your Content available to the rest of the world,” the terms state.
This includes using the content for machine learning and artificial intelligence (AI) models—whether generative or other types. In essence, by continuing to use X, users are consenting to have their posts, images, and other content utilized to improve X’s AI tools.
Growing Concerns: Creative Community and Privacy Advocates Sound the Alarm
The update has sent ripples through the platform, with particular concern arising from two groups:
- Creative Professionals: Artists, writers, and photographers have voiced their worries that their original work, often shared on X, could now be used without their permission to train AI systems that might one day replace human creators.
- Privacy Advocates: The broader user base, especially those concerned about privacy, fear their personal information, including photos and conversations, could be mined for AI training, potentially leading to unforeseen consequences.
Many users have already started deleting images and personal posts in response to the update, hoping to avoid having their content swept into X’s AI training pipeline.
For artists, the prospect of their work contributing to the development of AI-generated art without credit or compensation is particularly worrisome. They fear the long-term effects of AI systems learning from their creations, as these tools might dilute the value of human-generated content.
Legal Impact: Disputes Will Be Handled in Texas Courts
Adding to the controversy is the legal angle embedded within X’s new terms. Any disputes over the changes will be handled in federal courts or state courts in Tarrant County, Texas. This jurisdiction has already been selected for two ongoing lawsuits involving X. The choice of this court, which is known for conservative rulings, raises further concern among users who fear biased judgments in favor of the company.
What This Means for Users
According to the updated terms, users who continue to access X’s services after November 15 will be deemed to have agreed to the new terms. Anyone who wishes to challenge them must do so in the Texas courts, a daunting prospect for many individuals, particularly those who live far from the state.
Privacy at Risk: X’s AI Chatbot, Grok, and Data Sharing Concerns
The use of user data to train AI models is not new for X, but its new AI chatbot, Grok, has already faced controversy. Grok has been criticized for spreading misinformation, including false claims about the 2024 U.S. elections, and generating violent, doctored images of well-known public figures. The latest terms of service update further expands X’s ability to utilize user data for training Grok and other AI models.
The Current Data Privacy Landscape on X
Before the terms of service update, users could opt out of having their data shared for AI training by adjusting their privacy settings. By navigating to the “Privacy and Safety” section and selecting “Data Sharing and Personalization,” users could uncheck a box to prevent their data from being used for AI purposes.
However, with the new terms, it is unclear whether this opt-out feature will remain. The updated language now gives X broad, unrestricted rights to use all content on the platform for its AI training, including content from private accounts. Prior distinctions between public and private content seem to have been removed in the new terms.
This broad, unrestricted data usage is not unprecedented among social media platforms, but the explicit, unambiguous language used by X has set it apart from competitors like Meta and Google, whose terms are more vague about how user data is applied to AI training.
Concerns About Opting Out
One open question is whether users will still be able to opt out of having their data used for AI training under the new terms. Alex Fink, CEO of Otherweb, a news-reading platform that uses AI to combat misinformation, noted that while many companies provide opt-out features, their legal terms often grant more leeway than the settings suggest. Users may find that, even after opting out, their data can still be used in ways they had not anticipated.
Broader Industry Trends: AI Training and Data Privacy
X’s move is part of a growing trend among tech companies to utilize vast amounts of user-generated content for training their AI systems. Other giants like Google and Microsoft have also come under fire for the opaque ways they handle user data, with AI tools sometimes producing bizarre or inappropriate results.
However, X’s decision to spell out its intentions regarding AI training so plainly has stirred up more controversy than similar moves by other companies, as users are now more aware of how their data is being used. Many have called for stronger regulations and clearer guidelines to protect user privacy in the rapidly evolving AI space.
The Potential Long-Term Effects of X’s New Terms
This latest development has raised several important questions about the relationship between social media platforms and their users:
- How will this impact user trust? As more users become aware of the extensive rights these platforms have over their data, they may be less inclined to engage fully, especially if they feel their privacy is being compromised.
- Will more users leave the platform? The creative community and those with privacy concerns may be among the first to reconsider their involvement with X, potentially migrating to other platforms that offer better protections for their content and data.
- What does this mean for the future of AI? As AI continues to develop, its reliance on user data will only grow. This raises ethical questions about who owns that data, how it’s used, and whether users should be compensated for contributing to the development of such powerful technologies.
What You Can Do: Protecting Your Content and Data on X
If you’re concerned about how your data or creative content might be used under X’s new terms, here are some steps you can take to safeguard your privacy:
- Review Your Privacy Settings: Go to the “Privacy and Safety” section in your settings and check your data-sharing preferences. Ensure you’ve adjusted the settings to limit data collection where possible.
- Consider Deleting Sensitive Content: If there are posts, photos, or other pieces of content that you do not want used in AI training, consider removing them from your account before November 15 (a rough scripted approach is sketched after this list).
- Monitor Updates from X: Stay informed about any further changes to the terms of service and privacy policies that may impact how your content is used in the future.
- Consult Legal Resources: If you’re particularly concerned about the potential legal implications, seek advice from privacy experts or legal professionals to better understand your rights under the new terms.
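For accounts with many posts, removing content one by one through the app can be impractical. The snippet below is a minimal sketch of how the deletion step could be scripted, assuming access to X’s API v2 and the third-party tweepy library; the credential strings and post IDs are placeholders, and the access tiers and rate limits available to your account may differ, so treat it as an illustration rather than an official tool.

```python
# Minimal sketch, assuming the tweepy library and X API v2 credentials
# with write access. Credential values and post IDs below are placeholders.
import tweepy

client = tweepy.Client(
    consumer_key="YOUR_API_KEY",
    consumer_secret="YOUR_API_SECRET",
    access_token="YOUR_ACCESS_TOKEN",
    access_token_secret="YOUR_ACCESS_TOKEN_SECRET",
)

# Post IDs you want removed, e.g. collected from your downloaded account archive.
post_ids = ["1234567890123456789", "9876543210987654321"]

for post_id in post_ids:
    try:
        # delete_tweet() issues DELETE /2/tweets/:id for the authenticated user.
        client.delete_tweet(post_id)
        print(f"Deleted post {post_id}")
    except tweepy.TweepyException as exc:
        # Already-deleted posts or rate limits surface as exceptions here.
        print(f"Could not delete {post_id}: {exc}")
```

For a small number of posts, deleting them manually through the web interface remains the simpler option.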
Conclusion: Navigating the Changing Landscape of AI and Social Media
X’s latest terms of service represent a bold step into the future of social media, one where user-generated content serves as the fuel for AI development. While this may drive innovation, it also brings significant concerns about privacy, creativity, and user control. For users of X, understanding the full implications of these changes is essential to navigating the platform moving forward.
The November 15 deadline marks a critical juncture, and how users, creators, and privacy advocates respond to these changes will shape the future of AI ethics, social media, and user rights.
Source: CNN; edited by BharatiyaMedia.