This Semester's #1 AI Skill? Knowing How to Stop Training a Large Language Model!
Our students deserve an honest discussion of AI - which means including the negatives, and doing developers' work for free is one of them.
Image: bamenny via Pixabay.
As Higher Education infuses Artificial Intelligence (AI) into curricula at breakneck speed, mature conversations about the ethical complications of this disruptive technology should not be pushed aside as we all squeeze onto the bandwagon.
A literate user can apply a tool more adeptly and appropriately. Further, as someone who believes the ethical problems associated with AI are concerning enough that students should not be required to use the technology, I argue learners need to know that AI companies and developers rely on their uncompensated chatbot interactions to train Large Language Models.
According to Arin Waichulis, in June 2023, Meta updated its privacy policy (more quietly in the United States than in Europe) so that “if you post or interact with chatbots on Facebook, Instagram, Threads, or WhatsApp, Meta may use your data to train its generative AI models.”
So where does this leave us?
Fostering AI literacy means addressing the negatives as well as the positives. Students need to know that their uncompensated labor is training Large Language Models. So if they (or any of us) are not interested in doing that, what can be done?
The following spreadsheet outlines, to the best of my ability, how users can stop their AI interactions from training the Large Language Models in question.
The spreadsheet can be accessed directly via my Google Docs folder.
Figure 1. Spreadsheet detailing how to turn off the data-sharing/training features of given Large Language Models (LLMs).