In these days of AI-generated images, videos, and “alternative facts,” identifying reality in the 2024 campaign cycle is going to be difficult. We have already seen stories about altered images and phone messages being deployed against political opponents. This made me start wondering – where is this all going to go? Are we really at a stage where there simply is no identifiable “truth?” Will Kellyanne Conway’s “alternative facts” become a thing? The more I thought about this, the more concerned I became. I figured that there had to be people a lot more tuned into this than me who are already figuring out what to do. Here are some strategies they recommend.
Technological Solutions
There are several technological ways to ensure the authenticity and origin of digital content in the age of AI-generated media. National governments are considering legislation to address this problem and tech companies like Google, Microsoft, and Meta are developing tools to enhance transparency and enable the detection of AI-generated content.
Digital Watermarking involves embedding an identifiable pattern into content to track its origin. I understand this theoretically but I don’t know what it would look like in practice.
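Since I wondered what it would look like in practice, here is a deliberately simplified sketch of the idea (not any real product’s scheme): a generator hides a few bits in the least significant bits of pixels chosen by a secret key, and anyone who knows the key can read the pattern back out.

```python
import random

def embed_watermark(pixels, watermark_bits, key=42):
    """Hide watermark bits in the lowest bit of pseudo-randomly
    chosen pixels; the secret key determines which positions."""
    rng = random.Random(key)
    positions = rng.sample(range(len(pixels)), len(watermark_bits))
    marked = list(pixels)
    for pos, bit in zip(positions, watermark_bits):
        marked[pos] = (marked[pos] & ~1) | bit  # overwrite the lowest bit
    return marked

def extract_watermark(pixels, n_bits, key=42):
    """Re-derive the same positions from the key and read the bits back."""
    rng = random.Random(key)
    positions = rng.sample(range(len(pixels)), n_bits)
    return [pixels[pos] & 1 for pos in positions]
```

Changing the low bit of a handful of pixels is invisible to the eye, but the hidden pattern survives and identifies the source. Real image watermarks are far more robust (they survive cropping and re-compression); this toy version only shows the principle.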
Content Provenance securely embeds and maintains information about the content’s origin within its metadata, helping to trace it back to the source. I have a vague idea what metadata is (are?) but beyond that, I’m not sure what this means.
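For readers as fuzzy on this as I am: metadata is just data about data (who made a file, when, with what). A provenance record adds a cryptographic signature so that any edit to the content or the metadata is detectable. The sketch below is a toy illustration, not the actual standard; the key name and origin string are made up.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"publisher-signing-key"  # hypothetical key held by the publisher

def attach_provenance(content: bytes, origin: str) -> dict:
    """Build a provenance record: the origin, a hash of the content,
    and a signature over both."""
    record = {
        "origin": origin,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(content: bytes, record: dict) -> bool:
    """Recompute the hash and signature; any tampering makes this fail."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, record["signature"])
            and record["content_sha256"] == hashlib.sha256(content).hexdigest())
```

So “tracing content back to the source” means: the record travels with the file, and anyone can check that the file still matches what the signer vouched for.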
Retrieval-based detectors store all AI-generated content in a database that can be queried to verify the origin of the content.
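This one is the easiest to picture. A minimal sketch, assuming a hypothetical AI service that records a hash of everything it generates: checking content against the database is then a simple lookup.

```python
import hashlib

class GenerationRegistry:
    """Toy database of everything a (hypothetical) AI service has produced."""

    def __init__(self):
        self._hashes = set()

    def register(self, content: str) -> None:
        # The AI service calls this each time it generates something.
        self._hashes.add(hashlib.sha256(content.encode()).hexdigest())

    def was_generated_here(self, content: str) -> bool:
        # Anyone can ask: did this exact content come from the service?
        return hashlib.sha256(content.encode()).hexdigest() in self._hashes
```

The catch is that an exact hash match breaks if even one character is edited, so real retrieval systems use fuzzy or perceptual matching rather than this toy exact lookup.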
Post-hoc detectors rely on machine learning models to identify patterns in AI-generated content that distinguish it from human-authored content. Using AI to detect AI, so far as I can tell.
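To make “identifying patterns” concrete, here is one crude statistical feature a detector might look at: how repetitive the vocabulary is. This is strictly a toy, with a made-up threshold; real detectors combine many learned features in a trained model and are still unreliable.

```python
def lexical_diversity(text: str) -> float:
    """Fraction of distinct words in the text -- one simple statistical
    feature, standing in for the many features a real detector learns."""
    words = text.lower().split()
    return len(set(words)) / len(words) if words else 0.0

def looks_machine_written(text: str, threshold: float = 0.5) -> bool:
    # Toy rule: flag text whose vocabulary is unusually repetitive.
    return lexical_diversity(text) < threshold
```

The point is only the shape of the approach: measure something about the text, compare it to what human writing typically looks like, and flag the outliers.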
Media Literacy and Public Awareness
There is no substitute for an informed and educated citizenry when it comes to dealing with something like artificial intelligence, which is sophisticated, complex, and beyond what most people understand. There are several avenues for pursuing the kind of knowledge the public will need. I gotta tell ya, I don’t have a lot of hope for this one.
Citizens need to understand AI – how it works, its capabilities, and its limitations. That is the essential first level of media literacy. This doesn’t mean that the general public has to understand the technical elements of creating artificially generated content; it simply means that they need to know what’s possible.
Regular folks can recognize AI-generated content, but it takes time and attention that most people don’t usually have. Mainstream journalism will employ these tools to keep from spreading deep fakes, but social media platforms will be rife with viral AI-generated content. I think this will be very hard to combat – people will share fake content without evil intent, because they have themselves been led to believe that what they’re seeing or reading is true.
An informed public can advocate for responsible AI policies and regulations.
Regulatory Frameworks
National governments and international organizations have the funding and the reach to try to bring order into the artificial intelligence environment. Whether they have the political will to act is a bigger question, I think. As Representative Katie Porter said on a podcast this week, the motto for Congress should be “Solving Yesterday’s Problems Tomorrow.” Our geriatric legislature still doesn’t quite understand the interwebs, much less AI.
The European Commission has proposed a legal framework focused on the specific utilization of AI systems and associated risks. This framework aims to provide clear requirements and obligations for AI developers, deployers, and users.
The National Institute of Standards and Technology (NIST) in the U.S. has developed a framework to manage risks associated with AI. It includes guidelines for trustworthy and responsible AI design, development, use, and evaluation.
Google has outlined principles for responsible AI regulatory design, emphasizing the need for balanced, fact-based analyses of AI’s opportunities and challenges.
Collaboration
Any effort made to address the problems inherent in AI will work better if all of the stakeholders are involved. Identifying the evolving set of stakeholders and getting them to buy into this effort will be the biggest challenge.
The first part of collaboration has to be knowledge sharing – not only knowledge but also “best practices.”
International collaboration can lead to standardization of AI principles and policies; in a digital world that pays no attention to national borders, consistency will be critical.
Collaborative efforts can drive innovation in AI by pooling resources, expertise, and data, leading to more advanced and ethical AI solutions.
AI has the potential to address global challenges like climate change and pandemics. Collaboration ensures that AI solutions are developed and applied on a scale that matches these challenges.
Collaborative initiatives can help educate the public about AI, demystifying the technology and promoting informed engagement.
Ethical Guidelines
Ethical guidelines ensure that AI systems support human decision-making and respect human autonomy. We were warned about this in 1968: “I’m sorry, Dave. I’m afraid I can’t do that.”
These guidelines emphasize the need for AI to be secure, reliable, and robust against manipulation or errors.
Guidelines need to protect personal data and privacy rights in AI operations.
Guidelines need to promote the development of AI systems that are inclusive and non-discriminatory. I’m not sure how humans can design these systems when most of human history has involved one group of humans denying this need to other groups of humans.
These guidelines encourage the consideration of AI’s impact on society and the larger environment.
Guidelines stress the importance of accountability and liability for the actions taken by AI systems.
I hoped that exploring this issue a bit would reassure me that the grownups were in charge. Let’s just say I’m not optimistic about this.
Confession time
I used Copilot, Microsoft’s new “everyday AI companion” to assist me in writing this essay. I asked it a series of questions about artificial intelligence and it gave me first-level responses that I then built on.
Here’s the question I posed:
What are some technological solutions for dealing with artificial intelligence?
Copilot gave me a relatively brief answer, which provided me with the vocabulary to ask more questions. Then it gave me the sources it used to generate the answer – including its “conversation” with Bing, a search engine owned and operated by Microsoft.
Source: Conversation with Bing, 3/21/2024
(1) Our quick guide to the 6 ways we can regulate AI | MIT Technology Review. https://www.technologyreview.com/2023/05/22/1073482/our-quick-guide-to-the-6-ways-we-can-regulate-ai/.
(2) How to manage AI's risks and benefits | World Economic Forum. https://www.weforum.org/agenda/2018/01/how-to-manage-ais-risks-and-benefits/.
(3) We know the risks of AI — here's how we can mitigate them. https://www.weforum.org/agenda/2023/06/10c45559-5e47-4aea-9905-b87217a9cfd7/.
What do you all think about the fact that I used AI to write about AI? Were you disappointed? Pissed? Do you feel like I somehow “cheated?” I kind of have that feeling, but I can get over it. I’m just glad I’m not a teacher or professor trying to grade student research papers.
I asked AI to create an outline for me on a paper I was writing recently. I’d already written the paper but I wanted to see if AI worked. Silly me. In the blink of an eye, the outline appeared. Both AI and I agreed on points to be addressed in the paper so I was relieved. But also a little shocked.
I’m just happy you wrote the article. Thanks! 😎