The Revolution Will Be Programmed: Addressing Ethical Issues in Artificial Intelligence

Gil Scott-Heron said The Revolution Will Not Be Televised. And while cameras may capture the bullets fired or the bottles smashed, they won’t capture the change within a human heart – in the mind of someone who has awoken to the reality around them.

Gil Scott-Heron album cover

But here we are in an age of rapid advances in generative artificial intelligence, which lead us to questions of how these programs will change the world around us. Some see it and foretell a future of darkness much like The Matrix or The Terminator, when the machines finally wrest control from our hands. Others speak of a wonderful egalitarian utopia in which anyone can generate the content they need without the gatekeeping of those in certain legal, creative, or educational fields.

Are the hearts and minds of those guiding those rapid changes awake to the genuine needs of our society and ALL its people? I have no skill at predicting the future or what it may look like, but artificial intelligence (AI) has been around for a while in various forms, and I can tell you about the current issues and, if the past is in any way predictive, what we must do to create a better future – to be part of programming a more inclusive world by considering the ethical implications of the actions we’re taking today.

Abusive Overreach

It has long been the nature of technology, from the printing press to the automobile to the internet, to be initially democratizing, which seems like the noble goal of all inventors. But at some point, as it becomes more common, it becomes ripe for abuse. It was once said (by whom I can’t recall) that any new invention will initially have its natural use, which will be followed by an accidental harm and then a premeditated harm.

This is clear in an object like a knife, which can cut our meat, accidentally cut our finger, or purposefully cut someone else. It’s extreme in the smashing of atoms and recklessly unintended in our speeding cars. Sometimes inventions come with the best intentions, like making our car engines run more smoothly with leaded gasoline or cooling us with chlorofluorocarbons (CFCs), only to carry the unintended effect of poisoning generations. So while I am excited about so many of the new tools being brought forth with generative AI, I ponder the unintended implications of this jump forward.

Military & Law Enforcement

As I said, we often don’t anticipate the unintended effects, and I don’t know what those will be for AI, for good or for evil. But I do know human nature, and here are ways the technology is already being abused through the corruption so common among those in power (or those seeking it). Even now, the facial recognition systems that help us wonderfully with our vast photo collections are also being used by law enforcement to track criminals. To some, that may seem helpful.

drone flying over mountains

But we don’t have to imagine a scenario akin to the images in Minority Report to see the impact. The issue lies in the common abuses by police, which shouldn’t be difficult to imagine. It allows for even more intense discrimination and tracking of all people. From the surveillance abuses of the FBI to data tracking through the Patriot Act, agencies have already shown their penchant for abuse in the name of “public safety” that only made certain groups less safe. More AI without restriction will only make that targeting easier. And that harkens back to the lantern laws that required enslaved people to make themselves visible at all times.

The tracking can be used in ways most wouldn’t think possible. AI systems have been trained on paired WiFi and video data so that, using wireless signals alone, they can accurately determine human locations and body positioning. To put it plainly, every router can now essentially act as a visual heat map for finding people.
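
To make that concrete, here is a minimal sketch of the training idea, assuming entirely synthetic stand-ins for the real inputs: during training, a camera pipeline supplies body-keypoint labels, and a model learns to predict those keypoints from the wireless signal alone. None of this is the actual system; the shapes, names, and data are invented for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Hypothetical setup: each sample is a flattened window of WiFi
# channel-state information (CSI) amplitudes; the target is a set of
# 2D body-keypoint coordinates produced by a camera during training.
rng = np.random.default_rng(0)
n_samples, n_features, n_keypoints = 2000, 90, 17

X = rng.normal(size=(n_samples, n_features))            # stand-in CSI features
true_map = rng.normal(size=(n_features, n_keypoints * 2))
y = X @ true_map + 0.1 * rng.normal(size=(n_samples, n_keypoints * 2))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The camera only supervises training; at inference time the model
# maps radio signals alone to body positions.
model = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=500, random_state=0)
model.fit(X_train, y_train)
print("held-out R^2:", round(model.score(X_test, y_test), 3))
```

The real research uses far richer signal processing and deep vision-style networks, but the unsettling core is the same: the camera can be removed once training is done.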

Even now, Ukraine is demonstrating the speed AI gives it in launching rocket attacks as it pulls in real-time data about troop movements. And while the drone shows at Euro Disney are enchanting, it’s not a big leap from those to a coordinated drone attack tracking people and making immediate decisions about the use of lethal force without human input, especially as there are indications such tests are already happening.

Data Pull

Many of these generative AI systems that are capable of replicating our speech, writing, and mannerisms can do so because of all the data they have culled from our existing online content. It is one of a multitude of reasons I’ve questioned the value of continuing to write and publish here for free if it will just be captured for the use of some corporately controlled bot. And while I don’t recall getting a check for that data, here I am adding fuel to the fire.

Some folks, though, certainly did not consent to that data culling and have some legal protections. Sarah Silverman is one of a handful of creators of copyrighted content currently taking legal action against tech companies like OpenAI and Meta for pulling their writing without consent. What the results of those cases will be amid these warp-speed leaps ahead is hard to say, but the genie has already been released from the lamp.

Sarah Silverman at a podium

And as those limitations are determined, we can see what some of those negotiations look like first-hand in the SAG-AFTRA and WGA strikes. Entertainment companies making massive profits for their CEOs and shareholders are seeking to pay content creators less, driving the ever-widening income gap further. And now AI can function as a means of future content creation without the creatives involved at all, if the producers and studios get their way. They want to own the likenesses of actors and other creators in perpetuity without ever again having to pay those artists. So the outcome of these strikes may hold sway in the AI universe.

Beyond that which already exists on the screen or stage, though, AI has made it possible to pull data directly from our brains. That may sound fascinating when applied to young children, coma patients, or cute dogs like in Up, or it may sound to you like the premise of a dystopian novel. But it exists right now! By pulling data from fMRI scans, AI models have learned to map what you are seeing or saying to yourself through the scan alone. Again, that’s an amazing leap forward in accessibility for people with disabilities that leave them non-verbal, but I can at the same time imagine it as a method of interrogation. Add AI tools that cull all the existing DNA information now online, and we have our whole selves caught in this web. So whether it’s your internet data or a thought still in your head, it may one day be part of the AI ecosystem in ways that benefit or harm us.
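
For the curious, here is a rough sketch of the decoding idea, not the actual research pipeline: learn a regression from scan features to stimulus embeddings, then decode a new scan by finding the nearest candidate. Every number and name below is synthetic and purely illustrative.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical setup: rows of X are (synthetic) voxel activations from a
# scan; rows of E are embedding vectors for candidate words. Real decoders
# learn a similar map from fMRI responses to stimulus features.
rng = np.random.default_rng(1)
n_trials, n_voxels, dim, vocab = 500, 1000, 64, 200

E = rng.normal(size=(vocab, dim))              # stand-in word embeddings
words = rng.integers(0, vocab, size=n_trials)  # which word was "heard"
W = rng.normal(size=(dim, n_voxels))           # unknown brain response map
X = E[words] @ W + 0.5 * rng.normal(size=(n_trials, n_voxels))

# Fit voxels -> embedding on the first 400 trials.
decoder = Ridge(alpha=10.0).fit(X[:400], E[words[:400]])

# Decode held-out scans by nearest neighbor in embedding space.
pred = decoder.predict(X[400:])
guesses = np.argmax(pred @ E.T, axis=1)
print("decoding accuracy:", np.mean(guesses == words[400:]))
```

Real decoders reportedly work on vastly noisier data and need hours of per-person calibration, which is part of why consent around this technology matters so much.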

Misinformation & Disinformation

I’m not that worried about the fancy filters, though they could make catfishing easier and drive even more insecurity, especially in young people struggling with their self-worth. And while the accuracy of the information provided by these systems is questionable at times, it isn’t the accidental misinformation that concerns me most. We can teach students to check the accuracy of generated content that might have been pulled from questionable sources and not to trust its sometimes biased and inaccurate claims. It may cost some journalists or lawyers their jobs in the interim as they learn to fact-check better. The greater danger is deliberate disinformation.

fake news road sign

And the deepfakes aren’t that scary yet when the goal is bringing Paul Walker back to life for another Fast & Furious movie or pretending Joe Rogan endorsed your product. And while it may be quite a while before deepfakes of political figures cause political upheaval or a false image of a famous person ruins a career, I can easily imagine how faked images could damage an average person’s career or relationships if they were scandalous enough and difficult to prove false.
It is the rate, though, at which tools like ChatGPT or Bard can churn out information that is most disconcerting. Demonstrably false information that picks up traction on social media already moves at a breakneck pace and will only accelerate. It will almost certainly play a role in the 2024 election and is likely to be a tool in the arsenal of many an aspiring dictator.

Microsoft’s Tay, a chatbot precursor to today’s systems, learned from its interactions on Twitter. It took less than a day for that bot to begin spouting racist and antisemitic tropes. Now, that may be a statement about the existing darkness of the internet, created by actual horrible humans. We know such content already exists, made by people with twisted ideas and motivations, but when the scale becomes as prolific as generative AI allows, it becomes nearly impossible to stop, especially in an age of instant gratification. And as much as objective truth seems lost in discussions now, it will become even harder to find.

To aid you and your students’ ongoing learning, The News Literacy Project has a range of tools like Checkology to help us become better at spotting false information.

AI Bias

This is where a lot of the conversations with our students have begun on the topic of ethics and equity in AI. There are a number of troubling examples in AI systems because the data brought in is not reflective of the population at large. It’s easy to imagine that if I’m pulling images only from certain social media sites, my information will be skewed towards the people who frequent those environments. In that vein, there are many examples of incomplete datasets leading to problematic results (a toy sketch of that dynamic follows the list below):

  • in Facial Recognition
  • with Screeners
  • in Medicine
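
Here is that toy sketch: a classifier trained on data where group B is underrepresented (and follows a slightly different pattern) scores far worse on group B. The groups, features, and decision boundaries are all invented for illustration; no real system is modeled here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy demonstration of dataset skew: the model is trained mostly on
# group A, so it learns group A's decision boundary and fails group B.
rng = np.random.default_rng(2)

def make_group(n, center):
    X = rng.normal(loc=center, size=(n, 5))
    y = (X.sum(axis=1) > 5 * center).astype(int)  # each group's own boundary
    return X, y

Xa, ya = make_group(950, 0.0)   # group A dominates the training data
Xb, yb = make_group(50, 1.0)    # group B is underrepresented
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.hstack([ya, yb]))

Xa_test, ya_test = make_group(1000, 0.0)
Xb_test, yb_test = make_group(1000, 1.0)
print("accuracy on group A:", model.score(Xa_test, ya_test))
print("accuracy on group B:", model.score(Xb_test, yb_test))
```

The model isn’t malicious; it simply optimized for the data it was given, which is exactly why representative datasets matter.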

And as other technology advances, the concerns and dangers grow greater: according to a Georgia Tech study, self-driving cars are worse at detecting Black pedestrians, which could make them the most deadly example of AI bias. Facial recognition bias and other AI bias can amplify existing inequities, as evidenced in the conversation between AI equity expert Joy Buolamwini and Congresswoman Ocasio-Cortez.

Lessons + Learning

For help in educating students on these topics, CS4All NYC’s equity work has led to a number of lessons and resources that you can use immediately. You can explore AI bias through the Most Likely Machine, and I highly recommend training your own AI models on your own datasets with Teachable Machine. Victor Hicks has had students program an HBCU with Scratch, and others have built accessible and inclusive apps with Mad Learn.


So What Next

One of the easiest ways to keep some of the worst possibilities of AI from becoming reality begins with ensuring an inclusive workforce and building a more equitable facial recognition landscape. We need transparency in the data and in the spaces where AI is making important decisions.

abstract AI generated artwork

The lawsuits mentioned earlier are slow and likely to be an insignificant disincentive if the punishment is merely monetary in the face of massive profits. But pursuing legal protections for content creators, especially those likely to result from the current entertainment strikes, might be the beginning of the larger cultural protections the working class needs against the wealthy in regard to AI.

Our government is often even slower to act in the face of technological change. We’re still waiting for guardrails after data scandals like Facebook’s Cambridge Analytica affair, connected to elections back in 2016. I don’t expect any movement soon, especially since most American politicians, for good or bad, see free exploration in AI spaces as necessary to keep pace with nations like China. So maybe we must continue the protests and strikes, and also the vigilance for truth and the collective good. And while we appreciate the ease AI brings to our personal and work lives and the pretty things we can make more easily, we need to consider its ethical ramifications everywhere else.
