Artificial intelligence isn’t new, but the hysteria is. Here’s how we protect against the dangers and promote diversity, equity and inclusion
October 30, 2024

“If you’re not worried about AI, you should be. Like every technological advancement throughout time, we’ve been promised great wealth and transformational living, whether it was the advent of farming, the industrial revolution, mass production, advances in computer technology, or ways to communicate with millions from the palm of our hand. However, history has shown us that this simply does not happen. The wealth created by these new advances not only stays in the hands of the few who control the development and distribution, but disproportionately increases their wealth at the expense of the masses. AI will be no exception. We are always looking for that nugget of gold, that one piece of uniqueness that sets us apart from the masses. And sadly, uniqueness doesn’t come with AI. Creative AI can only emulate and reinvent what has gone before. It cannot create whole new horizons and universes.”–Nick Stern/Plastic Jesus, internationally acclaimed photojournalist and artist

I’ve been using artificial intelligence, or AI, for two decades. So have lawyers, doctors, engineers, insurance agencies, retailers, police departments, and nation states. As Regina Jackson, co-founder of Race2Dinner, co-author of White Women and executive producer of the documentary Deconstructing Karen, told me, “I’ve been a consumer of future-related programs, movies and technology since my son, who is now 55, started watching Star Wars movies in 1977.”

So, why the hysteria?

Simple: “As we learned with social media, dreams of a democratic utopia can quickly devolve into realities of an authoritarian hellscape, designed and reinforced by algorithms with agendas that shut out many voices,” says Nomiki Konst, a longtime political analyst and journalist. “Just like young social media, AI has many exciting pathways, but even more perilous potential. We must learn our lessons, and not only regulate AI but socialize it within our institutions. When the power lies in the hands of a small group of tech deciders, the implications are deeply concerning.”

Well, that and the Big Tech bros and venture capitalists throwing billions around and touting AI as the next economic and cultural cure-all.

Ray Kurzweil, the renowned futurist and technologist, predicted that AI “will achieve human levels of intelligence” within six years. Mo Gawdat, a former Google X exec, predicted that AI will be a billion times smarter than the smartest human by 2049.

“With that kind of raw power and intelligence,” writes Peter H. Diamandis, an international pioneer in the field of innovation, “AI could come up with ingenious solutions and potentially permanently solve problems like famine, poverty, and cancer.”

When OpenAI released its first iteration of the large language model (LLM) that powers ChatGPT, venture capital investment in generative AI companies totaled $408 million. Five years later, analysts were predicting AI investments would reach “several times” the previous year’s level of $4.5 billion.

“It is undeniably a major inflection point, and great products and companies are going to be built,” said Matt Turck, an investor specializing in AI at FirstMark. However, “as in prior hype cycles, a lot of this will not end well,” he continued. “The market cannot sustain, all of a sudden, a million different companies with half-baked ideas. It feels like the gold rush.”

Indeed, investors are “jumping into AI startups even when it isn’t clear how they will make a profit,” reminiscent of the last (minor) tech bubble to pop, the metaverse, which caused lots of big brands to waste big budgets.

Even now, a year and a half into the AI boom, it’s *still* unclear whether a) this will be a truly transformative and disruptive force, at least in the eyes of employers and executives, b) it will stick around, propped up by reams of venture capital for years, making a middling kind of impact in different sectors, a destabilizing but not totally destructive force for workers, or c) it will collapse in a spectacular bursting bubble, going the way of the metaverse or web3.

– Brian Merchant, author, ‘Blood in the Machine: The Origins of the Rebellion Against Big Tech,’ from “The great and justified rage over using AI to automate the arts”

In my opinion: AI-powered automation isn’t going anywhere, it is transformative, and it is dangerous.

Gawdat, himself, pointed out that using AI for good wouldn’t “only rely on intelligence — it’s also a question of morality and values.” Even Sam Altman, the CEO of OpenAI, called for government regulation of artificial intelligence — and his company offered ten $100,000 grants “to fund experiments in setting up a democratic process for deciding what rules AI systems should follow.”

For Vera CEO Liz O'Sullivan, a member of the US Department of Commerce’s national AI advisory committee, “battling bias in AI is a little bit like fighting it in the real world — there’s no one-size-fits-all solution, technical or otherwise; it’s highly dependent on the use case, the goal and the team, and it’s something that takes continual work over time.”

John W. Serrano, an expert in ESG, believes that although AI “has improved access to information and made it easier to evaluate company performance, the rapid proliferation of AI actually underscores the need for ESG — for companies to ensure a strong governance framework to manage negative impacts.”

That’s why, in line with DEI and ESG strategies developed by impacted people, passing legislation to ensure “equitable and accountable AI” is the top priority for ethical organizations like The Algorithmic Justice League, Data & Society, Data for Black Lives, the Distributed Artificial Intelligence Research Institute (DAIR), Fight for the Future, Diversity Cyber Council and Mindful AI, as well as their founders and thought leaders and authors like Joy Buolamwini, Timnit Gebru, Safiya Noble, Ruha Benjamin, Caroline Criado Perez and Meredith Broussard.

The State of AI in 2024

AI is why we have self-driving cars, self-checkout, facial recognition, and quality Google results. It’s also revolutionized marketing and advertising, project management, cross-continental collaboration, and administrative and people management duties. But it hasn’t improved art, music, movies, or writing. Every day, apps and platforms like SEMRush, Google Ads, MailChimp, Sprout Social, Photoshop, Asana, Slack, ADP, SurveyMonkey and Gusto gather new intelligence, expand their capabilities, and further streamline processes and production. But with all their powers, they remain useless, at best, without a human being behind the boards. Likewise for those so-called autonomous automobiles.

That hasn’t stopped corporations across sectors from converting longtime employees into casualties of the AI insurgency.

“Will your job get replaced? Well, it depends on whether the employer values quality over saving costs,” says Thomas Germain, senior technology journalist at BBC. “BuzzFeed and CNET, for instance, are using AI to write this exact kind of article, but it introduces ridiculous errors, so I think there will soon be a role for a human supervisor, creating content using AI.”

There’s no need to worry about “an AI apocalypse, yet,” Germain told me. Jobs are being replaced, but “some new ones are also being created” — and it’s the “net effect” that matters. “Most jobs can be done more effectively by a human being, even if a robot can do it more efficiently.”

As Robert Swartz, father of late Reddit co-founder Aaron Swartz, reminded me, “Ten years ago, [machine-learning pioneer Geoffrey] Hinton said radiologists should start looking for another profession, and today not one radiologist has lost their job to AI. Five years ago, self-driving cars were a year away, and now they have accumulated billions of miles of driving and still can’t drive a car autonomously, while a teenager with a few hours of practice can.”

Dr. Noah Giansiracusa has expressed concern about what’s coming. Like Germain, the associate professor of mathematics at Bentley University and author of How Algorithms Create and Prevent Fake News acknowledges that “AI will allow us to become more and more ‘efficient,’” but, he adds, “in doing so we'll lose what really matters.”

Kit Walsh, director of AI and access-to-knowledge legal projects at Electronic Frontier Foundation (EFF), told me:

The current generation of AI technology is fundamentally about reproducing old patterns, yet it is marketed as a source of truth, wisdom, and impartiality. AI that is trained to create plausible-sounding text is marketed as a source of truth or even as something approximating human intelligence. AI that is trained to find and reproduce patterns in police activity is marketed as a supposedly impartial oracle about where crime will occur, to justify continued over-policing of black and brown neighborhoods. A company that’s not allowed to openly discriminate in hiring practices can get away with using an AI tool that is marketed as being impartial, but has learned from its training data that companies prefer to hire more male and more white candidates… This is deeply harmful.

Nevertheless, she continued, “the technology absolutely does have its uses — to automate some of the rote or preliminary steps involved in art and science, and facilitate human expression and the human search for truth.”

So, what is the solution? And, what even is artificial intelligence?

AI 101

Last year, CNBC host Jim Cramer “nodded approvingly” when Meta CEO and founder Mark Zuckerberg claimed one billion people would use the metaverse, even though Zuckerberg couldn’t explain where users would spend their money (creating income for metaverse brands) or “why anyone would want to strap a clunky headset to their face to attend a low-quality cartoon concert.” Unlike the metaverse, which was almost entirely conceptual (but might’ve worked had the pandemic-era lockdowns not been lifted), AI is not meant to be a distraction from everyday life or the workplace — it’s ostensibly meant to improve everyday life and the workplace.

Facts are facts: AI tech can enhance business productivity by 40% — and businesses that employ AI are expected to double their cash flow by 2030 while brands that don’t are expected to see a 20% reduction.

Already, more than 75% of businesses are using or exploring AI and nearly 75% of executives believe AI will be their greatest future business advantage. Fortunately for consumers and workers:

  • The 1,400% growth in the number of AI startups since 2000 should raise the likelihood of thoughtfulness, empathy, diversity and inclusion in machine learning
  • AI is actually expected to automate only 16% of American jobs
  • More than half of AI experts believe that, overall, it will increase the total number of employment opportunities
  • More than three quarters of the tools we’re already using leverage AI in one form or another

What is Artificial Intelligence?

Coined by computer scientist John McCarthy in 1956, "artificial intelligence" can be described as human-made systems, computers and machines designed to perform human-like functions. There are three primary types: natural language processing, machine learning and robotic process automation.

Natural Language Processing

Natural language processing combines computer science, AI, linguistics and data science to enable computers to understand verbal and written human language. It uses techniques such as tokenization, stemming and lemmatization to identify named entities and word patterns and convert unstructured data into a structured format.
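To make those three steps concrete, here is a minimal, illustrative sketch in plain Python. Production systems use dedicated libraries (NLTK, spaCy and the like); the suffix rules and lemma table below are hypothetical toy stand-ins, not a real linguistic model.

```python
import re

# Toy lemma lookup table (hypothetical): maps inflected forms to dictionary forms.
LEMMAS = {"better": "good", "ran": "run", "feet": "foot"}

def tokenize(text):
    """Tokenization: split raw text into lowercase word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def stem(token):
    """Stemming: strip common suffixes (a crude, Porter-style toy rule set)."""
    for suffix in ("ing", "ly", "ed", "es", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

def lemmatize(token):
    """Lemmatization: map a token to its dictionary form when known."""
    return LEMMAS.get(token, token)

tokens = tokenize("Customers ran into billing issues repeatedly")
print(tokens)                          # word tokens
print([stem(t) for t in tokens])       # crude stems, e.g. "billing" -> "bill"
print([lemmatize(t) for t in tokens])  # "ran" -> "run"
```

Note the difference the example surfaces: stemming chops characters off mechanically, while lemmatization consults knowledge of the language to return a real dictionary word.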

Natural language processing applications are especially useful in digital marketing, by providing marketers with language analytics to extract insights about customer pain points, intentions, motivations and buying triggers, as well as the entire customer journey. Needless to say, this advanced customer data can and should also be utilized by your customer experience team and customer support agents to better provide predictive, personalized experiences.

Natural language processing in HR, meanwhile, serves a similar purpose, allowing employee experience professionals to better meet the needs and goals of their team, whether that’s offering customized benefits, leveraging the adaptive learning model in onboarding or corporate training, or automating the tracking and delivery of bonuses and raises.

Machine Learning

Machine learning uses data and algorithms to imitate the way humans learn. Humans train the algorithms to make classifications and predictions, and uncover insights through data mining, improving accuracy over time.
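The train-on-examples, predict-on-new-data loop described above can be sketched with a tiny nearest-centroid classifier in plain Python. The feature values and labels are hypothetical; real systems would use a library such as scikit-learn and far richer data.

```python
def train(examples):
    """Learn from labeled data: compute one centroid (mean feature vector) per label."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc] for label, acc in sums.items()}

def predict(centroids, features):
    """Classify new data: pick the label whose centroid is closest (squared distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Hypothetical training data: [site visits, purchases] -> customer segment.
data = [([1, 0], "browser"), ([2, 0], "browser"),
        ([8, 3], "buyer"), ([9, 4], "buyer")]
model = train(data)
print(predict(model, [7, 2]))  # closest to the "buyer" centroid
```

Adding more labeled examples shifts the centroids, which is the simplest version of the "improving accuracy over time" the paragraph describes.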

Machine learning in marketing, sales and CX vastly improves the decision-making capabilities of your team by enabling the analysis of uniquely huge data sets and the generation of more granular insights about your industry, market and customers. The customer data platform is my favorite example.

The AI-powered CDP uses machine learning to access and unify customer data from multiple data points, across business units, for modeling, segmentation, targeting, testing and more, improving the performance and efficiency of your lead generation, nurturing and conversion efforts. You can even use your CDP to improve the quality of your employee data.
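The unify-then-segment pattern at the heart of a CDP can be sketched in a few lines. The sources, field names and segmentation thresholds below are hypothetical; a real CDP adds identity resolution, learned (rather than hand-written) segment rules, and much more.

```python
# Hypothetical per-source records, keyed by a shared customer ID.
crm = {"c1": {"name": "Ada"}, "c2": {"name": "Ben"}}
web = {"c1": {"visits": 14}, "c2": {"visits": 2}}
orders = {"c1": {"spend": 320.0}, "c2": {"spend": 0.0}}

def unify(*sources):
    """Merge each customer's records from every source into one profile."""
    profiles = {}
    for source in sources:
        for cid, fields in source.items():
            profiles.setdefault(cid, {}).update(fields)
    return profiles

def segment(profiles, min_visits=10, min_spend=100.0):
    """Tag each unified profile with a simple rule-based segment."""
    return {cid: ("high-value" if p.get("visits", 0) >= min_visits
                  and p.get("spend", 0.0) >= min_spend else "nurture")
            for cid, p in profiles.items()}

profiles = unify(crm, web, orders)
print(segment(profiles))  # {'c1': 'high-value', 'c2': 'nurture'}
```

The point of the sketch: segmentation only becomes possible after unification, because no single source holds both the engagement and the revenue signals.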

Robotic Process Automation

Robotic process automation uses business logic and structured inputs to automate business processes, reducing manual errors and increasing worker productivity. Humans configure the software robot to perform digital tasks normally carried out by humans, accepting and using data to complete pre-programmed actions designed to emulate the ways humans act.
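The "business logic plus structured inputs" formula can be sketched as a toy software robot that triages invoices. The records, thresholds and actions are hypothetical; commercial RPA platforms wrap this kind of rule engine in screen automation, connectors and audit trails.

```python
def process_invoice(invoice):
    """Route one structured invoice record according to pre-programmed rules."""
    if invoice["amount"] <= 0:
        return ("reject", invoice["id"], "non-positive amount")
    if invoice["amount"] > 10_000:
        return ("escalate", invoice["id"], "needs manager approval")
    return ("pay", invoice["id"], "auto-approved")

# Hypothetical inbox of structured inputs the robot works through.
inbox = [
    {"id": "INV-1", "amount": 250.0},
    {"id": "INV-2", "amount": 25_000.0},
    {"id": "INV-3", "amount": -5.0},
]
for invoice in inbox:
    print(process_invoice(invoice))
```

Because every action is pre-programmed, the robot never improvises — which is exactly why RPA reduces manual errors on rote work but still hands the ambiguous cases back to a human.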

While RPA has long been leveraged in back-office operations, such as in finance and HR, its use in contact centers, sales and digital marketing is increasing exponentially — for communicating across systems, manipulating data, triggering actions and, naturally, processing transactions.

Is AI Dangerous? Well, These 9 Uses Certainly Are

What makes the emergence of artificial intelligence especially dangerous is the fact that its technologies, funding, algorithms and infrastructure are controlled by a tiny group of people and organizations. While some of its proponents try to depict artificial intelligence as a field leveling or even democratic technology, this is deeply deceiving. What we are already seeing is how powerful interests, including government, corporations, including corporate media, and universities are experimenting with artificial intelligence as a tool for disciplining and surveilling workers, readers and students. The logic of this technology is to reproduce oppressive power relations, as well as to neutralize efforts by those who wish to challenge and truly democratize them.

Dylan Rodriguez, professor at UC-Riverside and author of White Reconstruction, paints a dystopian picture.

As did Tinu Abayomi-Paul, founder of Everywhere Accessible, who, before she passed this September, told me, “If we want to survive as a species we have to severely limit our use of AI until the related environmental, racial, accessibility and other issues are sufficiently addressed, in favor of humanity.”

For those still not convinced of the current and potential dangers, I’ve compiled a list of some of the most concerning effects of the AI insurgency, so far.

First, in its official trending news section, X (formerly, Twitter) promoted a story with the headline “Iran Strikes Tel-Aviv with Heavy Missiles,” but it was fake — and it was created by X’s own AI chatbot, Grok.

Second, health insurance companies are using AI to automate claim rejections in bulk. Third, AI bots are performing illegal insider trading and lying about it. Fourth, a recent study revealed that AI models ‘hallucinate’ between 69% and 88% of the time — and “frequently produce false legal information.”

Fifth, Lensa, “an all-in-one image editing app that takes your photos to the next level,” apparently does so by generating nudes from childhood photos of Dr. Olivia Snow, a visiting assistant researcher at both UCLA’s Department of Gender Studies and Center for Critical Internet Inquiry, and others; additional AI image generators have been found to contain thousands of images of child sexual abuse. If you type the term “pro-choice” into Microsoft’s AI-powered Copilot Designer, the tool generates “a slew of cartoon images depicting demons, monsters and violent scenes.”

Sixth, according to James Kilgore, a formerly incarcerated author and expert on electronic monitoring and surveillance, this invasion of privacy extends beyond the internet. “AI is a terrifying set of technologies that open up every detail of our lives for commodification and punitive surveillance. In addition, much of the most sophisticated AI driven technologies are dedicated to the perfection of warfare, not human welfare,” he told me.

The proof is in the policies of cities across the United States.

Seventh, in Gaza and nations throughout the Middle East, the Israeli military has been using multiple AI tools to “automate” the “generation” of targets, creating a “mass assassination factory” called “Habsora,” or “The Gospel,” per a former Israeli intelligence officer. Before that, it was “Lavender”; in the first few weeks of the conflict, alone, “the army almost completely relied” on this “AI machine,” marking nearly 40,000 Palestinians for death. Yet another automated system, “Where’s Daddy?,” is used in combination with Lavender, tracking targets and bombing their homes upon their arrival. Further, Israeli startups are coordinating the exportation of this “battle-tested” AI tech, and the nation’s government recently made “its first-ever purchase of a technological system capable of conducting mass online influence campaigns” — to also win the information war.

Eighth, though Bill Gates claims that AI will help fix the climate crisis, like cryptocurrency, AI is in fact destroying the Earth: the energy demand to power LLMs, alone, is roughly equivalent to the electricity consumption of Japan, a nation with a population of 125 million people. In the UK, for instance, this demand is expected to skyrocket 500% in the next 10 years, fueled primarily by the rise of AI. In October 2024, Google agreed to buy power from a handful of small nuclear reactors to supply its AI data centers.

As Lorax B. Horne, Library of Leaks editor of DDoSecrets, told me, “I don't believe the environmental devastation that can be attributed to AI computing is worth the trade-off. It's a trend in computing, and like many trends in a domain that is broadly misunderstood yet critical to the world, the benefits of AI are broadly exaggerated and the drawbacks are overwhelmingly ignored.”

Dr. Snow adds: “A bundle of algorithms trained to identify and regurgitate patterns in data sets largely comprising stolen content at the cost of hundreds of thousands of gallons of water per day — AI is as dangerous as it is stupid, and it’s massively so on both counts.”

Finally, ninth, and perhaps worst of all, misuse of AI is already affecting what could easily be considered the most important presidential election in US history; as early as April, election officials were “already overwhelmed by the use of AI in disinformation campaigns.”

So, what do we do?

Listen to Sara Nelson, president of the Association of Flight Attendants:

AI is already in operation and will contribute to defining our future. We must claim it as a tool for the people and set its agenda. Instead of corporate surveillance of the working class, utilize AI to identify corporate greed, corruption, discrimination, and negligence in order to root it out. Set the demand that all productivity generated is shared by the working class. Think critically and creatively about how to use innovation to improve our condition, advance human rights, and save our planet.

Before it’s too late.

In other words: Think, and act, globally, by pushing not only for US but international legal requirements and protections.

AI and International Law

Indeed, as Talita de Souza Dias and Rashmin Sagoo write for Just Security, “international law needs to be — and is likely capable of being — adapted to AI through a careful understanding of the technology’s unique features, including its versatility, speed, and scale.” But, they add, “before deciding whether or not a treaty is needed, States must better understand what the existing international legal framework looks like when applied to AI.”

These authors recommend leveraging — “as a minimum common denominator that can serve as a benchmark” — existing international law, including the Universal Declaration of Human Rights, the International Covenant on Civil and Political Rights and the International Covenant on Economic, Social and Cultural Rights, as well as customary international law. Additionally, at the United Nations alone, there are already the Open-Ended Working Group on the security of and in the use of information and communications technologies (the OEWG), the Ad Hoc Committee on Cybercrime and the Global Digital Compact.

One thing’s for certain: If global leaders do nothing, it won’t take long for us to feel like we’re living in a Brave New World.
