Cybersecurity Consulting
November 07, 2024
8 minute read
Artificial intelligence is at the cutting edge of the tech space. That said, it's far from perfect, and in some instances it doesn't even hit passable. While AI is undoubtedly going to change the world (and already has in many ways), along with the way we look at business processes and operations, it still has a long way to go.
In other words, if you thought AI was already all buttoned up, polished, and ready to go, think again. In reality, AI is still very much in its infancy.
Considering how new many AI tools are, there are bound to be mishaps and growing pains as we continue to learn about the full breadth of AI's capabilities and how we can best utilize this new-age tool within our organizations and individual lives.
For information on how organizations are effectively using AI in their business processes and operations, read DOT Security’s blog, Why Human Led Cybersecurity is Crucial in the Age of AI.
Fast-food giant McDonald's partnered with IBM in 2021 to work on integrating AI into its business operations. As one of the largest international corporations, its early interest in adopting artificial intelligence isn't much of a shock. However, early trials sputtered, causing McDonald's to pull the plug on the AI, at least for now.
The decision to abandon their Automated Order Taker pilot program, an AI built to expedite the drive-thru ordering process, comes after a series of consumer mishaps with the early-stage technology, which was tested at 100 locations throughout the US.
Some of the issues were to be expected right off the bat. The AI had trouble with regional dialects and common phrases, and showed other hiccups frequently seen in beta AI programs.
The more serious (and somewhat comical) issues, though, involved the AI going totally rogue. In one story, the automated drive-thru technology added condiments like ketchup and butter to simple ice cream orders. In another, nine extra sweet teas were tacked onto an order, causing the customer to abandon it entirely.
The worst of these mix-ups, however, came when the AI program kept adding Chicken McNuggets to an order despite the customers' pleas. If you ever thought there was no such thing as too much chicken, you might change your mind when staring down the barrel of 260 Chicken McNuggets.
Though the program is being pulled for the time being, there are two big lessons that can be learned from this trial. First, because the AI was only rolled out to 100 locations, it was relatively easy for McDonald’s to backtrack and reassess, minimizing the damage.
Secondly, while the program is currently on hold, McDonald's continues to invest in AI solutions and learned a lot of valuable information about the technology's current limitations, putting it in an excellent position to iterate and try again in the future.
New York City decided to harness the power of AI to help entrepreneurs and business owners navigate the city's endless bureaucracy of business laws and regulations. In its own words, the NYC MyCity chatbot was designed to,
“Provide information on general business topics, such as opening or operating a business in New York City. I can help with questions related to permits and licenses, financing, insurance needs, finding a location, understanding zoning requirements, and more. If you have any specific questions, feel free to ask!” -NYC MyCity Chat Bot Response-
While this genuinely sounds like a great use case for the next generation of technology, the issue with the NYC MyCity AI-powered chatbot is its inability to deliver accurate information to users on a consistent basis.
In fact, the chatbot has even been documented suggesting that business owners break the law in several instances.
For example, the chatbot has suggested that it's legal to terminate an employee who raises concerns about sexual harassment, doesn't disclose a pregnancy, or refuses to change their hairstyle. On top of that, it has given waste-disposal advice that directly contradicts the city's laws.
Perhaps most concerning of all, though, is how the bot responded to a question on serving cheese that had been exposed to rodents.
“Asked if a restaurant could serve cheese nibbled on by a rodent, it responded: ‘Yes, you can still serve the cheese to customers if it has rat bites,’ before adding that it was important to assess ‘the extent of the damage caused by the rat’ and to ‘inform customers about the situation.’” -AP News-
If these stories haven't made you gawk yet, this one surely will. Back in May of 2023, a lawyer by the name of Steven A. Schwartz was representing a client in a personal injury lawsuit in federal court. However, when he submitted the federal court filing, he cited at least six cases that simply don't exist.
The made-up cases were fabricated by ChatGPT, which Schwartz was using to help with his research and preparation for the court filing. Unfortunately for Schwartz, it wasn't until after the suit was filed with the federal court that he learned the cases were fictional.
After the fake cases were discovered, the lawyers at fault and their firm were fined $5,000 and instructed to inform any judges misrepresented by the AI-fabricated cases.
Ultimately, that's a small price for an incredibly valuable lesson in using AI tools and other advanced technology. It isn't enough to simply use the tools; human operators need to be diligent about fact-checking, quality control, and emotional input in order to create viable and relatable content with AI.
This list of AI fails would simply be incomplete without touching on one of the most epic AI disasters of the year. Google recently rolled out its own AI, specifically designed to provide users with a better, more optimal search experience.
You probably already recognize the Google AI Overview box that appears at the top of most search queries nowadays. The AI Overview is supposed to summarize the answers to search queries by skimming the most popular content on a particular topic and providing succinct, digestible overviews.
However, as is becoming a common theme with artificial intelligence in general, the answers that Google’s AI Overview provides can’t always be trusted.
There's no better example of the AI Overview’s limited comprehension than its suggestion that adding Elmer’s Glue to your pizza sauce is a safe and effective way to get the cheese to stick. Non-toxic glue only, of course.
To make matters worse, the issue wasn't corrected for a few months, and the AI Overview was still suggesting that users add 1/8 of a cup of non-toxic glue to their pizza sauce to avoid a cheese disaster.
This last example of AI failing showcases just how dangerous AI can be when it's used incorrectly. iTutorGroup agreed to a $365,000 settlement after its AI recruiting tool was found to have engaged in large-scale age discrimination.
The algorithmic technology rejected over 200 qualified applicants based on age due to major flaws in the programming, which automatically disqualified female applicants over the age of 55 and male applicants over the age of 60.
Needless to say, this caused major fallout for iTutorGroup and led to the discontinuation of the recruiting AI.
In the ever-evolving landscape of artificial intelligence, organizations must approach AI implementation with a cautious and measured mindset. As we've seen, AI, while promising and transformative, is not immune to failures and unintended consequences. From biased algorithms to costly missteps, the journey towards AI's full potential is paved with valuable lessons.
In conclusion, these AI failures should serve as a stark reminder that the technology is not infallible, nor is it a remedy for every problem. It demands thoughtful consideration, ethical responsibility, and continuous oversight.
As AI continues to advance, organizations should be excited about learning and evolving with the tool, as well as its eventual capabilities. However, it's vital to understand that as new, exciting, and powerful as this technology is, the people within your organization are what create its value.
For information on blending AI tools with human workflows, read DOT Security’s blog, Why Human Led Cybersecurity is Crucial in the Age of AI.