Major corporations are increasingly discovering that artificial intelligence systems can fail in spectacular and costly ways, costing them billions of dollars and damaging their reputations.
IBM’s Watson for Oncology stands as one of the biggest AI disasters in healthcare. The company spent $4 billion developing the system before shutting it down in 2023. Watson gave unsafe treatment recommendations and couldn’t adjust to different hospital practices. It suggested cancer therapies that were inappropriate or unavailable in certain countries.
MD Anderson Cancer Center abandoned Watson after spending tens of millions with minimal results. IBM sold the entire Watson Health division in 2022 for far less than its original investment.
Amazon found that its AI hiring system discriminated against women applying for technical roles. The company built the algorithm in 2014 on ten years of resume data, most of it submitted by men. The model learned those historical patterns and downgraded applications containing the word "women's," as in "women's chess club captain."
Amazon scrapped the biased model after internal reviews exposed the problems.
Google faced embarrassing computer vision failures when its Photos app labeled Black people as gorillas in 2015. Google Nest cameras also misidentified dark-skinned people as animals. These errors happened because the training datasets contained too few photos of Black people.
Zillow's AI-powered home-buying program produced a $304 million loss. The company launched the service in 2018, using its price-prediction algorithm to buy and resell homes. By September 2021, Zillow had bought 27,000 homes but sold only 17,000. The algorithm consistently overpaid for properties.
Zillow shut down the program immediately and laid off 25% of its staff.
Air Canada's customer-service chatbot gave a passenger incorrect information about bereavement fare refunds. A Canadian tribunal ruled the airline responsible for all information on its website, including automated responses. The company acknowledged that the chatbot had contradicted its official policy but initially refused to honor the discounted fare.
This ruling set a precedent for corporate liability for AI-generated communications.
ShotSpotter's AI-powered gunshot detection technology contributed to a wrongful arrest when its inaccurate data was used as evidence in court; the defendant spent nearly a year in jail before prosecutors dismissed the case. The episode demonstrated how flawed AI systems can threaten individual liberties and due process rights.
These failures show that rushing AI into production without adequate testing creates expensive problems. Algorithmic trading systems have triggered trading halts, fueling market volatility and shaking investor confidence in automated finance.
