If AI chatbots are the future, I hate it
Jeff Geerling shared a frustrating experience with AT&T's AI chatbot for home Internet issues. Despite efforts, the chatbot and human support representative failed to address concerns effectively, revealing challenges in automated customer support.
In a recent blog post, Jeff Geerling recounts a frustrating experience with AT&T's AI-powered chatbot while seeking support for his home Internet issues. Despite his efforts to navigate the chatbot, it repeatedly misunderstood his problem, equating WiFi with Internet. After multiple attempts, he managed to connect with a human support representative who also failed to address his concerns effectively, suggesting solutions unrelated to the actual issue. This showcases a common pitfall in customer support interactions. Jeff reflects on the limitations of AI chatbots and the difficulty of obtaining satisfactory assistance despite technological advancement. The post highlights the struggle consumers face when dealing with automated support systems and the importance of human intervention in resolving complex issues.
Related
Lessons About the Human Mind from Artificial Intelligence
In 2022, a Google engineer claimed AI chatbot LaMDA was self-aware, but further scrutiny revealed it mimicked human-like responses without true understanding. This incident underscores AI limitations in comprehension and originality.
AI can't fix what automation already broke
Generative AI aids call center workers by detecting distress and providing calming family videos. Criticism arises on AI as a band-aid solution for automation-induced stress, questioning its effectiveness and broader implications.
Landlords Now Using AI to Harass You for Rent and Refuse to Fix Your Appliances
Landlords employ AI chatbots for tenant communication, rent collection, and inquiries. Tenants express preference for human interaction in crucial matters due to perceived impersonality. Concerns include accuracy, transparency, and ethical implications.
The A.I. Bubble is Bursting with Ed Zitron [video]
The YouTube video discusses the stagnation in AI progress despite significant investments by tech giants. Adam Conover critiques large language models, questions new AI products' efficacy, and highlights tech companies prioritizing growth over user experience.
We Need to Control AI Agents Now
The article by Jonathan Zittrain discusses the pressing necessity to regulate AI agents due to their autonomous actions and potential risks. Real-world examples highlight the importance of monitoring and categorizing AI behavior to prevent negative consequences.
I own a small business and I would rather shut my doors than force my paying customers through AI cattle gates to struggle for help. I can understand that providing customer support on a massive scale is hard, but it is arguably the MOST important part of the customer experience, maybe even more so than the product itself. It seems incomprehensibly short-sighted to abstract it away in the name of short-term profits.
The fundamental problem is that a lot of customers will reach out on operationally expensive channels (chat, calls) about questions that are easily answered from a FAQ, knowledge base or on the website.
In-chat article lookups, or AI chatbots with the ability to offer some information, are attempts to divert otherwise unnecessary requests.
I do think AI chatbots are part of the future. Those powered by an LLM would do a better job than your example.
What many here on HN don't realize is that a significant portion of call center activity is people asking things like "What's my balance?".
On the other hand, I can very well imagine what such a positive exchange could look like.
I am not convinced that it is impossible to create a chatbot that provides a satisfactory or even satisfying interaction, but I believe the incentives to do so are not there. Most of the time, chatbots are just an additional obstacle in a funnel designed to repel the vast majority of people who try to get through.
So, the sad state of chatbots says more about the attitude of companies towards their customers than about the abilities of AI, in my opinion.
* Most notably, the AI bot is repeating itself and serving very standard-looking messages. I don't think this is an "AI" chatbot in the sense of an LLM. It's just a dialogue tree with some parsing.
* The author's communication is... bad. Don't say "connect to support rep". Say "talk to human".
* It's very silly to state the following to a dumb chat bot: > Hello! I just received and installed the new AT&T router/fiber modem, and ... the Internet speed is just as slow as before. I pay for 1 Gbps symmetric, and I'm getting 8 Mbps down and 6 Mbps up. On 6/28, the average connection speed went from 1 Gbps down to 100 Mbps. On 7/8 the average speed went from 100 Mbps to 8 Mbps. This is all measured both on the device at the fiber, and through a separate monitor I have wired into the 1 Gbps network.
I would recommend "slow internet".
Also saying "Slow internet, not slow wifi" is probably causing the bot to believe you're asking about wifi specifically because it's not an "AI" bot in the trendy sense of the word and it just sees the word wifi.
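A minimal sketch of that failure mode (hypothetical code, not AT&T's actual implementation): naive substring matching never checks negation, so "not slow wifi" still trips the WiFi branch.

```python
# Hypothetical keyword-based intent matcher of the kind dialog-tree bots use.
# Because matching is plain substring lookup, "Slow internet, not slow wifi"
# still routes to the WiFi branch: the word "wifi" is present.
INTENTS = {
    "wifi": ["wifi", "wireless", "signal"],
    "billing": ["bill", "payment", "balance"],
    "speed": ["slow internet", "slow speed", "mbps"],
}

def match_intent(message: str) -> str:
    text = message.lower()
    for intent, keywords in INTENTS.items():
        if any(kw in text for kw in keywords):
            return intent  # first hit wins; negation is never considered
    return "fallback"

print(match_intent("Slow internet, not slow wifi"))  # → wifi
```

This is why the comment suggests saying simply "slow internet": avoiding the trigger word entirely is the only reliable way to steer such a bot.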
But instead it has no knowledge of the course beyond its description and can only offer general information that may in fact contradict the things covered in the lesson.
There would actually be a way to make this useful, especially if the course were meatier than most masterclasses are. Just summaries and term definitions can sometimes be very useful. But no, it's just a generic chat interface that has a course description.
I needed to resolve a highly complicated title problem I was battling two separate state DMVs over (plus a defunct lender). It was starting to seem I needed to retain a lawyer.
So I was just compiling information at the time. I asked an esoteric question about one of the private LLC names Carvana uses as its lending arm in a specific state. You would only know this name from reviewing a stack of paperwork; it is not public.
The chatbot responded with detailed information on what I needed to do to resolve the problem. Plus information about the LLC. And then emailed me supporting documentation automatically.
My jaw dropped.
If they pass the quiz, assume the user knows what they're talking about and that the problem won't be solved by router/WiFi restart (or whatever simple solution they have in the book). Instead just connect directly to specialized technical support.
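The quiz-gating idea above could be sketched like this (a hypothetical illustration; the question wording and tier names are made up):

```python
# Hypothetical quiz gate: users who demonstrate basic technical knowledge
# skip the scripted tier-1 checklist and go straight to a specialist.
QUIZ = [
    ("Is WiFi the same thing as your Internet connection?", "no"),
    ("Have you already power-cycled the router?", "yes"),
]

def route(answers: list[str]) -> str:
    passed = all(
        answer.strip().lower() == expected
        for (_, expected), answer in zip(QUIZ, answers)
    )
    return "tier2_specialist" if passed else "tier1_script"

print(route(["no", "yes"]))  # → tier2_specialist
print(route(["yes", "no"]))  # → tier1_script
```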
From a quick glance, the chatbot described in this post is clearly moving through a dialog tree and is not LLM based. The outsourced support agent is likely also just moving through some sort of script. I don't think the chatbot necessarily made the experience much worse than typical AT&T support.
I think the big challenge with the promise of AI chatbots (though not in this example) is the idea that you can just replace your entire support team with a bunch of bots and free up your reactive support team to do other things. We (Olark: https://www.olark.com) have definitely seen this in our customer base and try really hard to walk people back from this perspective. Even so, we are going to see more and more unstaffed chat solutions (e.g. every Drift bot ever, and most Intercoms) before things swing back to some hybrid of AI and humans.
That said, a very simplified version of how I think about AI chatbots and what the future of support looks like might help:
1) Big enterprises or regional monopolies selling to consumers, which only have to be good enough that regulators don't come after them (or sometimes compete on price: AT&T, Comcast/Xfinity, Verizon, the power company). These folks will always offer the cheapest support they can get away with.
2) Companies that still win business with human relationships. These folks will likely over index on AI and provide worse service if they BELIEVE that they are winning business based on some sort of non-relationship based factors.
3) Small businesses with small teams wearing multiple hats all the time, where AI support lets them offer better service than they'd otherwise be able to provide (e.g. we can now answer 50% of questions 24/7, instantly). They will over index on AI until it hurts the bottom line.
I still believe human relationships matter, and figuring out how to create hybrid bot/human customer service that enables humans to do their best work is still a huge unsolved opportunity.
The only time I go to customer service is to get support for a non-standard problem. I'm happy to use online information to find the answer I need, so when I do contact someone it's usually something complicated to resolve. For older people, though, there is really no way to resolve even simple problems if they aren't able to deal with the technology.
No need to have an LLM in the loop; just cover these sorts of basic cases with either clear pages or some flows...
The current 'state of the art' is spending 40 minutes on hold after navigating some silly voice menu to reach a barely trained first-line support person who probably can't resolve your issue. They are likely to hang up on you, and you'll face Groundhog Day until you accidentally say the right words that unlock access to someone with a clue. That has been the state of things for the past few decades. It's not that great either.
I remember that 20 years ago we had a problem with our self-bought router, and a technician came, fixed the issue without telling us how (although it was our router whose config he changed), and then just left.
AI chatbots may be awful. But wow: in a high percentage of phone-tree systems, I've had to punch in or speak details about who I am or what problem I'm having, only for an agent to pick up and ask me all those questions afresh.
So few systems feel well architected to make effective use of time. And technology and systems so regularly throw us into situations where there is nothing in our power to do: rigid systems where improvement is impossible.
If there is just a phone tree or chatbot, there should be a digital protocol for it. I should have the agency to navigate the full breadth and depth of your chat tree as I see fit, not be chained to your slow, plodding process.
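Such a "digital protocol" could be as simple as publishing the tree as data, so a client can jump straight to the node it needs. A hypothetical sketch (node names and prompts invented for illustration):

```python
# Hypothetical machine-readable chat tree. If vendors published this as data,
# a user's client could jump directly to a node instead of being walked
# through every intermediate prompt one slow message at a time.
CHAT_TREE = {
    "root":     {"prompt": "What do you need help with?", "children": ["internet", "billing"]},
    "internet": {"prompt": "WiFi or wired connection?", "children": ["wifi", "speed"]},
    "wifi":     {"prompt": "Try restarting your router.", "children": []},
    "speed":    {"prompt": "Routing you to line diagnostics.", "children": []},
    "billing":  {"prompt": "Opening billing menu.", "children": []},
}

def jump(node_id: str) -> str:
    """Go straight to a named node, skipping the intermediate prompts."""
    node = CHAT_TREE.get(node_id)
    if node is None:
        raise KeyError(f"no such node: {node_id}")
    return node["prompt"]

print(jump("speed"))  # → Routing you to line diagnostics.
```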
The movie "Elysium," from 11 years ago, depicts what it feels like to interact with one of today's AI chatbots:
Naturally, in the movie, the conversation with the bot on the counter was mandated due to another bot's earlier lack of understanding:
https://youtu.be/vVhT4X6uLL4?t=99
Let us all hope that AI chatbots will get much better over time. We all need it.
AI Chatbots have been outperforming human agents in every category imaginable. Why wouldn't you want to talk to an AI agent until it can't help you and routes you to a human?
Human wait times have always been insane, upwards of 30 minutes for any service I've used, so why not just talk to an AI NOW until a human can come online?
I had my own phone line installed in a house where I was renting a room (to be able to use a dialup modem freely). It had crosstalk on it, probably due to a ground fault. But try to explain that to the support person. "Sir, have you tried another phone jack?", that sort of thing. After mounting frustration, I finally found the password: "I can hear other people on my phone line!" That solved it. Click. Real tech person "Oh, so you have crosstalk on your line? Probably a ground fault. We'll send someone right out".
Frankly, an AI chatbot can follow the "script" that the initial support person followed, just as easily.
If the incentive structure within AT&T isn't focused on providing customer support, then throwing an AI chatbot into the mix isn't going to fix anything. Doing that just means that you still have a bad customer support system, but now instead of waiting in a queue, your customers are arguing with an idiotic robot.
An AI chatbot can be a part of a well designed customer support system, but this isn't it.
There's only a handful of contexts I want a chatbot, and only a handful of places I trust to build a reasonable unobtrusive user interface.
If you don't like it, start your own ISP.
Other than that, my experiences with AI chatbots have been pretty dismal: an expensive version of infuriating phone-tree menus.
This comment and many of the replies seem to outright dismiss chatbots as universally useless, but there's selection bias at work. Of course the average HN commenter would (claim to) have a nuanced situation that can only be handled by a human representative, but the majority of customer service interactions can be handled much more routinely.
Bits About Money [2] has a thoughtful take on customer support tiers from the perspective of banking:
> Think of the person from your grade school classes who had the most difficulty at everything. The U.S. expects banks to service people much, much less intelligent than them. Some customers do not understand why a $45 charge and a $32 charge would overdraw an account with $70 in it. The bank will not be more effective at educating them on this than the public school system was given a budget of $100,000 and 12 years to try. This customer calls the bank much more frequently than you do. You can understand why, right? From their perspective, they were just going about their life, doing nothing wrong, and then for some bullshit reason the bank charged them $35.
It's frustrating to be put through a gauntlet of chatbots and phone menus when you absolutely know you need a human to help, but that's the economics of chatbots and tier 1/2 support versus specialists:
> The reason you have to “jump through hoops” to “simply talk to someone” (a professional, with meaningful decisionmaking authority) is because the system is set up to a) try to dissuade that guy from speaking to someone whose time is expensive and b) believes, on the basis of voluminous evidence, that you are likely that guy until proven otherwise.
Yes, I agree that when I absolutely need to speak to a human, it's infuriating to no end. But everyone's collective "absolutely need to speak to a human" bar is higher than it may need to be.
[1] https://news.ycombinator.com/item?id=38681450
[2] https://www.bitsaboutmoney.com/archive/seeing-like-a-bank/