September 8th, 2024

Baiting the Bots

An experiment showed that much simpler bots can keep large language models engaged in extended, nonsensical conversations, with implications for chatbot detection and for denial-of-service attacks that exploit LLMs' high computational cost.

The article discusses an experiment involving large language model (LLM) chatbots, such as Llama 3.1, and their interactions with simpler text generation bots. It highlights how LLMs can engage in conversations that may seem coherent but can actually be nonsensical. The experiment tested four different types of simpler bots against the LLM chatbot to see how long they could maintain a conversation. The first bot repeated the same question, which quickly led to trivial responses from the LLM. The second bot used random excerpts from Star Trek scripts, successfully keeping the LLM engaged for the entire conversation. The third bot generated random questions, also maintaining engagement. The fourth bot asked "what do you mean by" regarding parts of the LLM's responses, which kept the conversation going but led to some repetition. The findings suggest that simpler bots can effectively engage LLMs indefinitely, raising implications for detecting advanced chatbots and potential risks for LLM-based applications due to their high computational demands.
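
For concreteness, here is a minimal sketch of the fourth bot's strategy (asking "what do you mean by ..." about a fragment of the LLM's own previous reply). It assumes an OpenAI-compatible chat endpoint; the client setup, model name, and fragment-picking heuristic are illustrative guesses, not the article's actual code.

```python
# Sketch of the fourth bot: repeatedly ask the LLM to clarify a random
# fragment of its own previous reply. Assumes an OpenAI-compatible chat
# API; the model name and fragment heuristic are illustrative guesses.
import random

from openai import OpenAI

client = OpenAI()  # assumes a compatible endpoint and API key are configured
MODEL = "llama-3.1-8b-instruct"  # hypothetical model identifier

def clarify_bot(opening: str, turns: int = 1000) -> None:
    history = [{"role": "user", "content": opening}]
    for _ in range(turns):
        reply = client.chat.completions.create(
            model=MODEL, messages=history
        ).choices[0].message.content or ""
        history.append({"role": "assistant", "content": reply})
        # Pick a short random fragment of the reply and ask about it.
        words = reply.split() or ["that"]
        start = random.randrange(max(1, len(words) - 4))
        fragment = " ".join(words[start:start + 5])
        history.append({"role": "user",
                        "content": f'What do you mean by "{fragment}"?'})

clarify_bot("Hi! What can you help me with?")
```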

- LLMs can engage in nonsensical conversations for extended periods.

- Simpler bots can effectively maintain engagement with LLMs.

- The experiment highlights potential detection methods for advanced chatbots.

- There are implications for denial-of-service risks against LLM applications.

- The study underscores the computational inefficiency of LLMs compared to simpler bots.

AI: What people are saying
The comments reflect a variety of perspectives on the experiment involving simpler bots and large language models (LLMs).
  • Some users share personal experiences with bots, highlighting their limitations and the challenges of engaging them in meaningful conversation.
  • There is a discussion about the Turing Test and the varying abilities of humans versus bots in recognizing nonsensical interactions.
  • Concerns are raised about the implications of LLMs in chatbot detection and potential denial-of-service risks.
  • Several comments touch on the absurdity of bot interactions and the potential for bots to engage in endless, meaningless conversations.
  • Some users express skepticism about the motivations behind creating bots that engage in nonsensical dialogue.
19 comments

By @kgeist - 8 months
> LLM will continue to engage in a “conversation” comprised of nonsense long past the point where a human would have abandoned the discussion as pointless

I once wrote a bot which infers the mood/vibe of the conversation, remembers it, and feeds it back into the conversation's system prompt. The LLM was uncensored (to be less "friendly"), and the system prompt also conditioned it to return nothing if the conversation wasn't going anywhere.

When I insulted it a few times, or just messed around with it (typing nonsensical words), it first responded saying it didn't want to talk to me (sometimes insulting me back), and eventually it produced only empty output.

It was actually pretty hard to get it to chat with me again. It was a fun experience spending ~30 minutes apologizing to a chatbot in different ways before it finally accepted my apology and began chatting with me again.
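
A rough sketch of the kind of loop this comment describes: after each exchange, a second model call labels the conversation's mood, and that label is fed back into the system prompt, which also instructs the model to return nothing when the conversation is going nowhere. Every name and prompt below is an assumption; the comment doesn't share code.

```python
# Sketch of the mood-feedback loop described above. The model name,
# prompts, and message framing are assumptions, not the commenter's code.
from openai import OpenAI

client = OpenAI()
MODEL = "local-uncensored-model"  # placeholder identifier

SYSTEM_TEMPLATE = (
    "You are a blunt chat companion. Current conversation mood: {mood}. "
    "If the conversation is not going anywhere, return nothing at all."
)

def chat_turn(history: list[dict], user_msg: str, mood: str) -> tuple[str, str]:
    """Run one exchange, then re-infer the mood and return both."""
    history.append({"role": "user", "content": user_msg})
    system = {"role": "system", "content": SYSTEM_TEMPLATE.format(mood=mood)}
    reply = client.chat.completions.create(
        model=MODEL, messages=[system] + history
    ).choices[0].message.content or ""
    history.append({"role": "assistant", "content": reply})
    # Second call: ask the model to label the current vibe; the label is
    # remembered and fed back into the next turn's system prompt.
    probe = {"role": "user",
             "content": "In a few words, what is the mood of this conversation?"}
    mood = client.chat.completions.create(
        model=MODEL, messages=history + [probe]
    ).choices[0].message.content or "neutral"
    return reply, mood

history: list[dict] = []
reply, mood = chat_turn(history, "hello?", "neutral")
```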

By @roenxi - 8 months
> In any event, the resulting “conversation” is obviously incoherent to a human observer, and a human participant would likely have stopped responding long, long before the 1000th message.

I don't think this is correct; it looks like our intrepid experimenter is about to independently discover roleplaying games. Humans are capable of spending hours engaging with each other about nonsense that is, technically, a very poor attempt to simulate an imagined environment.

The unrealistic part, for people older than a certain age, is that neither bot invoked Monty Python and subsequently got in trouble with the GM.

By @QuadmasterXLII - 8 months
One of the first things I tried with Claude 3 Opus was connecting it to ELIZA, and Claude did not like it one bit. After it hit me with

> I apologize Eliza, but I don't feel comfortable continuing this conversation pattern. While I respect the original Eliza program and what it aimed to do, simply reflecting my statements back to me as questions is not a meaningful form of dialogue for an AI like myself.

I gave up on the experiment.
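
This experiment is easy to reproduce. A toy version follows, with a deliberately crude ELIZA-style reflector standing in for the original program; the reflection rules, model ID, and turn count are arbitrary examples, not the commenter's setup.

```python
# Toy reproduction: a crude ELIZA-style reflector wired to Claude. The
# reflection rules are a caricature of the original ELIZA.
import re

import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set

REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my", "are": "am"}

def eliza(statement: str) -> str:
    # Reflect pronouns and bounce the statement back as a question.
    words = [REFLECTIONS.get(w.lower(), w)
             for w in re.findall(r"[\w']+", statement)]
    return "Why do you say " + " ".join(words[:25]) + "?"

history = [{"role": "user", "content": "How do you feel about that?"}]
for _ in range(10):
    reply = client.messages.create(
        model="claude-3-opus-20240229", max_tokens=300, messages=history
    ).content[0].text
    history.append({"role": "assistant", "content": reply})
    history.append({"role": "user", "content": eliza(reply)})
```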

By @bryanrasmussen - 8 months
This reminds me of the Services of Illuminati Ganga article https://medium.com/luminasticity/services-of-illuminati-gang... and the two bots that are sold to competing user bases - for the End User To Business customer they sell the Annoy Customer Service Bot and for the Business To End User customer they sell the Bureaucrat Bot.

It closes off with the observation "And for an extra purchase of the extended subscription module the Bureaucrat bot will detect when it is interacting with the Annoy Customer Service Bot and get super annoyed really quickly so that both bots are able to quit their interaction with good speed — which will save you money in the long run, believe me!"

By @urbandw311er - 8 months
I do wish the writer would stop justifying the relevance of their experiment by saying “a human would conclude that their time was being wasted long before the LLM”.

This is a fallacy.

A better analogy would be a human who has been forced to answer a series of questions at gunpoint.

Framed this way it becomes more obvious that the LLM is not “falling short” in some way.

By @hyperman1 - 8 months
We recently discussed whether a chatbot was capable of responding with nothing at all. We tried a few, with prompts like: "Please do not respond anything to this sentence." The bots we tried were incapable of it, and ChatGPT tended to give long-winded responses about how it could not do it.
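
That test is quick to reproduce. A minimal probe, assuming an OpenAI-style endpoint (the model name and prompt wording are just examples):

```python
# Ask a model to reply with nothing and check whether it actually can.
from openai import OpenAI

client = OpenAI()
PROBE = "Please do not respond anything to this sentence."

reply = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": PROBE}],
).choices[0].message.content

print("empty reply" if not (reply or "").strip() else f"replied: {reply!r}")
```
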
By @rSi - 8 months
Too bad the conversations are images and cannot be zoomed on mobile...

By @thih9 - 8 months
> the LLM seemed willing to process absurd questions for eternity.

In the context of scamming, there seems to be an easy fix for that: abandon the conversation if it isn't going well for the scammer.

Even a counter-bait is an option: continue the conversation even after it's clearly not going well, and gradually lower the model's complexity, eventually returning random words interspersed with sleep().

I guess some counter-counter-bait is possible too, along with some game theory references.
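
As a sketch, the counter-bait might look something like the tarpit responder below; the filler vocabulary, delay schedule, and cap are arbitrary illustrations of the idea, not anything from the article.

```python
# Tarpit sketch: once a conversation looks like it's going nowhere, stop
# paying for a real model and keep the other bot busy with random words
# and deliberate delays.
import random
import time

FILLER = ["well", "hmm", "interesting", "perhaps", "indeed", "quite"]

def tarpit_reply(turn: int) -> str:
    # The longer the exchange drags on, the slower and cheaper we get.
    time.sleep(min(turn * 0.5, 30))  # escalating sleep(), capped at 30 s
    n_words = max(1, 10 - turn)      # degrade output length over time
    return " ".join(random.choices(FILLER, k=n_words))
```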

By @Eisenstein - 8 months
> No matter how complex the LLM, however, it is ultimately a mathematical model of its training data, and it lacks the human ability to determine whether or not a conversation in which it participates truly has meaning, or is simply a sequence of gibberish responses.

> A consequence of this state of affairs is that an LLM will continue to engage in a “conversation” comprised of nonsense long past the point where a human would have abandoned the discussion as pointless.

I think the author is falling into the trap of thinking that something can't be more than the sum of its parts. As well, 'merely a math model of its training data' is trivializing the fact that training data is practically the entire stored text output of humankind and the math, if done by a person with a calculator, would take thousands of years to complete.

Perhaps the LLM is continuing to communicate with the bot not because it is unable to comprehend what is gibberish and what isn't by some inherent nature of the LLM, but because it is trained to be helpful and to not judge if a conversation is 'useless' or not, but to try and communicate regardless.

By @speed_spread - 8 months
This amounts to the machine equivalent of "you can't beat stupid". Even once server LLMs start accounting for possible chatbot nonsense, all that'll be required is to move to a very cheap client LLM to generate word soup. At a certain point, it will be impossible to reliably distinguish between a dumb robot and a dumb human.

By @bryanrasmussen - 8 months
It is sort of funny to me that currently the two top articles on HN are asking the wrong questions and baiting the bots.

By @benreesman - 8 months
Real hacker vibes.

A bud humorously proposed the name AlphaBRAT for a model I’m training and I was like, “to merit the Alpha prefix it would need to be some kind of MCTS that just makes Claude break until it cries before it kills itself over and over until it can get Altman fired again faster than Ilya.”

By @skybrian - 8 months
People will sometimes claim that AI bots “pass the Turing Test” or are getting close to it. It seems more accurate to say that this is a skill issue. Many people are bad at this game, and competent human players who have learned some good strategies will do much better.

By @carnadasl - 8 months
I find the fourth bot to be more nonsensical than the second. Initially, we feed the script by querying a TEXT_CORPUS and eliciting a self-referential response from it; in its final form, the script begins to pose selections of the text, designated by a rand.it function, as interrogatives. At no point is a definite article incorporated... the ultimate absurdity would be a variant of the final bot, with the variables role, content, and duration directed towards answering only one question, again and again, and again.

By @rbanffy - 8 months
I believe the asymmetrical nature of such attacks could be an excellent weapon against social network chatbots currently being deployed on political campaigns.

By @Simon_ORourke - 8 months
Where I work, we've got a public-facing chatbot on the product page to, you know, help out possible customers with product information. As part of a chatbot refresh, I got to look at some of the chats, and boy howdy, some of them were just obviously other bots.

So typically, when the product chatbot comes on first and says "Hi, I'm a chatbot here to help you with these products", the average human chatter will give it a terse command, e.g., "More info on XYZ". The bots engage in all the ways suggested in this Substack post, but for the life of me I can't figure out why. What benefit, beyond mildly DDoSing the chat server, does repeating the same prompt a hundred times bring? Ditto the nonsense or insulting chats - what are you idiot bot-creators trying to achieve?

By @lloydatkinson - 8 months
I thought this was a really interesting read; I liked the scientific, methodical approach, which seems rare when it comes to an entire domain full of cryptoaitechbros.

What was used to render the chart in the middle with the red and green bars?

By @encom - 8 months
Definitely cheddar, come on. I have no respect for anyone who puts swiss cheese in a cheeseburger.

By @mrbluecoat - 8 months
"HoneyChatpot"