September 9th, 2024

Questions about LLMs in Group Chats

The article explores the use of large language models in group chats, analyzing various frameworks like MUCA to improve conversational dynamics and bot agency through tunable features.


The article discusses the exploration of using large language models (LLMs) in group chat environments, particularly within the xenocog community. The author is interested in how these models can interact naturally in group settings, raising questions about their operational mechanics: message visibility, response triggers, context management, and the ability to maintain ongoing conversations.

Various platforms and frameworks are examined, including Shapes, which allows for tuning bot behaviors but lacks the desired conversational flow, and AutoGen, which focuses on task-oriented agents rather than true group interactions. The author also references the Generative Agents paper, which simulates human behavior in a town, suggesting that a similar memory system could be beneficial for group chats. The Multi-User Chat Assistant (MUCA) framework is highlighted for its focus on the complexities of multi-user interactions, emphasizing the need for content, timing, and recipient intelligence.

The author aims to compile a list of tunable features from these methodologies to create a more effective framework for LLMs in group chats, ultimately seeking to enhance the conversational dynamics and agency of the bots involved.
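The tunable features the author describes (response triggers, timing, cooldowns) could be sketched as a small gating layer that sits in front of the model and decides when the bot speaks at all. This is a hypothetical illustration, not code from the article or from MUCA; the feature names and thresholds are invented for the example.

```python
import random
from dataclasses import dataclass


@dataclass
class BotTuning:
    """Hypothetical tunable features for a group-chat bot."""
    name: str
    reply_when_mentioned: bool = True  # always answer direct @-mentions
    base_chattiness: float = 0.2       # chance of chiming in unprompted
    cooldown: int = 3                  # min messages between unprompted replies


class GroupChatBot:
    def __init__(self, tuning: BotTuning):
        self.tuning = tuning
        self.messages_since_reply = 0

    def should_respond(self, message: str) -> bool:
        """Timing intelligence: decide whether to speak on this message."""
        self.messages_since_reply += 1
        # Direct mentions short-circuit the gate.
        if self.tuning.reply_when_mentioned and f"@{self.tuning.name}" in message:
            self.messages_since_reply = 0
            return True
        # Otherwise, only chime in unprompted after the cooldown has elapsed,
        # and even then only with some tunable probability.
        if self.messages_since_reply >= self.tuning.cooldown:
            if random.random() < self.tuning.base_chattiness:
                self.messages_since_reply = 0
                return True
        return False
```

In a fuller design along MUCA's lines, the "content" and "recipient" decisions would be separate steps (likely further model calls) layered on top of this timing gate.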

- The author is exploring how LLMs can interact in group chats.

- Various frameworks and platforms for LLM interactions are analyzed.

- The Multi-User Chat Assistant (MUCA) framework addresses complex group chat dynamics.

- The author seeks to compile tunable features for improved LLM group interactions.

- The exploration aims to enhance conversational flow and bot agency in group settings.

7 comments
By @theturtle32 - 8 months
I've been contemplating these exact same thoughts and ideas, and indeed have been very surprised how little exploration there seems to be around these nuances!
By @vintro - 8 months
given the mechanics of language models, i think it's really interesting to consider them in group settings. how do you create an environment where they can "decide" to respond? how do they make that decision?

this is something that's talked a lot about in education -- how to foster a productive group discussion: https://www.teacher.org/blog/what-is-the-harkness-discussion...

By @knowaveragejoe - 8 months
This is very interesting and thought provoking. On its face it sounds so simple - just put the bots in a room together and let them go at it. But of course it's much more complicated than that, at least if you want good and/or interesting results.
By @rasengan - 8 months
I did a small test a year and a half ago (very basic) and it was already funny xD

https://github.com/realrasengan/chatgpt-groupchat-test

By @skeptrune - 8 months
I'm interested. Could you use an LLM function call to decide whether or not to respond instead of randomness so it feels more intelligent?

Also, I have no idea what the use case for this would be but making it work sounds cool and kinda fruitful.
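The function-call gate suggested in this comment might look something like the sketch below. The tool schema and the stand-in decision function are hypothetical; in practice the stub would be replaced by a real model call that is offered the tool and returns its verdict as structured arguments.

```python
import json

# Hypothetical tool schema the model would be offered: instead of the bot
# rolling dice, the model "calls" respond_decision with its verdict.
DECISION_TOOL = {
    "name": "respond_decision",
    "description": "Decide whether the bot should reply to the latest group message.",
    "parameters": {
        "type": "object",
        "properties": {
            "should_respond": {"type": "boolean"},
            "reason": {"type": "string"},
        },
        "required": ["should_respond"],
    },
}


def fake_llm_decide(history: list[str]) -> str:
    """Stand-in for a real model call; returns tool arguments as JSON.

    Here it just replies when the last message ends in a question mark.
    """
    pending_question = history[-1].rstrip().endswith("?")
    return json.dumps({
        "should_respond": pending_question,
        "reason": "question awaiting an answer" if pending_question else "no hook",
    })


def should_respond(history: list[str]) -> bool:
    decision = json.loads(fake_llm_decide(history))
    return decision["should_respond"]
```

The design point is that the decision is delegated to the model via a structured output, so the gate can weigh conversational context rather than a fixed probability.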

By @internetter - 8 months
What does this achieve?