How good can you be at Codenames without knowing any words?
The article examines Codenames, highlighting strategies like using word layout for winning, the potential of a positional bot, and the role of AI in programming and game strategies.
The article discusses the game Codenames and explores the effectiveness of playing without knowledge of the words, focusing on a specific instance where a team won by using the physical layout of the words rather than the clues. The game involves two teams guessing words based on clues given by a spymaster, who knows the ownership of each word. The author reflects on the potential of creating a bot that plays Codenames using only positional information, suggesting that a bot trained on the game's configurations could outperform many human players. The author notes that memorizing the 40 initial state cards significantly increases a team's chances of winning. Despite the potential for memorization to create an imbalance, the author believes that Codenames retains replayability compared to other word guessing games, as it requires a broader understanding of word associations. The article also touches on the use of AI in programming, highlighting the iterative process of using AI assistants to generate code, and compares this to the potential for AI to assist in game strategies. Ultimately, the author concludes that while AI may not replace programmers entirely, it can provide valuable support in software development.
- Codenames can be played effectively by leveraging the physical layout of words.
- A bot using positional information could outperform many human players.
- Memorizing game configurations significantly increases winning chances.
- Codenames offers more replayability than other word guessing games.
- AI can assist in programming tasks, though it may not fully replace human programmers.
Related
I read the dictionary to make a better game (2023)
The development of the word search game Tauggle focuses on achieving 100% completion on each board by curating a dictionary with common words. Balancing inclusivity and exclusivity enhances player satisfaction.
GenAI does not Think nor Understand
GenAI excels in language processing but struggles with logic-based tasks. An example reveals inconsistencies, prompting caution in relying on it. PartyRock is recommended for testing language models effectively.
"Superhuman" Go AIs still have trouble defending against these simple exploits
Researchers at MIT and FAR AI found vulnerabilities in top AI Go algorithms, allowing humans to defeat AI with unorthodox strategies. Efforts to improve defenses show limited success, highlighting challenges in creating robust AI systems.
Ask HN: Am I using AI wrong for code?
The author is concerned about underutilizing AI tools for coding, primarily using Claude for brainstorming and small code snippets, while seeking recommendations for tools that enhance coding productivity and collaboration.
Up to 90% of my code is now generated by AI
A senior full-stack developer discusses the transformative impact of generative AI on programming, emphasizing the importance of creativity, continuous learning, and responsible integration of AI tools in coding practices.
The methods in the post are interesting, but technically against the rules of the game.
EDIT: Though I didn't figure out whether he means "clues about the position" or "clues about the words, until you can open enough cards to narrow down to one memorized position".
After some failed experiments (it performed worse than I thought it would), I googled the subject, and... it turns out there's a whole paper about ML and Codenames :)
https://arxiv.org/abs/2105.05885 (Playing Codenames with Language Graphs and Word Embeddings) - fun to read
From that point of view I don't see a serious problem with embracing the game pieces provided and taking them into consideration. It is a dimension in the game. It does not kill the game. Solid thinking from that team.
I have more of a problem with people memorizing word lists for Scrabble. It's not against the rules, but it makes it hard to play with those people; the skill gap amounts to a difference in class.
Thanks for trying Storytell for coding work, Dan!
When you say "I don't think it took too much longer to get working code than it would've taken if I just coded up the entire thing by hand with no assistance. I'm going to guess that it took about twice as long, but for all I know it was a comparable amount of time" → I'm actually amazed that it performed as well as it did for native code generation.
We put code generation in the "Use with Caution" bucket as I describe in https://web.storytell.ai/blog/the-intersection-of-ai-curious...
I liked the Mastodon thread linked from the appendix, on David Sirlin's theory of "scrubs" (who'd eschew the post's geometric tactic) versus "good players": https://mastodon.social/@danluu/110544419353766175
Which in turn links to the original imgur post describing the Warhammer 40K match between Wheels and Shooter: https://imgur.com/a/V0gND
+1 to nemetroid's recommendation of "So Clover," which is indeed more of a pastime than a game, but it's still great and belongs on any list of great word games.
I also like "Contact," which is playable without any props: https://quuxplusone.github.io/blog/2021/11/12/contact/
But it does mean that defeating the bot would only require creating a custom grid, which is easy to do online and possible (though harder) in person.
What about having 25 scrabble-like squares in a bag, and the spymasters pick randomly in the bag. Each square has a number from 1 to 25 that correspond to one position on the 5x5 grid, and one extra square is chosen to be the black square and shown to the spymasters. The spymasters have a pen and a piece of paper with a 5x5 grid where they can mark the information they receive.
(That assumes the spymasters are not shown the positions for the other team; I'm not sure whether that's the case in the real game. If they are, it's even simpler: both spymasters look at the result of every pick, the squares are coloured, and the order in which they're drawn is their order on the grid.)
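A minimal sketch of this bag-draw idea in Python. The function name and the agent counts (9 for the first team, 8 for the second, 7 bystanders, 1 assassin, i.e. the standard Codenames key-card distribution) are my assumptions; the comment above leaves the exact mechanics open:

```python
import random

def draw_key_card(seed=None):
    """Simulate drawing the 25 numbered squares from the bag and
    assigning each grid position a role, as the comment proposes."""
    rng = random.Random(seed)
    positions = list(range(1, 26))  # squares numbered 1-25 in the bag
    rng.shuffle(positions)          # order of draws from the bag

    first = rng.choice(["red", "blue"])  # first team gets 9 agents
    second = "blue" if first == "red" else "red"

    key = {}
    for pos in positions[:9]:
        key[pos] = first
    for pos in positions[9:17]:
        key[pos] = second
    for pos in positions[17:24]:
        key[pos] = "bystander"
    key[positions[24]] = "assassin"
    return key

card = draw_key_card(seed=42)
assert len(card) == 25  # every grid position gets exactly one role
```

The spymasters would mark the drawn results on their paper grids as they go, which gives each game a fresh key without printed layout cards.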
For completeness, I wondered how many cards would be required to give the complete set of patterns.
If we number the positions in the 5x5 grid such that the top row has positions 1-5, the second row has 6-10 and so on, the grid positions can be converted to a sequence and we can use the permutation formula to find the number of arrangements. To account for rotations, we can divide the final value by 4, since each card can be placed in any of four rotations and every rotation yields a valid arrangement.
Of the 25 cards, there are 7 white, 8 red, 8 blue, 1 black and 1 double agent that can be red or blue, also deciding which team goes first. We can treat this final card as one of a kind, then double the formula output to account for cases where it is swapped to the other team.
The number of permutations of a multiset has a standard formula [0] that calculates a result from these values (rolling in the double-agent factor of 2 and the rotation division factor of 4):
25! / (7!8!8!1!1! * 2) = 946,551,177,000
(edit: as pointed out, this is 9 times too large, since the double agent can indistinguishably replace any of the other 8 cards of its colour; the corrected value is 105,172,353,000)
This is (edit: still) more layout cards than have ever been printed across all production runs of Codenames, and would probably not fit into the current box size.
[0] https://en.wikipedia.org/wiki/Multinomial_theorem#Number_of_...
But it's very similar to chess, where positional sense is crucial.