My search for the mysterious missing secretary who shaped chatbot history
Rebecca Roach's research highlights a "missing secretary" who significantly influenced early chatbots like Eliza, emphasizing the overlooked contributions of women in computing and the need for recognition of user input.
Rebecca Roach's research delves into the historical significance of a "missing secretary" who played a pivotal role in the development of chatbots, particularly the early program Eliza, created by MIT professor Joseph Weizenbaum in the 1960s. While Eliza was designed to simulate conversation, Weizenbaum's reflections on a female secretary who interacted with the program reveal a deeper narrative about the often-overlooked contributions of women in computing history. Despite her crucial involvement, her identity remains unknown, highlighting a broader issue: institutions like MIT have historically marginalized the voices of women and low-status workers. Roach's search for this secretary in MIT's archives underscores the importance of recognizing user contributions in the evolution of technology, especially in the context of generative AI. The absence of her voice from the historical record raises questions about the value placed on user input and the implications for copyright and recognition in the digital age. As Roach concludes her research amid a blizzard, the silence surrounding the secretary's story serves as a poignant reminder of the need to acknowledge the often-unseen labor that shapes technological advancements.
- The "missing secretary" significantly influenced the development of early chatbots like Eliza.
- Joseph Weizenbaum's reflections on her interaction with Eliza highlight the overlooked contributions of women in computing.
- Roach's research emphasizes the importance of recognizing user input in technology's evolution.
- The absence of the secretary's voice raises questions about the historical documentation of contributions in tech.
- The search for her identity reflects broader issues of marginalization in the history of computing.
Related
The origins of ELIZA, the first chatbot
The paper delves into the origins of ELIZA, the first chatbot by Joseph Weizenbaum in the 1960s. It clarifies misconceptions about its creation, emphasizing its role in AI history and human-machine interaction.
The AI we could have had
In the late 1960s, a secret US lab led by Avery Johnson and Warren Brodey aimed to humanize computing, challenging the industry's focus on predictability. Their legacy underscores missed opportunities for diverse digital cultures.
What LLM models can or can't do – Society of Catholic Scientists [video]
The video discusses large language models, AI reactions, the Eliza effect, and AI history. It explores artificial general intelligence potential, the creation of the Eliza program, pre-programmed AI responses, and IBM's Deep Blue defeating Garry Kasparov in 1997.
Don't disrespect Alan Turing by reanimating him with AI
Plans to create an AI chatbot of Alan Turing at Bletchley Park have raised ethical concerns, with critics arguing it oversimplifies his legacy and trivializes the experiences of historical figures.
An Age of Hyperabundance
Laura Preston's article discusses her role as the contrarian speaker at the Project Voice conference, addressing ethical concerns of conversational AI, including its impact on vulnerable populations and human interaction.
[1]. https://en.wikipedia.org/wiki/Los_Alamos_chess#Los_Alamos_tr...
>The blizzard is worsening. The announcement rings out that the campus is closing early due to the weather. The missing secretary’s voice still eludes me. For now, the history of talking machines remains one sided. It’s a silence that haunts me as I trudge home through the muffled, snowbound streets.
> My secretary watched me work on this program over a long period of time. One day she asked to be permitted to talk with the system. Of course, she knew she was talking to a machine. Yet, after I watched her type in a few sentences she turned to me and said: ‘Would you mind leaving the room, please?’
The author quotes this story and wonders whether it is true, and if so, what Weizenbaum's secretary's name was. It's not immediately clear to me that we should assume there was a secretary at all; Weizenbaum might have anonymized not only the participant's name but also her profession (and maybe her sex).
The author says she made an effort to find out who Weizenbaum's secretary was circa 1966, but was (completely?) unsuccessful:
> I work my way through Weizenbaum’s yellowed papers. Surely, among the transcripts, code print outs, letters and notebooks there will be evidence? There are some clues, reference to a secretary in letters to and from Weizenbaum. But no name.
> I broaden my hunt to administrative records. I look in department papers and the collections of Weizenbaum’s workplace, Project MAC – the hallowed centre of computing innovation at MIT. No luck. I contact the HR office and MIT’s alumni group. I stretch the patience of the ever-generous archivists. As my last day arrives, I still hear only silence.
The Weizenbaum archives are partially online. On page 149 of the 150-page collection labeled "SLIP, 1963 - 1967" ( https://dome.mit.edu/handle/1721.3/201706 ), it's indicated that on November 5, 1963, someone with the initials "jep" was taking dictation from JW.
Now, a single set of initials isn't remotely "identification" of JW's secretary (let alone identifying the participant from Weizenbaum's story, year unknown). But I feel like as a reward for reading all that, at least the author could have mentioned that she'd found those initials, and worked that into the tale she wanted to spin. As it is, it feels like she cared strictly more about spinning the tale than about finding the identity of the secretary. And if she doesn't care, why should the reader?