Mozilla.ai did what? When silliness goes dangerous
Mozilla.ai, a Mozilla Foundation project, faced criticism for using biased statistical models to summarize qualitative data, leading to doubts about its scientific rigor and competence in AI. The approach was deemed ineffective and compromised credibility.
Mozilla.ai, a project by the Mozilla Foundation, aimed to democratize open-source AI to solve real user problems. However, their approach to summarizing qualitative data from conversations with organizations using language models raised concerns. By employing biased statistical models to reduce bias in their notes, they inadvertently compounded biases and potentially introduced fabrications. The resulting insights on evaluating LLMs, data privacy, and reusability were criticized for being obvious and not worth the compromised integrity of the data. This led to doubts about Mozilla.ai's scientific rigor and competence in the field, portraying them as lacking critical distance and potentially following trends without substantial expertise. The use of language models in this context was deemed ineffective and counterproductive, undermining the credibility of the organization's work and its understanding of AI.
Related
Lessons About the Human Mind from Artificial Intelligence
In 2022, a Google engineer claimed AI chatbot LaMDA was self-aware, but further scrutiny revealed it mimicked human-like responses without true understanding. This incident underscores AI limitations in comprehension and originality.
Apple Wasn't Interested in AI Partnership with Meta Due to Privacy Concerns
Apple declined an AI partnership with Meta due to privacy concerns, opting for OpenAI's ChatGPT integration into iOS. Apple emphasizes user choice and privacy in AI partnerships, exploring collaborations with Google and Anthropic for diverse AI models.
Hackers 'jailbreak' powerful AI models in global effort to highlight flaws
Hackers exploit vulnerabilities in AI models from OpenAI, Google, and xAI, sharing harmful content. Ethical hackers challenge AI security, prompting the rise of LLM security start-ups amid global regulatory concerns. Collaboration is key to addressing evolving AI threats.
Mozilla rolls out first AI features in Firefox Nightly
Mozilla is enhancing Firefox with AI features like local alt-text generation for images in PDFs. Users can access various AI services for tasks, promoting user choice, privacy, and a personalized internet experience.
Not all 'open source' AI models are open: here's a ranking
Researchers found large language models claiming to be open source restrict access. Debate on AI model openness continues, with concerns over "open-washing" by tech giants. EU's AI Act may exempt open source models. Transparency and reproducibility are crucial for AI innovation.
Sussman attains enlightenment
In the days when Sussman was a novice, Minsky once came to him as he sat hacking at the PDP-6.
“What are you doing?”, asked Minsky.
“I am training a randomly wired neural net to play Tic-Tac-Toe” Sussman replied.
“Why is the net wired randomly?”, asked Minsky.
“I do not want it to have any preconceptions of how to play”, Sussman said.
Minsky then shut his eyes.
“Why do you close your eyes?”, Sussman asked his teacher.
“So that the room will be empty.”
At that moment, Sussman was enlightened.
If you read the blog post it's pretty obvious what happened. Someone at Mozilla.ai had an extra day on their hands and ran a bunch of text they'd collected through a few models. They thought "hey, this is kind of cool, let's make a blog post about it". Then they wrote one stupid line about their motivations (likely made up to justify playing around with local models) and got completely lambasted for that one stupid line.
I'd rather live in a world where people are comfortable throwing together a quick blog post detailing a fun/stupid project they did than one in which they do that anyway but are hesitant to share because people will rake them over the coals for being "unserious".
Man, Mozilla really does the most useless things. I'm really surprised at just how bad they are at generating any profit; the silly ideas and products are wild.
Can't they just try to make Firefox the best, ubiquitous browser, and one day actually contend against Chrome?
https://hachyderm.io/@inthehands/112006855076082650
> You might be surprised to learn that I actually think LLMs have the potential to be not only fun but genuinely useful. “Show me some bullshit that would be typical in this context” can be a genuinely helpful question to have answered, in code and in natural language — for brainstorming, for seeing common conventions in an unfamiliar context, for having something crappy to react to.
> Alas, that does not remotely resemble how people are pitching this technology.
Yes, people get bamboozled because LLMs are trained to bamboozle them, Raskin didn't call them "a zero day vulnerability for the operating system of humanity" for nothing -- but that's all there is.
If this is true, then I’m a court jester, because none of my projects started as serious work by a serious organization. And ML wasn’t lame until everyone started taking it so seriously.
The key with ML is to have fun. Even the most serious researcher has this motivation, even if they won’t admit it. If you could somehow scan their brain and look, you’d see that all the layers of seriousness are built around the core drive to have fun. Winning a dota 2 tournament was serious work, but I’ll wager any sum of money they picked dota because it seemed like a fun challenge.
If the author is looking for a serious AI organization, they should start one. Otherwise they’re not really qualified to say whether the work is bad. I have no opinion on Mozilla’s project here, but at a glance it looks well-presented with an interesting hypothesis. All of my work started with those same objectives, and it’s mistaken to discourage it.
The more people doing ML, the better. It’s not up to us to say what someone should or shouldn’t work on. It’s their own damn decision, and people can decide for themselves whether the work is worth supporting. Personally, I think summarizing a corporation’s knowledge is one of the more interesting unsolved problems, and this seems like a step towards it. Any step towards an interesting objective is de facto good.
Bias has become such an overrated concern. Yes, it matters. No, it’s not the number one most important problem to solve. I say this as someone raising a daughter. The key is to make interesting things while giving some thought ahead of time on how to make it more inclusive. Then pay close attention when you discover that some group of users doesn’t like it, and why. Then think of ways to fix it, and decide whether the cost is low enough.
There is always a cost. Choosing to focus on bias means that you're not focusing on building new things. It's a cost I try not to shy away from. But the author seems to feel that it's the single most important priority, rather than, say, getting a useful summary of 16,000 words. I think I'll agree to disagree.
Is it groundbreaking? No. But the author's overwrought political rant about Mozilla, AI, the internet, and probably capitalism seems unwarranted based on a small blog post. From the "about" page of tante.cc, it seems like they are some kind of tech/political/leftist/"luddite" commentator.