June 26th, 2024

Mozilla.ai did what? When silliness goes dangerous

Mozilla.ai, a Mozilla Foundation project, faced criticism for using inherently biased language models to summarize qualitative data, raising doubts about its scientific rigor and competence in AI. Critics deemed the approach ineffective and said it compromised the project's credibility.

Mozilla.ai, a project by the Mozilla Foundation, aims to democratize open-source AI to solve real user problems. However, its approach of using language models to summarize qualitative notes from conversations with organizations raised concerns. By running the notes through statistical models that are themselves biased, ostensibly to reduce bias, the team compounded existing biases and potentially introduced fabrications. The resulting insights on evaluating LLMs, data privacy, and reusability were criticized as obvious and not worth the compromised integrity of the underlying data. This cast doubt on Mozilla.ai's scientific rigor and competence in the field, portraying it as lacking critical distance and chasing trends without substantial expertise. The critic deemed the use of language models in this context ineffective and counterproductive, undermining the credibility of the organization's work and its understanding of AI.
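For context, the criticized workflow boils down to something like the sketch below: collected notes are fed to a locally hosted language model, which is asked to distill themes from them. This is a minimal illustration of the general pattern, not Mozilla.ai's actual code; the endpoint, model name, and prompt are assumptions, using the OpenAI-compatible API that local servers such as llama.cpp and Ollama commonly expose.

    # Minimal sketch of the criticized workflow: summarizing qualitative
    # interview notes with a locally hosted LLM. The base_url, model name,
    # and prompt are illustrative assumptions, not Mozilla.ai's actual setup.
    from openai import OpenAI

    # Local servers such as llama.cpp and Ollama expose an OpenAI-compatible API.
    client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

    def summarize_notes(notes: str) -> str:
        """Ask the local model to distill recurring themes from raw notes."""
        response = client.chat.completions.create(
            model="local-model",  # placeholder for whatever the server is serving
            messages=[
                {"role": "system",
                 "content": "Extract the recurring themes from these interview notes."},
                {"role": "user", "content": notes},
            ],
        )
        return response.choices[0].message.content

    # Whatever biases or confabulations the model brings along are inherited
    # by the resulting "insights".
    print(summarize_notes("Org A worries about evaluating LLMs; Org B about data privacy..."))

The core of the criticism is that every step of such a pipeline inherits the model's biases and potential fabrications, so it cannot meaningfully "reduce bias" in the notes.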

Related

Lessons About the Human Mind from Artificial Intelligence

In 2022, a Google engineer claimed AI chatbot LaMDA was self-aware, but further scrutiny revealed it mimicked human-like responses without true understanding. This incident underscores AI limitations in comprehension and originality.

Apple Wasn't Interested in AI Partnership with Meta Due to Privacy Concerns

Apple declined an AI partnership with Meta due to privacy concerns, opting for OpenAI's ChatGPT integration into iOS. Apple emphasizes user choice and privacy in AI partnerships, exploring collaborations with Google and Anthropic for diverse AI models.

Hackers 'jailbreak' powerful AI models in global effort to highlight flaws

Hackers exploit vulnerabilities in AI models from OpenAI, Google, and xAI, sharing harmful content. Ethical hackers challenge AI security, prompting the rise of LLM security start-ups amid global regulatory concerns. Collaboration is key to addressing evolving AI threats.

Mozilla roll out first AI features in Firefox Nightly

Mozilla is enhancing Firefox with AI features like local alt-text generation for images in PDFs. Users can access various AI services for tasks, promoting user choice, privacy, and a personalized internet experience.

Not all 'open source' AI models are open: here's a ranking

Researchers found large language models claiming to be open source restrict access. Debate on AI model openness continues, with concerns over "open-washing" by tech giants. EU's AI Act may exempt open source models. Transparency and reproducibility are crucial for AI innovation.

13 comments
By @cancerhacker - 5 months
I am reminded of this immortal koan:

Sussman attains enlightenment

In the days when Sussman was a novice, Minsky once came to him as he sat hacking at the PDP-6.

“What are you doing?”, asked Minsky.

“I am training a randomly wired neural net to play Tic-Tac-Toe” Sussman replied.

“Why is the net wired randomly?”, asked Minsky.

“I do not want it to have any preconceptions of how to play”, Sussman said.

Minsky then shut his eyes.

“Why do you close your eyes?”, Sussman asked his teacher.

“So that the room will be empty.”

At that moment, Sussman was enlightened.

By @tbrownaw - 5 months
Yes, this was very silly, to the degree that it provides more reason to worry a bit about the future of their side projects (I kinda like Firefox), but I don't really see anything that sounds dangerous?
By @gepardi - 5 months
Why is this author so mad?
By @lolinder - 5 months
I'm as sick of AI nonsense as anyone and I'm a chronic Mozilla critic (I mostly wish they'd just accept my donations for Firefox and focus exclusively on that). That said, this post is over the top.

If you read the blog post, it's pretty obvious what happened. Someone at Mozilla.ai had an extra day on their hands and ran a bunch of text they'd collected through a few models. They thought "hey, this is kind of cool, let's make a blog post about it". Then they wrote one stupid line about their motivations (likely made up to justify playing around with local models) and got completely lambasted for that one stupid line.

I'd rather live in a world where people are comfortable throwing together a quick blog post detailing a fun/stupid project they did than one in which they do that anyway but are hesitant to share because people will rake them over the coals for being "unserious".

By @langsoul-com - 5 months
The blog article is a bit edgy. It's more like Mozilla.ai is an AI consultancy that just uses AI for the buzzword effect.

Man, Mozilla really does the most useless things. I'm really surprised at just how bad they are at generating any profit; the silly ideas and products are wild.

Can't they just try to make Firefox the best and most ubiquitous browser, and one day actually contend with Chrome?

By @rainonmoon - 5 months
The mention of objectivity in connection with LLM output is obviously farcical, but I'm curious about the motivation behind the experiment. Surely the value of speaking to organisations already deploying LLMs in live workplaces is identifying specific, solvable issues (aligned with Mozilla.ai's stated objectives), e.g. Project Zero's recent post about trying to make LLMs work in security research. Generalising those claims doesn't seem like a meaningful action, and as OP pointed out, doesn't provide any revelations to anyone with even a cursory view of the landscape. Mozilla's blog post ultimately seems more like marketing than a genuine attempt at research, so through that lens it doesn't get me as heated as OP. But it is a tension Mozilla should be aware of if they're actually trying to build credibility with their blog while pushing SEO alongside whatever research they do end up publishing.
By @chx - 5 months
How could anything related to LLMs be serious?

https://hachyderm.io/@inthehands/112006855076082650

> You might be surprised to learn that I actually think LLMs have the potential to be not only fun but genuinely useful. “Show me some bullshit that would be typical in this context” can be a genuinely helpful question to have answered, in code and in natural language — for brainstorming, for seeing common conventions in an unfamiliar context, for having something crappy to react to.

> Alas, that does not remotely resemble how people are pitching this technology.

Yes, people get bamboozled because LLMs are trained to bamboozle them, Raskin didn't call them "a zero day vulnerability for the operating system of humanity" for nothing -- but that's all there is.

By @protocolture - 5 months
The tiniest storm in the smallest teacup.
By @sillysaurusx - 5 months
> Mozilla.ai is not a serious organization. It seems to be just another “AI” clown car.

If this is true, then I’m a court jester, because none of my projects started as serious work by a serious organization. And ML wasn’t lame until everyone started taking it so seriously.

The key with ML is to have fun. Even the most serious researcher has this motivation, even if they won't admit it. If you could somehow scan their brain and look, you'd see that all the layers of seriousness are built around the core drive to have fun. Winning a Dota 2 tournament was serious work, but I'll wager any sum of money they picked Dota because it seemed like a fun challenge.

If the author is looking for a serious AI organization, they should start one. Otherwise they’re not really qualified to say whether the work is bad. I have no opinion on Mozilla’s project here, but at a glance it looks well-presented with an interesting hypothesis. All of my work started with those same objectives, and it’s mistaken to discourage it.

The more people doing ML, the better. It’s not up to us to say what someone should or shouldn’t work on. It’s their own damn decision, and people can decide for themselves whether the work is worth supporting. Personally, I think summarizing a corporation’s knowledge is one of the more interesting unsolved problems, and this seems like a step towards it. Any step towards an interesting objective is de facto good.

Bias has become such an overrated concern. Yes, it matters. No, it’s not the number one most important problem to solve. I say this as someone raising a daughter. The key is to make interesting things while giving some thought ahead of time on how to make it more inclusive. Then pay close attention when you discover that some group of users doesn’t like it, and why. Then think of ways to fix it, and decide whether the cost is low enough.

There is always a cost. Choosing to focus on bias means that you're not focusing on building new things. It's a cost I try not to shy away from. But the author seems to feel that it's the single most important priority, rather than, say, getting a useful summary of 16,000 words. I think I'll agree to disagree.

By @adithyassekhar - 5 months
The Mozilla.ai website feels so sluggish in Chrome on Android that I thought I was using Firefox. Not trying to be snarky; I genuinely had to recheck which browser I was on.
By @pipeline_peak - 5 months
Mozilla? Isn’t that the company that made an HTML GUI toolkit back in the day?
By @woah - 5 months
Here is the blog post the author is attacking: https://blog.mozilla.ai/uncovering-genai-trends-using-local-...

Is it groundbreaking? No. But the author's overwrought political rant about Mozilla, AI, the internet, and probably capitalism seems unwarranted based on a small blog post. From the "about" page of tante.cc, it seems like they are some kind of tech/political/leftist/"luddite" commentator.