OpenAI Acquires Multi
Multi, a multiplayer desktop collaboration platform, is joining OpenAI and will cease operations on July 24, 2024. Users can access the app until then, export their data, and contact the team for assistance or alternatives.
Multi, a platform exploring the concept of multiplayer desktop computing, has announced that its team is joining OpenAI. The move signals a shift toward working with AI to redefine how people interact with computers. As a result, Multi will be discontinued: new team signups are closed, and existing users can access the app until July 24, 2024, after which all user data will be deleted. The team expressed gratitude to users for their support and feedback and hinted at future endeavors. Users are advised to export their session notes before the deletion date, and can request earlier data deletion, an extension, or help finding alternatives by contacting the team. This transition marks the end of Multi's journey and the beginning of a new chapter with OpenAI.
Related
Optimizing AI Inference at Character.ai
Character.AI optimizes AI inference for LLMs, handling 20,000+ queries/sec globally. Innovations like Multi-Query Attention and int8 quantization reduced serving costs by 33x since late 2022, aiming to enhance AI capabilities worldwide.
We no longer use LangChain for building our AI agents
Octomind switched from LangChain due to its inflexibility and excessive abstractions, opting for modular building blocks instead. This change simplified their codebase, increased productivity, and emphasized the importance of well-designed abstractions in AI development.
LibreChat: Enhanced ChatGPT clone for self-hosting
LibreChat introduces a new Resources Hub, featuring a customizable AI chat platform supporting various providers and services. It aims to streamline AI interactions, offering documentation, blogs, and demos for users.
Lessons About the Human Mind from Artificial Intelligence
In 2022, a Google engineer claimed AI chatbot LaMDA was self-aware, but further scrutiny revealed it mimicked human-like responses without true understanding. This incident underscores AI limitations in comprehension and originality.
Microsoft shelves its underwater data center
Microsoft has ended its underwater data center experiment, noting improved server longevity underwater. Despite success, Microsoft shifts focus to other projects like AI supercomputers and nuclear ambitions, discontinuing further underwater endeavors.
I presume that Multi actually did something useful, perhaps some sort of virtualization. But this description doesn't tell me anything useful about the company, nor does it even make sense. Is a desktop computer a video game? Why would it be "played", let alone be multiplayer? Why would the OS be on equal footing with the apps? It doesn't make sense, let alone tell me anything useful.
If you grant it this control, you stand to lose any shred of privacy you have left, and become a complete slave to them. This is different from using assistants with granular access to data. It also is different from running your own private AI.
I totally understand free users getting a shorter notice period, but there's almost no instance where a paid user should get <60 days notice before being forced to migrate off a product they're using.
This sort of thing leads to a lot of mistrust in trying / using a startup for anything remotely important.
To power that experience, an app will need to feel "multiplayer", like someone else is working with you. They'll probably bundle this into the API as an "agent mode" that developers can embed in any app or website, or just let consumers give OAI access to control their desktop. It'll also likely work async, so you can assign tasks, walk away or go to sleep for a few hours, and come back to the results (rough sketch of what that might look like below).
This is speculation. But it feels like the interface that we'll look back on in ten years and say "that seemed obvious in hindsight."
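To make that speculation concrete, here is a minimal sketch of what an async "agent mode" API could look like. Everything in it - the base URL, the endpoints, the parameter names, the task statuses - is invented for illustration; nothing here is an actual OpenAI (or Multi) interface.

```python
# Hypothetical sketch only: the base URL, endpoints, and fields below are made up.
# The idea: submit an async desktop task to an agent, walk away, and poll for the result.
import time
import requests

API = "https://api.example.com/v1"  # placeholder, not a real OpenAI endpoint

def assign_task(instructions: str) -> str:
    """Submit an 'agent mode' task and return its id."""
    resp = requests.post(
        f"{API}/agent/tasks",
        json={"instructions": instructions, "mode": "desktop-control"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["task_id"]

def wait_for_result(task_id: str, poll_seconds: int = 60) -> dict:
    """Poll until the agent reports that it is finished (or needs the user)."""
    while True:
        resp = requests.get(f"{API}/agent/tasks/{task_id}", timeout=30)
        resp.raise_for_status()
        task = resp.json()
        if task["status"] in ("completed", "failed", "needs_user_input"):
            return task
        time.sleep(poll_seconds)

if __name__ == "__main__":
    task_id = assign_task("Collect last month's invoices into one folder")
    print(wait_for_result(task_id))
```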
The thing that is hard to achieve is not getting your LLM to operate your computer, but making it do the right thing. And by right thing I mean the thing you expect it to do, in the way you want it done. Here's the example I always show people when I give a demo of what I built: say you want to buy a new Macbook. If you ask the computer to buy you one, it might:
A) Go to Google and search for a Macbook
B) Go to Apple and search for a Macbook
C) Go to Amazon and search for a Macbook
Now, depending on your implementation, it might then ask you for more details about the Macbook you need (pro? air? how big?).
The tough part is to wrap all this together in a way that doesn't frustrate the user. In a way it's like a Google search: the first result is the best guess of what you want, but how often is it not? You end up looking at the top 2-3 results maybe half the time, right? Well, it's the same with having the AI control your computer. At this point in time you want "multiple sessions" - sort of like parallel universes where the AI has done 2-3 different things and lets you pick from A, B, or C and then go from there.
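Here's a minimal sketch of that "multiple sessions" pattern, just to show the shape of it. run_agent is a made-up stand-in for whatever actually drives the computer, not any real agent API.

```python
# Sketch of the "parallel sessions" idea: run a few candidate attempts, let the user pick.
# run_agent() is a hypothetical placeholder, not a real library call.
from concurrent.futures import ThreadPoolExecutor

def run_agent(goal: str, strategy: str) -> str:
    """Placeholder: pretend the agent pursued the goal one particular way."""
    return f"[{strategy}] result for: {goal}"

def parallel_sessions(goal: str, strategies: list[str]) -> str:
    """Run 2-3 candidate sessions side by side and let the user pick one to continue."""
    with ThreadPoolExecutor(max_workers=len(strategies)) as pool:
        results = list(pool.map(lambda s: run_agent(goal, s), strategies))
    for i, result in enumerate(results):
        print(f"{chr(65 + i)}) {result}")
    choice = input("Pick a session to continue from (A/B/C): ").strip().upper()
    return results[ord(choice) - ord("A")]

if __name__ == "__main__":
    picked = parallel_sessions(
        "Buy a new Macbook",
        ["search Google", "go straight to apple.com", "search Amazon"],
    )
    print("Continuing from:", picked)
```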
Another approach would be to collate a library of most used "prompts" - i.e. if you are trying to buy a laptop and someone else managed to get the AI to go through a very thorough process where you're asked all the important details that lead to purchasing the Macbook, then you should be able to re-use that one workflow from the library, rather than have the AI start from scratch.
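And a similarly hand-wavy sketch of the workflow-library idea: save a flow that worked well once, including the clarifying questions it asked, and replay it instead of letting the AI improvise from scratch. All names and structures here are made up for illustration.

```python
# Hypothetical sketch of a reusable workflow library; none of this is a real product API.
WORKFLOW_LIBRARY = {
    "buy a laptop": {
        "questions": ["Pro or Air?", "Screen size?", "How much storage?", "Budget?"],
        "steps": ["open apple.com", "configure the chosen model", "pause before checkout"],
    },
}

def run_goal(goal: str) -> None:
    workflow = WORKFLOW_LIBRARY.get(goal.lower())
    if workflow is None:
        print("No saved workflow - the AI would have to start from scratch.")
        return
    # Replay the vetted workflow: ask the important details up front, then walk the steps.
    answers = {q: input(q + " ") for q in workflow["questions"]}
    for step in workflow["steps"]:
        print("executing:", step, "| details:", answers)

if __name__ == "__main__":
    run_goal("buy a laptop")
```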
Anyway, it's very, very tough - more so from a UX perspective than an AI perspective.
Is this like EA where they have to make a trillion dollars first before getting to the actual mission?
The key must be somewhere in the statement "we’ve been increasingly asking ourselves how we should work with computers. Not on or using computers, but truly with computers."
I wish I had some experience using Multi so I could picture this better. But is there a chance this is for a sandboxed execution environment that would allow models to interact with software alongside a human counterpart?
Still waiting for the "PhD level" Q*.
Totally unprofessional.
I'm rather surprised that Keybase is still running after the team joined Zoom.
What an incredible journey.
I have criticisms of OpenAI, but on the whole I really hope this doesn't mean they are losing the war and turning to diversification and other classic "big tech" moves. GPT still is (IME) the best at not assuming/inferring things in the prompt that aren't there (especially then mangling the prompt *cough* Gemini *cough*). For those of us who try to be very precise with our prompts, that is a big deal.
What is it with OpenAI acquisitions reaming their user bases on short notice?
Is OpenAI already locked in such a dominant position that they don't have to pretend to care what anyone in industry thinks about them?
Everyone savvy knows that Microsoft and Oracle don't care, and are just waiting until they can stab you again. Even Google, which routinely pulls the rug out from under its customers, is gentler about it. Even the usual serial-founder brogrammer startups often offer a semi-reasonable migration path.
Sustaining engineering for legacy customers doesn't have to be expensive, and you can think of it as an obligation to pay back for your success, and maybe a foot in the door for sales.
Seems a little abrupt.
OpenAI on an M&A spree.
Interesting.