June 29th, 2024

Researchers at ETH Zurich develop the fastest possible flow algorithm

Researchers at ETH Zurich have developed a groundbreaking algorithm that computes the maximum flow in a network while minimizing transport costs, at high speed. The team, led by Rasmus Kyng, shows that the algorithm handles diverse networks efficiently, a significant advance in theoretical computer science.


Researchers at ETH Zurich have developed a groundbreaking algorithm that computes the maximum flow in a network while minimizing transport costs at an unprecedented speed. Led by Rasmus Kyng, the team's algorithm can efficiently handle various types of networks, including rail, road, water, and the internet. This achievement marks a significant advancement in theoretical computer science, addressing a problem that has puzzled researchers for nearly a century. Kyng's algorithm not only outperforms previous methods but also paves the way for computing optimal flows in dynamically changing networks, such as those found in biology or transportation systems. By combining innovative strategies, the team has created an almost-linear-time algorithm that revolutionizes how computers tackle complex tasks. Their work has garnered recognition in the scientific community and promises a new era of fast and efficient network flow computations.
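The article gives no implementation details of Kyng's method, so purely as background, here is a minimal sketch of the classical Edmonds-Karp max-flow algorithm (BFS augmenting paths, roughly O(V·E²) time), the kind of textbook baseline that an almost-linear-time result improves upon. Function and variable names here are illustrative; this is not the ETH Zurich algorithm.

```python
from collections import defaultdict, deque


def max_flow(edges, s, t):
    """Edmonds-Karp: repeatedly push flow along shortest augmenting paths.

    edges -- iterable of (u, v, capacity) tuples
    s, t  -- source and sink vertices
    """
    # Residual capacities; reverse edges start at 0 via defaultdict.
    cap = defaultdict(int)
    adj = defaultdict(set)
    for u, v, c in edges:
        cap[(u, v)] += c
        adj[u].add(v)
        adj[v].add(u)

    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return flow  # no augmenting path left: current flow is maximum

        # Bottleneck capacity along the path found by BFS.
        bottleneck = float("inf")
        v = t
        while parent[v] is not None:
            bottleneck = min(bottleneck, cap[(parent[v], v)])
            v = parent[v]

        # Push the bottleneck amount and update residual capacities.
        v = t
        while parent[v] is not None:
            cap[(parent[v], v)] -= bottleneck
            cap[(v, parent[v])] += bottleneck
            v = parent[v]
        flow += bottleneck


# Tiny example: maximum flow from vertex 0 to vertex 3 is 3.
print(max_flow([(0, 1, 2), (0, 2, 2), (1, 3, 2), (2, 3, 1)], 0, 3))
```

The almost-linear-time result additionally solves minimum-cost flow, which this sketch does not attempt.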

Related

Factorio Fluids 2.0

The Factorio Friday Facts detail the game's fluid system overhaul. The new algorithm simplifies fluid flow with segments, improving efficiency, throughput, and distribution while addressing previous issues. Sacrificing realism for gameplay, the update aims to enhance player experience.

Finnish startup says it can speed up any CPU by 100x

A Finnish startup, Flow Computing, introduces the Parallel Processing Unit (PPU) chip promising 100x CPU performance boost for AI and autonomous vehicles. Despite skepticism, CEO Timo Valtonen is optimistic about partnerships and industry adoption.

Compressing graphs and indexes with recursive graph bisection (2016)

Graph reordering is applied to boost compression in graphs and indexes. A new compression-friendly technique using recursive graph bisection shows improved rates for large-scale graphs, enhancing storage and processing efficiency.

Researchers run high-performing LLM on the energy needed to power a lightbulb

Researchers at UC Santa Cruz developed an energy-efficient method for large language models. By using custom hardware and ternary numbers, they achieved high performance with minimal power consumption, potentially revolutionizing model power efficiency.

Moving objects precisely with sound

EPFL researchers use soundwaves for precise object manipulation, advancing drug delivery. Wave momentum shaping navigates objects in dynamic environments, offering noninvasive biomedical applications and potential for micro-level cell manipulation. Nature Physics publication showcases groundbreaking research.

15 comments
By @nabla9 - 4 months
The algorithm is near-linear only asymptotically, in the limit as n -> inf.

At the end of the video they say there is no way that any implementation of their algorithm gets close to beating existing algorithms in the real world.

https://cacm.acm.org/research/almost-linear-time-algorithms-...

By @okintheory - 4 months
Interestingly, the same guy also works on making 'theory-only' algorithms work well in practice [1]. But, it seems like that takes another 20 years -- [1] is building on a theory breakthrough from 2004 [2], but these algorithms are only starting to work in practice in 2024, IIUC. I guess that means there's hope for practical min-cost flow algorithms in 2044.

[1] https://arxiv.org/pdf/2303.00709

[2] https://arxiv.org/abs/cs/0310051

By @c-smile - 4 months
> Almost-Linear-Time Algorithm

From O(mn) to O(m) ... thus excluding n (the number of vertices) from the bound ...

Too good to be true?
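To put that in numbers, a back-of-the-envelope comparison (illustrative figures only; constant factors and the o(1) term in the exponent are ignored):

```python
m = 10**7  # edges in a large network
n = 10**5  # vertices

print(f"O(m*n) work: ~{m * n:.1e}")  # ~1.0e+12 operations
print(f"O(m)   work: ~{m:.1e}")      # ~1.0e+07 operations
```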

By @jpster - 4 months
> A glance at the raw figures shows just how far we have come: until the turn of the millennium, no algorithm managed to compute faster than m^1.5, where m stands for the number of connections in a network that the computer has to calculate, and just reading the network data once takes m time. In 2004, the computing speed required to solve the problem was successfully reduced to m^1.33. Using Kyng’s algorithm, the “additional” computing time required to reach the solution after reading the network data is now negligible.

TFA didn’t describe Kyng’s breakthrough in terms of this m exponent it considers so important. What’s up with that?
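Plugging a concrete edge count into the milestones quoted above gives a rough sense of the progression (illustrative only, constant factors ignored):

```python
m = 10**6  # number of connections (edges)

print(f"m^1.5 (pre-2000):  ~{m ** 1.5:.1e}")   # ~1.0e+09
print(f"m^1.33 (2004):     ~{m ** 1.33:.1e}")  # ~9.5e+07
print(f"m (almost linear): ~{m:.1e}")          # ~1.0e+06
```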

By @rowanG077 - 4 months
Sometimes I think we have lost the plot completely with complexity as a metric. Increasingly we are seeing algorithms which have optimized the complexity metric to an insane degree but which aren't actually useful.
By @JohnKemeny - 4 months
By @squarerootof-1 - 4 months
Where is the paper / code for this?
By @ecstrema - 4 months
Maybe someone could clarify something for me here:

o(n) seems like a stronger statement to me than O(n), since all o(n) algorithms are O(n), but the reverse is not true.

Also if o(n) applies to all n, however small, whereas O(n) applies only when n -> inf,

(From the Wikipedia page example: 2n = O(n) but 2n != o(n))

Then doesn’t that mean this algorithm should be applicable to even small n’s? Then it would be the opposite of a galactic algorithm, as someone above suggested, wouldn’t it?

Or am I missing something?
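For what it’s worth, both notations are asymptotic statements about n -> inf; neither says anything about small inputs. The standard definitions are roughly:

```latex
f(n) = O(g(n)) \iff \exists\, C > 0,\ n_0 \ \text{s.t.}\ f(n) \le C\,g(n) \ \text{for all } n \ge n_0
f(n) = o(g(n)) \iff \forall\, \varepsilon > 0\ \exists\, n_0 \ \text{s.t.}\ f(n) \le \varepsilon\,g(n) \ \text{for all } n \ge n_0
```

Note also that the paper’s bound is m^(1+o(1)), i.e. the o(1) sits in the exponent, which is a weaker statement than an o(m) (or even O(m)) running time.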

By @ziofill - 4 months
damn you constant factors [shakes fist in the air]
By @Sniffnoy - 4 months
The abstract just says the time is m^(1+o(1))... anyone know if a more specific bound is stated anywhere?
By @imtringued - 4 months
I was hoping for some kind of evolutionary algorithm. Giving up optimality in exchange for being able to solve problem instances with billions of items would be worth it.
By @josefrichter - 4 months
I don’t want to spam, but I’ve been using rome2rio website/app to find complex connections. They’re definitely not using this algorithm, but I’ve always been fascinated that you get complex results almost immediately. I don’t know how they do it, but for me it’s one of the most fascinating works on the internet. Great job. [I’m not affiliated with them in any way]
By @I_am_tiberius - 4 months
Can this be used to improve the Bitcoin Lightning Network?
By @imvetri - 4 months
My words.

Solving a problem for computational efficiency is pointless.

Why?

Take a look at AI neural networks where they blast computational resources.

Maybe

One day this might help.

Reply to myself

Appreciate this post. And get back to writing.

Appreciation

Out of so many other less interesting posts, this post caught my attention, yet nowhere did it explain how it works or, most importantly, why it is needed.