
Flow claims it can boost the power of any CPU 100x with its companion chip and a little elbow grease

A Finnish startup called Flow Computing is making one of the wildest claims we’ve ever heard in silicon engineering: by adding its proprietary companion chip, any CPU can instantly double its performance, with gains of up to 100 times possible after software tweaks.

If successful, it could help the industry keep up with the insatiable computing demand from AI makers.

Flow is a spinout of VTT, a Finnish state research organization that is a bit like a national laboratory. The chip technology it is commercializing, which it calls the Parallel Processing Unit (PPU), is the result of research conducted at that lab (although VTT is an investor, the IP is owned by Flow).

The claim, Flow is the first to admit, is ridiculous at first glance. You can’t just magically squeeze extra performance out of CPUs across all architectures and codebases. If you could, Intel or AMD or whoever would have done it years ago.

But Flow has been working on something that was theoretically possible; it’s just that no one had managed to pull it off.

CPUs have come a long way since the early days of vacuum tubes and punched cards, but in some fundamental ways they are still the same. Their primary limitation is that, as serial rather than parallel processors, they can only do one thing at a time. Sure, they may switch that thing a billion times a second across multiple cores and pathways, but these are all ways of accommodating the single-minded nature of the CPU. (A GPU, by contrast, does many related calculations at once, but is specialized for certain operations.)

“The CPU is the weakest link in computing,” said Flow co-founder and CEO Timo Valtonen. “It’s not up to the task, and that will have to change.”

CPUs have become very fast, but even with nanosecond-level responsiveness, there is a tremendous amount of waste in the way instructions are executed, simply because of the basic constraint that one task needs to finish before the next can begin. (I’m simplifying here; I’m not a chip engineer myself.)

What Flow claims to have done is remove this limitation, turning the CPU from a single-lane street into a multi-lane highway. The CPU is still limited to doing one task at a time, but Flow’s PPU, as they call it, essentially performs traffic management at the nanosecond level to move tasks in and out of the processor faster than previously possible.

Think of the CPU as a chef working in a kitchen. The chef can only work so fast, but what if that person had a superhuman assistant swapping knives and tools in and out of the chef’s hands, clearing away the prepared food and putting in new ingredients, removing every task that isn’t actual cooking? The chef still only has two hands, but can now work ten times faster.

This chart (note the logarithmic scale) shows the improvements of an FPGA-based, PPU-enhanced chip over unmodified Intel chips. Increasing the number of PPU cores continually improves performance.
Image Credits: Flow Computing

It’s not a perfect analogy, but it gives you an idea of what’s going on here, at least according to Flow’s internal tests and demos with the industry (and they are talking to everyone). The PPU doesn’t increase the clock frequency or push the system in other ways that would produce extra heat or draw extra power; in other words, the chef isn’t being asked to chop twice as fast. It simply makes more efficient use of the CPU cycles that are already taking place.
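Flow hasn’t published how the PPU decides which work it can overlap, but the underlying idea, that ordinary serial code often hides independent work the CPU nevertheless executes one step at a time, is easy to illustrate. Here is a minimal C sketch of that distinction; it is purely illustrative and is not Flow’s actual mechanism:

```c
/* Purely illustrative: ordinary serial code often contains independent work
 * that a CPU nevertheless executes one step at a time. A scheduler that can
 * spot and overlap such work is the general kind of gain Flow describes;
 * this sketch is not Flow's actual mechanism. */
#include <stdio.h>

#define N 8

int main(void) {
    int data[N] = {3, 1, 4, 1, 5, 9, 2, 6};
    int squared[N];
    int running = 0;

    /* Independent iterations: squaring one element touches no other element,
     * so in principle all eight operations could proceed at once. */
    for (int i = 0; i < N; i++)
        squared[i] = data[i] * data[i];

    /* Dependent iterations: each step needs the previous result, so this
     * chain is inherently serial no matter how the work is scheduled. */
    for (int i = 0; i < N; i++)
        running = running + squared[i];

    printf("sum of squares: %d\n", running);
    return 0;
}
```

How much of a given program looks like the first loop rather than the second determines how much any scheduler, in hardware or software, can actually recover.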

This kind of thing isn’t entirely new, Valtonen says. “This has been studied and discussed at a high academic level. You can already do parallelization, but it breaks legacy code and is then useless.”

So it was possible. It just couldn’t be done without rewriting all the code in the world from the ground up, which kind of makes it a non-starter. A similar problem was solved by another Nordic compute company, ZeroPoint, which achieved high levels of memory compression while maintaining data transparency with the rest of the system.

In other words, Flow’s big achievement isn’t high-speed traffic management; it’s doing so without having to modify any code on any CPU or architecture it has tested. It sounds a bit reckless to claim that arbitrary code can be executed twice as fast on any chip with no modification beyond integrating the PPU with the die.

Therein lies the primary challenge to Flow’s success as a company: unlike a software product, Flow’s technology must be incorporated at the chip design level, meaning it doesn’t work retroactively, and the first PPU chip would necessarily be quite a ways down the road. Flow has shown that the technology works in FPGA-based test setups, but chipmakers would have to invest considerable resources to see the gains in question.

Flow’s founding team, from left: Jussi Roivainen, Martti Forsell and Timo Valtonen.
Image Credits: Flow Computing

The scale of those gains, and the fact that CPU improvements have been iterative and fractional over the past few years, could still have those chipmakers knocking on Flow’s door pretty urgently. If you can really double your performance in one generation with one change to the layout, it’s a no-brainer.

Further performance improvements come from refactoring and recompiling the software to work better with the PPU-CPU combination. Flow says it has seen up to a 100x increase with code that has been modified (though not necessarily completely rewritten) to take advantage of its technology. The company is working on offering recompilation tools to make this task easier for software developers who want to optimize for Flow-enabled chips.
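Flow hasn’t described what its recompilation tools actually do, so any example here is only a guess at the general category of change. As a hedged, illustrative C sketch, breaking one long dependency chain into independent partial sums is the classic way to hand a parallel-friendly scheduler more work it can overlap:

```c
/* Hedged sketch: Flow's recompilation toolchain is not public. This only
 * shows the general kind of refactor that exposes independent work; it is
 * not Flow's toolchain output. */
#include <stdio.h>

#define N 1000000

/* Original form: one long dependency chain; every iteration waits on acc. */
static double sum_serial(const double *x, int n) {
    double acc = 0.0;
    for (int i = 0; i < n; i++)
        acc += x[i] * x[i];
    return acc;
}

/* Refactored form: four independent partial sums that can proceed in
 * parallel, combined only at the end. */
static double sum_refactored(const double *x, int n) {
    double p0 = 0.0, p1 = 0.0, p2 = 0.0, p3 = 0.0;
    int i = 0;
    for (; i + 3 < n; i += 4) {
        p0 += x[i]     * x[i];
        p1 += x[i + 1] * x[i + 1];
        p2 += x[i + 2] * x[i + 2];
        p3 += x[i + 3] * x[i + 3];
    }
    for (; i < n; i++)  /* leftover elements */
        p0 += x[i] * x[i];
    return (p0 + p1) + (p2 + p3);
}

int main(void) {
    static double x[N];
    for (int i = 0; i < N; i++)
        x[i] = (double)i / N;
    printf("serial:     %f\n", sum_serial(x, N));
    printf("refactored: %f\n", sum_refactored(x, N));
    return 0;
}
```

The point of the sketch is only that code rewritten to minimize long dependency chains gives any parallel scheduler more to work with, which is presumably where the larger multiples Flow cites come from.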

Analyst Kevin Krewell of Tirias Research, who was familiar with Flow’s technology and was consulted for an outside perspective on the matter, was more concerned about industry acceptance than the fundamentals.

He pointed out, rightly so, that AI acceleration is currently the biggest market, something that can be targeted with special silicon like Nvidia’s popular H100. While a PPU-accelerated CPU would lead to gains across the board, chip makers may not want to rock the boat too much. And there’s a simple question of whether these companies are willing to invest significant resources in a largely unproven technology when they likely have a five-year plan that would be disrupted by that choice.

Will Flow’s technology become a must-have component for every chipmaker, catapulting it to riches and prominence? Or will chipmakers decide to stay the course and continue to extract rent from the ever-growing compute market? Probably somewhere in between; but it is telling that, even if Flow has achieved a major engineering feat here, like all startups, the company’s future depends on its customers.

Flow is just coming out of stealth, with €4 million (about $4.3 million) in pre-seed funding led by Butterfly Ventures, with participation from FOV Ventures, Sarsia, Stephen Industries, Superhero Capital and Business Finland.
