Is Python That Slow?
Recent benchmarks indicate Python 3.11 improves performance, but Rust is up to 80 times faster. PyPy also outperforms CPython. Python's ease of use and optimized libraries help mitigate speed concerns.
Python is often criticized for its performance, but many developers find that its ease of use and productivity outweigh the speed drawbacks. Recent benchmarks conducted by the author compared various Python versions (from CPython 2.7 to 3.13) and alternative implementations like PyPy, as well as other languages such as Node.js and Rust. The results showed that while Python 3.11 significantly improved performance, Rust was notably faster, achieving speeds up to 80 times faster than Python 3.8 in certain tests. PyPy also demonstrated impressive speed, being over 12 times faster than Python 3.8. The benchmarks included tests for calculating Fibonacci numbers and implementing a bubble sort algorithm, revealing that performance can vary based on the specific task and Python version. The author emphasized that real-world performance may differ due to the use of optimized libraries in production code. Additionally, misconceptions about Python's asynchronous capabilities and the impact of the new JIT compiler in Python 3.13 were addressed, clarifying that while asyncio improves concurrency, it does not inherently speed up execution. Overall, the findings suggest that while Python may not be the fastest language, its development speed and the availability of optimized libraries can mitigate performance concerns.
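For reference, a bubble sort along the lines of what the article benchmarks looks like this (an illustrative sketch, not the article's exact code):

def bubble_sort(data):
    # Repeatedly swap adjacent out-of-order elements until the list is sorted.
    items = list(data)
    for i in range(len(items) - 1, 0, -1):
        for j in range(i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]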
- Python's ease of use often compensates for its slower execution speed.
- Python 3.11 shows significant performance improvements over earlier versions.
- Rust outperforms Python in speed benchmarks, with differences up to 80 times.
- PyPy offers substantial speed advantages over standard CPython.
- Real-world performance can vary significantly based on the use of optimized libraries.
Related
Recent Performance Improvements in Function Calls in CPython
Recent CPython updates have improved function call performance, reducing overhead in loops and built-in functions, with notable speed increases, making Python more efficient for developers.
Ousterhout's Dichotomy
Ousterhout's Dichotomy highlights the trade-off between programming language usability and performance. Rust is presented as a solution, balancing high-level usability with low-level performance and allowing incremental optimization.
Python 3.12 vs. Python 3.13 – performance testing
Python 3.13 outperforms Python 3.12 with a 1.08x improvement in benchmarks, especially in async tests, though some areas like coverage showed decreased performance. It is recommended for better performance.
Python 3.13, what didn't make the headlines
Python 3.13 features improvements in PDB, bug fixes in shutil, and new annotation syntax for comprehensions and lambdas, but overall performance gains are minimal and some deprecated features may disrupt existing code.
State of Python 3.13 Performance: Free-Threading
Python 3.13 introduces free-threading without the GIL, enhancing performance for parallel applications. However, current slowdowns due to interpreter limitations are expected to improve in future releases.
Would an equivalent amount of effort on Python have resulted in similar performance? Or do Python things like "everything's an object", etc., set a hard ceiling?
How can it be possible that Node.js is so fast, or that Rust isn't multiple orders of magnitude faster? (Again, for the OP's benchmark it's about 8-10x faster in one case, and only about twice as fast in another, which is much faster, but I expected it to be more like 100x faster.)
Is V8 really that fast? I'd be curious to see a multi-threaded version of this implemented (some arbitrary multi-threaded benchmark). There I would hope that Rust is multiple orders of magnitude faster, given that Node is single-threaded (with some caveats, of course, as anyone who knows Node knows). Surely the Rust version wasn't written optimally.
Given how easy it is to write TypeScript compared to Rust and Python, IMHO, maybe the JS community isn't as crazy as I thought.
The article takes a tiny slice of Python: two exceptionally simplistic algorithms that aren't representative of any real-world workload an actual Python program might be tasked with, and then adds a bunch of unknowns, such as how long it takes the system to load the text of the program, the load on the system running the test, etc.
I feel like this is another one of those "politically" motivated articles (i.e. the author likes Python and wants to say something nice about it, regardless of the actual state of affairs in Python). But then it also tries to serve this opinion as "well, this is my take, yours could be different, nobody knows who's right, but nobody else rose to the task of figuring it out, so you might as well go with what I say", as in:
> In the title of this article I asked if Python is really that slow. There isn't an objective answer to this question, so you will need to extract your own conclusions based on the data I presented here, or if interested, run your own benchmarks and complement my results with your own.
Well, no, there is an objective way, but it's hard! It's a lot of work to figure out what the minimum required time is for the task, or the minimum space, etc., and then see whether your program, which was proved to be optimal for the task in the given language, approaches that limit, and what the contributing circumstances are.
Just adding @numba.njit above the factorial function gives me a 36X speedup, putting it ahead of PyPy/Node.js and just a couple of times slower than Rust.
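For illustration, here is a minimal sketch of that approach applied to the article's recursive Fibonacci function (it assumes Numba is installed; actual speedups depend on the machine and workload):

import numba

@numba.njit
def fibo(n):
    # Same recursive Fibonacci as the article, but compiled to machine code by Numba.
    if n <= 1:
        return n
    return fibo(n - 1) + fibo(n - 2)

if __name__ == '__main__':
    fibo(1)           # first call triggers JIT compilation, so warm up before timing
    print(fibo(35))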
Of course, people already had.
Of course, NIH, so instead of references to those more comprehensive benchmarks it's recursive fib again.
https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
Is PyPy used a lot? Why don't we hear more about it? Does it support the same large ecosystem of packages? And does it have a GIL?
Everyone said Python was slow, learn C++. So I did. I never taught myself Python until much later. Huge regrets. All of my open source projects are now in Python.
I've discovered there's literally only one thing in Python that causes it to be slow.
The Global Interpreter Lock.
Use multiprocessing and avoid this problem? Now Python is as fast as I ever need it to be.
That’s as much as I can say as an anecdote.
Edit: Cython not CPython
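As a rough illustration of that approach (a minimal sketch, not from the comment or the article), CPU-bound work can be spread across processes so each worker has its own interpreter and its own GIL:

import multiprocessing as mp

def fibo(n):
    return n if n <= 1 else fibo(n - 1) + fibo(n - 2)

if __name__ == '__main__':
    with mp.Pool() as pool:                 # one worker process per CPU core by default
        results = pool.map(fibo, [30, 31, 32, 33])
    print(results)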
If you're doing anything even mildly related to ML/stat, Python is essential. It's close to impossible to replicate the eco-system that has been built out in the past 10 years (the Torch people were still in Lua back then).
It's amazing what he can do when not operating at full strength.
Huh? I mean, the author's own benchmarks prove that Python is, indeed, that slow. Not everyone is going to be running their Python code through PyPy. There are areas in which Python is invaluable and irreplaceable, but let's accept things for what they are - it's slow. It only keeps up through libraries that leverage C code, and for those things it is very acceptable.
And:
"Python is by far the fastest to write, cleanest, more maintainable programming language I know ... I rarely feel Python slows me down, and on the other side I constantly marvel at how fast I code with it compared to other languages."
In other words, "I don't know much about anything else other than Python."
What a pointless article.
Python is by far the fastest to write, cleanest, more maintainable programming language I know, and that a bit of a runtime performance penalty is a small price to pay when I'm rewarded with significant productivity gains. I rarely feel Python slows me down, and on the other side I constantly marvel at how fast I code with it compared to other languages.
Personally I prefer Raku [https://raku.org] to do the same.
Here is the Python code from the article:
import sys

def fibo(n):
    if n <= 1:
        return n
    else:
        return fibo(n-1) + fibo(n-2)

if __name__ == '__main__':
    for n in sys.argv[1:]:
        print(n, fibo(int(n)))
And here is my Raku code:

sub MAIN(*@n) {
    say (0, 1, *+* ... *)[@n]
}
So, even better on the "quick to code and easy to maintain" front, IMO. The performance comparison is interesting too...
CPython 9.72 - 22.10 secs
PyPy 1.65 secs
Node 1.76 secs
Rust 0.25 secs
Raku 0.12 secs
> time ./fibo.raku 10 20 30 40
(55 6765 832040 102334155)
./fibo.raku 10 20 30 40 0.12s user 0.02s system 114% cpu 0.124 total
Haha (admittedly I am on an M1, not an i5, so this is a bit Apples vs. Oranges ... but CPUs are getting faster all the time, and Raku has no GIL).
> (…)
> Before you ask, no, I did not include Ruby, PHP or any other of the "slow" languages in this benchmark, simply because I don't currently use any of these languages and I wouldn't start using them even if they showed great results on a benchmark.
Why bother with benchmarks, then? Clearly you don’t care for them and just want to continue using Python because that’s what you’re familiar with and like. That’s fine, it’s your prerogative, you don’t have to justify that choice to anyone who doesn’t have the same priorities you do.
Personally I dislike Python and find it to be far from “the fastest to write, cleanest, more maintainable programming language” and I wouldn’t want to use it even if its performance were amazing. But that doesn’t matter, if we all liked the same things we wouldn’t have so many programming languages. Or operating systems. Or brands of sneakers. Or…
When I run benchmarks, I do so in real use cases, to test different algorithms for a task that will be in production.
Casey Muratori has a really in-depth course on why Python is slow and how to make it fast, and, among many other things, on why software in general can be slow.