As a quick compare and contrast between py-spy and pyinstrument: py-spy has the advantage of being able to attach to an already-running process, which is super useful when your program is stuck and you don't know why. I haven't used pyinstrument yet, but I do like that it can render its flame graph in the console; I sometimes find saving an SVG file and opening it in the browser a bit arduous. Excited to give it a try.
[1] https://github.com/benfred/py-spy
[2] https://jvns.ca/blog/2018/09/08/an-awesome-new-python-profil...
I am also curious to try out the on-demand profiling integration with Flask, seems like a cool thing to have running in the background for my side projects
Note that you can use something like gprof2dot to convert pstats dump from cProfile to a visual callgraph: https://github.com/jrfonseca/gprof2dot#python-cprofile-forme...
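As a rough sketch of that workflow (function names here are made up for illustration): dump a `.pstats` file from cProfile, then feed it to gprof2dot on the command line.

```python
# Sketch: produce a cProfile pstats dump that gprof2dot can consume.
# The profiled function is a made-up example.
import cProfile
import pstats
import io

def busy():
    return sum(i * i for i in range(10_000))

profiler = cProfile.Profile()
profiler.enable()
busy()
profiler.disable()

# Binary pstats dump — this is the format gprof2dot reads:
profiler.dump_stats("output.pstats")

# Then, on the command line (requires gprof2dot and graphviz):
#   gprof2dot -f pstats output.pstats | dot -Tsvg -o callgraph.svg

# Sanity check that the dump is loadable and recorded our function:
stream = io.StringIO()
pstats.Stats("output.pstats", stream=stream).print_stats("busy")
print("busy" in stream.getvalue())
```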
Not saying that solution’s better than pyinstrument — I haven’t used this one before, so I’ll have to evaluate. Also, the lower overhead is undeniable.
---
Edit: Another thing I noticed in "How is it different to profile or cProfile?":
> 'Wall-clock' time (not CPU time)
> Pyinstrument records duration using 'wall-clock' time. ...
Seems misleading, as cProfile uses time.perf_counter unless you supply your own timer, and time.perf_counter measures wall-clock time. See:
https://github.com/python/cpython/blob/ec42789e6e14f6b6ac135...
https://docs.python.org/3/library/time.html#time.perf_counte...
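To make the point concrete, here's a small sketch: by default a sleep shows up in cProfile's totals (wall clock), but if you pass your own timer such as time.process_time, only CPU time is counted. (The `snooze` function is an invented example.)

```python
# Sketch: cProfile's default clock is a wall-clock performance counter,
# but Profile() accepts a custom timer as its first argument.
import cProfile
import pstats
import time

def snooze():
    time.sleep(0.2)  # consumes wall time but almost no CPU time

wall = cProfile.Profile()                  # default timer: wall clock
wall.runcall(snooze)

cpu = cProfile.Profile(time.process_time)  # custom timer: CPU time only
cpu.runcall(snooze)

wall_total = pstats.Stats(wall).total_tt   # roughly 0.2s
cpu_total = pstats.Stats(cpu).total_tt     # close to zero
print(f"wall: {wall_total:.3f}s  cpu: {cpu_total:.3f}s")
```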
The biggest problem with the standard profiler is that the reported times are not split by code path. For example, if you have two parts of your code that call the same library function, and you want to know which path is the slow path...you can't. The time reported for each line is a sum of all times/paths it was called. Worse, the visualization tools don't hint that this is the case, so you end up with very incorrect plots. Pyinstrument will give you the time, by path. Super useful, and a huge time saver!
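A minimal sketch of that merging behaviour (the function names are made up): shared() is called from both a fast and a slow path, but cProfile's default report collapses it into one row, so you can't tell which caller is responsible for the time. pstats.print_callers recovers the immediate caller only, not the full path; a tree profiler like pyinstrument keeps the paths separate.

```python
# Sketch: cProfile sums shared()'s time across all call paths.
import cProfile
import pstats
import io
import time

def shared(n):
    time.sleep(n)

def fast_path():
    shared(0.01)

def slow_path():
    shared(0.2)

profiler = cProfile.Profile()
profiler.enable()
fast_path()
slow_path()
profiler.disable()

stream = io.StringIO()
# One combined row for shared(): ~0.21s total, 2 calls — the report
# alone can't tell you slow_path accounts for nearly all of it.
pstats.Stats(profiler, stream=stream).print_stats("shared")
print(stream.getvalue())
```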
I could guess from context, but thought it might be good to point out.
Source: From the repo: "Pyinstrument is a Python profiler"
(Feel free to delete this comment after fixing the typo, or not :) )
from their github readme:
> py-spy works by directly reading the memory of the python program using the process_vm_readv system call on Linux, the vm_read call on OSX or the ReadProcessMemory call on Windows.

This gives nicer summarization/presentation than Django Debug Toolbar's profiler, so seems like a good one to have in the toolbox.
Because you wrote it in Python.
Seriously, Python is probably the slowest mainstream language of all. If you’re building something where performance matters, you should be using a different language.
Sure, so if I want to shave startup/a slow action from 200ms to 100ms in a non-performance-critical tool, I shouldn't use a profiler, I should rewrite the whole damn thing in Go?
Can we stop with these low-information, canned responses already?
And the Python ecosystem understands this, with those performance-critical bits being implemented in C/Fortran/whatever, not pure Python.
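You can see this split even without leaving the standard library — a rough sketch, timing the built-in sum() (whose loop runs in C) against an equivalent hand-written Python loop (the helper name is invented):

```python
# Sketch: the hot loop often lives in C even in "pure" Python programs.
# Built-in sum() iterates in C; the explicit loop runs in the bytecode
# interpreter and is typically several times slower.
import timeit

def py_sum(xs):
    total = 0
    for x in xs:
        total += x
    return total

data = list(range(100_000))

t_c = timeit.timeit(lambda: sum(data), number=50)
t_py = timeit.timeit(lambda: py_sum(data), number=50)
print(f"builtin sum: {t_c:.3f}s  pure-python loop: {t_py:.3f}s")
```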