Pin adds a huge amount of weird incidental complexity to your code base, since you need to pin-project your struct fields (but which ones?). You can't just take an &self or &mut self in functions if your value is pinned, and Pin is just generally confusing, hard to use, and hard to reason about.
The article ended up with Vec<Box<T>> - but that’s a huge code smell in my book. It’s much less performant than Vec<T> because every object needs to be individually allocated & deallocated. So you have orders of magnitude more calls to malloc & free, more memory fragmentation, and way more cache misses while accessing your data. The impact this has on performance is insane.
Vec & indexes is a lovely middle ground. In my experience it’s often (remarkably) slightly more performant than using raw pointers. You don’t have to worry about vec reallocations (since the indexes don’t change). And it’s 100% safe rust. It feels weird at first - indexes are just pointers with more steps. But I find rust’s language affordances just work better if you write your code like that. Code is simple, safe, ergonomic and obvious.
Dunno about 'safe' -- or at least not in the more general sense that you seem to intend, rather than the more limited sense of rust's safe/unsafe distinction. If you store an index into a Vec<T> as a usize, rather than a &T, very little is stopping you from invalidating that pseudo-pointer without knowing it. (Or from using it as an index into the wrong vector, etc...)
These problems are manageable and I'm not saying 'never do this' -- I've done it myself on occasion. It's just that there are more pitfalls than you're indicating here, and it is actually a meaningful tradeoff of bug potential for ease-of-use.
But honestly, I think danger from that is wildly overstated. The author isn’t talking about implementing an ECS or b-tree here. They’re just populating an array from a file when the program launches, then freeing the whole thing when the program terminates. It’s really not rocket science.
The other big advantage of this approach is that you don’t have to deal with unsafe rust. So, no unsafe {} blocks. No wrangling with rust’s frankly awful syntax for dereferencing raw pointers. No stressing about whether a future version of rust will change some subtle invariant you’re accidentally depending on, or worrying about whether you need to use MaybeUninit or something like that. I think the chance of making a mistake while interacting with unsafe code is far higher than the chance of misusing an array index. And the impact is usually worse.
The author details running into exactly that problem while coding - since they assumed memory allocated by vec would be pinned (it isn’t). And the program they ended up with still doesn’t use pin, even though they depend on the memory being pinned. That’s cause for far more concern than a simple array index.
Anyway, pprof has a fantastic interactive Flamegraph viewer that lets you narrow down to specific functions. It's really very good, I would use that.
https://github.com/google/pprof
Run `pprof -http=:` on a profile (the empty port means it picks any free one) and you get a web interface with the flamegraph, call graph, line-based profiling, etc.
It's demonstrated in this video.
They only show a very simple example and no zooming, but it works very well with huge flamegraphs.
I tried to find something fast and native. By "native" I mean something which doesn't require a browser.
It uses a browser, which doesn't meet the requirement they set. But I applaud the effort to make small, native apps. I agree with the author - not everything should live in the browser.
It uses the Firefox profiler to view its recorded profiles. You can (don't have to, just can) even share them, I was looking at this profile just yesterday: https://share.firefox.dev/3PxfriB for my day job, for example.
I have a `profile` function I use.
profile() {
    xcrun xctrace record --template 'Time Profiler' --launch -- "$@"
}
Then I just do: $ profile ./my-binary -a -b -c "foo bar"
or whatever, and when it completes (can be a one-time run or long-running / interactive) I now have a great native experience exploring the profile. All the normal bells and whistles are there, and I can double-click on something and see it inline in the source code with per-line (cumulative) timings.
It really isn't. It's probably the slowest profiler UI I've ever used (it loves to beachball…), it hardly has any hardware performance counters, and its actual profiling core (xctrace) is… just really buggy. I lost count after the fifth time it told me "this function uses 5% CPU", I optimized the function away, and absolutely nothing happened, because it was just another Instruments mirage. Or the time it told me opening a file on iOS took 1000+ ms, when that was just because its end timestamps were pure fabrications.
Maybe it's better if you have toy examples, but for large applications, it's among the worst profilers I've ever seen along almost every axis. I'll give you that gprof is worse, though…
In Sysprof, the flamegraph is a single widget, which means I can browse recordings in the GB size range in less than 150 MB resident. It really comes down to how much data gets symbolized at load time, as the captures themselves are mmap'able. In nominal cases, Sysprof even calculates the symbols and appends them after the capture phase stops so they can be mmap'd too.
That just leaves the augmented n-ary tree, keyed by instruction pointer converted to a string key, which naturally deduplicates/compresses.
The biggest chunk of memory consumed is GPU shaders.
The total word count of the W3C specification catalogue is 114 million words at the time of writing. If you added the combined word counts of the C11, C++17, UEFI, USB 3.2, and POSIX specifications, all 8,754 published RFCs, and the combined word counts of everything on Wikipedia’s list of longest novels, you would be 12 million words short of the W3C specifications.
https://drewdevault.com/2020/03/18/Reckless-limitless-scope....
If you look at the scraped document list [1]:
* Most of these are not normative! They're not specifications, they're guides, recommendations, terminology explainers, and so on.
* A lot of documents are irrelevant to implementing a web browser (XSLT, XPath, RDF, XHTML, ITS, etc.).
* A lot are obsolete (e.g. SMIL, OWL).
* There are tons of duplicate versions (all of CSS 1-3 are included; multiple versions of HTML, MathML, and of course the irrelevant XML-based standards).
* Many standards are scraped both as individual section files, and as a single complete.html file. He didn't notice this, and counted both.
As a particularly egregious example, he includes every version of the Web Content Accessibility Guidelines (WCAG) standard, going back to 1999, each of which is large.
I have not done any kind of analysis myself (which should be thorough to actually be fair), but if you prune it down to the core technologies (HTML5, CSS, ECMAScript, PNG/GIF/WebP, etc.), I'll wager it's probably less than a million, or at the very least less than 2 million. The ECMAScript spec is just 356,000 words.
[1] https://paste.sr.ht/~sircmpwn/475ad10f9ff9f63cd0a03a3f998370...
Something that’s been on my mind recently is that there’s a need for a high-performance flame graph library for the web. Unfortunately the most popular flame graph libraries / components - basically the React and d3 ones - work fine, but their authors don’t actively maintain them anymore and their performance with large profiles is quite poor.
Most people that care about performance either hard-fork the Firefox profiler / speedscope flame graph component or create their own.
Would be nice to have a reusable, high performance flame graph for web platforms.
I see your journey and how you ended up with Xlib. But I think that's really more of an indictment of the sorry state of GUI in Rust.
I know that's not your job, I just couldn't let this use of Xlib stand uncommented because it's really bad for the larger ecosystem.
I don't have arguments against the point about Xlib. However, I struggle to use its alternative, XCB. XCB doesn't have enough documentation to understand how to use it. In fact, I even looked at the source code of Qt and GTK, but their usage doesn't explain the XCB API. I'd really appreciate it if you'd share any material you have. The only thing I found recently is the wrapper from System76: https://pop-os.github.io/libcosmic/tiny_xlib/index.html. However, it's still not documentation. I just hope to find some usages of the wrapper and relate them back to the original API.
> if you ignore the existence of Wayland
How did you conclude that? I even mentioned it in the article. It's true that I don't use it. However, I can't wait to. I've been trying for a couple of years now, but regrettably I run into various technical difficulties every time. As a result, I still use my i3.
> For something like fast visualizations, you should really go with something that does offscreen rendering and then blits the result.
Do you mean double buffering?
> though obviously modern tools should use GPU rendering
Would you mind elaborating on it?
The only thing that ever takes some time is the initial load of the perf file and filtering (but still really fast).