My personal test case will be the UniFi 5.7/5.8 Controller web interface page. I've found consistently, under the last few versions of Firefox, that while it's fine for at least an hour, if I leave it up constantly for ease of monitoring then after a day or two the Firefox process inevitably ends up pegging an entire core. There's no video whatsoever, nor any particularly fancy graphical usage, and while they may be doing something odd internally (I haven't had time to really dig into it), I'm not sure Firefox should end up in that state over time regardless. It's easily repeatable though (it takes a day, but requires no interaction on my part), so I look forward to testing it. If it does resolve the problem I'll be mildly bummed that whatever fix it was didn't make it into ESR, but so it goes.
I'll try to repro myself first; I've already started up a test both on metal and in a VM on a few systems. If it's been dealt with, there's probably not much point investigating further. I should know within a day or two (or sooner, if it's even regressed a bit). I'll try to do a proper performance profile and report after that, as well as ping a contact at Ubiquiti; they'd be best placed to really dig into it.
I agree that if it's a leak on the Ubiquiti side they would be best placed to deal with that aspect, but there might be things we can do better on the Firefox side to handle pages leaking too...
After weeks of effort, a teammate finally figured out that a default noop function param, e.g. function blah( param = () => {} ), was never being GC'd. Wut?!
I recently wrote an automatic memory leak detector and debugger, which makes this a lot easier (imo) [0]. You write a short input script that drives the UI in a loop; the tool looks for growing things (objects, arrays, event listener lists, DOM node lists...) and then collects stack traces to find the code that grew them. While it won't find all leaks, I was able to eliminate an average of 94% of the live heap growth I observed in 5 web applications, finding new memory leaks in Google Analytics, AngularJS, Google Maps, etc. along the way.
More information about the technique can be found in a PLDI paper (which I presented last week :D ), which I tried to write clearly so that it is accessible to a technical audience (i.e., non-academics) [1].
I once used the Python function "gc.get_objects()", which returns a list of all objects in memory, to diagnose a memory leak. I don't suppose anything like it exists in JS tooling?
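For context, a minimal sketch of that Python technique: group everything `gc.get_objects()` returns by type name to get a rough profile of the heap.

```python
import gc
from collections import Counter

# gc.get_objects() returns every object the collector tracks;
# grouping by type name gives a rough profile of the live heap.
counts = Counter(type(o).__name__ for o in gc.get_objects())
print(counts.most_common(5))  # the most numerous types
```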
You can capture a heap snapshot in most browsers now using the developer tools. However, even a blank page (about:blank) has tens of thousands of objects allocated for the default JavaScript/DOM APIs, so it's challenging to manually grok a JavaScript heap.
The approach I used was to take a snapshot and count up all the different object types (building a hash table mapping "name of type of object" => "count of objects whose type has that name", then discarding the snapshot), then take another snapshot a few minutes later, diff the counts, and see which type of object there was suddenly a lot more of. Then I looked at various instances of that type. It turned out to be some async queue-related object that was only used in a couple of places, which narrowed it down a lot. Even if it had been something generic like a hash table or list, I suspect looking at instances of the object and breaking them down by some observable quality (e.g. number of elements, the set of keys in a table, the types of objects in a list), plus the differential approach, would take you fairly far.
There are some browser-specific tools that will do something like that out-of-band (e.g. in Firefox about:memory has a "Save GC & CC logs" option that outputs data about what the GC and cycle collector heap graphs look like). But interpreting those graphs is not easy, sadly.
For me it's the Microsoft Azure Portal that would cause Firefox to idle around 50% CPU utilization. For the moment it seems to be behaving. I opened up the UniFi Controller interface too, just to test; CPU spiked briefly but settled back down to under 6% across 6 cores. So it's improved, IMO.