Micro-Optimizing Your JavaScript Probably Isn’t Worth It, But…

If you’re doing it, here’s how I go about it

Siniša Nimčević
JavaScript in Plain English



If you’re looking to optimize a function that is called often in your application, you’ll probably want to take some measurements, set benchmarks, and go from there. Let’s say you’ve looked at your browser’s waterfall chart and recognized that a long-running function may be stressing your RAIL model. To be more precise, the long-running function disconnects the user from the action they performed, since under RAIL a delay of more than roughly 100ms between input and response is already perceived as lag. The goal, then, is to shave that long task down until the response feels immediate again.


Before we dive in — the full example code can be seen here. Feel free to use that code as a template for your future measurements.

Performance API

In order to start measuring the performance of our functions, we need to get acquainted with the Performance API. It’s a readily available JavaScript Web API that exposes the Performance interface, which in turn lets us measure our client-side application’s performance using high-resolution time. While there’s a lot to it, what we’re interested in here is setting marks, measuring the time between them, and logging those entries in the console for further inspection.
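As a tiny illustration of what “high resolution” means here (this snippet is mine, not from the original CodePen), performance.now() returns a timestamp in milliseconds, measured from the start of the page’s lifetime, with sub-millisecond precision where the browser allows it:

// Simple stand-in for the work you actually want to time.
function someExpensiveWork() {
  let total = 0;
  for (let i = 0; i < 1e6; i++) total += Math.sqrt(i);
  return total;
}

const start = performance.now(); // high-resolution timestamp in milliseconds
someExpensiveWork();
const end = performance.now();
console.log(`That took ${end - start} ms`);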

The API is quite straightforward to use, so I’ll just quickly walk through an example scenario.

Two competing functions

Let’s say we want a function that checks whether two words are anagrams. Two approaches come to mind. Perhaps we treat the two strings as arrays: since we’re interested in whether both words contain the same letters, we convert them to character arrays, sort them, and compare the results.

Treating words as character arrays allows for easy sorting.
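The original snippet is embedded as an image, so here is a minimal sketch of what the sort-and-compare version might look like; the name isAnagramSort is my own:

// Sort-and-compare: two words are anagrams if sorting their characters
// produces the same sequence of letters.
function isAnagramSort(first, second) {
  const normalize = (word) => word.toLowerCase().split('').sort().join('');
  return normalize(first) === normalize(second);
}

console.log(isAnagramSort('listen', 'silent')); // true
console.log(isAnagramSort('hello', 'world')); // false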

Another way would be to lean on the fact that a plain for loop is ridiculously performant and build the logic around that. We can disqualify words that aren’t of equal length straight away (a small performance win on its own), then loop over each character of one word and use indexOf to check whether it’s present in the other word. We also need to incrementally slice matched letters out of the second word (or a copy of it) to avoid false positives on repeated letters.

A clumsy but possibly more performant solution.
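Again, the embedded code isn’t reproduced here, so this is my own sketch of the loop-based approach described above (isAnagramLoop is a made-up name):

// Loop-based check: bail out early on unequal lengths, then remove each
// matched character from a copy of the second word so repeated letters
// can't cause false positives.
function isAnagramLoop(first, second) {
  if (first.length !== second.length) return false;
  let remaining = second;
  for (let i = 0; i < first.length; i++) {
    const index = remaining.indexOf(first[i]);
    if (index === -1) return false;
    // Slice the matched letter out so it can't be counted twice.
    remaining = remaining.slice(0, index) + remaining.slice(index + 1);
  }
  return true;
}

console.log(isAnagramLoop('listen', 'silent')); // true
console.log(isAnagramLoop('aab', 'abb')); // false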

Implementing the Performance API

Setting marks is quite easy. In fact, it’s as easy as typing performance.mark('myCustomMarkerString'); at an appropriate place in your code. Since we need to refer to the marker names later, I like to save them as constants (for example const A and const B), then leave one mark at the start of a function with performance.mark(A) and another at the end with performance.mark(B). We then call performance.measure("my measurement #1", A, B) to get the timing between points A and B. I believe the example code below is quite clear.

Setting and measuring marks in practice.
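Since the embedded snippet isn’t visible in this version, here is roughly how the marks and measures could be wired up around the two sketches above; the marker and measure names are my own:

// Marker names kept in constants so they can be reused in measure() calls.
const SORT_START = 'sort-start';
const SORT_END = 'sort-end';
const LOOP_START = 'loop-start';
const LOOP_END = 'loop-end';

performance.mark(SORT_START);
isAnagramSort('listen', 'silent');
performance.mark(SORT_END);
performance.measure('anagram via sort', SORT_START, SORT_END);

performance.mark(LOOP_START);
isAnagramLoop('listen', 'silent');
performance.mark(LOOP_END);
performance.measure('anagram via loop', LOOP_START, LOOP_END);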

It’s worth noting here that running the two functions one after the other just once will rarely give you conclusive results. It’s best to run them 100, 1,000, or more times to get a sense of what the performance gap would be if the function were called many times in succession.
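One way to do that, assuming the isAnagramSort sketch above, is simply to wrap the call in a loop between the two marks:

// Run the candidate many times between its marks so the measured duration
// reflects repeated calls rather than a single invocation.
const RUNS = 1000;
const LOOPED_START = 'looped-sort-start';
const LOOPED_END = 'looped-sort-end';

performance.mark(LOOPED_START);
for (let i = 0; i < RUNS; i++) {
  isAnagramSort('listen', 'silent');
}
performance.mark(LOOPED_END);
performance.measure(`anagram via sort, ${RUNS} runs`, LOOPED_START, LOOPED_END);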

This will create two entries which can later be retrieved with performance.getEntriesByType('measure'). The result is an array of PerformanceEntry objects. I find that I’m mostly interested in the duration values of the entries, so I tend to display them using console.table with only the duration column shown. That roughly translates to:

console.table(performance.getEntriesByType('measure'), ['duration']);

The resulting log in our browser console.

The full code can be seen in this CodePen.

Conclusion

Obviously, the example here is lacking a few things. Although it shows a rough comparison of the two functions, real-world functions need to be tested side by side with more than one set of arguments, because a real-world app will probably call a function in all sorts of scenarios. You could also extend the logic by dividing the results to get averages and automatically rerunning the tests a couple of times… But all in all, it really depends on your use case. I hope this is enough to get you started, provided that you’ve decided to micro-optimize your code.
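As a rough sketch of that last idea (not part of the original CodePen), you could average the durations of every measure that shares a name; averageDuration and the measure name below are placeholders of mine:

// Average the duration of all recorded measures with a given name.
function averageDuration(measureName) {
  const entries = performance.getEntriesByName(measureName, 'measure');
  if (entries.length === 0) return 0;
  const total = entries.reduce((sum, entry) => sum + entry.duration, 0);
  return total / entries.length;
}

console.log(averageDuration('anagram via sort'));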

Also, if you think I’ve left out or misinterpreted something, feel free to discuss it in the comments; the subject is broad and can be approached in a myriad of ways. If you want to reach out or just grow your network, you can find me on Twitter and LinkedIn.

