This is part of my weekly C++ posts based on the daily C++ tips I share at work. I strongly recommend this practice. If you don't have it in your company, start it.
A list of all weekly posts can be found here.
1. std::accumulate
std::accumulate is a fundamental STL algorithm that folds the elements of a range with operator+ or with a custom binary operation provided by the caller:
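Something along these lines (a minimal sketch showing both forms, not code from the original post):

```cpp
#include <iostream>
#include <numeric>
#include <vector>

int main() {
    std::vector<int> v{1, 2, 3, 4, 5};

    // Default form: folds the range with operator+ starting from the initial value.
    int sum = std::accumulate(v.begin(), v.end(), 0);                      // 15

    // Custom binary operation: here a product instead of a sum.
    int product = std::accumulate(v.begin(), v.end(), 1,
                                  [](int acc, int x) { return acc * x; }); // 120

    std::cout << sum << ' ' << product << '\n';
}
```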
Sounds simple. Then you watch Ben Deane's talk from CppCon 2016 - "std::accumulate: Exploring an Algorithmic Empire", which is essentially about how to rewrite the majority of the STL algorithms using only std::accumulate, and then some.
The talk is great and you should watch it. He puts conditions inside the custom operator, captures variables, and takes advantage of the fact that the type we accumulate into can differ from the type of the elements in the range. By the way, std::accumulate is only half of the talk.
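To give a flavor of the trick (a small sketch in the spirit of the talk, not an excerpt from it): the binary operation can hide a condition, and the value we fold into does not have to be of the element type.

```cpp
#include <iostream>
#include <numeric>
#include <string>
#include <vector>

int main() {
    std::vector<int> v{1, 2, 3, 4, 5};

    // count_if in disguise: the condition lives inside the binary operation.
    int evens = std::accumulate(v.begin(), v.end(), 0,
        [](int count, int x) { return x % 2 == 0 ? count + 1 : count; });

    // Accumulating into a type different from the elements: ints -> string.
    std::string joined = std::accumulate(v.begin(), v.end(), std::string{},
        [](std::string acc, int x) {
            return acc.empty() ? std::to_string(x)
                               : std::move(acc) + ", " + std::to_string(x);
        });

    std::cout << evens << '\n' << joined << '\n';  // 2 and "1, 2, 3, 4, 5"
}
```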
2. Object lifetime
Object lifetime is one of those basic topics that starts out very chill with "an object's lifetime starts at its declaration and ends when it goes out of scope" but suddenly escalates into "in some situations temporary objects are created when a prvalue is materialized so that it can be used as a glvalue" (temporary materialization, yes - there is such a thing; excellent stuff to throw at interviewees).
What we need to know is this: every object has a well-defined lifetime. That is one of the core C++ features. It is central to RAII and RRID (or DIRR; I admit I hadn't heard of RRID until now - I found it on the MSDN Object lifetime page), and you should always be aware of the lifetime of your objects.
A couple of details I find important. Object lifetime is tightly coupled to storage duration. Destruction of objects in a scope, in a translation unit, of member objects, etc. happens in LIFO order - the last defined is the first to be destroyed. The order of creation and destruction of global objects across different translation units is not specified. Globals are evil, right? Beware of temporaries - if they are not bound to a reference or assigned to a variable, they are destroyed at the end of the full expression (which is what makes a discarded std::async call block), and they often get optimized away by the compiler entirely - [guaranteed] copy elision.
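A small sketch of two of those points (my illustration; it relies on the standard behavior that the future returned by std::async waits for the task in its destructor):

```cpp
#include <chrono>
#include <cstdio>
#include <future>
#include <thread>

struct Tracer {
    const char* name;
    explicit Tracer(const char* n) : name(n) { std::printf("construct %s\n", name); }
    ~Tracer() { std::printf("destroy %s\n", name); }
};

int main() {
    // LIFO destruction: destroyed in reverse order of definition - c, b, a.
    Tracer a{"a"}, b{"b"}, c{"c"};

    // The returned std::future is not stored, so the temporary is destroyed
    // at the end of this full expression - and its destructor waits for the
    // task, making the "async" call effectively synchronous.
    std::async(std::launch::async, [] {
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
        std::puts("task done");
    });
    std::puts("this prints only after the task has finished");
}
```

(Newer standards warn about discarding the returned future - which is exactly the trap being illustrated.)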
3. Virtual function call
We all probably know that virtual function calls are slower. What a virtual call does on top of a direct call is look up the needed function pointer in the virtual table. The pointer to the virtual table typically sits at the beginning of the object, and the function pointers are found at an offset inside the table. That offset is hard-coded at compile time, so the virtuality basically boils down to a pointer plus an offset, as opposed to a fixed memory address for normal member functions. Under the covers a normal member function is usually lowered to a global function with an additional T* (the this pointer) as its first argument, so it has a single instance at a fixed address in memory.
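Roughly, and assuming the common vtable-based implementation (the standard does not mandate one), a virtual call lowers to something like this hand-rolled equivalent:

```cpp
#include <cstdio>

// A hand-rolled approximation of virtual dispatch, only to illustrate the
// "pointer to a table + fixed offset" idea; real ABIs differ in the details.
struct Shape;

struct VTable {
    void (*draw)(Shape*);   // slot 0: the offset inside the table is known at compile time
};

struct Shape {
    const VTable* vptr;     // the hidden pointer at the beginning of the object
    int id;
};

void draw_circle(Shape* self) { std::printf("circle %d\n", self->id); }

const VTable circle_vtable{&draw_circle};

int main() {
    Shape c{&circle_vtable, 42};

    // "c.draw()" with virtual dispatch roughly becomes: load the vptr,
    // index the known slot, call through the function pointer.
    c.vptr->draw(&c);

    // A non-virtual member function would instead become a direct call to a
    // known address, e.g. draw_circle(&c), with no table lookup at all.
}
```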
Since the exact vtable is not known at compile time, determining the proper one at run time obviously comes with some overhead. In the wonderful Big-O world where processors have an unlimited number of registers, compilers don't optimize away half the code base (watch clang devirtualize my example), and everything is performance critical, this overhead probably matters.
In reality, this overhead is probably negligible unless it sits in that ultra-high-frequency, performance-critical section of your code. But as usual with performance - never guess, measure. Micro-benchmarking such things is usually more art than science, though, so good luck.
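If you do want to measure, a toy like the following (a hypothetical sketch, not the example from the post; the optimizer may well devirtualize or inline everything, which is part of why such numbers deserve suspicion) is where people usually start:

```cpp
#include <chrono>
#include <cstdio>
#include <memory>
#include <vector>

struct Base {
    virtual ~Base() = default;
    virtual int f(int x) const = 0;
};
struct Derived : Base {
    int f(int x) const override { return x + 1; }
};

int main() {
    // Sum via virtual calls through base pointers, timed with <chrono>.
    std::vector<std::unique_ptr<Base>> objs;
    for (int i = 0; i < 1000; ++i) objs.push_back(std::make_unique<Derived>());

    auto start = std::chrono::steady_clock::now();
    long long sum = 0;
    for (int rep = 0; rep < 10000; ++rep)
        for (const auto& o : objs) sum += o->f(rep);
    auto end = std::chrono::steady_clock::now();

    auto us = std::chrono::duration_cast<std::chrono::microseconds>(end - start).count();
    std::printf("sum=%lld, took %lld us\n", sum, (long long)us);
}
```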