Exercise | Name | Learnings |
---|---|---|
ex00 | 🐾 Animal | - Subtype polymorphism |
ex01 | 🧠 Animal | - Deep copy vs. shallow copy |
ex02 | 💭 Animal | - Abstract classes |
ex03 | 🔮 Materia | - Interfaces<br>- Value-initialization of arrays<br>- Copy-and-swap idiom<br>- Move semantics<br>- Templates and template specialization<br>- Reference counting<br>- Overloading new and delete<br>- Dynamic dispatch, vTable and vPtr<br>- Dynamic cast |
The copy-and-swap idiom elegantly assists the copy assignment operator in achieving two things: avoiding code duplication, and providing a strong exception guarantee.
```cpp
Class& operator=(Class other);
```

- Take the argument by value (which creates a copy using the copy constructor).
- Swap the contents of `*this` with the parameter (which mustn't throw).
- Let the destructor clean up the old data when the parameter goes out of scope.
The main benefit of the copy-and-swap idiom is that it provides a strong exception guarantee.
In the copy assignment operator, this means that all operations that could throw an exception (the copying) happen before the object being assigned to is modified.
It also helps to reuse code between the copy constructor and copy assignment operator.
Since assignment essentially boils down to deletion of the original value and copying over the new value, this idiom allows us to reuse the copy constructor and the destructor.
- Pure virtual functions, abstract base classes, and interface classes
- When `__cxa_pure_virtual` is just a different flavor of SEGFAULT
https://isocpp.org/wiki/faq/strange-inheritance#calling-virtuals-from-ctors
First, let's have a class `A` which prints something in the constructor, and something in the destructor.
When we `new A[42]`, we see the constructor print 42 times. So far so normal.
But how is it that when we pass that memory address around and eventually `delete[]` it, the program knows at runtime how many destructors have to be called?
The answer: `new[]` not only allocates memory for `N` objects, but also for 1 extra `size_t`.
That `size_t` stores the number of objects that follow, and it sits at the very beginning of the allocated block of memory.
With that `size_t` the program knows how many destructors to call.
The pointer that `new[]` returns, though, starts at the first actual object, not at the `size_t`. That's important so we can do `array[0]` to access the first element.
But let's keep in mind that the actually allocated memory block starts 1 `size_t` before the pointer that we got!
That is why we have `delete[]`.
The main difference between `delete[]` and `delete` is that `delete[]` frees the memory 1 `size_t` before the address that we give it, while `delete` frees exactly the address we give it.
That explains why we get an invalid free when we pass a pointer that was `new[]`ed to `delete` instead of `delete[]`: `delete[]` assumes the actual memory address to free is at an offset, while `delete` does not.
It is exactly the same thing as doing this in C:

```c
char *ptr = malloc(10);
ptr++;
free(ptr); /* invalid free: not the address malloc returned */
```
> **Note**
> This is called "over-allocation", and it is a common technique for implementing all sorts of memory-tracking mechanisms, like reference counting.
>
> Source: How do compilers use "over-allocation" to remember the number of elements in an allocated array?
```cpp
#include <cstddef>
#include <cstdlib>
#include <iostream>
#include <new>
#include <sstream>

class A {
public:
    A() : _ptr(new char[42]) {}
    ~A() { delete[] _ptr; }

private:
    char* _ptr;
};

int main(int argc, char* argv[])
{
    int amount = 0;
    if (argc > 1) {
        std::istringstream(argv[1]) >> amount;
    }
    A* array = new A[amount];
    // Peek at the over-allocated size_t that sits just before the array.
    std::cout << "newed size: " << *(reinterpret_cast<std::size_t*>(array) - 1)
              << '\n';
    /* Correct */
    delete[] array;
    /* Invalid free */
    // delete array;
    /* Segmentation fault (no object with destructor at that address) */
    // delete (reinterpret_cast<std::size_t*>(array) - 1);
    /* No segmentation fault (pure free, no destructor call) */
    // operator delete(reinterpret_cast<std::size_t*>(array) - 1);
}
```