diff --git a/README.md b/README.md
index 95f63a4..faec616 100644
--- a/README.md
+++ b/README.md
@@ -10,7 +10,7 @@ objects before giving them to another caller.
 
 We avoid false sharing by keeping a high amount of work per thread. This should lead
 to cache lines not being shared between threads. While this pool uses a hashmap
-and a pivot to make `returnPtr(ptr)` extremely fast, the construction's bottleneck is
+and a pivot to make `returnPtr(ptr)` extremely fast, the construction's main bottleneck is
 in the locking and unlocking of the hashmap's mutex. We need to do this since we
 cannot write in a `std::unordered_map` at different hashes concurrently.
 
@@ -40,7 +40,8 @@ Time (milliseconds) required for real allocations when constructing pool: 9
 ```
 
 This trivial example shows some performance improvements that would be much more
-important should the allocation and construction of the objects be more complex.
+important should the allocation and construction/destruction of the objects be more
+complex.
 
 ## Safety
 AddressSanitizer, LeakSanitizer and ThreadSanitizer have been used to ensure the safety
diff --git a/allocPool.hpp b/allocPool.hpp
index 1425656..e542c58 100644
--- a/allocPool.hpp
+++ b/allocPool.hpp
@@ -43,7 +43,7 @@ public:
 
   void returnPtr(T *ptr) {
     size_t pos = positionMap[ptr];
-    (vec[pos])->reset();
+    ptr->reset();
     std::swap(vec[pos], vec[pivot]);
     positionMap[vec[pos]] = pos;
    positionMap[vec[pivot]] = pivot;
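
For context on the `returnPtr` change: `pos` is looked up from `ptr`, so before the swap `vec[pos]` and `ptr` refer to the same object, and calling `reset()` through `ptr` is equivalent while skipping the extra indirection through the vector. The sketch below is a minimal, self-contained illustration of the pivot-based return path the README describes; only `returnPtr`, `vec`, `pivot`, and `positionMap` come from the diff, while the mutex name (`mapMutex`), the `getPtr` counterpart, raw-pointer storage, and the "available vs. borrowed" layout are assumptions made for illustration.

```cpp
#include <cstddef>
#include <mutex>
#include <unordered_map>
#include <utility>
#include <vector>

// Minimal sketch of the pivot-based pool, for illustration only.
// From the diff: returnPtr, vec, pivot, positionMap. Assumed: mapMutex,
// getPtr, raw-pointer storage, and the layout "[0, pivot) available,
// [pivot, end) borrowed". T is expected to provide reset(), as in the diff.
template <typename T>
class PoolSketch {
public:
    explicit PoolSketch(std::size_t n) {
        vec.reserve(n);
        for (std::size_t i = 0; i < n; ++i) {
            vec.push_back(new T());
            positionMap[vec[i]] = i;
        }
        pivot = n; // everything starts out available
    }

    PoolSketch(const PoolSketch &) = delete;
    PoolSketch &operator=(const PoolSketch &) = delete;

    ~PoolSketch() {
        for (T *p : vec) delete p;
    }

    // Assumed counterpart to returnPtr: hand out one available object.
    T *getPtr() {
        std::lock_guard<std::mutex> lock(mapMutex);
        if (pivot == 0) return nullptr; // pool exhausted
        --pivot;                        // vec[pivot] becomes the newest borrowed slot
        return vec[pivot];
    }

    // O(1) return path mirroring the hunk above: reset the object, swap it to
    // the available/borrowed boundary, and patch both hashmap entries. The
    // lock around the std::unordered_map is the bottleneck the README mentions.
    void returnPtr(T *ptr) {
        std::lock_guard<std::mutex> lock(mapMutex);
        std::size_t pos = positionMap[ptr];
        ptr->reset(); // same object as vec[pos] before the swap
        std::swap(vec[pos], vec[pivot]);
        positionMap[vec[pos]] = pos;
        positionMap[vec[pivot]] = pivot;
        ++pivot; // the returned slot rejoins the available range
    }

private:
    std::vector<T *> vec;                              // pooled objects
    std::unordered_map<T *, std::size_t> positionMap;  // object -> index in vec
    std::size_t pivot = 0;                             // available/borrowed boundary
    std::mutex mapMutex;
};
```

Usage in this sketch is simply `T *obj = pool.getPtr(); /* ... */ pool.returnPtr(obj);`. The swap keeps the available and borrowed ranges contiguous, so a return costs one hashmap lookup, one swap, and two map writes; consistent with the README, the mutex guarding `positionMap`, not the swap itself, is what dominates the cost.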