How do I deal with memory leaks?
By writing code that doesn’t have any. Clearly, if your code has new operations, delete operations, and pointer arithmetic all over the place, you are going to mess up somewhere and get leaks, stray pointers, etc. This is true independently of how conscientious you are with your allocations: eventually the complexity of the code will overcome the time and effort you can afford.
It follows that successful techniques rely on hiding allocation and deallocation inside more manageable types. For single objects, prefer make_shared. For multiple objects, prefer using standard containers like vector and unordered_map, as they manage memory for their elements better than you could without disproportionate effort. Consider writing this without the help of string and vector:
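The code sample that followed was lost in extraction; Stroustrup’s example is a tiny program that reads whitespace-separated words, sorts them, and concatenates them. A sketch along those lines (the exact original may differ slightly):

```cpp
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

// Sort the words and join them with '+'. Note: no explicit memory
// management, no casts, no overflow checks -- string and vector
// do all of the bookkeeping.
std::string concat_sorted(std::vector<std::string> v)
{
    std::sort(v.begin(), v.end());
    std::string cat;
    for (const auto& w : v) cat += w + "+";   // the string grows as needed
    return cat;
}

// Reading the words is equally allocation-free in user code:
std::vector<std::string> read_words(std::istream& in)
{
    std::vector<std::string> v;               // the vector grows as needed
    std::string s;
    while (in >> s) v.push_back(s);
    return v;
}
```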
What would be your chance of getting it right the first time? And how would you know you didn’t have a leak?
Note the absence of explicit memory management, macros, casts, overflow checks, explicit size limits, and pointers. By using a function object and a standard algorithm, the code could additionally have eliminated the pointer-like use of the iterator, but that seemed overkill for such a tiny program.
These techniques are not perfect and it is not always easy to use them systematically. However, they apply surprisingly widely and by reducing the number of explicit allocations and deallocations you make the remaining examples much easier to keep track of. As early as 1981, Stroustrup pointed out that by reducing the number of objects that he had to keep track of explicitly from many tens of thousands to a few dozens, he had reduced the intellectual effort needed to get the program right from a Herculean task to something manageable, or even easy.
If your application area doesn’t have libraries that make programming that minimizes explicit memory management easy, then the fastest way of getting your program complete and correct might be to first build such a library.
Templates and the standard libraries make this use of containers, resource handles, etc., much easier than it was even a few years ago. The use of exceptions makes it close to essential.
If you cannot handle allocation/deallocation implicitly as part of an object you need in your application anyway, you can use a resource handle to minimize the chance of a leak. Here is an example where you need to return an object allocated on the free store from a function. This is an opportunity to forget to delete that object. After all, we cannot tell just by looking at a pointer whether it needs to be deallocated and, if so, who is responsible for that. Using a resource handle, here the standard library unique_ptr, makes it clear where the responsibility lies:
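The example itself did not survive extraction; a minimal sketch (the factory name make_X and the member are illustrative):

```cpp
#include <memory>

struct X { int val = 0; };

// Returning unique_ptr makes ownership explicit: whoever ends up
// holding the handle releases the object, automatically, when the
// handle is destroyed.
std::unique_ptr<X> make_X()
{
    auto p = std::make_unique<X>();
    p->val = 7;   // illustrative setup work
    return p;
}
```

The caller simply writes `auto px = make_X();` and never mentions delete.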
Think about resources in general, rather than simply about memory.
If systematic application of these techniques is not possible in your environment (you have to use code from elsewhere, part of your program was written by Neanderthals, etc.), be sure to use a memory leak detector as part of your standard development procedure, or plug in a garbage collector.
Can I use new just as in Java?
Sort of, but don’t do it blindly; if you do want it, prefer to spell it as make_shared, and there are often superior alternatives that are simpler and more robust than any of that. Consider:
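The code sample is missing; the FAQ’s example contrasts a local variable with a Java-style heap allocation, roughly like this (cmplx stands in for std::complex&lt;double&gt;, and f is an illustrative function):

```cpp
#include <complex>

using cmplx = std::complex<double>;

cmplx f(cmplx z) { return z * 2.0; }   // illustrative

void compute(cmplx z, double d)
{
    cmplx z2 = z + d;                  // C++ style: a local variable
    z2 = f(z2);

    cmplx& z3 = *new cmplx(z + d);     // Java style
    z3 = f(z3);
    delete &z3;                        // easy to forget, not exception-safe
}
```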
The clumsy use of z3 is unnecessary and slow compared with the idiomatic use of a local variable (z2). You don’t need to use new to create an object if you also delete that object in the same scope; such an object should be a local variable.
Should I use NULL or 0 or nullptr?
You should use nullptr as the null pointer value. The others still work for backward compatibility with older code.
A problem with both NULL and 0 as a null pointer value is that 0 is a special “maybe an integer value and maybe a pointer” value. Use nullptr for pointers and 0 only for integers, and that confusion disappears.
Does delete p delete the pointer p, or the pointed-to data *p?
The pointed-to data. The keyword should really be delete_the_thing_pointed_to_by. The same abuse of English occurs when freeing the memory pointed to by a pointer in C: free(p) really means free_the_memory_pointed_to_by(p).
Is it safe to delete the same pointer twice?
No! (Assuming you didn’t get that pointer back from new in between.)
For example, the following is a disaster:
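The code sample is missing; the disaster is a plain double delete, sketched here (the second delete is commented out so the sketch stays compilable and runnable; the live counter is illustrative bookkeeping):

```cpp
struct Foo {
    static inline int live = 0;   // demo-only bookkeeping
    Foo()  { ++live; }
    ~Foo() { --live; }
};

void yourCode()
{
    Foo* p = new Foo();
    delete p;      // fine: destructor runs, memory is released
    // delete p;   // DISASTER! deleting the same pointer twice
}
```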
That second delete p line might do some really bad things to you. It might, depending on the phase of the moon, corrupt your heap, crash your program, make arbitrary and bizarre changes to objects that are already out there on the heap, etc. Unfortunately these symptoms can appear and disappear randomly. According to Murphy’s law, you’ll be hit the hardest at the worst possible moment (when the customer is looking, when a high-value transaction is trying to post, etc.).
Note: some runtime systems will protect you from certain very simple cases of double delete. Depending on the details, you might be okay if you happen to be running on one of those systems and if no one ever deploys your code on another system that handles things differently and if you are deleting something that doesn’t have a destructor and if you don’t do anything significant between the two deletes and if no one ever changes your code to do something significant between the two deletes and if your thread scheduler (over which you likely have no control!) doesn’t happen to swap threads between the two deletes and if, and if, and if. So back to Murphy: since it can go wrong, it will, and it will go wrong at the worst possible moment.
Do NOT email me saying you tested it and it doesn’t crash. Get a clue. A non-crash doesn’t prove the absence of a bug; it merely fails to prove the presence of a bug.
Trust me: double-delete is bad, bad, bad. Just say no.
Can I free() pointers allocated with new? Can I delete pointers allocated with malloc()?
No! In brief, conceptually malloc() and new allocate from different heaps, so they can’t delete each other’s memory. They also operate at different levels – raw memory vs. constructed objects.
You can use malloc() and new in the same program. But you cannot allocate an object with malloc() and free it using delete. Nor can you allocate with new and deallocate with free(), or use realloc() on an array allocated by new.
The C++ operators new and delete guarantee proper construction and destruction; where constructors or destructors need to be invoked, they are. The C-style functions malloc(), calloc(), free(), and realloc() don’t ensure that. Furthermore, there is no guarantee that the mechanism used by new and delete to acquire and release raw memory is compatible with malloc() and free(). If mixing styles works on your system, you were simply “lucky” – for now.
If you feel the need for realloc() – and many do – then consider using a standard library vector. For example, as you push_back into a vector, it expands as needed.
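A sketch of the vector alternative (the function name fill is illustrative):

```cpp
#include <vector>

// push_back re-allocates the vector's storage automatically when its
// capacity runs out -- the realloc()-style bookkeeping disappears
// from user code entirely.
std::vector<int> fill(int n)
{
    std::vector<int> v;
    for (int i = 0; i < n; ++i)
        v.push_back(i);   // the vector expands as needed
    return v;
}
```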
See also the examples and discussion in “Learning Standard C++ as a New Language”, which you can download from Stroustrup’s publications list.
What is the difference between new and malloc()?
Smart pointers (e.g., created by make_shared) are nearly always superior to both new and malloc(), and completely eliminate the need for explicit delete and free().
Having said that, here’s the difference between those two:
malloc() is a function that takes a number (of bytes) as its argument; it returns a void* pointing to uninitialized storage.
new is an operator that takes a type and (optionally) a set of initializers for that type as its arguments; it returns a pointer to an (optionally) initialized object of its type. The difference is most obvious when you want to allocate an object of a user-defined type with non-trivial initialization semantics. Examples:
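The examples were lost in extraction; a sketch of the contrast (make_greeting and make_int are illustrative names):

```cpp
#include <cstdlib>
#include <string>

// new takes a type plus initializers and runs the constructor:
std::string* make_greeting() { return new std::string("hello"); }

// malloc takes a byte count and returns a void* to uninitialized
// storage; no constructor runs, and we must cast and initialize by hand:
int* make_int()
{
    int* p = static_cast<int*>(std::malloc(sizeof(int)));
    *p = 42;
    return p;
}
```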
Note that when you specify an initializer using the “(value)” notation, you get initialization with that value. Often, a vector is a better alternative to a free-store-allocated array (e.g., consider exception safety).
Whenever you use malloc() you must consider initialization and conversion of the return pointer to a proper type. You will also have to consider whether you got the number of bytes right for your use. There is no performance difference between malloc() and new when you take initialization into account.
malloc() reports memory exhaustion by returning 0. new reports allocation and initialization errors by throwing exceptions (std::bad_alloc).
Objects created by new are destroyed by delete. Areas of memory allocated by malloc() are deallocated by free().
Why should I use new instead of trustworthy old malloc()?
Smart pointers (e.g., created by make_shared) are nearly always superior to both new and malloc(), and completely eliminate the need for explicit delete and free().
Having said that, the benefits of using new instead of malloc() are: constructors/destructors, type safety, and overridability.
- Constructors/destructors: unlike malloc(sizeof(Fred)), new Fred() calls Fred’s constructor. Similarly, delete p calls *p’s destructor.
- Type safety: malloc() returns a void*, which isn’t type safe. new Fred() returns a pointer of the right type (a Fred*).
- Overridability: new is an operator that can be overridden by a class, while malloc() is not overridable on a per-class basis.
Can I use realloc() on pointers allocated via new?
No! When realloc() has to copy the allocation, it uses a bitwise copy operation, which will tear many C++ objects to shreds. C++ objects should be allowed to copy themselves: they use their own copy constructor or assignment operator.
Besides all that, the heap that new uses may not be the same as the heap that malloc() and realloc() use!
Why doesn’t C++ have an equivalent to realloc()?
If you want to, you can of course use realloc(). However, realloc() is only guaranteed to work on arrays allocated by malloc() (and similar functions) containing objects without user-defined copy constructors. Also, please remember that contrary to naive expectations, realloc() occasionally does copy its argument array.
In C++, a better way of dealing with reallocation is to use a standard library container, such as vector, and let it grow naturally.
Do I need to check for null after p = new Fred()?
No! (But if you have an ancient, stone-age compiler, you may have to force the new operator to throw an exception if it runs out of memory.)
It turns out to be a real pain to always write explicit nullptr tests after every new allocation. Code like the following is very tedious:
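The tedious code looked something like this sketch:

```cpp
#include <new>

class Fred { /* ... */ };

Fred* tediousCreate()
{
    Fred* p = new Fred();
    if (p == nullptr)             // utterly redundant ...
        throw std::bad_alloc();   // ... a conforming new already throws
    return p;
}
```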
If your compiler doesn’t support (or if you refuse to use) exceptions, your code might be even more tedious:
Take heart. In C++, if the runtime system cannot allocate sizeof(Fred) bytes of memory during p = new Fred(), a std::bad_alloc exception will be thrown. Unlike malloc(), new never returns null!
Therefore you should simply write:
On second thought, scratch that. You should simply write:
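That is, prefer a smart pointer so there is nothing to check and nothing to delete (a sketch):

```cpp
#include <memory>

class Fred { /* ... */ };

std::unique_ptr<Fred> createFred()
{
    return std::make_unique<Fred>();   // no new, no delete, no null test
}
```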
There, there… Much better now!
However, if your compiler is ancient, it may not yet support this. Find out by checking your compiler’s documentation under “new”. If it is ancient, you may have to force the compiler to have this behavior.
How can I convince my (older) compiler to automatically check new to see if it returns null?
Eventually your compiler will.
If you have an old compiler that doesn’t automagically perform the null test, you can force the runtime system to do the test by installing a “new handler” function. Your “new handler” function can do anything you want, such as throw an exception, delete some objects and return (in which case operator new will retry the allocation), print a message and abort() the program, etc.
Here’s a sample “new handler” that prints a message and throws an exception. The handler is installed using std::set_new_handler(). After the std::set_new_handler() line is executed, operator new will call your myNewHandler() if/when it runs out of memory. This means that new will never return null:
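The sample handler was lost in extraction; a sketch of it (alloc_error and myNewHandler are the names used in the surrounding text, reconstructed):

```cpp
#include <exception>
#include <iostream>
#include <new>

class alloc_error : public std::exception {
public:
    const char* what() const noexcept override { return "out of memory"; }
};

void myNewHandler()
{
    std::cerr << "Attempt to allocate memory failed!\n";
    throw alloc_error();
}

// Somewhere early, e.g. at the top of main():
//     std::set_new_handler(myNewHandler);
// From then on, operator new calls myNewHandler() when it runs out of memory.
```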
Note: If your compiler doesn’t support exception handling, you can, as a last resort, change the line that throws the exception into a call of abort().
Note: If some namespace-scope / global / static object’s constructor uses new, it might not use the myNewHandler() function since that constructor often gets called before main() begins. Unfortunately there’s no convenient way to guarantee that std::set_new_handler() will be called before the first use of new. For example, even if you put the std::set_new_handler() call in the constructor of a global object, you still don’t know if the module (“compilation unit”) that contains that global object will be elaborated first or last or somewhere in between. Therefore you still don’t have any guarantee that your call of std::set_new_handler() will happen before any other namespace-scope / global’s constructor gets invoked.
Do I need to check for null before delete p?
No! The C++ language guarantees that delete p will do nothing if p is null. Since you might get the test backwards, and since most testing methodologies force you to explicitly test every branch point, you should not put in the redundant if test.
What are the two steps that happen when I say delete p?
delete p is a two-step process: it calls the destructor, then releases the memory. The code generated for delete p is functionally similar to this (assuming p is of type Fred*):
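A sketch of that generated code (the Fred class and its live counter are illustrative):

```cpp
struct Fred {
    static inline int live = 0;   // demo-only bookkeeping
    Fred()  { ++live; }
    ~Fred() { --live; }
};

// Roughly what the compiler generates for "delete p":
void sketchOfDelete(Fred* p)
{
    if (p != nullptr) {        // deleting a null pointer does nothing
        p->~Fred();            // step 1: run the destructor
        operator delete(p);    // step 2: release the raw memory
    }
}
```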
The statement p->~Fred() calls the destructor for the Fred object pointed to by p. The statement operator delete(p) calls the memory deallocation primitive, void operator delete(void* p). This primitive is similar in spirit to free(void* p). (Note, however, that these two are not interchangeable; e.g., there is no guarantee that the two memory deallocation primitives even use the same heap!)
Why doesn’t delete null out its operand?
First, you should normally be using smart pointers, so you won’t care – you won’t be writing delete at all.
For those rare cases where you really are doing manual memory management and so do care, consider:
If the code between two delete p; statements doesn’t touch p, then the second delete p; is a serious error that a C++ implementation cannot effectively protect itself against (without unusual precautions). Since deleting a null pointer is harmless by definition, a simple solution would be for delete p; to do a p = nullptr; after it has done whatever else is required. However, C++ doesn’t guarantee that.
One reason is that the operand of delete need not be an lvalue. Consider:
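The missing example involves rvalue operands; a compilable sketch (the factory f and the live counter are illustrative):

```cpp
struct Fred {
    static inline int live = 0;   // demo-only bookkeeping
    Fred()  { ++live; }
    ~Fred() { --live; }
};

Fred* f() { return new Fred(); }  // illustrative factory

void g()
{
    delete f();   // the operand is an rvalue: there is no lvalue to null out
    // Similarly, delete (p + 1) on some pointer p would have an rvalue operand.
}
```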
Here, the implementation of delete does not have a pointer that it can null out. These examples may be rare, but they do imply that it is not possible to guarantee that “any pointer to a deleted object is null.” A simpler way of bypassing that “rule” is to have two pointers to an object: deleting through one pointer could null out that pointer, but the other pointer would still point at the destroyed object.
C++ explicitly allows an implementation of delete to null out an lvalue operand, but that idea doesn’t seem to have become popular with implementers.
If you consider zeroing out pointers important, consider using a destroy function:
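A sketch of such a destroy function:

```cpp
template<class T>
void destroy(T*& p)   // the pointer is passed by reference ...
{
    delete p;
    p = nullptr;      // ... so the caller's own copy is nulled out
}
```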
Consider this yet another reason to minimize explicit use of delete by relying on standard library smart pointers, containers, handles, etc.
Note that passing the pointer as a reference (to allow the pointer to be nulled out) has the added benefit of preventing destroy() from being called for an rvalue:
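That is, with destroy taking a T*&, calls on rvalues simply don’t compile (a sketch; the failing calls are shown commented out, and f is illustrative):

```cpp
template<class T>
void destroy(T*& p) { delete p; p = nullptr; }

int* f();   // illustrative function returning a pointer

void demo(int*& p)
{
    // destroy(f());      // error: an rvalue can't bind to T*&
    // destroy(p + 1);    // error: likewise an rvalue
    destroy(p);           // fine: p is an lvalue
}
```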
Why isn’t the destructor called at the end of scope?
The simple answer is “of course it is!”, but have a look at the kind of example that often accompanies that question:
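The accompanying example usually looks like this sketch (the live counter is illustrative bookkeeping):

```cpp
struct X {
    static inline int live = 0;   // demo-only bookkeeping
    X()  { ++live; }
    ~X() { --live; }
};

void f()
{
    X* p = new X();
    // ... use p ...
}   // only the pointer p dies here; the X it points to is leaked
```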
That is, there was some (mistaken) assumption that the object created by new would be destroyed at the end of the function.
Basically, you should only use heap allocation if you want an object to live beyond the lifetime of the scope you create it in. Even then, you should normally use a smart pointer created by make_shared. In those rare cases where you do want heap allocation and you opt to use plain new, you need to use delete to destroy the object. For example:
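A sketch of that rare plain-new case (g, h, and the live counter are illustrative):

```cpp
struct X {
    static inline int live = 0;   // demo-only bookkeeping
    X()  { ++live; }
    ~X() { --live; }
};

X* g()
{
    return new X();   // the object must outlive g(): ownership passes out
}

void h()
{
    X* q = g();
    // ... use *q ...
    delete q;         // easy to forget; a unique_ptr would do it for us
}
```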
If you want an object to live in a scope only, don’t use heap allocation at all but simply define a variable:
The variable is implicitly destroyed at the end of the scope.
Code that creates an object using new and then deletes it at the end of the same scope is ugly, error-prone, inefficient, and usually not exception-safe. For example:
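For instance, this sketch is not exception-safe: if the code between new and delete throws, the X leaks (fct and the live counter are illustrative):

```cpp
struct X {
    static inline int live = 0;   // demo-only bookkeeping
    X()  { ++live; }
    ~X() { --live; }
};

void fct()
{
    X* p = new X();
    // ... if anything here throws, the delete below never runs ...
    delete p;
}
```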
In p = new Fred(), does the Fred memory “leak” if the Fred constructor throws an exception?
If an exception occurs during the Fred constructor of p = new Fred(), the C++ language guarantees that the memory – the sizeof(Fred) bytes that were allocated – will automagically be released back to the heap.
Here are the details:
new Fred() is a two-step process:
- sizeof(Fred) bytes of memory are allocated using the primitive void* operator new(size_t nbytes). This primitive is similar in spirit to malloc(size_t nbytes). (Note, however, that these two are not interchangeable; e.g., there is no guarantee that the two memory allocation primitives even use the same heap!)
- It constructs an object in that memory by calling the Fred constructor. The pointer returned from the first step is passed as the this parameter to the constructor. This step is wrapped in a try/catch block to handle the case when an exception is thrown during this step.
Thus the actual generated code is functionally similar to:
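A sketch of that generated code (createFred is an illustrative wrapper around what the compiler emits inline):

```cpp
#include <new>

class Fred { /* ... */ };

Fred* createFred()   // functionally similar to: Fred* p = new Fred();
{
    void* tmp = operator new(sizeof(Fred));   // step 1: allocate raw memory
    try {
        Fred* p = new(tmp) Fred();   // step 2: "placement new" runs the ctor
        return p;
    }
    catch (...) {
        operator delete(tmp);   // the ctor threw: release the memory ...
        throw;                  // ... and rethrow the ctor's exception
    }
}
```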
The statement marked “Placement new” calls the Fred constructor. The pointer p becomes the this pointer inside the constructor, Fred::Fred().
How do I allocate / unallocate an array of things?
Use p = new T[n] to allocate, and delete[] p to deallocate.
Any time you allocate an array of objects via new (usually with the [n] in the new expression), you must use [] in the delete statement. This syntax is necessary because there is no syntactic difference between a pointer to a thing and a pointer to an array of things (something we inherited from C).
What if I forget the [] when deleteing an array allocated via new T[n]?
All life comes to a catastrophic end.
It is the programmer’s – not the compiler’s – responsibility to get the connection between new T[n] and delete[] p correct. If you get it wrong, neither a compile-time nor a run-time error message will be generated by the compiler. Heap corruption is a likely result. Or worse. Your program will probably die.
Can I drop the [] when deleteing an array of some built-in type (char, int, etc.)?
No! Sometimes programmers think that the [] in delete[] p only exists so the compiler will call the appropriate destructors for all elements in the array. Because of this reasoning, they assume that an array of some built-in type such as char or int can be deleted without the []. E.g., they assume the following is valid code:
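The invalid code in question (the wrong delete is shown commented out so the sketch stays runnable):

```cpp
void userCode(int n)
{
    char* p = new char[n];
    // ...
    // delete p;    // ERROR! it must be:
    delete[] p;
}
```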
But the above code is wrong, and it can cause a disaster at runtime. In particular, the code that’s called for delete p is operator delete(void*), but the code that’s called for delete[] p is operator delete[](void*). The default behavior for the latter is to call the former, but users are allowed to replace the latter with a different behavior (in which case they would normally also replace the corresponding new code in operator new[](size_t)). If they replaced the delete[] code so it wasn’t compatible with the delete code, and you called the wrong one (i.e., if you said delete p rather than delete[] p), you could end up with a disaster at runtime.
After p = new Fred[n], how does the compiler know there are n objects to be destructed during delete[] p?
Short answer: Magic.
Long answer: The run-time system stores the number of objects, n, somewhere where it can be retrieved if you only know the pointer, p. There are two popular techniques that do this. Both these techniques are in use by commercial-grade compilers, both have tradeoffs, and neither is perfect. These techniques are:
- Over-allocate the array and put n just to the left of the first Fred object.
- Use an associative array with p as the key and n as the value.
Is it legal (and moral) for a member function to say delete this?
As long as you’re careful, it’s okay (not evil) for an object to commit suicide (delete this).
Here’s how I define “careful”:
- You must be absolutely 100% positively sure that this object was allocated via new (not by new[], nor by placement new, nor a local object on the stack, nor a namespace-scope / global, nor a member of another object; but by plain ordinary new).
- You must be absolutely 100% positively sure that your member function will be the last member function invoked on this object.
- You must be absolutely 100% positively sure that the rest of your member function (after the delete this line) doesn’t touch any piece of this object (including calling any other member functions or touching any data members). This includes code that will run in destructors for any objects allocated on the stack that are still alive.
- You must be absolutely 100% positively sure that no one even touches the this pointer itself after the delete this line. In other words, you must not examine it, compare it with another pointer, compare it with nullptr, print it, cast it, do anything with it.
Naturally the usual caveats apply in cases where your this pointer is a pointer to a base class when you don’t have a virtual destructor.
How do I allocate multidimensional arrays using new?
There are many ways to do this, depending on how flexible you want the array sizing to be. On one extreme, if you knowall the dimensions at compile-time, you can allocate multidimensional arrays statically (as in C):
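A sketch of the static case (sumStatic is an illustrative demonstration):

```cpp
// All dimensions known at compile time: no heap allocation at all.
double sumStatic()
{
    const unsigned nrows = 3;
    const unsigned ncols = 4;
    double matrix[nrows][ncols] = {};   // zero-initialized
    matrix[1][2] = 5.0;
    double sum = 0.0;
    for (unsigned i = 0; i < nrows; ++i)
        for (unsigned j = 0; j < ncols; ++j)
            sum += matrix[i][j];
    return sum;
}
```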
More commonly, the size of the matrix isn’t known until run-time but you know that it will be rectangular. In this case you need to use the heap (“freestore”), but at least you are able to allocate all the elements in one freestore chunk.
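A sketch of the rectangular run-time case: one chunk, indexed as i*ncols + j (sumRect is illustrative):

```cpp
double sumRect(unsigned nrows, unsigned ncols)
{
    double* matrix = new double[nrows * ncols]();   // one chunk, zeroed
    matrix[1 * ncols + 2] = 5.0;                    // "matrix[1][2]"
    double sum = 0.0;
    for (unsigned k = 0; k < nrows * ncols; ++k)
        sum += matrix[k];
    delete[] matrix;                                // note the []
    return sum;
}
```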
Finally, at the other extreme, you may not even be guaranteed that the matrix is rectangular. For example, if each row could have a different length, you’ll need to allocate each row individually. In the following function, ncols[i] is the number of columns in row number i, where i varies between 0 and nrows-1 inclusive:
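A sketch of that function (using double elements; the FAQ’s version uses a user-defined type):

```cpp
void manipulateArray(unsigned nrows, const unsigned ncols[])
{
    double** matrix = new double*[nrows];
    for (unsigned i = 0; i < nrows; ++i)
        matrix[i] = nullptr;        // so cleanup is safe if a later new throws
    try {
        for (unsigned i = 0; i < nrows; ++i)
            matrix[i] = new double[ncols[i]];
        // ... use matrix[i][j] ...
    }
    catch (...) {
        for (unsigned i = nrows; i > 0; --i)
            delete[] matrix[i-1];   // i-1 avoids unsigned wrap-around
        delete[] matrix;
        throw;
    }
    for (unsigned i = nrows; i > 0; --i)
        delete[] matrix[i-1];
    delete[] matrix;
}
```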
Note the funny use of matrix[i-1] in the deletion process. This prevents wrap-around of the unsigned value when i goes one step below zero.
Finally, note that pointers and arrays are evil. It is normally much better to encapsulate your pointers in a class that has a safe and simple interface. The following FAQ shows how to do this.
But the previous FAQ’s code is SOOOO tricky and error prone! Isn’t there a simpler way?
The reason the code in the previous FAQ was so tricky and error prone was that it used pointers, and we know that pointers and arrays are evil. The solution is to encapsulate your pointers in a class that has a safe and simple interface. For example, we can define a Matrix class that handles a rectangular matrix so our user code will be vastly simplified when compared to the rectangular matrix code from the previous FAQ:
The main thing to notice is the lack of clean-up code. For example, there aren’t any delete statements in the above code, yet there will be no memory leaks, assuming only that the Matrix destructor does its job correctly.
Here is the Matrix code that makes the above possible:
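A sketch of such a Matrix class (copying is simply forbidden here to keep the sketch short; the FAQ’s full version handles The Big Three, and Fred’s data member is illustrative):

```cpp
class Fred {
public:
    int val = 0;   // illustrative data
};

class Matrix {
public:
    Matrix(unsigned nrows, unsigned ncols)
        : data_(new Fred[nrows * ncols]), nrows_(nrows), ncols_(ncols) { }
    ~Matrix() { delete[] data_; }                 // the only clean-up code

    Fred& operator()(unsigned row, unsigned col)
        { return data_[row * ncols_ + col]; }
    const Fred& operator()(unsigned row, unsigned col) const
        { return data_[row * ncols_ + col]; }

    Matrix(const Matrix&) = delete;               // copying omitted in sketch
    Matrix& operator=(const Matrix&) = delete;

private:
    Fred*    data_;
    unsigned nrows_, ncols_;
};
```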
Note that the above Matrix class accomplishes two things: it moves some tricky memory management code from the user code (e.g., main()) to the class, and it reduces the overall bulk of the program. The latter point is important. For example, assuming Matrix is even mildly reusable, moving complexity from the users [plural] of Matrix to the author of Matrix itself [singular] is equivalent to moving complexity from the many to the few. Anyone who has seen Star Trek 2 knows that the good of the many outweighs the good of the few… or the one.
But the above Matrix class is specific to Fred! Isn’t there a way to make it generic?
Yep; just use templates:
Here’s how this can be used:
Now it’s easy to use Matrix&lt;T&gt; for things other than Fred. For example, the following uses a Matrix of std::string (std::string is the standard string class):
You can thus get an entire family of classes from a template. For example, Matrix&lt;Fred&gt;, Matrix&lt;std::string&gt;, Matrix&lt; Matrix&lt;std::string&gt; &gt;, etc.
Here’s one way that the template can be implemented:
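One sketch of the template (again, copying is disabled to keep the sketch short; the FAQ’s full version handles The Big Three):

```cpp
template<class T>
class Matrix {
public:
    Matrix(unsigned nrows, unsigned ncols)
        : data_(new T[nrows * ncols]), nrows_(nrows), ncols_(ncols) { }
    ~Matrix() { delete[] data_; }

    T& operator()(unsigned row, unsigned col)
        { return data_[row * ncols_ + col]; }
    const T& operator()(unsigned row, unsigned col) const
        { return data_[row * ncols_ + col]; }

    Matrix(const Matrix&) = delete;   // copying omitted in this sketch
    Matrix& operator=(const Matrix&) = delete;

private:
    T*       data_;
    unsigned nrows_, ncols_;
};
```

With this in place, `Matrix<std::string> m(20, 30); m(5, 7) = "hello";` works for any element type.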
What’s another way to build a Matrix template?
Use the standard vector template, and make a vector of vectors. The following uses a std::vector&lt;std::vector&lt;T&gt;&gt;:
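A sketch using a vector of vectors:

```cpp
#include <vector>

template<class T>
class Matrix {
public:
    Matrix(unsigned nrows, unsigned ncols)
        : data_(nrows, std::vector<T>(ncols)) { }

    T& operator()(unsigned row, unsigned col)
        { return data_[row][col]; }
    const T& operator()(unsigned row, unsigned col) const
        { return data_[row][col]; }

    // No destructor, copy constructor, or assignment operator needed:
    // std::vector supplies correct versions of all of The Big Three.
private:
    std::vector<std::vector<T>> data_;
};
```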
Note how much simpler this is than the previous: there is no explicit new in the constructor, and there is no need for any of The Big Three (destructor, copy constructor, or assignment operator). Simply put, your code is a lot less likely to have memory leaks if you use std::vector than if you use explicit new T[n] and delete[] p.
Note also that std::vector doesn’t force you to allocate numerous chunks of memory. If you prefer to allocate only one chunk of memory for the entire matrix, as was done in the previous, just change the type of data_ to std::vector&lt;T&gt; and add member variables nrows_ and ncols_. You’ll figure out the rest: initialize the vector with data_(nrows * ncols), change the subscript operator to return data_[row*ncols_ + col];, etc.
Does C++ have arrays whose length can be specified at run-time?
Yes, in the sense that the standard library has a
std::vector template that provides this behavior.
No, in the sense that built-in array types need to have their length specified at compile time.
Yes, in the sense that even built-in array types can specify the first index bounds at run-time. E.g., comparing with the previous FAQ, if you only need the first array dimension to vary then you can just ask new for an array of arrays, rather than an array of pointers to arrays:
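A sketch of an array of arrays (the second dimension must still be a compile-time constant; sumArray is illustrative):

```cpp
const unsigned ncols = 100;            // fixed at compile time

double sumArray(unsigned nrows)        // only the first dimension varies
{
    double (*matrix)[ncols] = new double[nrows][ncols];
    for (unsigned i = 0; i < nrows; ++i)
        for (unsigned j = 0; j < ncols; ++j)
            matrix[i][j] = 0.0;
    matrix[1][2] = 5.0;
    double sum = 0.0;
    for (unsigned i = 0; i < nrows; ++i)
        for (unsigned j = 0; j < ncols; ++j)
            sum += matrix[i][j];
    delete[] matrix;
    return sum;
}
```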
You can’t do this if you need anything other than the first dimension of the array to change at run-time.
But please, don’t use arrays unless you have to. Arrays are evil. Use some object of some class if you can. Use arrays only when you have to.
How can I force objects of my class to always be created via new rather than as local, namespace-scope, global, or static objects?
Use the Named Constructor Idiom.
As usual with the Named Constructor Idiom, the constructors are all protected, and there are one or more public static create() methods (the so-called “named constructors”), one per constructor. In this case the create() methods allocate the objects via new. Since the constructors themselves are not public, there is no other way to create objects of the class.
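A sketch of the Named Constructor Idiom for Fred (the data member and accessor are illustrative):

```cpp
class Fred {
public:
    // The named constructors -- the only way to make a Fred:
    static Fred* create()      { return new Fred(); }
    static Fred* create(int i) { return new Fred(i); }

    int value() const { return i_; }   // illustrative accessor

protected:
    // protected, not public: only create() and derived classes can call these
    Fred(int i = 0) : i_(i) { }

private:
    int i_;
};
```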
Now the only way to create Fred objects is via Fred::create().
Make sure your constructors are in the protected section if you expect Fred to have derived classes.
Note also that you can make another class a friend of Fred if you want to allow, say, a Wilma to have a member object of class Fred, but of course this is a softening of the original goal, namely to force Fred objects to be allocated via new.
How do I do simple reference counting?
If all you want is the ability to pass around a bunch of pointers to the same object, with the feature that the object will automagically get deleted when the last pointer to it disappears, you can use something like the following “smart pointer” class:
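The class itself was lost in extraction; a reconstruction along the lines of the FAQ’s original (details may differ):

```cpp
class FredPtr;

class Fred {
public:
    Fred() : count_(0) /* ... */ { }
    // ...
private:
    friend class FredPtr;   // a friend class
    unsigned count_;        // count_ must be initialized to 0 by all ctors;
                            // it is the number of FredPtrs pointing at this
};

class FredPtr {
public:
    Fred* operator-> () { return p_; }
    Fred& operator* ()  { return *p_; }
    FredPtr(Fred* p) : p_(p) { ++p_->count_; }   // p must not be null
    ~FredPtr()          { if (--p_->count_ == 0) delete p_; }
    FredPtr(const FredPtr& p) : p_(p.p_) { ++p_->count_; }
    FredPtr& operator= (const FredPtr& p)
    {   // DO NOT CHANGE THE ORDER OF THESE STATEMENTS!
        // (This order properly handles self-assignment)
        ++p.p_->count_;
        if (--p_->count_ == 0) delete p_;
        p_ = p.p_;
        return *this;
    }
private:
    Fred* p_;   // p_ is never null
};
```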
Naturally you can use nested classes to rename FredPtr to Fred::Ptr.
Note that you can soften the “never NULL” rule above with a little more checking in the constructor, copy constructor, assignment operator, and destructor. If you do that, you might as well put a p_ != NULL check into the “*” and “-&gt;” operators (at least as an assert()). I would recommend against an operator Fred*() method, since that would let people accidentally get at the Fred*.
One of the implicit constraints on FredPtr is that it must only point to Fred objects which have been allocated via new. If you want to be really safe, you can enforce this constraint by making all of Fred’s constructors private, and for each constructor have a public static create() method which allocates the Fred object via new and returns a FredPtr (not a Fred*). That way the only way anyone could create a Fred object would be to get a FredPtr (“Fred* p = new Fred()” would be replaced by “FredPtr p = Fred::create()”). Thus no one could accidentally subvert the reference counting mechanism.
For example, if Fred had a Fred::Fred() and a Fred::Fred(int i, int j), the changes to class Fred would be:
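The changed code was lost; a sketch that combines the FredPtr from the previous sketch with the create() idiom (the int payload is illustrative):

```cpp
class Fred;

class FredPtr {   // minimal reference-counted pointer (see the earlier sketch)
public:
    FredPtr(Fred* p);
    FredPtr(const FredPtr& p);
    ~FredPtr();
    FredPtr& operator=(const FredPtr& p);
    Fred* operator->() { return p_; }
    Fred& operator*()  { return *p_; }
private:
    Fred* p_;
};

class Fred {
public:
    static FredPtr create()             { return FredPtr(new Fred()); }
    static FredPtr create(int i, int j) { return FredPtr(new Fred(i, j)); }
private:
    friend class FredPtr;
    Fred()             : count_(0), i_(0), j_(0) { }   // private ctors:
    Fred(int i, int j) : count_(0), i_(i), j_(j) { }   // only create() news a Fred
    unsigned count_;
    int i_, j_;
};

FredPtr::FredPtr(Fred* p) : p_(p) { ++p_->count_; }
FredPtr::FredPtr(const FredPtr& p) : p_(p.p_) { ++p_->count_; }
FredPtr::~FredPtr() { if (--p_->count_ == 0) delete p_; }
FredPtr& FredPtr::operator=(const FredPtr& p)
{
    ++p.p_->count_;                     // order handles self-assignment
    if (--p_->count_ == 0) delete p_;
    p_ = p.p_;
    return *this;
}
```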
The end result is that you now have a way to use simple reference counting to provide “pointer semantics” for a givenobject. Users of your
class explicitly use
FredPtr objects, which act more or less like
Fred* pointers. Thebenefit is that users can make as many copies of their
FredPtr “smart pointer” objects, and the pointed-to
Fredobject will automagically get
deleted when the last such
FredPtr object vanishes.
If you’d rather give your users “reference semantics” rather than “pointer semantics,” you can use reference countingto provide “copy on write”.
How do I provide reference counting with copy-on-write semantics?
Reference counting can be done with either pointer semantics or reference semantics. The previousFAQ shows how to do reference counting with pointer semantics. This FAQ shows how to do referencecounting with reference semantics.
The basic idea is to allow users to think they’re copying your Fred objects, but in reality the underlying implementation doesn’t actually do any copying unless and until some user actually tries to modify the underlying Fred object. Class Fred::Data houses all the data that would normally go into the Fred object itself. Fred::Data also has an extra data member, count_, to manage the reference counting. Class Fred ends up being a “smart reference” that (internally) points to a Fred::Data.
If it is fairly common to call Fred’s default constructor, you can avoid all those new calls by sharing a common Fred::Data object for all Freds that are constructed via Fred::Fred(). To avoid static initialization order problems, this shared Fred::Data object is created “on first use” inside a function. Here are the changes that would be made to the above code (note that the shared Fred::Data object’s destructor is never invoked; if that is a problem, either hope you don’t have any static initialization order problems, or drop back to the approach described above):
Note: You can also provide reference counting for a hierarchy of classes if your Fred class would normally have been a base class.
How do I provide reference counting with copy-on-write semantics for a hierarchy of classes?
The previous FAQ presented a reference counting scheme that provided users with reference semantics, but did so for a single class rather than for a hierarchy of classes. This FAQ extends the previous technique to allow for a hierarchy of classes. The basic difference is that Fred::Data is now the root of a hierarchy of classes, which probably causes it to have some virtual functions. Note that class Fred itself will still not have any virtual functions.
The Virtual Constructor Idiom is used to make copies of the Fred::Data objects. To select which derived class to create, the sample code below uses the Named Constructor Idiom, but other techniques are possible (a switch statement in the constructor, etc.). The sample code assumes two derived classes: Der1 and Der2. Methods in the derived classes are unaware of the reference counting.
Naturally the constructors and sampleXXX methods for Fred::Der1 and Fred::Der2 will need to be implemented in whatever way is appropriate.
Can I absolutely prevent people from subverting the reference counting mechanism, and if so, should I?
No, and (normally) no.
There are two basic approaches to subverting the reference counting mechanism:
- The scheme could be subverted if someone got a Fred* (rather than being forced to use a FredPtr). Someone could get a Fred* through FredPtr::operator*(), which returns a Fred&: FredPtr p = Fred::create(); Fred* p2 = &*p;. Yes, it’s bizarre and unexpected, but it could happen. This hole could be closed in two ways: overload Fred::operator&() so it returns a FredPtr, or change the return type of FredPtr::operator*() so it returns a FredRef (FredRef would be a class that simulates a reference; it would need to have all the methods that Fred has, and it would need to forward all those method calls to the underlying Fred object; there might be a performance penalty for this second choice depending on how good the compiler is at inlining methods). Another way to fix this is to eliminate FredPtr::operator*() – and lose the corresponding ability to get and use a Fred&. But even if you did all this, someone could still generate a Fred*: FredPtr p = Fred::create(); Fred* p2 = p.operator->();.
- The scheme could be subverted if someone had a leak and/or dangling pointer to a FredPtr. Basically what we’re saying here is that Fred is now safe, but we somehow want to prevent people from doing stupid things with FredPtr objects. (And if we could solve that via FredPtrPtr objects, we’d have the same problem again with them). One hole here is if someone created a FredPtr via new, then allowed the FredPtr to leak (worst case this is a leak, which is bad but is usually a little better than a dangling pointer). This hole could be plugged by declaring FredPtr::operator new() private, thus preventing someone from saying new FredPtr(). Another hole here is if someone creates a local FredPtr object, then takes the address of that FredPtr and passes around the FredPtr*. If that FredPtr* lived longer than the FredPtr, you could have a dangling pointer — shudder. This hole could be plugged by preventing people from taking the address of a FredPtr (by declaring FredPtr::operator&() private), with the corresponding loss of functionality. But even if you did all that, they could still create a FredPtr&, which is almost as dangerous as a FredPtr*, simply by doing this: FredPtr p; .. FredPtr& q = p; (or by passing the FredPtr& to someone else).
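Both escape hatches from the first bullet can be seen in a stripped-down sketch (the class bodies are illustrative and the reference counting is omitted so the holes stay visible):

```cpp
// Hedged sketch of the two "subversion" holes described above.
struct Fred {
    int value() const { return 42; }
};

class FredPtr {
public:
    explicit FredPtr(Fred* p) : p_(p) {}
    ~FredPtr() { delete p_; }
    Fred& operator*()  const { return *p_; }  // hole #1: &*p yields a raw Fred*
    Fred* operator->() const { return p_; }   // hole #2: p.operator->() yields one too
private:
    Fred* p_;
    FredPtr(const FredPtr&);                  // copying suppressed in this sketch
    FredPtr& operator=(const FredPtr&);
};
```

Given FredPtr p(new Fred), both &*p and p.operator->() hand back a raw Fred* that bypasses whatever ownership discipline FredPtr was supposed to enforce.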
And even if we closed all those holes, C++ has those wonderful pieces of syntax called pointer casts. Using a pointer cast or two, a sufficiently motivated programmer can normally create a hole that’s big enough to drive a proverbial truck through. (By the way, pointer casts are evil.)
So the lessons here seem to be: (a) you can’t prevent espionage no matter how hard you try, and (b) you can easily prevent mistakes.
So I recommend settling for the “low hanging fruit”: use the easy-to-build and easy-to-use mechanisms that prevent mistakes, and don’t bother trying to prevent espionage. You won’t succeed, and even if you do, it’ll (probably) cost you more than it’s worth.
So if we can’t use the C++ language itself to prevent espionage, are there other ways to do it? Yes. I personally use old-fashioned code reviews for that. And since the espionage techniques usually involve some bizarre syntax and/or use of pointer-casts and unions, you can use a tool to point out most of the “hot spots.”
Can I use a garbage collector in C++?
If you want automatic garbage collection, there are good commercial and public-domain garbage collectors for C++. For applications where garbage collection is suitable, C++ is an excellent garbage collected language with a performance that compares favorably with other garbage collected languages. See The C++ Programming Language (4th Edition) for a discussion of automatic garbage collection in C++. See also, Hans-J. Boehm’s site for C and C++ garbage collection.
Also, C++ supports programming techniques that allow memory management to be safe and implicit without a garbage collector. Garbage collection is useful for specific needs, such as inside the implementation of lock-free data structures to avoid ABA issues, but not as a general-purpose default for resource management. We are not saying that GC is not useful, just that there are better approaches in many situations.
C++11 offers a GC ABI.
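As a hedged illustration of that no-collector style (the function names here are invented for the sketch): containers and smart pointers own every allocation, so nothing is deleted by hand and nothing is left over for a collector to find.

```cpp
#include <memory>
#include <string>
#include <vector>

// The vector owns its elements; ownership of the whole vector moves
// to the caller on return. No explicit delete anywhere.
std::vector<std::string> make_words() {
    return {"no", "delete", "needed"};
}

// shared_ptr provides reference counting when several owners are
// required; the int is freed when the last owner goes away.
std::shared_ptr<int> make_counter() {
    return std::make_shared<int>(0);
}
```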
Compared with the “smart pointer” techniques, the two kinds of garbage collector techniques are:
- less portable
- usually more efficient (especially when the average object size is small or in multithreaded environments)
- able to handle “cycles” in the data (reference counting techniques normally “leak” if the data structures can form a cycle)
- sometimes leak other objects (since the garbage collectors are necessarily conservative, they sometimes see a random bit pattern that appears to be a pointer into an allocation, especially if the allocation is large; this can allow the allocation to leak)
- work better with existing libraries (since smart pointers need to be used explicitly, they may be hard to integrate with existing libraries)
What are the two kinds of garbage collectors for C++?
In general, there seem to be two flavors of garbage collectors for C++:
- Conservative garbage collectors. These know little or nothing about the layout of the stack or of C++ objects, and simply look for bit patterns that appear to be pointers. In practice they seem to work with both C and C++ code, particularly when the average object size is small.
- Hybrid garbage collectors. These usually scan the stack conservatively, but require the programmer to supply layout information for heap objects. This requires more work on the programmer’s part, but may result in improved performance.
Since garbage collectors for C++ are normally conservative, they can sometimes leak if a bit pattern “looks like” it might be a pointer to an otherwise unused block. Also they sometimes get confused when pointers to a block actually point outside the block’s extent (which is illegal, but some programmers simply must push the envelope; sigh) and (rarely) when a pointer is hidden by a compiler optimization. In practice these problems are not usually serious, however providing the collector with hints about the layout of the objects can sometimes ameliorate these issues.
Where can I get more info on garbage collectors for C++?
For more information, see the Garbage Collector FAQ.
What is auto_ptr and why isn’t there an auto_array?
It’s now spelled unique_ptr, which supports both single objects and arrays.
auto_ptr is an old standard smart pointer that was deprecated in C++11 and removed in C++17; it was retained for a time only for backward compatibility with older code. It should not be used in new code.
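A brief sketch of both forms (the helper function names are illustrative):

```cpp
#include <cstddef>
#include <memory>

// unique_ptr<T> owns a single object and calls delete;
// unique_ptr<T[]> owns an array and calls delete[] -- the array
// form that auto_ptr never had. (C++14's make_unique is preferred
// where available; plain new is used here to keep the sketch C++11.)
std::unique_ptr<int> make_single() {
    return std::unique_ptr<int>(new int(7));
}

std::unique_ptr<int[]> make_array(std::size_t n) {
    return std::unique_ptr<int[]>(new int[n]());  // () value-initializes to 0
}
```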
By default, a container has no resource constraints and can use as much of a given resource as the host’s kernel scheduler allows. Docker provides ways to control how much memory or CPU a container can use, by setting runtime configuration flags of the docker run command. This section provides details on when you should set such limits and the possible implications of setting them.
Many of these features require your kernel to support Linux capabilities. To check for support, you can use the docker info command. If a capability is disabled in your kernel, you may see a warning at the end of the output like the following:
Consult your operating system’s documentation for enabling them.
Understand the risks of running out of memory
It is important not to allow a running container to consume too much of the host machine’s memory. On Linux hosts, if the kernel detects that there is not enough memory to perform important system functions, it throws an OOME, or Out Of Memory Exception, and starts killing processes to free up memory. Any process is subject to killing, including Docker and other important applications. This can effectively bring the entire system down if the wrong process is killed.
Docker attempts to mitigate these risks by adjusting the OOM priority on the Docker daemon so that it is less likely to be killed than other processes on the system. The OOM priority on containers is not adjusted. This makes it more likely for an individual container to be killed than for the Docker daemon or other system processes to be killed. You should not try to circumvent these safeguards by manually setting --oom-score-adj to an extreme negative number on the daemon or a container, or by setting --oom-kill-disable on a container.
For more information about the Linux kernel’s OOM management, see Out of Memory Management.
You can mitigate the risk of system instability due to OOME by:
- Perform tests to understand the memory requirements of your application before placing it into production.
- Ensure that your application runs only on hosts with adequate resources.
- Limit the amount of memory your container can use, as described below.
- Be mindful when configuring swap on your Docker hosts. Swap is slower and less performant than memory but can provide a buffer against running out of system memory.
- Consider converting your container to a service, and using service-level constraints and node labels to ensure that the application runs only on hosts with enough memory.
Limit a container’s access to memory
Docker can enforce hard memory limits, which allow the container to use no more than a given amount of user or system memory, or soft limits, which allow the container to use as much memory as it needs unless certain conditions are met, such as when the kernel detects low memory or contention on the host machine. Some of these options have different effects when used alone or when more than one option is set.
Most of these options take a positive integer, followed by a suffix of b, k, m, or g, to indicate bytes, kilobytes, megabytes, or gigabytes.
|Option|Description|
|-m or --memory=|The maximum amount of memory the container can use. If you set this option, Docker enforces a small minimum value (a few megabytes).|
|--memory-swap|The amount of memory this container is allowed to swap to disk. See the --memory-swap details below.|
|--memory-swappiness|By default, the host kernel can swap out a percentage of anonymous pages used by a container. You can set this value between 0 and 100 to tune that percentage.|
|--memory-reservation|Allows you to specify a soft limit smaller than --memory, which is activated when Docker detects contention or low memory on the host machine.|
|--kernel-memory|The maximum amount of kernel memory the container can use. As with --memory, a small minimum value applies.|
|--oom-kill-disable|By default, if an out-of-memory (OOM) error occurs, the kernel kills processes in a container. To change this behavior, use the --oom-kill-disable option. Only disable the OOM killer on containers where --memory is also set.|
For more information about cgroups and memory in general, see the documentation for Memory Resource Controller.
--memory-swap is a modifier flag that only has meaning if --memory is also set. Using swap allows the container to write excess memory requirements to disk when the container has exhausted all the RAM that is available to it. There is a performance penalty for applications that swap memory to disk often.
Its setting can have complicated effects:
- If --memory-swap is set to a positive integer, then both --memory and --memory-swap must be set. --memory-swap represents the total amount of memory and swap that can be used, and --memory controls the amount used by non-swap memory. So if --memory="300m" and --memory-swap="1g", the container can use 300m of memory and 700m (1g - 300m) swap.
- If --memory-swap is set to 0, the setting is ignored, and the value is treated as unset.
- If --memory-swap is set to the same value as --memory, and --memory is set to a positive integer, the container does not have access to swap. See Prevent a container from using swap.
- If --memory-swap is unset, and --memory is set, the container can use as much swap as the --memory setting, if the host container has swap memory configured. For instance, if --memory="300m" and --memory-swap is not set, the container can use 600m in total of memory and swap.
- If --memory-swap is explicitly set to -1, the container is allowed to use unlimited swap, up to the amount available on the host system.
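The first case above might be written as follows (the ubuntu image is illustrative):

```shell
# Cap the container at 300 MiB of RAM, with a combined memory-plus-swap
# ceiling of 1 GiB, i.e. roughly 700 MiB of swap.
docker run -it --memory="300m" --memory-swap="1g" ubuntu /bin/bash
```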
Inside the container, tools like free report the host’s available swap, not what’s available inside the container. Don’t rely on the output of free or similar tools to determine whether swap is present.
Prevent a container from using swap
If --memory and --memory-swap are set to the same value, this prevents containers from using any swap. This is because --memory-swap is the amount of combined memory and swap that can be used, while --memory is only the amount of physical memory that can be used.
- A value of 0 turns off anonymous page swapping.
- A value of 100 sets all anonymous pages as swappable.
- By default, if you do not set --memory-swappiness, the value is inherited from the host machine.
Kernel memory limits are expressed in terms of the overall memory allocated to a container. Consider the following scenarios:
- Unlimited memory, unlimited kernel memory: This is the default behavior.
- Unlimited memory, limited kernel memory: This is appropriate when the amount of memory needed by all cgroups is greater than the amount of memory that actually exists on the host machine. You can configure the kernel memory to never go over what is available on the host machine, and containers which need more memory need to wait for it.
- Limited memory, unlimited kernel memory: The overall memory is limited, but the kernel memory is not.
- Limited memory, limited kernel memory: Limiting both user and kernel memory can be useful for debugging memory-related problems. If a container is using an unexpected amount of either type of memory, it runs out of memory without affecting other containers or the host machine. Within this setting, if the kernel memory limit is lower than the user memory limit, running out of kernel memory causes the container to experience an OOM error. If the kernel memory limit is higher than the user memory limit, the kernel limit does not cause the container to experience an OOM.
When you turn on any kernel memory limits, the host machine tracks “high water mark” statistics on a per-process basis, so you can track which processes (in this case, containers) are using excess memory. This can be seen per process by viewing /proc/<PID>/status on the host machine.
By default, each container’s access to the host machine’s CPU cycles is unlimited. You can set various constraints to limit a given container’s access to the host machine’s CPU cycles. Most users use and configure the default CFS scheduler. You can also configure the realtime scheduler.
Configure the default CFS scheduler
The CFS is the Linux kernel CPU scheduler for normal Linux processes. Several runtime flags allow you to configure the amount of access to CPU resources your container has. When you use these settings, Docker modifies the settings for the container’s cgroup on the host machine.
|Option|Description|
|--cpus=<value>|Specify how much of the available CPU resources a container can use. For instance, if the host machine has two CPUs and you set --cpus="1.5", the container is guaranteed at most one and a half of the CPUs.|
|--cpu-period=<value>|Specify the CPU CFS scheduler period, which is used alongside --cpu-quota. It defaults to 100000 microseconds (100 milliseconds).|
|--cpu-quota=<value>|Impose a CPU CFS quota on the container. The number of microseconds per --cpu-period that the container is limited to before being throttled.|
|--cpuset-cpus|Limit the specific CPUs or cores a container can use. A comma-separated list or hyphen-separated range of CPUs a container can use, if you have more than one CPU. The first CPU is numbered 0. A valid value might be 0-3 (to use the first through fourth CPU) or 1,3 (to use the second and fourth CPU).|
|--cpu-shares|Set this flag to a value greater or less than the default of 1024 to increase or reduce the container’s weight, and give it access to a greater or lesser proportion of the host machine’s CPU cycles. This is only enforced when CPU cycles are constrained. When plenty of CPU cycles are available, all containers use as much CPU as they need. In that way, this is a soft limit.|
If you have 1 CPU, each of the following commands guarantees the container at most 50% of the CPU every second. This is equivalent to manually specifying --cpu-period and --cpu-quota.
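The two equivalent spellings above might look like this (the image name is illustrative):

```shell
# Guarantee the container at most 50% of one CPU:
docker run -it --cpus=".5" ubuntu /bin/bash
# Equivalent, spelled out as CFS period and quota (in microseconds):
docker run -it --cpu-period=100000 --cpu-quota=50000 ubuntu /bin/bash
```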
Configure the realtime scheduler
You can configure your container to use the realtime scheduler, for tasks which cannot use the CFS scheduler. You need to make sure the host machine’s kernel is configured correctly before you can configure the Docker daemon or configure individual containers.
CPU scheduling and prioritization are advanced kernel-level features. Most users do not need to change these values from their defaults. Setting these values incorrectly can cause your host system to become unstable or unusable.
Configure the host machine’s kernel
Verify that CONFIG_RT_GROUP_SCHED is enabled in the Linux kernel by running zcat /proc/config.gz | grep CONFIG_RT_GROUP_SCHED or by checking for the existence of the file /sys/fs/cgroup/cpu.rt_runtime_us. For guidance on configuring the kernel realtime scheduler, consult the documentation for your operating system.
Configure the Docker daemon
To run containers using the realtime scheduler, run the Docker daemon with the --cpu-rt-runtime flag set to the maximum number of microseconds reserved for realtime tasks per runtime period. For instance, with the default period of 1000000 microseconds (1 second), setting --cpu-rt-runtime=950000 ensures that containers using the realtime scheduler can run for 950000 microseconds for every 1000000-microsecond period, leaving at least 50000 microseconds available for non-realtime tasks. To make this configuration permanent on systems which use systemd, see Control and configure Docker with systemd.
Configure individual containers
You can pass several flags to control a container’s CPU priority when you start the container using docker run. Consult your operating system’s documentation or the ulimit command for information on appropriate values.
|Option|Description|
|--cap-add=sys_nice|Grants the container the CAP_SYS_NICE capability, which allows the container to raise process nice values, set real-time scheduling policies, set CPU affinity, and other operations.|
|--cpu-rt-runtime=<value>|The maximum number of microseconds the container can run at realtime priority within the Docker daemon’s realtime scheduler period. You also need the --cap-add=sys_nice flag.|
|--ulimit rtprio=<value>|The maximum realtime priority allowed for the container. You also need the --cap-add=sys_nice flag.|
The following example command sets each of these three flags on a container.
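Such a command might look like the following sketch (the values and the debian image are illustrative):

```shell
# Run with realtime scheduling allowed: the sys_nice capability,
# a realtime runtime budget, and a realtime priority ulimit.
docker run -it \
    --cap-add=sys_nice \
    --cpu-rt-runtime=950000 \
    --ulimit rtprio=99 \
    debian /bin/bash
```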
If the kernel or Docker daemon is not configured correctly, an error occurs.
Access an NVIDIA GPU
Visit the official NVIDIA drivers page to download and install the proper drivers. Reboot your system once you have done so.
Verify that your GPU is running and accessible.
Follow the instructions at https://nvidia.github.io/nvidia-container-runtime/, then verify that the nvidia-container-runtime-hook is accessible from $PATH.
Restart the Docker daemon.
Expose GPUs for use
Include the --gpus flag when you start a container to access GPU resources. Specify how many GPUs to use. For example:
Exposes all available GPUs and returns a result akin to the following:
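Such a command might look like this (the image name is illustrative):

```shell
# Expose all available GPUs and run nvidia-smi inside the container.
docker run -it --rm --gpus all ubuntu nvidia-smi
```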
Use the device option to specify GPUs. For example:
Exposes that specific GPU.
Exposes the first and third GPUs.
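Hedged sketches of both variants described above (GPU indices and the image name are illustrative):

```shell
# Expose a single specific GPU by index:
docker run -it --rm --gpus device=0 ubuntu nvidia-smi
# Expose the first and third GPUs:
docker run -it --rm --gpus '"device=0,2"' ubuntu nvidia-smi
```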
NVIDIA GPUs can only be accessed by systems running a single engine.
Set NVIDIA capabilities
You can set capabilities manually. For example, on Ubuntu you can run the following:
This enables the utility driver capability which adds the nvidia-smi tool to the container.
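The command referred to above might look like this sketch (the image name is illustrative):

```shell
# Request only the "utility" driver capability, which provides the
# nvidia-smi tool inside the container.
docker run --rm --gpus 'all,capabilities=utility' ubuntu nvidia-smi
```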
Capabilities as well as other configurations can be set in images via environment variables. More information on valid variables can be found at the nvidia-container-runtime GitHub page. These variables can be set in a Dockerfile.
You can also utilize CUDA images, which set these variables automatically. See the CUDA images GitHub page for more information.