Erlang Memory Distribution

Source: https://blog.heroku.com/archives/2013/11/7/logplex-down-the-rabbit-hole

The amount returned by erlang:memory/0-1 is the amount of memory actively allocated, where Erlang terms are laid out in memory; this amount does not represent the amount of memory that the OS has given to the virtual machine (and Linux doesn't actually reserve memory pages until they are used by the VM). To understand where memory goes, one must first understand the many allocators being used:
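As a quick way to see that actively allocated amount broken down by category, erlang:memory/0 can be called from any shell; the byte counts below are illustrative, not taken from the original post:

```erlang
%% Erlang shell session; the byte counts are illustrative.
1> erlang:memory().
[{total,18231427},
 {processes,4806544},
 {processes_used,4805552},
 {system,13424883},
 {atom,256337},
 {atom_used,235778},
 {binary,89144},
 {code,5846329},
 {ets,399312}]
2> erlang:memory(total).   %% query a single category directly
18231427
```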
- temp_alloc: does temporary allocations for short use cases (such as data living within a single C function call).
- eheap_alloc: heap data, used for things such as the Erlang processes' heaps.
- binary_alloc: the allocator used for reference counted binaries (what their 'global heap' is).
- ets_alloc: ETS tables store their data in an isolated part of memory that isn't garbage collected, but allocated and deallocated as long as terms are being stored in tables.
- driver_alloc: used to store driver data in particular, which doesn't keep drivers that generate Erlang terms from using other allocators. The driver data allocated here contains locks/mutexes, options, Erlang ports, etc.
- sl_alloc: short-lived memory blocks will be stored there, and include items such as some of the VM's scheduling information or small buffers used for some data types' handling.
- ll_alloc: long-lived allocations live there. Examples include the Erlang code itself and the atom table, which stay there.
- fix_alloc: allocator used for frequently used fixed-size blocks of memory. One example of data stored there is the processes' internal C struct, used by the VM.
- std_alloc: catch-all allocator for whatever didn't fit the previous categories. The process registry for named processes is there.
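A live node can report which of these allocators it was built with, along with the per-instance state of each. A minimal inspection sketch (the exact set of allocators and the shape of the returned properties vary across OTP releases):

```erlang
%% List the alloc_util allocators compiled into this VM.
1> erlang:system_info(alloc_util_allocators).
[temp_alloc,sl_alloc,std_alloc,ll_alloc,eheap_alloc,
 ets_alloc,fix_alloc,binary_alloc,driver_alloc]
%% Per-instance details (carriers, blocks, options) for one
%% allocator, returned as a list of {instance, N, Props} tuples.
2> erlang:system_info({allocator, ets_alloc}).
[{instance,0,[{versions,...},{options,...},{mbcs,...},{sbcs,...}]},
 {instance,1,[...]},
 ...]
```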
The entire list of where given data types live can be found in the source.
By default, there will be one instance of each allocator per scheduler (and you should have one scheduler per core), plus one instance to be used by linked-in drivers using async threads. This ends up giving you a structure a bit like the drawing above, but split in N parts at each leaf.
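That fan-out can be observed directly: counting the {instance, N, Props} entries for an allocator should give one per scheduler plus the extra shared instance. A small sketch, assuming a hypothetical 4-scheduler node (instance 0 is the non-scheduler instance on a default build):

```erlang
1> erlang:system_info(schedulers_online).
4
%% Count allocator instances: one per scheduler plus instance 0.
2> length([I || {instance, _, _} = I
                <- erlang:system_info({allocator, eheap_alloc})]).
5
```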