/* Local configuration: enable locking and raise the trim threshold
   above the defaults documented below. */
#define USE_MALLOC_LOCK
#define DEFAULT_TRIM_THRESHOLD (256 * 1024)

/* ---------- To make a malloc.h, start cutting here ------------ */

/*
  ****************************************************************
  * THIS IS A PRERELEASE. It has not yet been tested adequately. *
  * If you use it, please send back comments, suggestions,       *
  * performance reports, etc.                                    *
  ****************************************************************
*/

/*
  A version (aka dlmalloc) of malloc/free/realloc written by Doug
  Lea and released to the public domain. Use this code without
  permission or acknowledgement in any way you wish. Send questions,
  comments, complaints, performance data, etc to dl@cs.oswego.edu

* VERSION 2.7.0pre7 Wed Jan 10 13:33:01 2001  Doug Lea  (dl at gee)

  Note: There may be an updated version of this malloc obtainable at
          ftp://gee.cs.oswego.edu/pub/misc/malloc.c
        Check before installing!

* Quickstart

  This library is all in one file to simplify the most common usage:
  ftp it, compile it (-O), and link it into another program. All
  of the compile-time options default to reasonable values for use on
  most unix platforms. Compile -DWIN32 for reasonable defaults on windows.
  You might later want to step through various compile options.
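  As an illustration (adjust file names and flags to your
  environment), a typical unix build and link might look like:

      cc -O2 -c malloc.c
      cc -O2 -o myprog myprog.c malloc.o

  Compiling with -DUSE_DL_PREFIX (see below) instead exports the
  routines as dlmalloc, dlfree, etc., so they can coexist with your
  system malloc.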

* Why use this malloc?

  This is not the fastest, most space-conserving, most portable, or
  most tunable malloc ever written. However it is among the fastest
  while also being among the most space-conserving, portable and tunable.
  Consistent balance across these factors results in a good general-purpose
  allocator for malloc-intensive programs.

  The main properties of the algorithms are:
  * For large (>= 512 bytes) requests, it is a pure best-fit allocator,
    with ties normally decided via FIFO (i.e. least recently used).
  * For small (<= 64 bytes by default) requests, it is a caching
    allocator, that maintains pools of quickly recycled chunks.
  * In between, and for combinations of large and small requests, it does
    the best it can trying to meet both goals at once.

  Compared to 2.6.X versions, this version is generally faster,
  especially for programs that allocate and free many small chunks.

  For a longer but slightly out of date high-level description, see
     http://gee.cs.oswego.edu/dl/html/malloc.html

  You may already by default be using a C library containing a malloc
  that is somehow based on some version of this malloc (for example in
  linux). You might still want to use the one in this file in order to
  customize settings or to avoid overheads associated with library
  versions.

* Synopsis of public routines

  (Much fuller descriptions are contained in the program documentation below.)

  malloc(size_t n);
     Return a pointer to a newly allocated chunk of at least n bytes, or null
     if no space is available.
  free(Void_t* p);
     Release the chunk of memory pointed to by p, or no effect if p is null.
  realloc(Void_t* p, size_t n);
     Return a pointer to a chunk of size n that contains the same data
     as does chunk p up to the minimum of (n, p's size) bytes, or null
     if no space is available. The returned pointer may or may not be
     the same as p. If p is null, equivalent to malloc. Unless the
     #define REALLOC_ZERO_BYTES_FREES below is set, realloc with a
     size argument of zero (re)allocates a minimum-sized chunk.
  memalign(size_t alignment, size_t n);
     Return a pointer to a newly allocated chunk of n bytes, aligned
     in accord with the alignment argument, which must be a power of
     two.
  valloc(size_t n);
     Equivalent to memalign(pagesize, n), where pagesize is the page
     size of the system (or as near to this as can be figured out from
     all the includes/defines below.)
  pvalloc(size_t n);
     Equivalent to valloc(minimum-page-that-holds(n)), that is,
     round up n to nearest pagesize.
  calloc(size_t unit, size_t quantity);
     Returns a pointer to quantity * unit bytes, with all locations
     set to zero.
  cfree(Void_t* p);
     Equivalent to free(p).
  malloc_trim(size_t pad);
     Release all but pad bytes of freed top-most memory back
     to the system. Return 1 if successful, else 0.
  malloc_usable_size(Void_t* p);
     Report the number of usable allocated bytes associated with allocated
     chunk p. This may or may not report more bytes than were requested,
     due to alignment and minimum size constraints.
  malloc_stats();
     Prints brief summary statistics on stderr.
  mallinfo()
     Returns (by copy) a struct containing various summary statistics.
  mallopt(int parameter_number, int parameter_value)
     Changes one of the tunable parameters described below. Returns
     1 if successful in changing the parameter, else 0.
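  As a minimal sketch of ordinary use (the names are the standard
  ones above; the particular sizes are arbitrary):

      void* p = malloc(100);   -- get at least 100 bytes
      p = realloc(p, 200);     -- grow; the chunk may move
      free(p);                 -- give it back
      malloc_stats();          -- print a usage summary to stderr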

* Vital statistics:

  Assumed pointer representation: 4 or 8 bytes
       (Thanks to Wolfram Gloger for contributing most of the
       changes supporting dual 4/8.)

  Assumed size_t representation: 4 or 8 bytes
       Note that size_t is allowed to be 4 bytes even if pointers are 8.
       You can adjust this by defining INTERNAL_SIZE_T

  Alignment: 2 * sizeof(size_t)
       (i.e., 8 byte alignment with 4byte size_t). This suffices for
       nearly all current machines and C compilers. However, you can
       define MALLOC_ALIGNMENT to be wider than this if necessary.

  Minimum overhead per allocated chunk: 4 or 8 bytes
       Each malloced chunk has a hidden word of overhead holding size
       and status information.

  Minimum allocated size: 4-byte ptrs:  16 bytes    (including 4 overhead)
                          8-byte ptrs:  24/32 bytes (including 4/8 overhead)

       When a chunk is freed, 12 (for 4byte ptrs) or 20 (for 8 byte
       ptrs but 4 byte size) or 24 (for 8/8) additional bytes are
       needed; 4 (8) for a trailing size field and 8 (16) bytes for
       free list pointers. Thus, the minimum allocatable size is
       16/24/32 bytes.

       Even a request for zero bytes (i.e., malloc(0)) returns a
       pointer to something of the minimum allocatable size.

       The maximum overhead wastage (i.e., number of extra bytes
       allocated than were requested in malloc) is less than or equal
       to the minimum size, except for requests >= mmap_threshold that
       are serviced via mmap(), where the worst case wastage is 2 *
       sizeof(size_t) bytes plus the remainder from a system page (the
       minimal mmap unit); typically 4096 bytes.

  Maximum allocated size:  4-byte size_t: 2^31 minus about two pages
                           8-byte size_t: 2^63 minus about two pages

       It is assumed that (possibly signed) size_t values suffice
       to represent chunk sizes. `Possibly signed' is due to the fact
       that `size_t' may be defined on a system as either a signed or
       an unsigned type. The ISO C standard says that it must be
       unsigned, but a few systems are known not to adhere to this.
       Additionally, even when size_t is unsigned, sbrk (which is by
       default used to obtain memory from system) accepts signed
       arguments, and may not be able to handle size_t-wide arguments
       with negative sign bit. To be conservative, values that would
       appear as negative after accounting for overhead and alignment
       are rejected.

       Requests for sizes outside this range will perform an optional
       failure action and then return null. (Requests may also
       fail because a system is out of memory.)

  Thread-safety: NOT thread-safe unless USE_MALLOC_LOCK defined

       When USE_MALLOC_LOCK is defined, wrappers are created to
       surround every public call with either a pthread mutex or
       a win32 critical section (depending on WIN32). This is not
       especially fast, and can be a major bottleneck in programs with
       many threads. It is designed only to provide minimal protection
       in concurrent environments, and to provide a basis for
       extensions. If you are using malloc in a concurrent program,
       you would be far better off obtaining ptmalloc, which is
       derived from a version of this malloc, and is well-tuned for
       concurrent programs. (See http://www.malloc.de)

  Compliance: I believe it is compliant with the 1997 Single Unix Specification
       (See http://www.opennc.org). Probably other standards as well.

* Limitations

    Here are some features that are NOT currently supported

    * No automated mechanism for fully checking that all accesses
      to malloced memory stay within their bounds. However, there
      are several add-ons and adaptations of this or other mallocs
      available that do this.
    * No support for compaction.

* Synopsis of compile-time options:

    People have reported using previous versions of this malloc on all
    versions of Unix, sometimes by tweaking some of the defines
    below. It has been tested most extensively on Solaris and
    Linux. It is also reported to work on WIN32 platforms.
    People also report using it in stand-alone embedded systems.

    The implementation is in straight, hand-tuned ANSI C. It is not
    at all modular. (Sorry!) It uses a lot of macros. To be at all
    usable, this code should be compiled using an optimizing compiler
    (for example gcc -O3) that can simplify expressions and control
    paths. (FAQ: some macros import variables as arguments rather than
    declare locals because people reported that some debuggers
    otherwise get confused.)

    OPTION                     DEFAULT VALUE

    Compilation Environment options:

    __STD_C                    derived from C compiler defines
    WIN32                      NOT defined
    HAVE_MEMCPY                defined
    USE_MEMCPY                 1 if HAVE_MEMCPY is defined
    HAVE_MMAP                  defined as 1
    MMAP_AS_MORECORE_SIZE      (1024 * 1024)
    HAVE_MREMAP                defined as 0 unless linux defined
    malloc_getpagesize         derived from system #includes, or 4096 if not
    HAVE_USR_INCLUDE_MALLOC_H  NOT defined
    LACKS_UNISTD_H             NOT defined unless WIN32
    LACKS_SYS_PARAM_H          NOT defined unless WIN32
    LACKS_SYS_MMAN_H           NOT defined unless WIN32

    Changing default word sizes:

    INTERNAL_SIZE_T            size_t
    MALLOC_ALIGNMENT           2 * sizeof(INTERNAL_SIZE_T)

    Configuration and functionality options:

    USE_DL_PREFIX              NOT defined
    USE_PUBLIC_MALLOC_WRAPPERS NOT defined
    USE_MALLOC_LOCK            NOT defined
    DEBUG                      NOT defined
    REALLOC_ZERO_BYTES_FREES   NOT defined
    MALLOC_FAILURE_ACTION      errno = ENOMEM, if __STD_C defined, else no-op
    TRIM_FASTBINS              0

    Options for customizing MORECORE:

    MORECORE                   sbrk
    MORECORE_CONTIGUOUS        1

    Tuning options that are also dynamically changeable via mallopt:

    DEFAULT_MXFAST             64
    DEFAULT_TRIM_THRESHOLD     128 * 1024
    DEFAULT_TOP_PAD            0
    DEFAULT_MMAP_THRESHOLD     128 * 1024
    DEFAULT_MMAP_MAX           256

    There are several other #defined constants and macros that you
    probably don't want to touch unless you are extending or adapting malloc.
*/
#include "xpcom-private.h"



/*
  WIN32 sets up defaults for MS environment and compilers.
  Otherwise defaults are for unix.
*/

/* #define WIN32 */

#ifdef WIN32

#include <windows.h>

/* Win32 doesn't supply or need the following headers */
#define LACKS_UNISTD_H
#define LACKS_SYS_PARAM_H
#define LACKS_SYS_MMAN_H

/* Use the supplied emulation of sbrk */
#define MORECORE sbrk
#define MORECORE_CONTIGUOUS 1
#define MORECORE_FAILURE    ((void*)(-1))

/* Use the supplied emulation of mmap and munmap */
#define HAVE_MMAP 1
#define MUNMAP_FAILURE  (-1)
/* These values don't really matter in windows mmap emulation */
#define MAP_PRIVATE 1
#define MAP_ANONYMOUS 2
#define PROT_READ 1
#define PROT_WRITE 2

/* Emulation functions defined at the end of this file */

/* If USE_MALLOC_LOCK, use supplied critical-section-based lock functions */
#ifdef USE_MALLOC_LOCK
static int slwait(int *sl);
static int slrelease(int *sl);
#endif

static long getpagesize(void);
static long getregionsize(void);
static void *sbrk(long size);
static void *mmap(void *ptr, long size, long prot, long type, long handle, long arg);
static long munmap(void *ptr, long size);

static void vminfo (unsigned long *free, unsigned long *reserved, unsigned long *committed);
static int cpuinfo (int whole, unsigned long *kernel, unsigned long *user);

#endif



/*
  __STD_C should be nonzero if using ANSI-standard C compiler, a C++
  compiler, or a C compiler sufficiently close to ANSI to get away
  with it.
*/

#ifndef __STD_C
#ifdef __STDC__
#define __STD_C     1
#else
#if __cplusplus
#define __STD_C     1
#else
#define __STD_C     0
#endif /*__cplusplus*/
#endif /*__STDC__*/
#endif /*__STD_C*/


/*
  Void_t* is the pointer type that malloc should say it returns
*/

#ifndef Void_t
#if (__STD_C || defined(WIN32))
#define Void_t      void
#else
#define Void_t      char
#endif
#endif /*Void_t*/

#if __STD_C
#include <stddef.h>   /* for size_t */
#else
#include <sys/types.h>
#endif

#ifdef __cplusplus
extern "C" {
#endif

/* define LACKS_UNISTD_H if your system does not have a <unistd.h>. */

/* #define LACKS_UNISTD_H */

#ifndef LACKS_UNISTD_H
#include <unistd.h>
#endif

/* define LACKS_SYS_PARAM_H if your system does not have a <sys/param.h>. */

/* #define LACKS_SYS_PARAM_H */


#include <stdio.h>    /* needed for malloc_stats */
#include <errno.h>    /* needed for optional MALLOC_FAILURE_ACTION */


/*
  Debugging:

  Because freed chunks may be overwritten with bookkeeping fields, this
  malloc will often die when freed memory is overwritten by user
  programs. This can be very effective (albeit in an annoying way)
  in helping track down dangling pointers.

  If you compile with -DDEBUG, a number of assertion checks are
  enabled that will catch more memory errors. You probably won't be
  able to make much sense of the actual assertion errors, but they
  should help you locate incorrectly overwritten memory. The
  checking is fairly extensive, and will slow down execution
  noticeably. Calling malloc_stats or mallinfo with DEBUG set will
  attempt to check every non-mmapped allocated and free chunk in the
  course of computing the summaries. (By nature, mmapped regions
  cannot be checked very much automatically.)

  Setting DEBUG may also be helpful if you are trying to modify
  this code. The assertions in the check routines spell out in more
  detail the assumptions and invariants underlying the algorithms.
*/

#if DEBUG
#include <assert.h>
#else
#define assert(x) ((void)0)
#endif


/*
  INTERNAL_SIZE_T is the word-size used for internal bookkeeping
  of chunk sizes.

  The default version is the same as size_t.

  While not strictly necessary, it is best to define this as an
  unsigned type, even if size_t is a signed type. This may avoid some
  artificial size limitations on some systems.

  On a 64-bit machine, you may be able to reduce malloc overhead by
  defining INTERNAL_SIZE_T to be a 32 bit `unsigned int' at the
  expense of not being able to handle more than 2^32 of malloced
  space. If this limitation is acceptable, you are encouraged to set
  this unless you are on a platform requiring 16byte alignments. In
  this case the alignment requirements turn out to negate any
  potential advantages of decreasing size_t word size.

  Note to implementors: To deal with all this, comparisons and
  difference computations among INTERNAL_SIZE_Ts should normally cast
  INTERNAL_SIZE_T's to long or unsigned long, as appropriate, being
  aware of the fact that casting an unsigned int to a wider long does not
  sign-extend. (This also makes checking for negative numbers awkward.)
*/

#ifndef INTERNAL_SIZE_T
#define INTERNAL_SIZE_T size_t
#endif

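/*
  For example, on a 64-bit platform where the 2^32 limit is
  acceptable, a build might override this on the command line
  (a sketch; pick whatever type is 32 bits for your compiler):

      cc -O2 -DINTERNAL_SIZE_T="unsigned int" -c malloc.c
*/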
/* The corresponding word size */
#define SIZE_SZ                (sizeof(INTERNAL_SIZE_T))


/*
  MALLOC_ALIGNMENT is the minimum alignment for malloc'ed chunks.
  It must be a power of two at least 2 * SIZE_SZ, even on machines
  for which smaller alignments would suffice. It may be defined as
  larger than this though. (Note however that code and data structures
  are optimized for the case of 8-byte alignment.)
*/

/* #define MALLOC_ALIGNMENT 16 */

#ifndef MALLOC_ALIGNMENT
#define MALLOC_ALIGNMENT       (2 * SIZE_SZ)
#endif

/* The corresponding bit mask value */
#define MALLOC_ALIGN_MASK      (MALLOC_ALIGNMENT - 1)


/*
  REALLOC_ZERO_BYTES_FREES should be set if a call to
  realloc with zero bytes should be the same as a call to free.
  Some people think it should. Otherwise, since this malloc
  returns a unique pointer for malloc(0), so does realloc(p, 0).
*/

/* #define REALLOC_ZERO_BYTES_FREES */


/*
  USE_DL_PREFIX will prefix all public routines with the string 'dl'.
  This is necessary when you only want to use this malloc in one part
  of a program, using your regular system malloc elsewhere.
*/

/* #define USE_DL_PREFIX */

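/*
  For instance, a program built with -DUSE_DL_PREFIX might use this
  allocator for one subsystem only (a sketch; my_buffer and other are
  just illustrative names):

      char* my_buffer = (char*) dlmalloc(1024);   -- this malloc
      char* other     = (char*) malloc(1024);     -- system malloc
      dlfree(my_buffer);
      free(other);
*/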
/*
  USE_MALLOC_LOCK causes wrapper functions to surround each
  callable routine with pthread mutex lock/unlock.

  USE_MALLOC_LOCK forces USE_PUBLIC_MALLOC_WRAPPERS to be defined
*/

/* #define USE_MALLOC_LOCK */


/*
  If USE_PUBLIC_MALLOC_WRAPPERS is defined, every public routine is
  actually a wrapper function that first calls MALLOC_PREACTION, then
  calls the internal routine, and follows it with
  MALLOC_POSTACTION. This is needed for locking, but you can also use
  this, without USE_MALLOC_LOCK, for purposes of interception,
  instrumentation, etc. It is a sad fact that using wrappers often
  noticeably degrades performance of malloc-intensive programs.
*/

#ifdef USE_MALLOC_LOCK
#define USE_PUBLIC_MALLOC_WRAPPERS
#else
/* #define USE_PUBLIC_MALLOC_WRAPPERS */
#endif




/*
  HAVE_MEMCPY should be defined if you are not otherwise using
  ANSI STD C, but still have memcpy and memset in your C library
  and want to use them in calloc and realloc. Otherwise simple
  macro versions are defined below.

  USE_MEMCPY should be defined as 1 if you actually want to
  have memset and memcpy called. People report that the macro
  versions are faster than libc versions on some systems.

  Even if USE_MEMCPY is set to 1, loops to copy/clear small chunks
  (of <= 36 bytes) are manually unrolled in realloc and calloc.
*/

#define HAVE_MEMCPY

#ifndef USE_MEMCPY
#ifdef HAVE_MEMCPY
#define USE_MEMCPY 1
#else
#define USE_MEMCPY 0
#endif
#endif


#if (__STD_C || defined(HAVE_MEMCPY))

#ifdef WIN32
/*
  On Win32 platforms, 'memset()' and 'memcpy()' are already declared in
  'windows.h'
*/
#else
#if __STD_C
void* memset(void*, int, size_t);
void* memcpy(void*, const void*, size_t);
void* memmove(void*, const void*, size_t);
#else
Void_t* memset();
Void_t* memcpy();
Void_t* memmove();
#endif
#endif
#endif


/*
  MALLOC_FAILURE_ACTION is the action to take before "return 0" when
  malloc fails to be able to return memory, either because memory is
  exhausted or because of illegal arguments.

  By default, sets errno if running on STD_C platform, else does nothing.
*/

#ifndef MALLOC_FAILURE_ACTION
#if __STD_C
#define MALLOC_FAILURE_ACTION \
   errno = ENOMEM;

#else

#define MALLOC_FAILURE_ACTION
#endif
#endif
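/*
  As an example of a custom failure action (a sketch, not part of the
  default configuration), a program that prefers to die loudly on
  exhaustion could compile with something like:

      -DMALLOC_FAILURE_ACTION='fprintf(stderr, "out of memory\n"); abort();'
*/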

/*
  Define HAVE_MMAP as true to optionally make malloc() use mmap() to
  allocate very large blocks. These will be returned to the
  operating system immediately after a free(). Also, if mmap
  is available, it is used as a backup strategy in cases where
  MORECORE fails to provide space from system.

  This malloc is best tuned to work with mmap for large requests.
  If you do not have mmap, allocation of very large chunks (1MB
  or so) may be slower than you'd like.
*/

#ifndef HAVE_MMAP
#define HAVE_MMAP 1
#endif

/*
  MMAP_AS_MORECORE_SIZE is the minimum mmap size argument to use if
  sbrk fails, and mmap is used as a backup (which is done only if
  HAVE_MMAP). The value must be a multiple of page size. This
  backup strategy generally applies only when systems have "holes" in
  address space, so sbrk cannot perform contiguous expansion, but
  there is still space available on system. On systems for which
  this is known to be useful (i.e. most linux kernels), this occurs
  only when programs allocate huge amounts of memory. Between this,
  and the fact that mmap regions tend to be limited, the size should
  be large, to avoid too many mmap calls and thus avoid running out
  of kernel resources.
*/

#ifndef MMAP_AS_MORECORE_SIZE
#define MMAP_AS_MORECORE_SIZE (1024 * 1024)
#endif



/*
  Define HAVE_MREMAP to make realloc() use mremap() to re-allocate
  large blocks. This is currently only possible on Linux with
  kernel versions newer than 1.3.77.
*/

#ifndef HAVE_MREMAP
#ifdef linux
#define HAVE_MREMAP 1
#else
#define HAVE_MREMAP 0
#endif

#endif /* HAVE_MREMAP */

/*
  This version of malloc supports the standard SVID/XPG mallinfo
  routine that returns a struct containing usage properties and
  statistics. It should work on any SVID/XPG compliant system that has
  a /usr/include/malloc.h defining struct mallinfo. (If you'd like to
  install such a thing yourself, cut out the preliminary declarations
  as described above and below and save them in a malloc.h file. But
  there's no compelling reason to bother to do this.)

  The main declaration needed is the mallinfo struct that is returned
  (by-copy) by mallinfo(). The SVID/XPG mallinfo struct contains a
  bunch of fields that are not even meaningful in this version of
  malloc. These fields are instead filled by mallinfo() with
  other numbers that might be of interest.

  HAVE_USR_INCLUDE_MALLOC_H should be set if you have a
  /usr/include/malloc.h file that includes a declaration of struct
  mallinfo. If so, it is included; else an SVID2/XPG2 compliant
  version is declared below. These must be precisely the same for
  mallinfo() to work.
*/

/* #define HAVE_USR_INCLUDE_MALLOC_H */

#ifdef HAVE_USR_INCLUDE_MALLOC_H
#include "/usr/include/malloc.h"
#else

/* SVID2/XPG mallinfo structure */

struct mallinfo {
  int arena;    /* non-mmapped space allocated from system */
  int ordblks;  /* number of free chunks */
  int smblks;   /* number of fastbin blocks */
  int hblks;    /* number of mmapped regions */
  int hblkhd;   /* space in mmapped regions */
  int usmblks;  /* maximum total allocated space */
  int fsmblks;  /* space available in freed fastbin blocks */
  int uordblks; /* total allocated space */
  int fordblks; /* total free space */
  int keepcost; /* top-most, releasable (via malloc_trim) space */
};

/* SVID2/XPG mallopt options */

#define M_MXFAST  1    /* Set maximum fastbin size */
#define M_NLBLKS  2    /* UNUSED in this malloc */
#define M_GRAIN   3    /* UNUSED in this malloc */
#define M_KEEP    4    /* UNUSED in this malloc */


#endif
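/*
  A small sketch of inspecting the statistics (field meanings as
  commented in the struct above):

      struct mallinfo mi = mallinfo();
      fprintf(stderr, "in use: %d bytes, free: %d bytes\n",
              mi.uordblks, mi.fordblks);
*/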

/* Additional mallopt options supported in this malloc */

#ifndef M_TRIM_THRESHOLD
#define M_TRIM_THRESHOLD    -1
#endif

#ifndef M_TOP_PAD
#define M_TOP_PAD           -2
#endif

#ifndef M_MMAP_THRESHOLD
#define M_MMAP_THRESHOLD    -3
#endif

#ifndef M_MMAP_MAX
#define M_MMAP_MAX          -4
#endif


/*
  MXFAST is the maximum request size used for "fastbins", special bins
  that hold returned chunks without consolidating their spaces. This
  enables future requests for chunks of the same size to be handled
  very quickly, but can increase fragmentation, and thus increase the
  overall memory footprint of a program.

  This malloc manages fastbins very conservatively yet still
  efficiently, so fragmentation is rarely a problem for values less
  than or equal to the default. The maximum supported value of MXFAST
  is 80. You wouldn't want it any higher than this anyway. Fastbins
  are designed especially for use with many small structs, objects or
  strings -- the default handles structs/objects/arrays with sizes up
  to 8 4byte fields, or small strings representing words, tokens,
  etc. Using fastbins for larger objects normally worsens
  fragmentation without improving speed.

  MXFAST is set in REQUEST size units. It is internally used in
  chunksize units, which adds padding and alignment. You can reduce
  MXFAST to 0 to disable all use of fastbins. This causes the malloc
  algorithm to be a close approximation of fifo-best-fit in all cases,
  not just for larger requests, but will generally cause it to be
  slower.
*/

#ifndef DEFAULT_MXFAST
#define DEFAULT_MXFAST     64
#endif
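/*
  For example, a space-sensitive program could trade speed for lower
  fragmentation by disabling fastbins at startup (a sketch):

      mallopt(M_MXFAST, 0);    -- approximate fifo-best-fit everywhere

  or raise the limit toward the 80-byte maximum for workloads
  dominated by small, uniform allocations.
*/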

/*
  M_TRIM_THRESHOLD is the maximum amount of unused top-most memory
  to keep before releasing via malloc_trim in free().

  Automatic trimming is mainly useful in long-lived programs.
  Because trimming via sbrk can be slow on some systems, and can
  sometimes be wasteful (in cases where programs immediately
  afterward allocate more large chunks) the value should be high
  enough so that your overall system performance would improve by
  releasing.

  The trim threshold and the mmap control parameters (see below)
  can be traded off with one another. Trimming and mmapping are
  two different ways of releasing unused memory back to the
  system. Between these two, it is often possible to keep
  system-level demands of a long-lived program down to a bare
  minimum. For example, in one test suite of sessions measuring
  the XF86 X server on Linux, using a trim threshold of 128K and a
  mmap threshold of 192K led to near-minimal long term resource
  consumption.

  If you are using this malloc in a long-lived program, it should
  pay to experiment with these values. As a rough guide, you
  might set to a value close to the average size of a process
  (program) running on your system. Releasing this much memory
  would allow such a process to run in memory. Generally, it's
  worth it to tune for trimming rather than memory mapping when a
  program undergoes phases where several large chunks are
  allocated and released in ways that can reuse each other's
  storage, perhaps mixed with phases where there are no such
  chunks at all. And in well-behaved long-lived programs,
  controlling release of large blocks via trimming versus mapping
  is usually faster.

  However, in most programs, these parameters serve mainly as
  protection against the system-level effects of carrying around
  massive amounts of unneeded memory. Since frequent calls to
  sbrk, mmap, and munmap otherwise degrade performance, the default
  parameters are set to relatively high values that serve only as
  safeguards.

  The default trim value is high enough to cause trimming only in
  fairly extreme (by current memory consumption standards) cases.
  It must be greater than page size to have any useful effect. To
  disable trimming completely, you can set to (unsigned long)(-1);

  Trim settings interact with fastbin (MXFAST) settings: Unless
  TRIM_FASTBINS is defined, automatic trimming never takes place upon
  freeing a chunk with size less than or equal to MXFAST. Trimming is
  instead delayed until subsequent freeing of larger chunks. However,
  you can still force an attempted trim by calling malloc_trim.

  Also, trimming is not generally possible in cases where
  the main arena is obtained via mmap.
*/


#ifndef DEFAULT_TRIM_THRESHOLD
#define DEFAULT_TRIM_THRESHOLD (128 * 1024)
#endif
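/*
  For instance, after a phase that allocated and freed many large
  buffers, a program can hand unused top-most memory back right away
  (a sketch; the 0 asks it to keep no extra pad):

      malloc_trim(0);

  while mallopt(M_TRIM_THRESHOLD, -1) disables automatic trimming
  altogether.
*/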



/*
  M_TOP_PAD is the amount of extra `padding' space to allocate or
  retain whenever sbrk is called. It is used in two ways internally:

  * When sbrk is called to extend the top of the arena to satisfy
    a new malloc request, this much padding is added to the sbrk
    request.

  * When malloc_trim is called automatically from free(),
    it is used as the `pad' argument.

  In both cases, the actual amount of padding is rounded
  so that the end of the arena is always a system page boundary.

  The main reason for using padding is to avoid calling sbrk so
  often. Having even a small pad greatly reduces the likelihood
  that nearly every malloc request during program start-up (or
  after trimming) will invoke sbrk, which needlessly wastes
  time.

  Automatic rounding-up to page-size units is normally sufficient
  to avoid measurable overhead, so the default is 0. However, in
  systems where sbrk is relatively slow, it can pay to increase
  this value, at the expense of carrying around more memory than
  the program needs.
*/

#ifndef DEFAULT_TOP_PAD
#define DEFAULT_TOP_PAD        (0)
#endif

/*
  M_MMAP_THRESHOLD is the request size threshold for using mmap()
  to service a request. Requests of at least this size that cannot
  be allocated using already-existing space will be serviced via mmap.
  (If enough normal freed space already exists it is used instead.)

  Using mmap segregates relatively large chunks of memory so that
  they can be individually obtained and released from the host
  system. A request serviced through mmap is never reused by any
  other request (at least not directly; the system may just so
  happen to remap successive requests to the same locations).

  Segregating space in this way has the benefit that mmapped space
  can ALWAYS be individually released back to the system, which
  helps keep the system level memory demands of a long-lived
  program low. Mapped memory can never become `locked' between
  other chunks, as can happen with normally allocated chunks, which
  means that even trimming via malloc_trim would not release them.

  However, it has the disadvantages that:

   1. The space cannot be reclaimed, consolidated, and then
      used to service later requests, as happens with normal chunks.
   2. It can lead to more wastage because of mmap page alignment
      requirements
   3. It causes malloc performance to be more dependent on host
      system memory management support routines which may vary in
      implementation quality and may impose arbitrary
      limitations. Generally, servicing a request via normal
      malloc steps is faster than going through a system's mmap.

  All together, these considerations should lead you to use mmap
  only for relatively large requests.
*/


#ifndef DEFAULT_MMAP_THRESHOLD
#define DEFAULT_MMAP_THRESHOLD (128 * 1024)
#endif

/*
  M_MMAP_MAX is the maximum number of requests to simultaneously
  service using mmap. This parameter exists because:

   1. Some systems have a limited number of internal tables for
      use by mmap.
   2. In most systems, overreliance on mmap can degrade overall
      performance.
   3. If a program allocates many large regions, it is probably
      better off using normal sbrk-based allocation routines that
      can reclaim and reallocate normal heap memory.

  Setting to 0 disables use of mmap for servicing large requests. If
  HAVE_MMAP is not set, the default value is 0, and attempts to set it
  to non-zero values in mallopt will fail.
*/



#ifndef DEFAULT_MMAP_MAX
#if HAVE_MMAP
#define DEFAULT_MMAP_MAX       (256)
#else
#define DEFAULT_MMAP_MAX       (0)
#endif
#endif


/*
  TRIM_FASTBINS controls whether free() of a very small chunk can
  immediately lead to trimming. Setting to true (1) can reduce memory
  footprint, but will almost always slow down (by a few percent)
  programs that use a lot of small chunks.

  Define this only if you are willing to give up some speed to more
  aggressively reduce system-level memory footprint when releasing
  memory in programs that use many small chunks. You can get
  essentially the same effect by setting MXFAST to 0, but this can
  lead to even greater slowdowns in programs using many small chunks.
  TRIM_FASTBINS is an in-between compile-time option, that disables
  only those chunks bordering topmost memory from being placed in
  fastbins.
*/


#ifndef TRIM_FASTBINS
#define TRIM_FASTBINS  0
#endif


/*
  MORECORE-related declarations. By default, rely on sbrk
*/


#ifdef LACKS_UNISTD_H
#if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
#if __STD_C
extern Void_t*     sbrk(ptrdiff_t);
#else
extern Void_t*     sbrk();
#endif
#endif
#endif

/*
  MORECORE is the name of the routine to call to obtain more memory
  from the system. See below for general guidance on writing
  alternative MORECORE functions, as well as a version for WIN32 and a
  sample version for pre-OSX macos.
*/

#ifndef MORECORE
#define MORECORE sbrk
#endif


/*
  MORECORE_FAILURE is the value returned upon failure of MORECORE
  as well as mmap. Since it cannot be an otherwise valid memory address,
  and must reflect values of standard sys calls, you probably ought not
  try to redefine it.
*/

#ifndef MORECORE_FAILURE
#define MORECORE_FAILURE (-1)
#endif

/*
  If MORECORE_CONTIGUOUS is true, take advantage of fact that
  consecutive calls to MORECORE with positive arguments always return
  contiguous increasing addresses. This is true of unix sbrk. Even
  if not defined, when regions happen to be contiguous, malloc will
  permit allocations spanning regions obtained from different
  calls. But defining this when applicable enables some stronger
  consistency checks and space efficiencies.
*/


#ifndef MORECORE_CONTIGUOUS
#define MORECORE_CONTIGUOUS 1
#endif


/*
  The system page size. To the extent possible, this malloc manages
  memory from the system in page-size units. Note that this value is
  cached during initialization into a field of malloc_state. So even
  if malloc_getpagesize is a function, it is only called once.

  The following mechanics for getpagesize were adapted from bsd/gnu
  getpagesize.h. If none of the system-probes here apply, a value of
  4096 is used, which should be OK: If they don't apply, then using
  the actual value probably doesn't impact performance.
*/

#ifndef malloc_getpagesize

#ifndef LACKS_UNISTD_H
#  include <unistd.h>
#endif

#  ifdef _SC_PAGESIZE         /* some SVR4 systems omit an underscore */
#    ifndef _SC_PAGE_SIZE
#      define _SC_PAGE_SIZE _SC_PAGESIZE
#    endif
#  endif

#  ifdef _SC_PAGE_SIZE
#    define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
#  else
#    if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
       extern size_t getpagesize();
#      define malloc_getpagesize getpagesize()
#    else
#      ifdef WIN32 /* use supplied emulation of getpagesize */
#        define malloc_getpagesize getpagesize()
#      else
#        ifndef LACKS_SYS_PARAM_H
#          include <sys/param.h>
#        endif
#        ifdef EXEC_PAGESIZE
#          define malloc_getpagesize EXEC_PAGESIZE
#        else
#          ifdef NBPG
#            ifndef CLSIZE
#              define malloc_getpagesize NBPG
#            else
#              define malloc_getpagesize (NBPG * CLSIZE)
#            endif
#          else
#            ifdef NBPC
#              define malloc_getpagesize NBPC
#            else
#              ifdef PAGESIZE
#                define malloc_getpagesize PAGESIZE
#              else /* just guess */
#                define malloc_getpagesize (4096)
#              endif
#            endif
#          endif
#        endif
#      endif
#    endif
#  endif
#endif


/* Two-phase name mangling */

#ifndef USE_PUBLIC_MALLOC_WRAPPERS
#define cALLOc      public_cALLOc
#define fREe        public_fREe
#define cFREe       public_cFREe
#define mALLOc      public_mALLOc
#define mEMALIGn    public_mEMALIGn
#define rEALLOc     public_rEALLOc
#define vALLOc      public_vALLOc
#define pVALLOc     public_pVALLOc
#define mALLINFo    public_mALLINFo
#define mALLOPt     public_mALLOPt
#define mTRIm       public_mTRIm
#define mSTATs      public_mSTATs
#define mUSABLe     public_mUSABLe
#endif

#ifdef USE_DL_PREFIX
#define public_cALLOc    dlcalloc
#define public_fREe      dlfree
#define public_cFREe     dlcfree
#define public_mALLOc    dlmalloc
#define public_mEMALIGn  dlmemalign
#define public_rEALLOc   dlrealloc
#define public_vALLOc    dlvalloc
#define public_pVALLOc   dlpvalloc
#define public_mALLINFo  dlmallinfo
#define public_mALLOPt   dlmallopt
#define public_mTRIm     dlmalloc_trim
#define public_mSTATs    dlmalloc_stats
#define public_mUSABLe   dlmalloc_usable_size
#else /* USE_DL_PREFIX */
#define public_cALLOc    calloc
#define public_fREe      free
#define public_cFREe     cfree
#define public_mALLOc    malloc
#define public_mEMALIGn  memalign
#define public_rEALLOc   realloc
#define public_vALLOc    valloc
#define public_pVALLOc   pvalloc
#define public_mALLINFo  mallinfo
#define public_mALLOPt   mallopt
#define public_mTRIm     malloc_trim
#define public_mSTATs    malloc_stats
#define public_mUSABLe   malloc_usable_size
#endif /* USE_DL_PREFIX */
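/*
  The effect of the two phases: the internal implementations are named
  mALLOc, fREe, etc. Without wrappers, those names are #defined
  directly to the public names; with wrappers, the public_* functions
  below call them. The public_* names in turn become either the
  standard names (malloc, free, ...) or dl-prefixed ones (dlmalloc,
  dlfree, ...). So, for example, a plain build resolves
  malloc -> public_mALLOc -> mALLOc.
*/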

#if __STD_C

Void_t* public_mALLOc(size_t);
void    public_fREe(Void_t*);
Void_t* public_rEALLOc(Void_t*, size_t);
Void_t* public_mEMALIGn(size_t, size_t);
Void_t* public_vALLOc(size_t);
Void_t* public_pVALLOc(size_t);
Void_t* public_cALLOc(size_t, size_t);
void    public_cFREe(Void_t*);
int     public_mTRIm(size_t);
size_t  public_mUSABLe(Void_t*);
void    public_mSTATs();
int     public_mALLOPt(int, int);
struct mallinfo public_mALLINFo(void);
#else
Void_t* public_mALLOc();
void    public_fREe();
Void_t* public_rEALLOc();
Void_t* public_mEMALIGn();
Void_t* public_vALLOc();
Void_t* public_pVALLOc();
Void_t* public_cALLOc();
void    public_cFREe();
int     public_mTRIm();
size_t  public_mUSABLe();
void    public_mSTATs();
int     public_mALLOPt();
struct mallinfo public_mALLINFo();
#endif


#ifdef __cplusplus
}  /* end of extern "C" */
#endif



/* ---------- To make a malloc.h, end cutting here ------------ */


/* Declarations of internal utilities defined below */


#ifdef USE_PUBLIC_MALLOC_WRAPPERS
#if __STD_C
static Void_t*  mALLOc(size_t);
static void     fREe(Void_t*);
static Void_t*  rEALLOc(Void_t*, size_t);
static Void_t*  mEMALIGn(size_t, size_t);
static Void_t*  vALLOc(size_t);
static Void_t*  pVALLOc(size_t);
static Void_t*  cALLOc(size_t, size_t);
static void     cFREe(Void_t*);
static int      mTRIm(size_t);
static size_t   mUSABLe(Void_t*);
static void     mSTATs();
static int      mALLOPt(int, int);
static struct mallinfo mALLINFo(void);
#else
static Void_t*  mALLOc();
static void     fREe();
static Void_t*  rEALLOc();
static Void_t*  mEMALIGn();
static Void_t*  vALLOc();
static Void_t*  pVALLOc();
static Void_t*  cALLOc();
static void     cFREe();
static int      mTRIm();
static size_t   mUSABLe();
static void     mSTATs();
static int      mALLOPt();
static struct mallinfo mALLINFo();
#endif
#endif



/* ---------- public wrappers --------------- */

#ifdef USE_PUBLIC_MALLOC_WRAPPERS

/*
  MALLOC_PREACTION and MALLOC_POSTACTION should be
  defined to return 0 on success, and nonzero on failure.
  The return value of MALLOC_POSTACTION is currently ignored
  in wrapper functions since there is no reasonable default
  action to take on failure.
*/


#ifdef USE_MALLOC_LOCK

#ifdef WIN32

static int mALLOC_MUTEx;

#define MALLOC_PREACTION   slwait(&mALLOC_MUTEx)
#define MALLOC_POSTACTION  slrelease(&mALLOC_MUTEx)

#else

#include <pthread.h>

static pthread_mutex_t mALLOC_MUTEx = PTHREAD_MUTEX_INITIALIZER;

#define MALLOC_PREACTION   pthread_mutex_lock(&mALLOC_MUTEx)
#define MALLOC_POSTACTION  pthread_mutex_unlock(&mALLOC_MUTEx)

#endif /* WIN32 */

#else

/* Substitute anything you like for these */

#define MALLOC_PREACTION   (0)
#define MALLOC_POSTACTION  (0)

#endif
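/*
  For instance, the same hooks can instrument rather than lock
  (a sketch; malloc_call_count is a hypothetical counter, and the
  comma expression still yields 0 for "success"):

      static unsigned long malloc_call_count;
      #define MALLOC_PREACTION   (malloc_call_count++, 0)
      #define MALLOC_POSTACTION  (0)
*/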

Void_t* public_mALLOc(size_t bytes) {
  Void_t* m;
  if (MALLOC_PREACTION != 0) {
    return 0;
  }
  m = mALLOc(bytes);
  if (MALLOC_POSTACTION != 0) {
  }
  return m;
}

void public_fREe(Void_t* m) {
  if (MALLOC_PREACTION != 0) {
    return;
  }
  fREe(m);
  if (MALLOC_POSTACTION != 0) {
  }
}

Void_t* public_rEALLOc(Void_t* m, size_t bytes) {
  if (MALLOC_PREACTION != 0) {
    return 0;
  }
  m = rEALLOc(m, bytes);
  if (MALLOC_POSTACTION != 0) {
  }
  return m;
}

Void_t* public_mEMALIGn(size_t alignment, size_t bytes) {
  Void_t* m;
  if (MALLOC_PREACTION != 0) {
    return 0;
  }
  m = mEMALIGn(alignment, bytes);
  if (MALLOC_POSTACTION != 0) {
  }
  return m;
}

Void_t* public_vALLOc(size_t bytes) {
  Void_t* m;
  if (MALLOC_PREACTION != 0) {
    return 0;
  }
  m = vALLOc(bytes);
  if (MALLOC_POSTACTION != 0) {
  }
  return m;
}

Void_t* public_pVALLOc(size_t bytes) {
  Void_t* m;
  if (MALLOC_PREACTION != 0) {
    return 0;
  }
  m = pVALLOc(bytes);
  if (MALLOC_POSTACTION != 0) {
  }
  return m;
}

Void_t* public_cALLOc(size_t n, size_t elem_size) {
  Void_t* m;
  if (MALLOC_PREACTION != 0) {
    return 0;
  }
  m = cALLOc(n, elem_size);
  if (MALLOC_POSTACTION != 0) {
  }
  return m;
}

void public_cFREe(Void_t* m) {
  if (MALLOC_PREACTION != 0) {
    return;
  }
  cFREe(m);
  if (MALLOC_POSTACTION != 0) {
  }
}

int public_mTRIm(size_t s) {
  int result;
  if (MALLOC_PREACTION != 0) {
    return 0;
  }
  result = mTRIm(s);
  if (MALLOC_POSTACTION != 0) {
  }
  return result;
}


size_t public_mUSABLe(Void_t* m) {
  size_t result;
  if (MALLOC_PREACTION != 0) {
    return 0;
  }
  result = mUSABLe(m);
  if (MALLOC_POSTACTION != 0) {
  }
  return result;
}


void public_mSTATs() {
  if (MALLOC_PREACTION != 0) {
    return;
  }
  mSTATs();
  if (MALLOC_POSTACTION != 0) {
  }
}

struct mallinfo public_mALLINFo() {
  /* zero-initialized so a failed MALLOC_PREACTION cannot return garbage */
  struct mallinfo m = {0};
  if (MALLOC_PREACTION != 0) {
    return m;
  }
  m = mALLINFo();
  if (MALLOC_POSTACTION != 0) {
  }
  return m;
}

int public_mALLOPt(int p, int v) {
  int result;
  if (MALLOC_PREACTION != 0) {
    return 0;
  }
  result = mALLOPt(p, v);
  if (MALLOC_POSTACTION != 0) {
  }
  return result;
}

#endif

/* ------------- Optional versions of memcopy ---------------- */

#if USE_MEMCPY

#define MALLOC_COPY(dest, src, nbytes, overlap) \
  ((overlap) ? memmove(dest, src, nbytes) : memcpy(dest, src, nbytes))
#define MALLOC_ZERO(dest, nbytes)  memset(dest, 0, nbytes)

#else /* !USE_MEMCPY */

/* Use Duff's device for good zeroing/copying performance. */

#define MALLOC_ZERO(charp, nbytes)                                          \
do {                                                                        \
  INTERNAL_SIZE_T* mzp = (INTERNAL_SIZE_T*)(charp);                         \
  long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn;                       \
  if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; }           \
  switch (mctmp) {                                                          \
    case 0: for(;;) { *mzp++ = 0;                                           \
    case 7:           *mzp++ = 0;                                           \
    case 6:           *mzp++ = 0;                                           \
    case 5:           *mzp++ = 0;                                           \
    case 4:           *mzp++ = 0;                                           \
    case 3:           *mzp++ = 0;                                           \
    case 2:           *mzp++ = 0;                                           \
    case 1:           *mzp++ = 0; if(mcn <= 0) break; mcn--; }              \
  }                                                                         \
} while(0)

/* For overlapping case, dest is always _below_ src. */

#define MALLOC_COPY(dest,src,nbytes,overlap)                                \
do {                                                                        \
  INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) src;                          \
  INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) dest;                         \
  long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn;                       \
  if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; }           \
  switch (mctmp) {                                                          \
    case 0: for(;;) { *mcdst++ = *mcsrc++;                                  \
    case 7:           *mcdst++ = *mcsrc++;                                  \
    case 6:           *mcdst++ = *mcsrc++;                                  \
    case 5:           *mcdst++ = *mcsrc++;                                  \
    case 4:           *mcdst++ = *mcsrc++;                                  \
    case 3:           *mcdst++ = *mcsrc++;                                  \
    case 2:           *mcdst++ = *mcsrc++;                                  \
    case 1:           *mcdst++ = *mcsrc++; if(mcn <= 0) break; mcn--; }     \
  }                                                                         \
} while(0)

#endif
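
/*
  How the Duff's device above unrolls, worked for MALLOC_COPY with
  nbytes == 100 and 4-byte INTERNAL_SIZE_T: mctmp = 25 words, so
  mcn = (25-1)/8 = 3 and mctmp becomes 25 % 8 = 1. The switch jumps
  into the loop at case 1, copying one word, and the for(;;) then
  makes three full 8-word passes: 1 + 24 = 25 words = 100 bytes,
  with the loop-count test executed only once per 8 words.
*/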

/* ------------------ MMAP support ------------------ */

#if HAVE_MMAP

#include <fcntl.h>
#ifndef LACKS_SYS_MMAN_H
#include <sys/mman.h>
#endif

#if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
#define MAP_ANONYMOUS MAP_ANON
#endif

/*
  Nearly all versions of mmap support MAP_ANONYMOUS,
  so the following is unlikely to be needed, but is
  supplied just in case.
*/

#ifndef MAP_ANONYMOUS

static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */

#define MMAP(addr, size, prot, flags) ((dev_zero_fd < 0) ? \
 (dev_zero_fd = open("/dev/zero", O_RDWR), \
  mmap((addr), (size), (prot), (flags), dev_zero_fd, 0)) : \
  mmap((addr), (size), (prot), (flags), dev_zero_fd, 0))

#else

#define MMAP(addr, size, prot, flags) \
 (mmap((addr), (size), (prot), (flags)|MAP_ANONYMOUS, -1, 0))

#endif

#endif /* HAVE_MMAP */
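
/*
  Call pattern (matching the uses in sYSMALLOc further below):

    char* mm = (char*)(MMAP(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE));
    if (mm != (char*)(MORECORE_FAILURE)) {
      ... carve a chunk out of [mm, mm+size) ...
    }

  On the MAP_ANONYMOUS-less path, the /dev/zero descriptor is opened
  once and then reused for every subsequent mapping.
*/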

/* ---------- Alternative MORECORE functions ------------ */

/*
  General Requirements for MORECORE.

  The MORECORE function must have the following properties:

  If MORECORE_CONTIGUOUS is false:

    * MORECORE must allocate in multiples of pagesize. It will
      only be called with arguments that are multiples of pagesize.

    * MORECORE must page-align. That is, MORECORE(0) must
      return an address at a page boundary.

  else (i.e. if MORECORE_CONTIGUOUS is true):

    * Consecutive calls to MORECORE with positive arguments
      return increasing addresses, indicating that space has been
      contiguously extended.

    * MORECORE need not allocate in multiples of pagesize.
      Calls to MORECORE need not have args of multiples of pagesize.

    * MORECORE need not page-align.

  In either case:

    * MORECORE may allocate more memory than requested. (Or even less,
      but this will generally result in a malloc failure.)

    * MORECORE must not allocate memory when given argument zero, but
      instead return one past the end address of memory from the previous
      nonzero call. This malloc does NOT call MORECORE(0)
      until at least one call with positive arguments is made, so
      the initial value returned is not important.

    * Even though consecutive calls to MORECORE need not return contiguous
      addresses, it must be OK for malloc'ed chunks to span multiple
      regions in those cases where they do happen to be contiguous.

    * MORECORE need not handle negative arguments -- it may instead
      just return MORECORE_FAILURE when given negative arguments.
      Negative arguments are always multiples of pagesize. MORECORE
      must not misinterpret negative args as large positive unsigned
      args.

  There is some variation across systems about the type of the
  argument to sbrk/MORECORE. If size_t is unsigned, then it cannot
  actually be size_t, because sbrk supports negative args, so it is
  normally the signed type of the same width as size_t (sometimes
  declared as "intptr_t", and sometimes "ptrdiff_t"). It doesn't much
  matter though. Internally, we use "long" as arguments, which should
  work across all reasonable possibilities.

  Additionally, if MORECORE ever returns failure for a positive
  request, and HAVE_MMAP is true, then mmap is used as a noncontiguous
  system allocator. This is a useful backup strategy for systems with
  holes in address spaces -- in this case sbrk cannot contiguously
  expand the heap, but mmap may be able to map noncontiguous space.
  If you'd like mmap to ALWAYS be used, you can define MORECORE to be
  a function that always returns MORECORE_FAILURE.

  If you are using this malloc with something other than unix sbrk to
  supply memory regions, you probably want to set MORECORE_CONTIGUOUS
  as false. As an example, here is a custom allocator kindly
  contributed for pre-OSX macOS. It uses virtually but not
  necessarily physically contiguous non-paged memory (locked in,
  present and won't get swapped out). You can use it by uncommenting
  this section, adding some #includes, and setting up the appropriate
  defines above:

      #define MORECORE osMoreCore
      #define MORECORE_CONTIGUOUS 0

  There is also a shutdown routine that should somehow be called for
  cleanup upon program exit.

  #define MAX_POOL_ENTRIES 100
  #define MINIMUM_MORECORE_SIZE  (64 * 1024)
  static int next_os_pool;
  void *our_os_pools[MAX_POOL_ENTRIES];

  void *osMoreCore(int size)
  {
    void *ptr = 0;
    static void *sbrk_top = 0;

    if (size > 0)
    {
      if (size < MINIMUM_MORECORE_SIZE)
         size = MINIMUM_MORECORE_SIZE;
      if (CurrentExecutionLevel() == kTaskLevel)
         ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0);
      if (ptr == 0)
      {
        return (void *) MORECORE_FAILURE;
      }
      // save ptrs so they can be freed during cleanup
      our_os_pools[next_os_pool] = ptr;
      next_os_pool++;
      ptr = (void *) ((((unsigned long) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK);
      sbrk_top = (char *) ptr + size;
      return ptr;
    }
    else if (size < 0)
    {
      // we don't currently support shrink behavior
      return (void *) MORECORE_FAILURE;
    }
    else
    {
      return sbrk_top;
    }
  }

  // cleanup any allocated memory pools
  // called as last thing before shutting down driver

  void osCleanupMem(void)
  {
    void **ptr;

    for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++)
      if (*ptr)
      {
         PoolDeallocate(*ptr);
         *ptr = 0;
      }
  }

*/

/*
  ----------------------- Chunk representations -----------------------
*/

/*
  This struct declaration is misleading (but accurate and necessary).
  It declares a "view" into memory allowing access to necessary
  fields at known offsets from a given base. See explanation below.
*/

struct malloc_chunk {

  INTERNAL_SIZE_T      prev_size;  /* Size of previous chunk (if free). */
  INTERNAL_SIZE_T      size;       /* Size in bytes, including overhead. */

  struct malloc_chunk* fd;         /* double links -- used only if free. */
  struct malloc_chunk* bk;
};

typedef struct malloc_chunk* mchunkptr;

/*
  malloc_chunk details:

  (The following includes lightly edited explanations by Colin Plumb.)

  Chunks of memory are maintained using a `boundary tag' method as
  described in e.g., Knuth or Standish. (See the paper by Paul
  Wilson ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a
  survey of such techniques.) Sizes of free chunks are stored both
  in the front of each chunk and at the end. This makes
  consolidating fragmented chunks into bigger chunks very fast. The
  size fields also hold bits representing whether chunks are free or
  in use.

  An allocated chunk looks like this:

    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Size of previous chunk, if allocated            | |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Size of chunk, in bytes                         |P|
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             User data starts here...                          .
            .                                                               .
            .             (malloc_usable_space() bytes)                     .
            .                                                               |
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Size of chunk                                     |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

  Where "chunk" is the front of the chunk for the purpose of most of
  the malloc code, but "mem" is the pointer that is returned to the
  user. "Nextchunk" is the beginning of the next contiguous chunk.

  Chunks always begin on even word boundaries, so the mem portion
  (which is returned to the user) is also on an even word boundary, and
  thus double-word aligned.

  Free chunks are stored in circular doubly-linked lists, and look like this:

    chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Size of previous chunk                            |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `head:' |             Size of chunk, in bytes                         |P|
      mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Forward pointer to next chunk in list             |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Back pointer to previous chunk in list            |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
            |             Unused space (may be 0 bytes long)                .
            .                                                               .
            .                                                               |
nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    `foot:' |             Size of chunk, in bytes                           |
            +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

  The P (PREV_INUSE) bit, stored in the unused low-order bit of the
  chunk size (which is always a multiple of two words), is an in-use
  bit for the *previous* chunk. If that bit is *clear*, then the
  word before the current chunk size contains the previous chunk
  size, and can be used to find the front of the previous chunk.
  The very first chunk allocated always has this bit set,
  preventing access to non-existent (or non-owned) memory. If
  prev_inuse is set for any given chunk, then you CANNOT determine
  the size of the previous chunk, and might even get a memory
  addressing fault when trying to do so.

  Note that the `foot' of the current chunk is actually represented
  as the prev_size of the NEXT chunk. (This makes it easier to
  deal with alignments etc.)

  The two exceptions to all this are

   1. The special chunk `top' doesn't bother using the
      trailing size field since there is no next contiguous chunk
      that would have to index off it. After initialization, `top'
      is forced to always exist. If it would become less than
      MINSIZE bytes long, it is replenished.

   2. Chunks allocated via mmap, which have the second-lowest-order
      bit (IS_MMAPPED) set in their size fields. Because they are
      allocated one-by-one, each must contain its own trailing size field.

*/

/*
  Size and alignment checks and conversions
*/

/* conversion from malloc headers to user pointers, and back */

#define chunk2mem(p)   ((Void_t*)((char*)(p) + 2*SIZE_SZ))
#define mem2chunk(mem) ((mchunkptr)((char*)(mem) - 2*SIZE_SZ))
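
/*
  Worked example (assuming 4-byte INTERNAL_SIZE_T, so SIZE_SZ == 4):
  for a chunk based at 0x1000, the prev_size/size header occupies
  [0x1000, 0x1008), chunk2mem yields 0x1008 (the pointer handed to
  the user), and mem2chunk(0x1008) recovers 0x1000. The two macros
  are exact inverses.
*/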

/* The smallest possible chunk */
#define MIN_CHUNK_SIZE  (sizeof(struct malloc_chunk))

/* The smallest size we can malloc is an aligned minimal chunk */

#define MINSIZE  ((MIN_CHUNK_SIZE+MALLOC_ALIGN_MASK) & ~MALLOC_ALIGN_MASK)

/* Check if m has acceptable alignment */

#define aligned_OK(m)  (((unsigned long)((m)) & (MALLOC_ALIGN_MASK)) == 0)

/*
  Check for negative/huge sizes.
  This cannot just test for < 0 because the argument might
  be an unsigned type of uncertain width.
*/

#define IS_NEGATIVE(x) \
  ((unsigned long)(x) >= \
   (unsigned long)((((INTERNAL_SIZE_T)(1)) << ((SIZE_SZ)*8 - 1))))

/* pad request bytes into a usable size -- internal version */

#define request2size(req)                                 \
  (((req) + SIZE_SZ + MALLOC_ALIGN_MASK < MINSIZE)  ?     \
   MINSIZE :                                              \
   ((req) + SIZE_SZ + MALLOC_ALIGN_MASK) & ~MALLOC_ALIGN_MASK)

/*
  Same, except also check for negative/huge arguments.
  This lets through args that are positive but wrap into
  negatives when padded. However, these are trapped
  elsewhere.
*/

#define checked_request2size(req, sz) \
  if (IS_NEGATIVE(req)) {             \
    MALLOC_FAILURE_ACTION;            \
    return 0;                         \
  }                                   \
  (sz) = request2size(req);
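
/*
  Worked examples (assuming SIZE_SZ == 4 and MALLOC_ALIGN_MASK == 7,
  so MINSIZE == 16):

    request2size(0)  -> 0+4+7 < 16, so MINSIZE = 16
    request2size(20) -> (20+4+7) & ~7          = 24
    request2size(25) -> (25+4+7) & ~7          = 32

  i.e. one size word of overhead, rounded up to a multiple of 8,
  with MINSIZE as the floor.
*/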

/*
  Physical chunk operations
*/

/* size field is or'ed with PREV_INUSE when previous adjacent chunk in use */

#define PREV_INUSE 0x1

/* size field is or'ed with IS_MMAPPED if the chunk was obtained with mmap() */

#define IS_MMAPPED 0x2

/* Bits to mask off when extracting size */

#define SIZE_BITS (PREV_INUSE|IS_MMAPPED)

/* Ptr to next physical malloc_chunk. */

#define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->size & ~PREV_INUSE) ))

/* Ptr to previous physical malloc_chunk */

#define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_size) ))

/* Treat space at ptr + offset as a chunk */

#define chunk_at_offset(p, s)  ((mchunkptr)(((char*)(p)) + (s)))
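
/*
  Illustrative traversal (names hypothetical): for any non-mmapped
  chunk p,

    mchunkptr nxt = next_chunk(p);     /* always valid */
    if (!prev_inuse(p)) {
      mchunkptr prv = prev_chunk(p);   /* valid: prev is free, so its
                                          size sits in p->prev_size */
    }

  prev_chunk must not be used while prev_inuse(p) is set, because the
  previous chunk then owns the prev_size word as user data.
*/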

/*
  Dealing with use bits

  Note: IS_MMAPPED is intentionally not masked off from the size field
  in macros for which mmapped chunks should never be seen. This should
  cause helpful core dumps to occur if it is tried by accident by
  people extending or adapting this malloc.
*/

/* extract p's inuse bit */

#define inuse(p)\
  ((((mchunkptr)(((char*)(p))+((p)->size & ~PREV_INUSE)))->size) & PREV_INUSE)

/* extract inuse bit of previous chunk */

#define prev_inuse(p)       ((p)->size & PREV_INUSE)

/* check for mmap()'ed chunk */

#define chunk_is_mmapped(p) ((p)->size & IS_MMAPPED)

/* set/clear chunk as being inuse without otherwise disturbing */

#define set_inuse(p)\
  ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size |= PREV_INUSE

#define clear_inuse(p)\
  ((mchunkptr)(((char*)(p)) + ((p)->size & ~PREV_INUSE)))->size &= ~(PREV_INUSE)

/* check/set/clear inuse bits in known places */

#define inuse_bit_at_offset(p, s)\
  (((mchunkptr)(((char*)(p)) + (s)))->size & PREV_INUSE)

#define set_inuse_bit_at_offset(p, s)\
  (((mchunkptr)(((char*)(p)) + (s)))->size |= PREV_INUSE)

#define clear_inuse_bit_at_offset(p, s)\
  (((mchunkptr)(((char*)(p)) + (s)))->size &= ~(PREV_INUSE))
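
/*
  Worked example: for an in-use chunk p of size 24 at address 0x1000,
  inuse(p) reads the PREV_INUSE bit of the size field of the chunk at
  0x1000 + 24 = 0x1018. A chunk's own in-use status is thus stored one
  chunk *ahead* of it, which is why mmapped chunks (which have no
  successor) must never reach these macros.
*/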

/*
  Dealing with size fields
*/

/* Get size, ignoring use bits */

#define chunksize(p)        ((p)->size & ~(SIZE_BITS))

/* Set size at head, without disturbing its use bit */

#define set_head_size(p, s) ((p)->size = (((p)->size & PREV_INUSE) | (s)))

/* Set size/use field */

#define set_head(p, s)      ((p)->size = (s))

/* Set size at footer (only when chunk is not in use) */

#define set_foot(p, s)      (((mchunkptr)((char*)(p) + (s)))->prev_size = (s))
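
/*
  Illustrative use of these setters (a sketch of how malloc splits a
  free chunk; the variable names are hypothetical):

    remainder = chunk_at_offset(victim, nb);
    set_head(victim,    nb          | PREV_INUSE);
    set_head(remainder, (size - nb) | PREV_INUSE);
    set_foot(remainder, size - nb);

  The served piece keeps PREV_INUSE since its predecessor is in use;
  the free remainder records its size at both head and foot so that
  later consolidation can find it from either side.
*/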

/*
  ------------------ Internal data structures --------------------

  All internal state is held in an instance of malloc_state defined
  below. There are no other static variables, except in two optional
  cases:
    * If USE_MALLOC_LOCK is defined, the mALLOC_MUTEx declared above.
    * If HAVE_MMAP is true, but mmap doesn't support
      MAP_ANONYMOUS, a dummy file descriptor for mmap.

  Beware of lots of tricks that minimize the total space
  requirements. The result is a little over 1K bytes (for 4-byte
  pointers and size_t).
*/

/*
  Bins

  An array of bin headers for free chunks. Each bin is doubly
  linked. The bins are approximately proportionally (log) spaced.
  There are a lot of these bins (128). This may look excessive, but
  works very well in practice. Most bins hold sizes that are
  unusual as malloc request sizes, but are more usual for fragments
  and consolidated sets of chunks, which is what these bins hold, so
  they can be found quickly. All procedures maintain the invariant
  that no consolidated chunk physically borders another one, so each
  chunk in a list is known to be preceded and followed by either
  inuse chunks or the ends of memory.

  Chunks in bins are kept in size order, with ties going to the
  approximately least recently used chunk. Ordering is irrelevant
  for the small bins, which all contain the same-sized chunks, but
  facilitates best-fit allocation for larger chunks. (These lists
  are just sequential. Keeping them in order almost never requires
  enough traversal to warrant using fancier ordered data
  structures.) Chunks of the same size are linked with the most
  recently freed at the front, and allocations are taken from the
  back. This results in LRU (FIFO) allocation order, which tends
  to give each chunk an equal opportunity to be consolidated with
  adjacent freed chunks, resulting in larger free chunks and less
  fragmentation.

  To simplify use in double-linked lists, each bin header acts
  as a malloc_chunk. This avoids special-casing for headers.
  But to conserve space and (mainly) improve locality, we allocate
  only the fd/bk pointers of bins, and then use repositioning tricks
  to treat these as the fields of a malloc_chunk*.
*/

typedef struct malloc_chunk* mbinptr;

#define NBINS 128

/* addressing -- note that bin_at(0) does not exist */

#define bin_at(m, i) ((mbinptr)((char*)&((m)->bins[(i)<<1]) - (SIZE_SZ<<1)))

/* analog of ++bin */

#define next_bin(b)  ((mbinptr)((char*)(b) + (sizeof(mchunkptr)<<1)))

/* Reminders about list directionality within bins */

#define first(b)     ((b)->fd)
#define last(b)      ((b)->bk)

/*
  Take a chunk off a bin list
*/

#define unlink(P, BK, FD) { \
  FD = P->fd;               \
  BK = P->bk;               \
  FD->bk = BK;              \
  BK->fd = FD;              \
}
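
/*
  How the repositioning trick behind bin_at works: bin_at(m, i)
  points 2*SIZE_SZ bytes *before* m->bins[2*i], where a chunk's
  prev_size field would sit. On typical builds the fd field of
  malloc_chunk lies at offset 2*SIZE_SZ, so bin_at(m,i)->fd aliases
  bins[2*i] and bin_at(m,i)->bk aliases bins[2*i+1]; a bin header
  therefore costs two pointers instead of a whole chunk.

  Typical unlink use (names hypothetical):

    mchunkptr bck, fwd;
    unlink(victim, bck, fwd);  /* splice victim out of its bin */
*/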

/*
  Indexing bins

  Bins for sizes < 512 bytes contain chunks of all the same size, spaced
  8 bytes apart. Larger bins are approximately logarithmically spaced:

    64 bins of size       8
    32 bins of size      64
    16 bins of size     512
     8 bins of size    4096
     4 bins of size   32768
     2 bins of size  262144
     1 bin  of size what's left

  There is actually a little bit of slop in the numbers in bin_index
  for the sake of speed. This makes no difference elsewhere.

  The bins top out at around 1MB because we expect to service large
  chunks via mmap.
*/

/* The first NSMALLBINS bins (and fastbins) hold only one size each */
#define NSMALLBINS        64
#define SMALLBIN_WIDTH     8
#define MIN_LARGE_SIZE   512

#define in_smallbin_range(sz)  ((sz) < MIN_LARGE_SIZE)

#define smallbin_index(sz)     (((unsigned)(sz)) >> 3)

#define largebin_index(sz)                                                   \
(((((unsigned long)(sz)) >>  6) <= 32)?  56 + (((unsigned long)(sz)) >>  6): \
 ((((unsigned long)(sz)) >>  9) <= 20)?  91 + (((unsigned long)(sz)) >>  9): \
 ((((unsigned long)(sz)) >> 12) <= 10)? 110 + (((unsigned long)(sz)) >> 12): \
 ((((unsigned long)(sz)) >> 15) <=  4)? 119 + (((unsigned long)(sz)) >> 15): \
 ((((unsigned long)(sz)) >> 18) <=  2)? 124 + (((unsigned long)(sz)) >> 18): \
 126)

#define bin_index(sz) \
 ((in_smallbin_range(sz)) ? smallbin_index(sz) : largebin_index(sz))
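
/*
  Worked examples of the indexing above:

    bin_index(16)      = 16 >> 3          =  2   (a smallbin)
    bin_index(504)     = 504 >> 3         = 63   (last smallbin)
    bin_index(512)     = 56 + (512 >> 6)  = 64   (first largebin)
    bin_index(4096)    = 91 + (4096 >> 9) = 99
    bin_index(1000000) = 126              (the catch-all last bin)
*/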

/*
  Unsorted chunks

  All remainders from chunk splits, as well as all returned chunks,
  are first placed in the "unsorted" bin. They are then placed
  in regular bins after malloc gives them ONE chance to be used before
  binning. So, basically, the unsorted_chunks list acts as a queue,
  with chunks being placed on it in free (and malloc_consolidate),
  and taken off (to be either used or placed in bins) in malloc.
*/

/* The otherwise unindexable 1-bin is used to hold unsorted chunks. */

#define unsorted_chunks(M) (bin_at(M, 1))

/*
  Top

  The top-most available chunk (i.e., the one bordering the end of
  available memory) is treated specially. It is never included in
  any bin, is used only if no other chunk is available, and is
  released back to the system if it is very large (see
  M_TRIM_THRESHOLD). `top' is never properly linked to its bin
  since it is always handled specially. Because top initially
  points to its own bin with initial zero size, thus forcing
  extension on the first malloc request, we avoid having any special
  code in malloc to check whether it even exists yet. But we still
  need to do so when getting memory from system, so we make
  initial_top treat the bin as a legal but unusable chunk during the
  interval between initialization and the first call to
  sYSMALLOc. (This is somewhat delicate, since it relies on
  the 2 preceding words to be zero during this interval as well.)
*/

/* Conveniently, the unsorted bin can be used as dummy top on first call */
#define initial_top(M) (unsorted_chunks(M))

/*
  Binmap

  To help compensate for the large number of bins, a one-level index
  structure is used for bin-by-bin searching. `binmap' is a
  bitvector recording whether bins are definitely empty so they can
  be skipped over during traversals. The bits are NOT always
  cleared as soon as bins are empty, but instead only
  when they are noticed to be empty during traversal in malloc.
*/

/* Conservatively use 32 bits per map word, even if on 64bit system */
#define BINMAPSHIFT      5
#define BITSPERMAP       (1U << BINMAPSHIFT)
#define BINMAPSIZE       (NBINS / BITSPERMAP)

#define idx2block(i)     ((i) >> BINMAPSHIFT)
#define idx2bit(i)       ((1U << ((i) & ((1U << BINMAPSHIFT)-1))))

#define mark_bin(m,i)    ((m)->binmap[idx2block(i)] |=  idx2bit(i))
#define unmark_bin(m,i)  ((m)->binmap[idx2block(i)] &= ~(idx2bit(i)))
#define get_binmap(m,i)  ((m)->binmap[idx2block(i)] &   idx2bit(i))
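
/*
  Worked example: bin 70 lives in map word idx2block(70) = 2 at bit
  idx2bit(70) = 1U << (70 & 31) = 1U << 6. With NBINS == 128 and 32
  bits per word, BINMAPSIZE is 4 words, so the whole index occupies
  just 16 bytes.
*/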

/*
  Fastbins

  An array of lists holding recently freed small chunks. Fastbins
  are not doubly linked. It is faster to single-link them, and
  since chunks are never removed from the middles of these lists,
  double linking is not necessary.

  Chunks in fastbins keep their inuse bit set, so they cannot
  be consolidated with other free chunks. malloc_consolidate
  releases all chunks in fastbins and consolidates them with
  other free chunks.
*/

typedef struct malloc_chunk* mfastbinptr;

/* offset 2 to use otherwise unindexable first 2 bins */
#define fastbin_index(sz)  ((((unsigned int)(sz)) >> 3) - 2)

/* The maximum fastbin request size we support */
#define MAX_FAST_SIZE  80

#define NFASTBINS  (fastbin_index(request2size(MAX_FAST_SIZE))+1)

/*
  Flag bit held in max_fast indicating that there probably are some
  fastbin chunks. It is set true on entering a chunk into any fastbin,
  and cleared only in malloc_consolidate.

  The truth value is inverted so that have_fastchunks will be true
  upon startup (since statics are zero-filled).
*/

#define have_fastchunks(M)   (((M)->max_fast &  1U) == 0)
#define clear_fastchunks(M)  ((M)->max_fast |=  1U)
#define set_fastchunks(M)    ((M)->max_fast &= ~1U)

/*
  Initialization value of max_fast.
  Use impossibly small value if 0.
  The low flag bit is set, i.e. have_fastchunks starts out false
  after initialization.
*/
#define req2max_fast(s) (((((s) == 0)? SMALLBIN_WIDTH: request2size(s))) | 1U)
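
/*
  Worked example (assuming 4-byte INTERNAL_SIZE_T): request2size(80)
  = 88 and fastbin_index(88) = (88 >> 3) - 2 = 9, so NFASTBINS is 10,
  covering chunk sizes 16, 24, ..., 88. req2max_fast(64) =
  request2size(64) | 1 = 72 | 1 = 73; since real chunk sizes are
  always multiples of 8, comparing them against 73 behaves exactly
  like comparing against 72, so the flag bit costs nothing.
*/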

/*
  NONCONTIGUOUS_REGIONS is a special value for sbrk_base indicating that
  MORECORE does not return contiguous regions. In this case, we do not check
  or assume that the address of each chunk is at least sbrk_base. Otherwise,
  contiguity is exploited in merging together, when possible, results
  from consecutive MORECORE calls.

  The possible values for sbrk_base are:
    MORECORE_FAILURE:
      MORECORE has not yet been called, but we expect contiguous space
    NONCONTIGUOUS_REGIONS:
      we don't expect or rely on contiguous space
    any other legal address:
      the first address returned by MORECORE when contiguous
*/

#define NONCONTIGUOUS_REGIONS ((char*)(-3))

/*
  ----------- Internal state representation and initialization -----------
*/

struct malloc_state {

  /* The maximum chunk size to be eligible for fastbin */
  INTERNAL_SIZE_T  max_fast;   /* low bit used as fastbin flag */

  /* Base of the topmost chunk -- not otherwise kept in a bin */
  mchunkptr        top;

  /* The remainder from the most recent split of a small request */
  mchunkptr        last_remainder;

  /* Fastbins */
  mfastbinptr      fastbins[NFASTBINS];

  /* Normal bins packed as described above */
  mchunkptr        bins[NBINS * 2];

  /* Bitmap of bins */
  unsigned int     binmap[BINMAPSIZE];

  /* Tunable parameters */
  unsigned long    trim_threshold;
  INTERNAL_SIZE_T  top_pad;
  INTERNAL_SIZE_T  mmap_threshold;

  /* Memory map support */
  int              n_mmaps;
  int              n_mmaps_max;
  int              max_n_mmaps;

  /* Bookkeeping for sbrk */
  unsigned int     pagesize;   /* Cache malloc_getpagesize */
  char*            sbrk_base;  /* first address returned by sbrk,
                                  or NONCONTIGUOUS_REGIONS */

  /* Statistics */
  INTERNAL_SIZE_T  mmapped_mem;
  INTERNAL_SIZE_T  sbrked_mem;

  INTERNAL_SIZE_T  max_sbrked_mem;
  INTERNAL_SIZE_T  max_mmapped_mem;
  INTERNAL_SIZE_T  max_total_mem;
};

typedef struct malloc_state *mstate;
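
/*
  Space check for the "little over 1K" claim above, assuming 4-byte
  pointers and INTERNAL_SIZE_T: bins holds 2*128 pointers (1024
  bytes), fastbins 10 pointers (40 bytes), binmap 4 words (16 bytes),
  and the scalar fields add roughly another 64 bytes -- about 1144
  bytes in all.
*/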

/*
  There is exactly one instance of this struct in this malloc.

  If you are adapting this malloc in a way that does NOT use a static
  malloc_state, you MUST explicitly zero-fill it before using. This
  malloc relies on the property that malloc_state is initialized to
  all zeroes (as is true of C statics).
*/

static struct malloc_state av_;  /* never directly referenced */

/*
  All uses of av_ are via get_malloc_state().
  This simplifies construction of multithreaded, etc extensions.

  At most one call to get_malloc_state is made per invocation of
  the public versions of malloc, free, and all other routines
  except realloc, valloc, and pvalloc. Also, it is called
  in check* routines if DEBUG is set.
*/

#define get_malloc_state() (&(av_))

/*
  Initialize a malloc_state struct.

  This is called only from within malloc_consolidate, which needs
  to be called in the same contexts anyway. It is never called directly
  outside of malloc_consolidate because some optimizing compilers try
  to inline it at all call points, which turns out not to be an
  optimization at all. (Inlining it only in malloc_consolidate is fine though.)
*/

#if __STD_C
static void malloc_init_state(mstate av)
#else
static void malloc_init_state(av) mstate av;
#endif
{
  int     i;
  mbinptr bin;

  /* Uncomment this if you are not using a static av */
  /* MALLOC_ZERO(av, sizeof(struct malloc_state)); */

  /* Establish circular links for normal bins */
  for (i = 1; i < NBINS; ++i) {
    bin = bin_at(av,i);
    bin->fd = bin->bk = bin;
  }

  av->max_fast       = req2max_fast(DEFAULT_MXFAST);

  av->top_pad        = DEFAULT_TOP_PAD;
  av->n_mmaps_max    = DEFAULT_MMAP_MAX;
  av->mmap_threshold = DEFAULT_MMAP_THRESHOLD;

#if MORECORE_CONTIGUOUS
  av->trim_threshold = DEFAULT_TRIM_THRESHOLD;
  av->sbrk_base      = (char*)MORECORE_FAILURE;
#else
  av->trim_threshold = (unsigned long)(-1);
  av->sbrk_base      = NONCONTIGUOUS_REGIONS;
#endif

  av->top            = initial_top(av);
  av->pagesize       = malloc_getpagesize;
}

/*
  Other internal utilities operating on mstates
*/

#if __STD_C
static Void_t* sYSMALLOc(INTERNAL_SIZE_T, mstate);
static int     sYSTRIm(size_t, mstate);
static void    malloc_consolidate(mstate);
#else
static Void_t* sYSMALLOc();
static int     sYSTRIm();
static void    malloc_consolidate();
#endif

/*
  Debugging support

  These routines make a number of assertions about the states
  of data structures that should be true at all times. If any
  are not true, it's very likely that a user program has somehow
  trashed memory. (It's also possible that there is a coding error
  in malloc. In which case, please report it!)
*/

#if ! DEBUG

#define check_chunk(P)
#define check_free_chunk(P)
#define check_inuse_chunk(P)
#define check_remalloced_chunk(P,N)
#define check_malloced_chunk(P,N)
#define check_malloc_state()

#else
#define check_chunk(P)              do_check_chunk(P)
#define check_free_chunk(P)         do_check_free_chunk(P)
#define check_inuse_chunk(P)        do_check_inuse_chunk(P)
#define check_remalloced_chunk(P,N) do_check_remalloced_chunk(P,N)
#define check_malloced_chunk(P,N)   do_check_malloced_chunk(P,N)
#define check_malloc_state()        do_check_malloc_state()

/*
  Properties of all chunks
*/

#if __STD_C
static void do_check_chunk(mchunkptr p)
#else
static void do_check_chunk(p) mchunkptr p;
#endif
{
  mstate av = get_malloc_state();
  unsigned long sz = chunksize(p);

  if (!chunk_is_mmapped(p)) {

    /* Has legal address ... */
    if (av->sbrk_base != NONCONTIGUOUS_REGIONS) {
      assert(((char*)p) >= ((char*)(av->sbrk_base)));
    }

    if (p != av->top) {
      if (av->sbrk_base != NONCONTIGUOUS_REGIONS) {
        assert(((char*)p + sz) <= ((char*)(av->top)));
      }
    }
    else {
      if (av->sbrk_base != NONCONTIGUOUS_REGIONS) {
        assert(((char*)p + sz) <= ((char*)(av->sbrk_base) + av->sbrked_mem));
      }
      /* top size is always at least MINSIZE */
      assert((long)(sz) >= (long)(MINSIZE));
      /* top predecessor always marked inuse */
      assert(prev_inuse(p));
    }

  }
  else {
#if HAVE_MMAP
    /* address is outside main heap */
    /* unless mmap has been used as sbrk backup */
    if (av->sbrk_base != NONCONTIGUOUS_REGIONS) {
      assert(! (((char*)p) >= ((char*)av->sbrk_base) &&
                ((char*)p) <  ((char*)(av->sbrk_base) + av->sbrked_mem)));
    }
    /* chunk is page-aligned */
    assert(((p->prev_size + sz) & (av->pagesize-1)) == 0);
    /* mem is aligned */
    assert(aligned_OK(chunk2mem(p)));
#else
    /* force an appropriate assert violation if debug set */
    assert(!chunk_is_mmapped(p));
#endif
  }
}

/*
  Properties of free chunks
*/

#if __STD_C
static void do_check_free_chunk(mchunkptr p)
#else
static void do_check_free_chunk(p) mchunkptr p;
#endif
{
  mstate av = get_malloc_state();

  INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;
  mchunkptr next = chunk_at_offset(p, sz);

  do_check_chunk(p);

  /* Chunk must claim to be free ... */
  assert(!inuse(p));
  assert(!chunk_is_mmapped(p));

  /* Unless a special marker, must have OK fields */
  if ((unsigned long)sz >= (unsigned long)MINSIZE)
  {
    assert((sz & MALLOC_ALIGN_MASK) == 0);
    assert(aligned_OK(chunk2mem(p)));
    /* ... matching footer field */
    assert(next->prev_size == sz);
    /* ... and is fully consolidated */
    assert(prev_inuse(p));
    assert(next == av->top || inuse(next));

    /* ... and has minimally sane links */
    assert(p->fd->bk == p);
    assert(p->bk->fd == p);
  }
  else /* markers are always of size SIZE_SZ */
    assert(sz == SIZE_SZ);
}

/*
  Properties of inuse chunks
*/

#if __STD_C
static void do_check_inuse_chunk(mchunkptr p)
#else
static void do_check_inuse_chunk(p) mchunkptr p;
#endif
{
  mstate av = get_malloc_state();
  mchunkptr next;
  do_check_chunk(p);

  if (chunk_is_mmapped(p))
    return; /* mmapped chunks have no next/prev */

  /* Check whether it claims to be in use ... */
  assert(inuse(p));

  next = next_chunk(p);

  /* ... and is surrounded by OK chunks.
     Since more things can be checked with free chunks than inuse ones,
     if an inuse chunk borders them and debug is on, it's worth doing them.
  */
  if (!prev_inuse(p)) {
    /* Note that we cannot even look at prev unless it is not inuse */
    mchunkptr prv = prev_chunk(p);
    assert(next_chunk(prv) == p);
    do_check_free_chunk(prv);
  }

  if (next == av->top) {
    assert(prev_inuse(next));
    assert(chunksize(next) >= MINSIZE);
  }
  else if (!inuse(next))
    do_check_free_chunk(next);
}

/*
  Properties of chunks recycled from fastbins
*/

#if __STD_C
static void do_check_remalloced_chunk(mchunkptr p, INTERNAL_SIZE_T s)
#else
static void do_check_remalloced_chunk(p, s) mchunkptr p; INTERNAL_SIZE_T s;
#endif
{
  INTERNAL_SIZE_T sz = p->size & ~PREV_INUSE;

  do_check_inuse_chunk(p);

  /* Legal size ... */
  assert((sz & MALLOC_ALIGN_MASK) == 0);
  assert((long)sz - (long)MINSIZE >= 0);
  assert((long)sz - (long)s >= 0);
  assert((long)sz - (long)(s + MINSIZE) < 0);

  /* ... and alignment */
  assert(aligned_OK(chunk2mem(p)));
}

/*
  Properties of nonrecycled chunks at the point they are malloced
*/

#if __STD_C
static void do_check_malloced_chunk(mchunkptr p, INTERNAL_SIZE_T s)
#else
static void do_check_malloced_chunk(p, s) mchunkptr p; INTERNAL_SIZE_T s;
#endif
{
  /* same as recycled case ... */
  do_check_remalloced_chunk(p, s);

  /*
    ... plus, must obey implementation invariant that prev_inuse is
    always true of any allocated chunk; i.e., that each allocated
    chunk borders either a previously allocated and still in-use
    chunk, or the base of its memory arena. This is ensured
    by making all allocations from the `lowest' part of any found
    chunk. This does not necessarily hold however for chunks
    recycled via fastbins.
  */

  assert(prev_inuse(p));
}

/*
  Properties of malloc_state.

  This may be useful for debugging malloc, as well as detecting user
  programmer errors that somehow write into malloc_state.
*/

static void do_check_malloc_state()
{
  mstate av = get_malloc_state();
  int i;
  mchunkptr p;
  mchunkptr q;
  mbinptr b;
  unsigned int biton;
  int empty;
  unsigned int idx;
  INTERNAL_SIZE_T size;
  unsigned long total = 0;
  int max_fast_bin;

  /* internal size_t must be no wider than pointer type */
  assert(sizeof(INTERNAL_SIZE_T) <= sizeof(char*));

  /* alignment is a power of 2 */
  assert((MALLOC_ALIGNMENT & (MALLOC_ALIGNMENT-1)) == 0);

  /* cannot run remaining checks until fully initialized */
  if (av->top == 0 || av->top == initial_top(av))
    return;

  /* pagesize is a power of 2 */
  assert((av->pagesize & (av->pagesize-1)) == 0);

  /* properties of fastbins */

  /* max_fast is in allowed range */
  assert((av->max_fast & ~1) <= request2size(MAX_FAST_SIZE));

  max_fast_bin = fastbin_index(av->max_fast);

  for (i = 0; i < NFASTBINS; ++i) {
    p = av->fastbins[i];

    /* all bins past max_fast are empty */
    if (i > max_fast_bin)
      assert(p == 0);

    while (p != 0) {
      /* each chunk claims to be inuse */
      do_check_inuse_chunk(p);
      total += chunksize(p);
      /* chunk belongs in this bin */
      assert(fastbin_index(chunksize(p)) == i);
      p = p->fd;
    }
  }

  if (total != 0)
    assert(have_fastchunks(av));

  /* check normal bins */
  for (i = 1; i < NBINS; ++i) {
    b = bin_at(av,i);

    /* binmap is accurate (except for bin 1 == unsorted_chunks) */
    if (i >= 2) {
      biton = get_binmap(av,i);
      empty = last(b) == b;
      if (!biton)
        assert(empty);
      else if (!empty)
        assert(biton);
    }

    for (p = last(b); p != b; p = p->bk) {
      /* each chunk claims to be free */
      do_check_free_chunk(p);
      size = chunksize(p);
      total += size;
      if (i >= 2) {
        /* chunk belongs in bin */
        idx = bin_index(size);
        assert(idx == i);
        /* lists are sorted */
        assert(p->bk == b || chunksize(p->bk) >= chunksize(p));
      }
      /* chunk is followed by a legal chain of inuse chunks */
      for (q = next_chunk(p);
           q != av->top && inuse(q) && (long)(chunksize(q)) >= (long)MINSIZE;
           q = next_chunk(q))
        do_check_inuse_chunk(q);
    }
  }

  /* top chunk is OK */
  check_chunk(av->top);

  /* sanity checks for statistics */

  assert(total <= (unsigned long)(av->max_total_mem));
  assert(av->n_mmaps >= 0);
  assert(av->n_mmaps <= av->n_mmaps_max);
  assert(av->n_mmaps <= av->max_n_mmaps);
  assert(av->max_n_mmaps <= av->n_mmaps_max);

  assert((unsigned long)(av->sbrked_mem) <=
         (unsigned long)(av->max_sbrked_mem));

  assert((unsigned long)(av->mmapped_mem) <=
         (unsigned long)(av->max_mmapped_mem));

  assert((unsigned long)(av->max_total_mem) >=
         (unsigned long)(av->mmapped_mem) + (unsigned long)(av->sbrked_mem));
}

#endif

/* ----------- Routines dealing with system allocation -------------- */

/*
  Handle malloc cases requiring more memory from system.
  malloc relays to sYSMALLOc if it cannot allocate out of
  existing memory.

  On entry, it is assumed that av->top does not have enough
  space to service the request for nb bytes, thus requiring more
  memory from the system.
*/

#if __STD_C
static Void_t* sYSMALLOc(INTERNAL_SIZE_T nb, mstate av)
#else
static Void_t* sYSMALLOc(nb, av) INTERNAL_SIZE_T nb; mstate av;
#endif
{
  mchunkptr       old_top;        /* incoming value of av->top */
  INTERNAL_SIZE_T old_size;       /* its size */
  char*           old_end;        /* its end address */

  long            size;           /* arg to first MORECORE or mmap call */
  char*           brk;            /* return value from MORECORE */
  char*           mm;             /* return value from mmap call */

  long            correction;     /* arg to 2nd MORECORE call */
  char*           snd_brk;        /* 2nd return val */

  INTERNAL_SIZE_T front_misalign; /* unusable bytes at front of new space */
  INTERNAL_SIZE_T end_misalign;   /* partial page left at end of new space */
  char*           aligned_brk;    /* aligned offset into brk */

  mchunkptr       p;              /* the allocated/returned chunk */
  mchunkptr       remainder;      /* remainder from allocation */
  long            remainder_size; /* its size */

  unsigned long   sum;            /* for updating stats */

  size_t          pagemask = av->pagesize - 1;

  /*
    If have mmap, and the request size meets the mmap threshold, and
    the system supports mmap, and there are few enough currently
    allocated mmapped regions, and a call to mmap succeeds, try to
    directly map this request rather than expanding top.
  */

#if HAVE_MMAP
  if ((unsigned long)nb >= (unsigned long)(av->mmap_threshold) &&
      (av->n_mmaps < av->n_mmaps_max)) {

    /*
      Round up size to nearest page. For mmapped chunks, the overhead
      is one SIZE_SZ unit larger than for normal chunks, because there
      is no following chunk whose prev_size field could be used.
    */
    size = (nb + SIZE_SZ + MALLOC_ALIGN_MASK + pagemask) & ~pagemask;
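
    /*
      Worked example of the round-up above (assuming 4-byte SIZE_SZ,
      MALLOC_ALIGN_MASK == 7, and 4096-byte pages): for nb == 200000,
      size = (200000 + 4 + 7 + 4095) & ~4095 = 200704, i.e. 49 whole
      pages, so the mapping covers the request plus its header and
      satisfies the page-alignment assertion in do_check_chunk.
    */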
2667 |
|
---|
2668 | mm = (char*)(MMAP(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE));
|
---|
2669 |
|
---|
2670 | if (mm != (char*)(MORECORE_FAILURE)) {
|
---|
2671 |
|
---|
2672 | /*
|
---|
2673 | The offset to the start of the mmapped region is stored
|
---|
2674 | in the prev_size field of the chunk. This allows us to adjust
|
---|
2675 | returned start address to meet alignment requirements here
|
---|
2676 | and in memalign(), and still be able to compute proper
|
---|
2677 | address argument for later munmap in free() and realloc().
|
---|
2678 | */
|
---|
2679 |
|
---|
2680 | front_misalign = (INTERNAL_SIZE_T)chunk2mem(mm) & MALLOC_ALIGN_MASK;
|
---|
2681 | if (front_misalign > 0) {
|
---|
2682 | correction = MALLOC_ALIGNMENT - front_misalign;
|
---|
2683 | p = (mchunkptr)(mm + correction);
|
---|
2684 | p->prev_size = correction;
|
---|
2685 | set_head(p, (size - correction) |IS_MMAPPED);
|
---|
2686 | }
|
---|
2687 | else {
|
---|
2688 | p = (mchunkptr)mm;
|
---|
2689 | set_head(p, size|IS_MMAPPED);
|
---|
2690 | }
|
---|
2691 |
|
---|
2692 | check_chunk(p);
|
---|
2693 |
|
---|
2694 | /* update statistics */
|
---|
2695 |
|
---|
2696 | if (++av->n_mmaps > av->max_n_mmaps)
|
---|
2697 | av->max_n_mmaps = av->n_mmaps;
|
---|
2698 |
|
---|
2699 | sum = av->mmapped_mem += size;
|
---|
2700 | if (sum > (unsigned long)(av->max_mmapped_mem))
|
---|
2701 | av->max_mmapped_mem = sum;
|
---|
2702 | sum += av->sbrked_mem;
|
---|
2703 | if (sum > (unsigned long)(av->max_total_mem))
|
---|
2704 | av->max_total_mem = sum;
|
---|
2705 |
|
---|
2706 | return chunk2mem(p);
|
---|
2707 | }
|
---|
2708 | }
|
---|
2709 | #endif
|
---|
2710 |
|
---|
2711 | /* record incoming configuration of top */
|
---|
2712 |
|
---|
2713 | old_top = av->top;
|
---|
2714 | old_size = chunksize(old_top);
|
---|
2715 | old_end = (char*)(chunk_at_offset(old_top, old_size));
|
---|
2716 |
|
---|
2717 | brk = snd_brk = (char*)(MORECORE_FAILURE);
|
---|
2718 |
|
---|
2719 | /*
|
---|
2720 | If not the first time through, we require old_size to be
|
---|
2721 | at least MINSIZE and to have prev_inuse set.
|
---|
2722 | */
|
---|
2723 |
|
---|
2724 | assert(old_top == initial_top(av) ||
|
---|
2725 | ((unsigned long) (old_size) >= (unsigned long)(MINSIZE) &&
|
---|
2726 | prev_inuse(old_top)));
|
---|
2727 |
|
---|
2728 |
|
---|
2729 | /* Request enough space for nb + pad + overhead */
|
---|
2730 |
|
---|
2731 | size = nb + av->top_pad + MINSIZE;
|
---|
2732 |
|
---|
2733 | /*
|
---|
2734 | If contiguous, we can subtract out existing space that we hope to
|
---|
2735 | combine with new space. We add it back later only if
|
---|
2736 | we don't actually get contiguous space.
|
---|
2737 | */
|
---|
2738 |
|
---|
2739 | if (av->sbrk_base != NONCONTIGUOUS_REGIONS)
|
---|
2740 | size -= old_size;
|
---|
2741 |
|
---|
2742 | /*
|
---|
2743 | Round to a multiple of page size.
|
---|
2744 | If MORECORE is not contiguous, this ensures that we only call it
|
---|
2745 | with whole-page arguments. And if MORECORE is contiguous and
|
---|
2746 | this is not first time through, this preserves page-alignment of
|
---|
2747 | previous calls. Otherwise, we re-correct anyway to page-align below.
|
---|
2748 | */
|
---|
2749 |
|
---|
2750 | size = (size + pagemask) & ~pagemask;
|
---|

  /*
    Don't try to call MORECORE if argument is so big as to appear
    negative. Note that since mmap takes size_t arg, it may succeed
    below even if we cannot call MORECORE.
  */

  if (size > 0)
    brk = (char*)(MORECORE(size));

  /*
    If have mmap, try using it as a backup when MORECORE fails. This
    is worth doing on systems that have "holes" in address space, so
    sbrk cannot extend to give contiguous space, but space is available
    elsewhere. Note that we ignore mmap max count and threshold limits,
    since there is no reason to artificially limit use here.
  */

#if HAVE_MMAP
  if (brk == (char*)(MORECORE_FAILURE)) {

    /* Cannot merge with old top, so add its size back in */

    if (av->sbrk_base != NONCONTIGUOUS_REGIONS)
      size = (size + old_size + pagemask) & ~pagemask;

    /* If we are relying on mmap as backup, then use larger units */

    if ((unsigned long)size < (unsigned long)MMAP_AS_MORECORE_SIZE)
      size = MMAP_AS_MORECORE_SIZE;

    brk = (char*)(MMAP(0, size, PROT_READ|PROT_WRITE, MAP_PRIVATE));

    if (brk != (char*)(MORECORE_FAILURE)) {

      /* We do not need, and cannot use, another sbrk call to find end */
      snd_brk = brk + size;

      /*
        Record that we no longer have a contiguous sbrk region.
        After the first time mmap is used as backup, we cannot
        ever rely on contiguous space.
      */
      av->sbrk_base = NONCONTIGUOUS_REGIONS;
    }
  }
#endif

  if (brk != (char*)(MORECORE_FAILURE)) {

    av->sbrked_mem += size;

    /*
      If MORECORE extends previous space, we can likewise extend top size.
    */

    if (brk == old_end && snd_brk == (char*)(MORECORE_FAILURE)) {
      set_head(old_top, (size + old_size) | PREV_INUSE);
    }

    /*
      Otherwise, make adjustments guided by the special values of
      av->sbrk_base (MORECORE_FAILURE or NONCONTIGUOUS_REGIONS):

      * If the first time through or noncontiguous, we need to call sbrk
        just to find out where the end of memory lies.

      * We need to ensure that all returned chunks from malloc will meet
        MALLOC_ALIGNMENT

      * If there was an intervening foreign sbrk, we need to adjust sbrk
        request size to account for fact that we will not be able to
        combine new space with existing space in old_top.

      * Almost all systems internally allocate whole pages at a time, in
        which case we might as well use the whole last page of request.
        So we allocate enough more memory to hit a page boundary now,
        which in turn causes future contiguous calls to page-align.
    */

    else {
      front_misalign = 0;
      end_misalign = 0;
      correction = 0;
      aligned_brk = brk;

      /* handle contiguous cases */
      if (av->sbrk_base != NONCONTIGUOUS_REGIONS) {

        /* Guarantee alignment of first new chunk made from this space */

        front_misalign = (INTERNAL_SIZE_T)chunk2mem(brk) & MALLOC_ALIGN_MASK;
        if (front_misalign > 0) {

          /*
            Skip over some bytes to arrive at an aligned position.
            We don't need to specially mark these wasted front bytes.
            They will never be accessed anyway because
            prev_inuse of av->top (and any chunk created from its start)
            is always true after initialization.
          */

          correction = MALLOC_ALIGNMENT - front_misalign;
          aligned_brk += correction;
        }

        /*
          If this isn't adjacent to a previous sbrk, then we will not
          be able to merge with old_top space, so must add to 2nd request.
        */

        correction += old_size;

        /* Pad out to hit a page boundary */

        end_misalign = (INTERNAL_SIZE_T)(brk + size + correction);
        correction += ((end_misalign + pagemask) & ~pagemask) - end_misalign;

        assert(correction >= 0);

        snd_brk = (char*)(MORECORE(correction));

        /*
          If we can't allocate the correction, try to at least find out
          the current brk. It might be enough to proceed without failing.

          Note that if the second sbrk did NOT fail, we assume that space
          is contiguous with the first sbrk. This is a safe assumption
          unless the program is multithreaded but doesn't use locks and
          a foreign sbrk occurred between our first and second calls.
        */

        if (snd_brk == (char*)(MORECORE_FAILURE)) {
          correction = 0;
          snd_brk = (char*)(MORECORE(0));
        }
      }

      /* handle non-contiguous cases */
      else {

        /* MORECORE/mmap must correctly align etc */
        assert(((unsigned long)chunk2mem(brk) & MALLOC_ALIGN_MASK) == 0);

        /* Find out current end of memory */
        if (snd_brk == (char*)(MORECORE_FAILURE)) {
          snd_brk = (char*)(MORECORE(0));
        }

        /* This must lie on a page boundary */
        if (snd_brk != (char*)(MORECORE_FAILURE)) {
          assert(((INTERNAL_SIZE_T)(snd_brk) & pagemask) == 0);
        }
      }

      /* Adjust top based on results of second sbrk */
      if (snd_brk != (char*)(MORECORE_FAILURE)) {

        av->top = (mchunkptr)aligned_brk;
        set_head(av->top, (snd_brk - aligned_brk + correction) | PREV_INUSE);

        av->sbrked_mem += correction;

        /* If first time through and contiguous, record base */
        if (old_top == initial_top(av)) {
          if (av->sbrk_base == (char*)(MORECORE_FAILURE))
            av->sbrk_base = brk;
        }

        /*
          Otherwise, we either have a gap due to foreign sbrk or a
          non-contiguous region. Insert a double fencepost at old_top
          to prevent consolidation with space we don't own. These
          fenceposts are artificial chunks that are marked as inuse
          and are in any case too small to use. We need two to make
          sizes and alignments work out.
        */

        else {

          /*
            Shrink old_top to insert fenceposts, keeping size a
            multiple of MALLOC_ALIGNMENT.
          */
          old_size = (old_size - 3*SIZE_SZ) & ~MALLOC_ALIGN_MASK;
          set_head(old_top, old_size | PREV_INUSE);

          /*
            Note that the following assignments overwrite old_top when
            old_size was previously MINSIZE. This is intentional. We
            need the fencepost, even if old_top otherwise gets lost.
          */
          chunk_at_offset(old_top, old_size          )->size =
            SIZE_SZ|PREV_INUSE;

          chunk_at_offset(old_top, old_size + SIZE_SZ)->size =
            SIZE_SZ|PREV_INUSE;
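
          /*
            Illustrative picture of the result (not to scale):

              old_top: [ old_size bytes, free, PREV_INUSE set      ]
                       [ fencepost 1: size = SIZE_SZ|PREV_INUSE    ]
                       [ fencepost 2: size = SIZE_SZ|PREV_INUSE    ]
                       <end of the space we own>

            Neither fencepost can ever be allocated or coalesced:
            each is smaller than MINSIZE and each claims that its
            predecessor is in use, which blocks consolidation from
            reaching past old_top into space we don't own.
          */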

          /* If possible, release the rest. */
          if (old_size >= MINSIZE)
            fREe(chunk2mem(old_top));

        }
      }
    }

    /* Update statistics */

    sum = av->sbrked_mem;
    if (sum > (unsigned long)(av->max_sbrked_mem))
      av->max_sbrked_mem = sum;

    sum += av->mmapped_mem;
    if (sum > (unsigned long)(av->max_total_mem))
      av->max_total_mem = sum;

    check_malloc_state();

    /* finally, do the allocation */

    p = av->top;
    size = chunksize(p);
    remainder_size = (long)size - (long)nb;

    /* check that one of the above allocation paths succeeded */
    if (remainder_size >= (long)MINSIZE) {
      remainder = chunk_at_offset(p, nb);
      av->top = remainder;
      set_head(p, nb | PREV_INUSE);
      set_head(remainder, remainder_size | PREV_INUSE);

      check_malloced_chunk(p, nb);
      return chunk2mem(p);
    }
  }

  /* catch all failure paths */
  MALLOC_FAILURE_ACTION;
  return 0;
}


/*
  sYSTRIm is an inverse of sorts to sYSMALLOc. It gives memory back
  to the system (via negative arguments to sbrk) if there is unused
  memory at the `high' end of the malloc pool. It is called
  automatically by free() when top space exceeds the trim threshold.
  Returns 1 if it actually released any memory, else 0.
*/

#if __STD_C
static int sYSTRIm(size_t pad, mstate av)
#else
static int sYSTRIm(pad, av) size_t pad; mstate av;
#endif
{
  long  top_size;      /* Amount of top-most memory */
  long  extra;         /* Amount to release */
  long  released;      /* Amount actually released */
  char* current_brk;   /* address returned by pre-check sbrk call */
  char* new_brk;       /* address returned by post-check sbrk call */
  size_t pagesz;

  /* Don't bother trying if sbrk doesn't provide contiguous regions */
  if (av->sbrk_base != NONCONTIGUOUS_REGIONS) {

    pagesz = av->pagesize;
    top_size = chunksize(av->top);

    /* Release in pagesize units, keeping at least one page */
    extra = ((top_size - pad - MINSIZE + (pagesz-1)) / pagesz - 1) * pagesz;
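
    /*
      Worked example (assuming 4096-byte pages, pad == 0, and a
      16-byte MINSIZE): with top_size == 20000 the expression is
      ((20000 - 0 - 16 + 4095) / 4096 - 1) * 4096 == (5 - 1) * 4096
      == 16384, so four whole pages are released and a partial page
      plus slack stays in top.
    */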

    if (extra > 0) {

      /*
        Only proceed if end of memory is where we last set it.
        This avoids problems if there were foreign sbrk calls.
      */
      current_brk = (char*)(MORECORE(0));
      if (current_brk == (char*)(av->top) + top_size) {

        /*
          Attempt to release memory. We ignore the return value,
          and instead call again to find out where the new end of
          memory is. This avoids problems if the first call releases
          less than we asked, or if failure somehow altered the brk
          value. (We could still encounter problems if it altered brk
          in some very bad way, but the only thing we can do is
          adjust anyway, which will cause some downstream failure.)
        */

        MORECORE(-extra);
        new_brk = (char*)(MORECORE(0));

        if (new_brk != (char*)MORECORE_FAILURE) {
          released = (long)(current_brk - new_brk);

          if (released != 0) {
            /* Success. Adjust top. */
            av->sbrked_mem -= released;
            set_head(av->top, (top_size - released) | PREV_INUSE);
            check_malloc_state();
            return 1;
          }
        }
      }
    }
  }

  return 0;
}
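
/*
  Usage sketch (illustrative): trimming can also be requested
  explicitly through the public malloc_trim entry point (mTRIm
  below), rather than waiting for free() to cross the trim
  threshold:

      char* buf = (char*)malloc(1024 * 1024);
      ...use buf...
      free(buf);
      malloc_trim(0);   returns 1 if any memory went back

  As above, this can release memory only when sbrk space is
  contiguous and top borders the current break.
*/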

/* ----------------------- Main public routines ----------------------- */


/*
  Malloc routine. See running comments for algorithm description.
*/

#if __STD_C
Void_t* mALLOc(size_t bytes)
#else
Void_t* mALLOc(bytes) size_t bytes;
#endif
{
  mstate av = get_malloc_state();

  INTERNAL_SIZE_T nb;               /* normalized request size */
  unsigned int    idx;              /* associated bin index */
  mbinptr         bin;              /* associated bin */
  mfastbinptr*    fb;               /* associated fastbin */

  mchunkptr       victim;           /* inspected/selected chunk */
  INTERNAL_SIZE_T size;             /* its size */
  int             victim_index;     /* its bin index */

  mchunkptr       remainder;        /* remainder from a split */
  long            remainder_size;   /* its size */

  unsigned int    block;            /* bit map traverser */
  unsigned int    bit;              /* bit map traverser */
  unsigned int    map;              /* current word of binmap */

  mchunkptr       fwd;              /* misc temp for linking */
  mchunkptr       bck;              /* misc temp for linking */


  /*
    Check request for legality and convert to internal form, nb.
    This rejects negative arguments when size_t is treated as
    signed. It also rejects arguments that are so large that the size
    appears negative when aligned and padded. The converted form
    adds SIZE_SZ bytes overhead plus possibly more to obtain necessary
    alignment and/or to obtain a size of at least MINSIZE, the
    smallest allocatable size.
  */

  checked_request2size(bytes, nb);
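
  /*
    Illustrative conversion (assuming 4-byte SIZE_SZ and 8-byte
    MALLOC_ALIGNMENT): a request of 20 bytes becomes nb == 24, since
    (20 + 4 + 7) & ~7 == 24, while a request of 1 byte becomes
    MINSIZE (16), the smallest allocatable chunk.
  */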

  /*
    If the size qualifies as a fastbin, first check the corresponding
    bin. This code is safe to execute even if av is not yet
    initialized, so we can try it first, which saves some time on
    this fast path.
  */

  if (nb <= av->max_fast) {
    fb = &(av->fastbins[(fastbin_index(nb))]);
    if ( (victim = *fb) != 0) {
      *fb = victim->fd;
      check_remalloced_chunk(victim, nb);
      return chunk2mem(victim);
    }
  }
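
  /*
    Note that the fastbin pop above is a plain LIFO singly-linked
    list removal: the bin head points at the most recently freed
    chunk of this size, so `*fb = victim->fd' unlinks it in O(1)
    with no unlinking of back pointers and no consolidation work.
  */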

  /*
    If a small request, check regular bin. Since these "smallbins"
    hold one size each, no searching within bins is necessary.

    (If a large request, we need to wait until unsorted chunks are
    processed to find best fit. But for small ones, fits are exact
    anyway, so we can check now, which is faster.)
  */

  if (in_smallbin_range(nb)) {
    idx = smallbin_index(nb);
    bin = bin_at(av,idx);

    if ( (victim = last(bin)) != bin) {
      if (victim == 0) /* initialization check */
        malloc_consolidate(av);
      else {
        bck = victim->bk;
        set_inuse_bit_at_offset(victim, nb);
        bin->bk = bck;
        bck->fd = bin;

        check_malloced_chunk(victim, nb);
        return chunk2mem(victim);
      }
    }
  }

  /*
    If a large request, consolidate fastbins before continuing.
    While it might look excessive to kill all fastbins before
    even seeing if there is space available, this avoids
    fragmentation problems normally associated with fastbins.
    Also, in practice, programs tend to have runs of either small or
    large requests, but less often mixtures, so consolidation is not
    usually invoked all that often.
  */

  else {
    idx = largebin_index(nb);
    if (have_fastchunks(av)) /* consolidation/initialization check */
      malloc_consolidate(av);
  }


  /*
    Process recently freed or remaindered chunks, taking one only if
    it is exact fit, or, if a small request, it is the remainder from
    the most recent non-exact fit. Place other traversed chunks in
    bins. Note that this step is the only place in any routine where
    chunks are placed in bins.

    The outer loop here is needed because we might not realize until
    near the end of malloc that we should have consolidated, so must
    do so and retry. This happens at most once, and only when we would
    otherwise need to expand memory to service a "small" request.
  */

  for(;;) {

    while ( (victim = unsorted_chunks(av)->bk) != unsorted_chunks(av)) {
      bck = victim->bk;
      size = chunksize(victim);

      /*
        If a small request, try to use last remainder if it is the
        only chunk in unsorted bin. This helps promote locality for
        runs of consecutive small requests. This is the only
        exception to best-fit.
      */

      if (in_smallbin_range(nb) &&
          victim == av->last_remainder &&
          bck == unsorted_chunks(av) &&
          (remainder_size = (long)size - (long)nb) >= (long)MINSIZE) {

        /* split and reattach remainder */
        remainder = chunk_at_offset(victim, nb);
        unsorted_chunks(av)->bk = unsorted_chunks(av)->fd = remainder;
        av->last_remainder = remainder;
        remainder->bk = remainder->fd = unsorted_chunks(av);

        set_head(victim, nb | PREV_INUSE);
        set_head(remainder, remainder_size | PREV_INUSE);
        set_foot(remainder, remainder_size);

        check_malloced_chunk(victim, nb);
        return chunk2mem(victim);
      }

      /* remove from unsorted list */
      unsorted_chunks(av)->bk = bck;
      bck->fd = unsorted_chunks(av);

      /* Take now instead of binning if exact fit */

      if (size == nb) {
        set_inuse_bit_at_offset(victim, size);
        check_malloced_chunk(victim, nb);
        return chunk2mem(victim);
      }

      /* place chunk in bin */

      if (in_smallbin_range(size)) {
        victim_index = smallbin_index(size);
        bck = bin_at(av, victim_index);
        fwd = bck->fd;
      }
      else {
        victim_index = largebin_index(size);
        bck = bin_at(av, victim_index);
        fwd = bck->fd;

        /* maintain large bins in sorted order */
        if (fwd != bck) {
          /* if smaller than smallest, bypass loop below */
          if ((unsigned long)size <= (unsigned long)(chunksize(bck->bk))) {
            fwd = bck;
            bck = bck->bk;
          }
          else {
            while (fwd != bck &&
                   (unsigned long)size < (unsigned long)(chunksize(fwd))) {
              fwd = fwd->fd;
            }
            bck = fwd->bk;
          }
        }
      }

      mark_bin(av, victim_index);
      victim->bk = bck;
      victim->fd = fwd;
      fwd->bk = victim;
      bck->fd = victim;
    }
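
    /*
      The insertion above keeps each large bin sorted by decreasing
      size: the scan stops at the first chunk no larger than the
      newcomer, so an equal-sized newcomer lands in front of its
      twins. Since takes proceed from the rear of the list (the
      smallest end), equal-sized chunks leave in FIFO (least
      recently used) order, matching the stated tie-breaking rule.
    */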

    /*
      If a large request, scan through the chunks of current bin in
      sorted order to find smallest that fits. This is the only step
      where an unbounded number of chunks might be scanned without doing
      anything useful with them. However the lists tend to be very
      short.
    */

    if (!in_smallbin_range(nb)) {
      bin = bin_at(av, idx);

      /* skip scan if largest chunk is too small */
      if ((victim = last(bin)) != bin &&
          (long)(chunksize(first(bin))) - (long)(nb) >= 0) {
        do {
          size = chunksize(victim);
          remainder_size = (long)size - (long)nb;

          if (remainder_size >= 0) {
            unlink(victim, bck, fwd);

            /* Exhaust */
            if (remainder_size < (long)MINSIZE) {
              set_inuse_bit_at_offset(victim, size);
              check_malloced_chunk(victim, nb);
              return chunk2mem(victim);
            }
            /* Split */
            else {
              remainder = chunk_at_offset(victim, nb);
              unsorted_chunks(av)->bk = unsorted_chunks(av)->fd = remainder;
              remainder->bk = remainder->fd = unsorted_chunks(av);
              set_head(victim, nb | PREV_INUSE);
              set_head(remainder, remainder_size | PREV_INUSE);
              set_foot(remainder, remainder_size);
              check_malloced_chunk(victim, nb);
              return chunk2mem(victim);
            }
          }
        } while ( (victim = victim->bk) != bin);
      }
    }

    /*
      Search for a chunk by scanning bins, starting with next largest
      bin. This search is strictly by best-fit; i.e., the smallest
      (with ties going to approximately the least recently used) chunk
      that fits is selected.

      The bitmap avoids needing to check that most blocks are nonempty.
      The particular case of skipping all bins during warm-up phases
      when no chunks have been returned yet is faster than it might look.
    */

    ++idx;
    bin = bin_at(av,idx);
    block = idx2block(idx);
    map = av->binmap[block];
    bit = idx2bit(idx);
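
    /*
      binmap holds one bit per bin: bit idx2bit(i) of word
      idx2block(i) is set (by mark_bin above) when bin i may be
      nonempty. A clear bit is definitive -- the bin is empty --
      while a set bit is only a hint, because bits are cleared
      lazily in the "false alarm" case below.
    */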

    for (;;) {

      /* Skip rest of block if there are no more set bits in this block. */
      if (bit > map || bit == 0) {
        for (;;) {
          if (++block >= BINMAPSIZE)  /* out of bins */
            break;

          else if ( (map = av->binmap[block]) != 0) {
            bin = bin_at(av, (block << BINMAPSHIFT));
            bit = 1;
            break;
          }
        }
        /* Optimizers seem to like this double-break better than goto */
        if (block >= BINMAPSIZE)
          break;
      }

      /* Advance to bin with set bit. There must be one. */
      while ((bit & map) == 0) {
        bin = next_bin(bin);
        bit <<= 1;
      }

      victim = last(bin);

      /* False alarm -- the bin is empty. Clear the bit. */
      if (victim == bin) {
        av->binmap[block] = map &= ~bit; /* Write through */
        bin = next_bin(bin);
        bit <<= 1;
      }

      /* We know the first chunk in this bin is big enough to use. */
      else {
        size = chunksize(victim);
        remainder_size = (long)size - (long)nb;

        assert(remainder_size >= 0);

        /* unlink */
        bck = victim->bk;
        bin->bk = bck;
        bck->fd = bin;

        /* Exhaust */
        if (remainder_size < (long)MINSIZE) {
          set_inuse_bit_at_offset(victim, size);
          check_malloced_chunk(victim, nb);
          return chunk2mem(victim);
        }

        /* Split */
        else {
          remainder = chunk_at_offset(victim, nb);

          unsorted_chunks(av)->bk = unsorted_chunks(av)->fd = remainder;
          remainder->bk = remainder->fd = unsorted_chunks(av);
          /* advertise as last remainder */
          if (in_smallbin_range(nb))
            av->last_remainder = remainder;

          set_head(victim, nb | PREV_INUSE);
          set_head(remainder, remainder_size | PREV_INUSE);
          set_foot(remainder, remainder_size);
          check_malloced_chunk(victim, nb);
          return chunk2mem(victim);
        }
      }
    }

    /*
      If large enough, split off the chunk bordering the end of memory
      ("top"). Note that this use of top is in accord with the best-fit
      search rule. In effect, top is treated as larger (and thus less
      well fitting) than any other available chunk since it can be
      extended to be as large as necessary (up to system limitations).

      We require that "top" always exists (i.e., has size >= MINSIZE)
      after initialization, so if it would otherwise be exhausted by
      the current request, it is replenished. (Among the reasons for
      ensuring it exists is that we may need MINSIZE space to put in
      fenceposts in sysmalloc.)
    */

    victim = av->top;
    size = chunksize(victim);
    remainder_size = (long)size - (long)nb;

    if (remainder_size >= (long)MINSIZE) {
      remainder = chunk_at_offset(victim, nb);
      av->top = remainder;
      set_head(victim, nb | PREV_INUSE);
      set_head(remainder, remainder_size | PREV_INUSE);

      check_malloced_chunk(victim, nb);
      return chunk2mem(victim);
    }

    /*
      If there is space available in fastbins, consolidate and retry,
      to possibly avoid expanding memory. This can occur only if nb is
      in smallbin range so we didn't consolidate upon entry.
    */

    else if (have_fastchunks(av)) {
      assert(in_smallbin_range(nb));
      idx = smallbin_index(nb); /* restore original bin index */
      malloc_consolidate(av);
    }

    /*
      Otherwise, relay to handle system-dependent cases
    */
    else
      return sYSMALLOc(nb, av);
  }
}

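/*
  To summarize the search order above, from cheapest to costliest:
  fastbin pop, exact smallbin fit, unsorted chunks (with the
  last_remainder shortcut), smallest fit in the sorted large bin,
  binmap scan of larger bins, splitting top, consolidate-and-retry,
  and finally sYSMALLOc.
*/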


/*
  Free routine. See running comments for algorithm description.
*/

#if __STD_C
void fREe(Void_t* mem)
#else
void fREe(mem) Void_t* mem;
#endif
{
  mstate av = get_malloc_state();

  mchunkptr       p;           /* chunk corresponding to mem */
  INTERNAL_SIZE_T size;        /* its size */
  mfastbinptr*    fb;          /* associated fastbin */
  mchunkptr       nextchunk;   /* next contiguous chunk */
  INTERNAL_SIZE_T nextsize;    /* its size */
  int             nextinuse;   /* true if nextchunk is used */
  INTERNAL_SIZE_T prevsize;    /* size of previous contiguous chunk */
  mchunkptr       bck;         /* misc temp for linking */
  mchunkptr       fwd;         /* misc temp for linking */


  /* free(0) has no effect */
  if (mem != 0) {

    p = mem2chunk(mem);
    check_inuse_chunk(p);

    size = chunksize(p);

    /*
      If eligible, place chunk on a fastbin so it can be found
      and used quickly in malloc.
    */

    if ((unsigned long)size <= (unsigned long)av->max_fast

#if TRIM_FASTBINS
        /*
          If TRIM_FASTBINS set, don't place chunks
          bordering top into fastbins
        */
        && (chunk_at_offset(p, size) != av->top)
#endif
        ) {

      set_fastchunks(av);
      fb = &(av->fastbins[fastbin_index(size)]);
      p->fd = *fb;
      *fb = p;
    }

    /*
      Consolidate non-mmapped chunks as they arrive.
    */

    else if (!chunk_is_mmapped(p)) {

      nextchunk = chunk_at_offset(p, size);

      /* consolidate backward */
      if (!prev_inuse(p)) {
        prevsize = p->prev_size;
        size += prevsize;
        p = chunk_at_offset(p, -((long) prevsize));
        unlink(p, bck, fwd);
      }

      nextsize = chunksize(nextchunk);

      if (nextchunk != av->top) {

        /* get and clear inuse bit */
        nextinuse = inuse_bit_at_offset(nextchunk, nextsize);
        set_head(nextchunk, nextsize);

        /* consolidate forward */
        if (!nextinuse) {
          unlink(nextchunk, bck, fwd);
          size += nextsize;
        }

        /*
          Place chunk in unsorted chunk list. Chunks are
          not placed into regular bins until after they have
          been given one chance to be used in malloc.
        */

        bck = unsorted_chunks(av);
        fwd = bck->fd;
        p->bk = bck;
        p->fd = fwd;
        bck->fd = p;
        fwd->bk = p;

        set_head(p, size | PREV_INUSE);
        set_foot(p, size);
      }
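
      /*
        Both consolidations rely on boundary tags: prev_inuse(p) is
        the low bit of p's own size field, and p->prev_size occupies
        the last word of the preceding chunk, kept valid via set_foot
        whenever that neighbor is free, so each physical neighbor is
        located in O(1) with no list search.
      */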

      /*
        If the chunk borders the current high end of memory,
        consolidate into top.
      */

      else {
        size += nextsize;
        set_head(p, size | PREV_INUSE);
        av->top = p;

        /*
          If the total unused topmost memory exceeds the trim
          threshold, ask malloc_trim to reduce top.

          Unless max_fast is 0, we don't know if there are fastbins
          bordering top, so we cannot tell for sure whether the
          threshold has been reached unless fastbins are consolidated.
          But we don't want to consolidate on each free. As a
          compromise, consolidation is performed if half the
          threshold is reached.
        */

        if ((unsigned long)(size) > (unsigned long)(av->trim_threshold / 2)) {
          if (have_fastchunks(av)) {
            malloc_consolidate(av);
            size = chunksize(av->top);
          }

          if ((unsigned long)(size) > (unsigned long)(av->trim_threshold))
            sYSTRIm(av->top_pad, av);
        }
      }
    }
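
    /*
      For example, with a trim threshold of 256*1024: once top grows
      past 128K the fastbins are consolidated (possibly enlarging
      top further), and only if top then exceeds the full 256K does
      free() actually call sYSTRIm.
    */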

    /*
      If the chunk was allocated via mmap, release it via munmap().
      Note that if HAVE_MMAP is false but chunk_is_mmapped is
      true, then the user must have overwritten memory. There's
      nothing we can do to catch this error unless DEBUG is set, in
      which case check_inuse_chunk (above) will have triggered an
      error.
    */

    else {
#if HAVE_MMAP
      int ret;
      INTERNAL_SIZE_T offset = p->prev_size;
      av->n_mmaps--;
      av->mmapped_mem -= (size + offset);
      ret = munmap((char*)p - offset, size + offset);
      /* munmap returns non-zero on failure */
      assert(ret == 0);
#endif
    }
  }
}



/*
  malloc_consolidate is a specialized version of free() that tears
  down chunks held in fastbins. Free itself cannot be used for this
  purpose since, among other things, it might place chunks back onto
  fastbins. So, instead, we need to use a minor variant of the same
  code.

  Also, because this routine needs to be called the first time through
  malloc anyway, it turns out to be the perfect place to bury
  initialization code.
*/

#if __STD_C
static void malloc_consolidate(mstate av)
#else
static void malloc_consolidate(av) mstate av;
#endif
{
  mfastbinptr*    fb;
  mfastbinptr*    maxfb;
  mchunkptr       p;
  mchunkptr       nextp;
  mchunkptr       unsorted_bin;
  mchunkptr       first_unsorted;

  /* These have same use as in free() */
  mchunkptr       nextchunk;
  INTERNAL_SIZE_T size;
  INTERNAL_SIZE_T nextsize;
  INTERNAL_SIZE_T prevsize;
  int             nextinuse;
  mchunkptr       bck;
  mchunkptr       fwd;

  /*
    If max_fast is 0, we know that malloc hasn't
    yet been initialized, in which case do so.
  */

  if (av->max_fast == 0) {
    malloc_init_state(av);
    check_malloc_state();
  }
  else if (have_fastchunks(av)) {
    clear_fastchunks(av);

    unsorted_bin = unsorted_chunks(av);

    /*
      Remove each chunk from fast bin and consolidate it, placing it
      then in unsorted bin. Among other reasons for doing this,
      placing in unsorted bin avoids needing to calculate actual bins
      until malloc is sure that chunks aren't immediately going to be
      reused anyway.
    */

    maxfb = &(av->fastbins[fastbin_index(av->max_fast)]);
    fb = &(av->fastbins[0]);
    do {
      if ( (p = *fb) != 0) {
        *fb = 0;

        do {
          check_inuse_chunk(p);
          nextp = p->fd;

          /* Slightly streamlined version of consolidation code in free() */
          size = p->size & ~PREV_INUSE;
          nextchunk = chunk_at_offset(p, size);

          if (!prev_inuse(p)) {
            prevsize = p->prev_size;
            size += prevsize;
            p = chunk_at_offset(p, -((long) prevsize));
            unlink(p, bck, fwd);
          }

          nextsize = chunksize(nextchunk);

          if (nextchunk != av->top) {

            nextinuse = inuse_bit_at_offset(nextchunk, nextsize);
            set_head(nextchunk, nextsize);

            if (!nextinuse) {
              size += nextsize;
              unlink(nextchunk, bck, fwd);
            }

            first_unsorted = unsorted_bin->fd;
            unsorted_bin->fd = p;
            first_unsorted->bk = p;

            set_head(p, size | PREV_INUSE);
            p->bk = unsorted_bin;
            p->fd = first_unsorted;
            set_foot(p, size);
          }

          else {
            size += nextsize;
            set_head(p, size | PREV_INUSE);
            av->top = p;
          }

        } while ( (p = nextp) != 0);

      }
    } while (fb++ != maxfb);
  }
}



/*
  Realloc algorithm cases:

  * Chunks that were obtained via mmap cannot be extended or shrunk
    unless HAVE_MREMAP is defined, in which case mremap is used.
    Otherwise, if the reallocation is for additional space, they are
    copied. If for less, they are just left alone.

  * Otherwise, if the reallocation is for additional space, and the
    chunk can be extended, it is, else a malloc-copy-free sequence is
    taken. There are several different ways that a chunk could be
    extended. All are tried:

    * Extending forward into following adjacent free chunk.
    * Shifting backwards, joining preceding adjacent space.
    * Both shifting backwards and extending forward.
    * Extending into newly sbrked space.

  * If there is not enough memory available to realloc, realloc
    returns null, but does NOT free the existing space.

  * If the reallocation is for less space, the newly unused space is
    lopped off and freed. Unless the #define REALLOC_ZERO_BYTES_FREES
    is set, realloc with a size argument of zero (re)allocates a
    minimum-sized chunk.

  The old unix realloc convention of allowing the last-freed chunk
  to be used as an argument to realloc is no longer supported.
  I don't know of any programs still relying on this feature,
  and allowing it would also allow too many other incorrect
  usages of realloc to be sensible.
*/
|
---|
3763 | #if __STD_C
|
---|
3764 | Void_t* rEALLOc(Void_t* oldmem, size_t bytes)
|
---|
3765 | #else
|
---|
3766 | Void_t* rEALLOc(oldmem, bytes) Void_t* oldmem; size_t bytes;
|
---|
3767 | #endif
|
---|
3768 | {
|
---|
3769 | mstate av = get_malloc_state();
|
---|
3770 |
|
---|
3771 | INTERNAL_SIZE_T nb; /* padded request size */
|
---|
3772 |
|
---|
3773 | mchunkptr oldp; /* chunk corresponding to oldmem */
|
---|
3774 | INTERNAL_SIZE_T oldsize; /* its size */
|
---|
3775 |
|
---|
3776 | mchunkptr newp; /* chunk to return */
|
---|
3777 | INTERNAL_SIZE_T newsize; /* its size */
|
---|
3778 | Void_t* newmem; /* corresponding user mem */
|
---|
3779 |
|
---|
3780 | mchunkptr next; /* next contiguous chunk after oldp */
|
---|
3781 | mchunkptr prev; /* previous contiguous chunk before oldp */
|
---|
3782 |
|
---|
3783 | mchunkptr remainder; /* extra space at end of newp */
|
---|
3784 | long remainder_size; /* its size */
|
---|
3785 |
|
---|
3786 | mchunkptr bck; /* misc temp for linking */
|
---|
3787 | mchunkptr fwd; /* misc temp for linking */
|
---|
3788 |
|
---|
3789 | INTERNAL_SIZE_T copysize; /* bytes to copy */
|
---|
3790 | int ncopies; /* INTERNAL_SIZE_T words to copy */
|
---|
3791 | INTERNAL_SIZE_T* s; /* copy source */
|
---|
3792 | INTERNAL_SIZE_T* d; /* copy destination */
|
---|
3793 |
|
---|
3794 |
|
---|
3795 | #ifdef REALLOC_ZERO_BYTES_FREES
|
---|
3796 | if (bytes == 0) {
|
---|
3797 | fREe(oldmem);
|
---|
3798 | return 0;
|
---|
3799 | }
|
---|
3800 | #endif
|
---|
3801 |
|
---|
3802 | /* realloc of null is supposed to be same as malloc */
|
---|
3803 | if (oldmem == 0) return mALLOc(bytes);
|
---|
3804 |
|
---|
3805 | checked_request2size(bytes, nb);
|
---|
3806 |
|
---|
3807 | oldp = mem2chunk(oldmem);
|
---|
3808 | oldsize = chunksize(oldp);
|
---|
3809 |
|
---|
3810 | check_inuse_chunk(oldp);
|
---|
3811 |
|
---|
3812 | if (!chunk_is_mmapped(oldp)) {
|
---|
3813 |
|
---|
3814 | if ((unsigned long)(oldsize) >= (unsigned long)(nb)) {
|
---|
3815 | /* already big enough; split below */
|
---|
3816 | newp = oldp;
|
---|
3817 | newsize = oldsize;
|
---|
3818 | }
|
---|
3819 |
|
---|
3820 | else {
|
---|
3821 | newp = 0;
|
---|
3822 | newsize = 0;
|
---|
3823 |
|
---|
3824 | next = chunk_at_offset(oldp, oldsize);
|
---|
3825 |
|
---|
3826 | if (next == av->top) { /* Expand forward into top */
|
---|
3827 | newsize = oldsize + chunksize(next);
|
---|
3828 |
|
---|
3829 | if ((unsigned long)(newsize) >= (unsigned long)(nb + MINSIZE)) {
|
---|
3830 | set_head_size(oldp, nb);
|
---|
3831 | av->top = chunk_at_offset(oldp, nb);
|
---|
3832 | set_head(av->top, (newsize - nb) | PREV_INUSE);
|
---|
3833 | return chunk2mem(oldp);
|
---|
3834 | }
|
---|
3835 |
|
---|
3836 | else if (!prev_inuse(oldp)) { /* Shift backwards + top */
|
---|
3837 | prev = prev_chunk(oldp);
|
---|
3838 | newsize += chunksize(prev);
|
---|
3839 |
|
---|
3840 | if ((unsigned long)(newsize) >= (unsigned long)(nb + MINSIZE)) {
|
---|
3841 | newp = prev;
|
---|
3842 | unlink(prev, bck, fwd);
|
---|
3843 | av->top = chunk_at_offset(newp, nb);
|
---|
3844 | set_head(av->top, (newsize - nb) | PREV_INUSE);
|
---|
3845 | newsize = nb;
|
---|
3846 | }
|
---|
3847 | }
|
---|
3848 | }
|
---|
3849 |
|
---|
3850 | else if (!inuse(next)) { /* Forward into next chunk */
|
---|
3851 | newsize = oldsize + chunksize(next);
|
---|
3852 |
|
---|
3853 | if (((unsigned long)(newsize) >= (unsigned long)(nb))) {
|
---|
3854 | newp = oldp;
|
---|
3855 | unlink(next, bck, fwd);
|
---|
3856 | }
|
---|
3857 |
|
---|
3858 | else if (!prev_inuse(oldp)) { /* Forward + backward */
|
---|
3859 | prev = prev_chunk(oldp);
|
---|
3860 | newsize += chunksize(prev);
|
---|
3861 |
|
---|
3862 | if (((unsigned long)(newsize) >= (unsigned long)(nb))) {
|
---|
3863 | newp = prev;
|
---|
3864 | unlink(prev, bck, fwd);
|
---|
3865 | unlink(next, bck, fwd);
|
---|
3866 | }
|
---|
3867 | }
|
---|
3868 | }
|
---|
3869 |
|
---|
3870 | else if (!prev_inuse(oldp)) { /* Backward only */
|
---|
3871 | prev = prev_chunk(oldp);
|
---|
3872 | newsize = oldsize + chunksize(prev);
|
---|
3873 |
|
---|
3874 | if ((unsigned long)(newsize) >= (unsigned long)(nb)) {
|
---|
3875 | newp = prev;
|
---|
3876 | unlink(prev, bck, fwd);
|
---|
3877 | }
|
---|
3878 | }
|
---|
3879 |
|
---|
3880 | if (newp != 0) {
|
---|
3881 | if (newp != oldp) {
|
---|
3882 | /* Backward copies are not worth unrolling */
|
---|
3883 | MALLOC_COPY(chunk2mem(newp), oldmem, oldsize - SIZE_SZ, 1);
|
---|
3884 | }
|
---|
3885 | }
|
---|

      /* Must allocate */
      else {
        newmem = mALLOc(nb - MALLOC_ALIGN_MASK);
        if (newmem == 0)
          return 0; /* propagate failure */

        newp = mem2chunk(newmem);
        newsize = chunksize(newp);

        /*
          Avoid copy if newp is next chunk after oldp.
        */
        if (newp == next) {
          newsize += oldsize;
          newp = oldp;
        }
        else {

          /*
            Unroll copy of <= 36 bytes (72 if 8-byte sizes).
            We know that contents have an odd number of
            INTERNAL_SIZE_T-sized words; minimally 3.
          */

          copysize = oldsize - SIZE_SZ;
          s = (INTERNAL_SIZE_T*)oldmem;
          d = (INTERNAL_SIZE_T*)(chunk2mem(newp));
          ncopies = copysize / sizeof(INTERNAL_SIZE_T);
          assert(ncopies >= 3);

          if (ncopies > 9)
            MALLOC_COPY(d, s, copysize, 0);

          else {
            *(d+0) = *(s+0);
            *(d+1) = *(s+1);
            *(d+2) = *(s+2);
            if (ncopies > 4) {
              *(d+3) = *(s+3);
              *(d+4) = *(s+4);
              if (ncopies > 6) {
                *(d+5) = *(s+5);
                *(d+6) = *(s+6);
                if (ncopies > 8) {
                  *(d+7) = *(s+7);
                  *(d+8) = *(s+8);
                }
              }
            }
          }

          fREe(oldmem);
          check_inuse_chunk(newp);
          return chunk2mem(newp);
        }
      }
    }
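
    /*
      Why the word count above is odd: chunk sizes are multiples of
      MALLOC_ALIGNMENT (normally two INTERNAL_SIZE_T words), and
      copysize excludes the one-word header, leaving 2k-1 words.
      That is why the unrolled ladder only needs to test the even
      thresholds 4, 6, and 8.
    */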


    /* If possible, free extra space in old or extended chunk */

    remainder_size = (long)newsize - (long)nb;
    assert(remainder_size >= 0);

    if (remainder_size >= (long)MINSIZE) { /* split remainder */
      remainder = chunk_at_offset(newp, nb);
      set_head_size(newp, nb);
      set_head(remainder, remainder_size | PREV_INUSE);
      /* Mark remainder as inuse so free() won't complain */
      set_inuse_bit_at_offset(remainder, remainder_size);
      fREe(chunk2mem(remainder));
    }

    else { /* not enough extra to split off */
      set_head_size(newp, newsize);
      set_inuse_bit_at_offset(newp, newsize);
    }

    check_inuse_chunk(newp);
    return chunk2mem(newp);
  }

  /*
    Handle mmap cases
  */

  else {
#if HAVE_MMAP

#if HAVE_MREMAP
    INTERNAL_SIZE_T offset = oldp->prev_size;
    size_t pagemask = av->pagesize - 1;
    char *cp;
    unsigned long sum;

    /* Note the extra SIZE_SZ overhead */
    newsize = (nb + offset + SIZE_SZ + pagemask) & ~pagemask;

    /* don't need to remap if still within same page */
    if (oldsize == newsize - offset)
      return oldmem;

    cp = (char*)mremap((char*)oldp - offset, oldsize + offset, newsize, 1);

    if (cp != (char*)MORECORE_FAILURE) {

      newp = (mchunkptr)(cp + offset);
      set_head(newp, (newsize - offset)|IS_MMAPPED);

      assert(aligned_OK(chunk2mem(newp)));
      assert((newp->prev_size == offset));

      /* update statistics */
      sum = av->mmapped_mem += newsize - oldsize;
      if (sum > (unsigned long)(av->max_mmapped_mem))
        av->max_mmapped_mem = sum;
      sum += av->sbrked_mem;
      if (sum > (unsigned long)(av->max_total_mem))
        av->max_total_mem = sum;

      return chunk2mem(newp);
    }
#endif

    /* Note the extra SIZE_SZ overhead. */
    if ((long)oldsize - (long)SIZE_SZ >= (long)nb)
      newmem = oldmem; /* do nothing */
    else {
      /* Must alloc, copy, free. */
      newmem = mALLOc(nb - MALLOC_ALIGN_MASK);
      if (newmem != 0) {
        MALLOC_COPY(newmem, oldmem, oldsize - 2*SIZE_SZ, 0);
        fREe(oldmem);
      }
    }
    return newmem;

#else
    /* If !HAVE_MMAP, but chunk_is_mmapped, user must have overwritten mem */
    check_malloc_state();
    MALLOC_FAILURE_ACTION;
    return 0;
#endif
  }
}



/*
  memalign requests more than enough space from malloc, finds a spot
  within that chunk that meets the alignment request, and then
  possibly frees the leading and trailing space.

  Alignments must be powers of two. If the argument is not a
  power of two, the nearest greater power is used.

  8-byte alignment is guaranteed by normal malloc calls, so don't
  bother calling memalign with an argument of 8 or less.

  Overreliance on memalign is a sure way to fragment space.
*/


#if __STD_C
Void_t* mEMALIGn(size_t alignment, size_t bytes)
#else
Void_t* mEMALIGn(alignment, bytes) size_t alignment; size_t bytes;
#endif
{
  INTERNAL_SIZE_T nb;             /* padded request size */
  char*           m;              /* memory returned by malloc call */
  mchunkptr       p;              /* corresponding chunk */
  char*           brk;            /* alignment point within p */
  mchunkptr       newp;           /* chunk to return */
  INTERNAL_SIZE_T newsize;        /* its size */
  INTERNAL_SIZE_T leadsize;       /* leading space before alignment point */
  mchunkptr       remainder;      /* spare room at end to split off */
  long            remainder_size; /* its size */


  /* If need less alignment than we give anyway, just relay to malloc */

  if (alignment <= MALLOC_ALIGNMENT) return mALLOc(bytes);

  /* Otherwise, ensure that it is at least a minimum chunk size */

  if (alignment < MINSIZE) alignment = MINSIZE;

  /* Make sure alignment is power of 2 (in case MINSIZE is not). */
  if ((alignment & (alignment - 1)) != 0) {
    size_t a = MALLOC_ALIGNMENT * 2;
    while ((unsigned long)a < (unsigned long)alignment) a <<= 1;
    alignment = a;
  }

  checked_request2size(bytes, nb);
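
  /*
    Illustrative rounding: a requested alignment of 24 is not a
    power of two, so starting from MALLOC_ALIGNMENT * 2 (16,
    assuming 8-byte MALLOC_ALIGNMENT) and doubling once gives 32,
    which is used instead.
  */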

  /* Call malloc with worst case padding to hit alignment. */

  m = (char*)(mALLOc(nb + alignment + MINSIZE));

  if (m == 0) return 0; /* propagate failure */

  p = mem2chunk(m);

  if ((((unsigned long)(m)) % alignment) != 0) { /* misaligned */

    /*
      Find an aligned spot inside chunk. Since we need to give back
      leading space in a chunk of at least MINSIZE, if the first
      calculation places us at a spot with less than MINSIZE leader,
      we can move to the next aligned spot -- we've allocated enough
      total room so that this is always possible.
    */

    brk = (char*)mem2chunk(((unsigned long)(m + alignment - 1)) &
                           -((signed long) alignment));
    if ((long)(brk - (char*)(p)) < (long)MINSIZE)
      brk = brk + alignment;

    newp = (mchunkptr)brk;
    leadsize = brk - (char*)(p);
    newsize = chunksize(p) - leadsize;

    /* For mmapped chunks, just adjust offset */
    if (chunk_is_mmapped(p)) {
      newp->prev_size = p->prev_size + leadsize;
      set_head(newp, newsize|IS_MMAPPED);
      return chunk2mem(newp);
    }

    /* Otherwise, give back leader, use the rest */

    set_head(newp, newsize | PREV_INUSE);
    set_inuse_bit_at_offset(newp, newsize);
    set_head_size(p, leadsize);
    fREe(chunk2mem(p));
    p = newp;

    assert (newsize >= nb &&
            (((unsigned long)(chunk2mem(p))) % alignment) == 0);
  }

  /* Also give back spare room at the end */
  if (!chunk_is_mmapped(p)) {

    remainder_size = (long)(chunksize(p)) - (long)nb;

    if (remainder_size >= (long)MINSIZE) {
      remainder = chunk_at_offset(p, nb);
      set_head(remainder, remainder_size | PREV_INUSE);
      set_head_size(p, nb);
      fREe(chunk2mem(remainder));
    }
  }

  check_inuse_chunk(p);
  return chunk2mem(p);
}




/*
  calloc calls malloc, then zeroes out the allocated chunk.
*/

#if __STD_C
Void_t* cALLOc(size_t n_elements, size_t elem_size)
#else
Void_t* cALLOc(n_elements, elem_size) size_t n_elements; size_t elem_size;
#endif
{
  mchunkptr p;
  INTERNAL_SIZE_T clearsize;
  int nclears;
  INTERNAL_SIZE_T* d;

  Void_t* mem = mALLOc(n_elements * elem_size);

  if (mem != 0) {
    p = mem2chunk(mem);
    if (!chunk_is_mmapped(p)) { /* don't need to clear mmapped space */

      /*
        Unroll clear of <= 36 bytes (72 if 8-byte sizes).
        We know that contents have an odd number of
        INTERNAL_SIZE_T-sized words; minimally 3.
      */

      d = (INTERNAL_SIZE_T*)mem;
      clearsize = chunksize(p) - SIZE_SZ;
      nclears = clearsize / sizeof(INTERNAL_SIZE_T);
      assert(nclears >= 3);

      if (nclears > 9)
        MALLOC_ZERO(d, clearsize);

      else {
        *(d+0) = 0;
        *(d+1) = 0;
        *(d+2) = 0;
        if (nclears > 4) {
          *(d+3) = 0;
          *(d+4) = 0;
          if (nclears > 6) {
            *(d+5) = 0;
            *(d+6) = 0;
            if (nclears > 8) {
              *(d+7) = 0;
              *(d+8) = 0;
            }
          }
        }
      }
    }
  }
  return mem;
}


/*
  cfree just calls free. It is needed/defined on some systems
  that pair it with calloc, presumably for odd historical reasons
  (such as: cfree is used in example code in the first edition of K&R).
*/

#if __STD_C
void cFREe(Void_t *mem)
#else
void cFREe(mem) Void_t *mem;
#endif
{
  fREe(mem);
}




/*
  valloc just invokes memalign with alignment argument equal
  to the page size of the system (or as near to this as can
  be figured out from all the includes/defines above.)
*/

#if __STD_C
Void_t* vALLOc(size_t bytes)
#else
Void_t* vALLOc(bytes) size_t bytes;
#endif
{
  /* Ensure initialization/consolidation */
  mstate av = get_malloc_state();
  malloc_consolidate(av);
  return mEMALIGn(av->pagesize, bytes);
}

/*
  pvalloc just invokes valloc for the nearest pagesize
  that will accommodate the request.
*/


#if __STD_C
Void_t* pVALLOc(size_t bytes)
#else
Void_t* pVALLOc(bytes) size_t bytes;
#endif
{
  mstate av = get_malloc_state();
  size_t pagesz;

  /* Ensure initialization/consolidation */
  malloc_consolidate(av);

  pagesz = av->pagesize;
  return mEMALIGn(pagesz, (bytes + pagesz - 1) & ~(pagesz - 1));
}

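/*
  For example, with a 4096-byte page, pvalloc(5000) becomes
  memalign(4096, 8192), since (5000 + 4095) & ~4095 == 8192.
*/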
4273 |
|
---|
4274 |
|
---|
4275 | /*
|
---|
4276 | Malloc_Trim gives memory back to the system (via negative
|
---|
4277 | arguments to sbrk) if there is unused memory at the `high' end of
|
---|
4278 | the malloc pool. You can call this after freeing large blocks of
|
---|
4279 | memory to potentially reduce the system-level memory requirements
|
---|
4280 | of a program. However, it cannot guarantee to reduce memory. Under
|
---|
4281 | some allocation patterns, some large free blocks of memory will be
|
---|
4282 | locked between two used chunks, so they cannot be given back to
|
---|
4283 | the system.
|
---|
4284 |
|
---|
4285 | The `pad' argument to malloc_trim represents the amount of free
|
---|
4286 | trailing space to leave untrimmed. If this argument is zero,
|
---|
4287 | only the minimum amount of memory to maintain internal data
|
---|
4288 | structures will be left (one page or less). Non-zero arguments
|
---|
4289 | can be supplied to maintain enough trailing space to service
|
---|
4290 | future expected allocations without having to re-obtain memory
|
---|
4291 | from the system.
|
---|
4292 |
|
---|
4293 | Malloc_trim returns 1 if it actually released any memory, else 0.
|
---|
4294 | */
|
---|
4295 |
|
---|
4296 | #if __STD_C
|
---|
4297 | int mTRIm(size_t pad)
|
---|
4298 | #else
|
---|
4299 | int mTRIm(pad) size_t pad;
|
---|
4300 | #endif
|
---|
4301 | {
|
---|
4302 | mstate av = get_malloc_state();
|
---|
4303 | /* Ensure initialization/consolidation */
|
---|
4304 | malloc_consolidate(av);
|
---|
4305 |
|
---|
4306 | return sYSTRIm(pad, av);
|
---|
4307 | }
|
---|
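/*
  Illustrative sketch (not part of the allocator): trimming after a burst
  of frees. Assumes the standard name malloc_trim is exported.
*/
#if 0
static void trim_example (void) {
  /* Keep 64K of trailing slack for expected near-future allocations;
     the call returns 1 only if memory was actually given back. */
  int released = malloc_trim (64 * 1024);
  (void) released;
}
#endif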

/*
  malloc_usable_size tells you how many bytes you can actually use in
  an allocated chunk, which may be more than you requested (although
  often not). You can use this many bytes without worrying about
  overwriting other allocated objects. Not a particularly great
  programming practice, but still sometimes useful.
*/

#if __STD_C
size_t mUSABLe(Void_t* mem)
#else
size_t mUSABLe(mem) Void_t* mem;
#endif
{
  mchunkptr p;
  if (mem != 0) {
    p = mem2chunk(mem);
    if (chunk_is_mmapped(p))
      return chunksize(p) - 2*SIZE_SZ;
    else if (inuse(p))
      return chunksize(p) - SIZE_SZ;
  }
  return 0;
}
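/*
  Illustrative sketch (not part of the allocator): the usable size is at
  least the requested size, and all of it may be written safely. Assumes
  the standard names malloc/free/malloc_usable_size are exported.
*/
#if 0
static void usable_size_example (void) {
  char* p = (char*) malloc (100);
  if (p != 0) {
    size_t n = malloc_usable_size (p);
    assert (n >= 100);
    MALLOC_ZERO (p, n);   /* safe: all n bytes belong to this chunk */
    free (p);
  }
}
#endif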


/*
  mallinfo returns a copy of updated current mallinfo.
*/

struct mallinfo mALLINFo()
{
  mstate av = get_malloc_state();
  struct mallinfo mi;
  int i;
  mbinptr b;
  mchunkptr p;
  INTERNAL_SIZE_T avail;
  int navail;
  int nfastblocks;
  int fastbytes;

  /* Ensure initialization */
  if (av->top == 0) malloc_consolidate(av);

  check_malloc_state();

  /* Account for top */
  avail = chunksize(av->top);
  navail = 1;  /* top always exists */

  /* traverse fastbins */
  nfastblocks = 0;
  fastbytes = 0;

  for (i = 0; i < NFASTBINS; ++i) {
    for (p = av->fastbins[i]; p != 0; p = p->fd) {
      ++nfastblocks;
      fastbytes += chunksize(p);
    }
  }

  avail += fastbytes;

  /* traverse regular bins */
  for (i = 1; i < NBINS; ++i) {
    b = bin_at(av, i);
    for (p = last(b); p != b; p = p->bk) {
      avail += chunksize(p);
      navail++;
    }
  }

  mi.smblks = nfastblocks;
  mi.ordblks = navail;
  mi.fordblks = avail;
  mi.uordblks = av->sbrked_mem - avail;
  mi.arena = av->sbrked_mem;
  mi.hblks = av->n_mmaps;
  mi.hblkhd = av->mmapped_mem;
  mi.fsmblks = fastbytes;
  mi.keepcost = chunksize(av->top);
  mi.usmblks = av->max_total_mem;
  return mi;
}
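/*
  Illustrative sketch (not part of the allocator): reading a few mallinfo
  fields. Field meanings follow the assignments above: fordblks is total
  free space, uordblks is sbrk-obtained space in use, keepcost is the
  trimmable top chunk. Assumes the standard name mallinfo is exported.
*/
#if 0
static void mallinfo_example (void) {
  struct mallinfo mi = mallinfo ();
  fprintf (stderr, "sbrk arena:      %d\n", mi.arena);
  fprintf (stderr, "free bytes:      %d\n", mi.fordblks);
  fprintf (stderr, "in-use bytes:    %d\n", mi.uordblks);
  fprintf (stderr, "trimmable (top): %d\n", mi.keepcost);
}
#endif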


/*
  malloc_stats prints on stderr the amount of space obtained from the
  system (both via sbrk and mmap), the maximum amount (which may be
  more than current if malloc_trim and/or munmap got called), and the
  current number of bytes allocated via malloc (or realloc, etc) but
  not yet freed. Note that this is the number of bytes allocated, not
  the number requested. It will be larger than the number requested
  because of alignment and bookkeeping overhead. Because it includes
  alignment wastage as being in use, this figure may be greater than zero
  even when no user-level chunks are allocated.

  The reported current and maximum system memory can be inaccurate if
  a program makes other calls to system memory allocation functions
  (normally sbrk) outside of malloc.

  malloc_stats prints only the most commonly interesting statistics.
  More information can be obtained by calling mallinfo.
*/

void mSTATs()
{
  struct mallinfo mi = mALLINFo();

#ifdef WIN32
  {
    unsigned long free, reserved, committed;
    vminfo (&free, &reserved, &committed);
    fprintf(stderr, "free bytes       = %10lu\n", free);
    fprintf(stderr, "reserved bytes   = %10lu\n", reserved);
    fprintf(stderr, "committed bytes  = %10lu\n", committed);
  }
#endif

  fprintf(stderr, "max system bytes = %10lu\n",
          (unsigned long)(mi.usmblks));
  fprintf(stderr, "system bytes     = %10lu\n",
          (unsigned long)(mi.arena + mi.hblkhd));
  fprintf(stderr, "in use bytes     = %10lu\n",
          (unsigned long)(mi.uordblks + mi.hblkhd));

#ifdef WIN32
  {
    unsigned long kernel, user;
    if (cpuinfo (TRUE, &kernel, &user)) {
      fprintf(stderr, "kernel ms        = %10lu\n", kernel);
      fprintf(stderr, "user ms          = %10lu\n", user);
    }
  }
#endif
}


/*
  mallopt is the general SVID/XPG interface to tunable parameters.
  The format is to provide a (parameter-number, parameter-value)
  pair. mallopt then sets the corresponding parameter to the
  argument value if it can (i.e., so long as the value is
  meaningful), and returns 1 if successful else 0. See descriptions
  of tunable parameters above for meanings.
*/

#if __STD_C
int mALLOPt(int param_number, int value)
#else
int mALLOPt(param_number, value) int param_number; int value;
#endif
{
  mstate av = get_malloc_state();
  /* Ensure initialization/consolidation */
  malloc_consolidate(av);

  switch(param_number) {
  case M_MXFAST:
    if (value >= 0 && value <= MAX_FAST_SIZE) {
      av->max_fast = req2max_fast(value);
      return 1;
    }
    else
      return 0;

  case M_TRIM_THRESHOLD:
    av->trim_threshold = value;
    return 1;

  case M_TOP_PAD:
    av->top_pad = value;
    return 1;

  case M_MMAP_THRESHOLD:
    av->mmap_threshold = value;
    return 1;

  case M_MMAP_MAX:
#if HAVE_MMAP
    av->n_mmaps_max = value;
    return 1;
#else
    if (value != 0)
      return 0;
    else {
      av->n_mmaps_max = value;
      return 1;
    }
#endif

  default:
    return 0;
  }
}
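/*
  Illustrative sketch (not part of the allocator): tuning via mallopt.
  The parameter numbers are the M_* macros defined earlier in this file;
  assumes the standard name mallopt is exported.
*/
#if 0
static void mallopt_example (void) {
  /* Raise the trim threshold to 1MB and disable mmap-backed chunks;
     each call returns 1 on success, 0 if the value was rejected. */
  if (! mallopt (M_TRIM_THRESHOLD, 1024 * 1024))
    fprintf (stderr, "M_TRIM_THRESHOLD rejected\n");
  if (! mallopt (M_MMAP_MAX, 0))
    fprintf (stderr, "M_MMAP_MAX rejected\n");
}
#endif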


/* -------------------------------------------------------------- */

/*
  Emulation of sbrk for win32.
  Donated by J. Walter <[email protected]>.
  For additional information about this code, and malloc on Win32, see
  http://www.genesys-e.de/jwalter/
*/

#ifdef WIN32

#ifdef _DEBUG
/* #define TRACE */
#endif

/* Support for USE_MALLOC_LOCK */
#ifdef USE_MALLOC_LOCK

/* Wait for spin lock */
static int slwait (int *sl) {
    while (InterlockedCompareExchange ((void **) sl, (void *) 1, (void *) 0) != 0)
        Sleep (0);
    return 0;
}

/* Release spin lock */
static int slrelease (int *sl) {
    InterlockedExchange (sl, 0);
    return 0;
}

#ifdef NEEDED
/* Spin lock for emulation code */
static int g_sl;
#endif

#endif /* USE_MALLOC_LOCK */

/* getpagesize for windows */
static long getpagesize (void) {
    static long g_pagesize = 0;
    if (! g_pagesize) {
        SYSTEM_INFO system_info;
        GetSystemInfo (&system_info);
        g_pagesize = system_info.dwPageSize;
    }
    return g_pagesize;
}
static long getregionsize (void) {
    static long g_regionsize = 0;
    if (! g_regionsize) {
        SYSTEM_INFO system_info;
        GetSystemInfo (&system_info);
        g_regionsize = system_info.dwAllocationGranularity;
    }
    return g_regionsize;
}

/* A region list entry */
typedef struct _region_list_entry {
    void *top_allocated;
    void *top_committed;
    void *top_reserved;
    long reserve_size;
    struct _region_list_entry *previous;
} region_list_entry;

/* Allocate and link a region entry in the region list */
static int region_list_append (region_list_entry **last, void *base_reserved, long reserve_size) {
    region_list_entry *next = HeapAlloc (GetProcessHeap (), 0, sizeof (region_list_entry));
    if (! next)
        return FALSE;
    next->top_allocated = (char *) base_reserved;
    next->top_committed = (char *) base_reserved;
    next->top_reserved = (char *) base_reserved + reserve_size;
    next->reserve_size = reserve_size;
    next->previous = *last;
    *last = next;
    return TRUE;
}
/* Free and unlink the last region entry from the region list */
static int region_list_remove (region_list_entry **last) {
    region_list_entry *previous = (*last)->previous;
    /* HeapFree's second argument is a flags word, not a size;
       pass 0 rather than sizeof (region_list_entry) */
    if (! HeapFree (GetProcessHeap (), 0, *last))
        return FALSE;
    *last = previous;
    return TRUE;
}

#define CEIL(size,to)   (((size)+(to)-1)&~((to)-1))
#define FLOOR(size,to)  ((size)&~((to)-1))

#define SBRK_SCALE  0
/* #define SBRK_SCALE  1 */
/* #define SBRK_SCALE  2 */
/* #define SBRK_SCALE  4 */
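/*
  Worked example of the rounding macros (illustrative only): `to' must be
  a power of two, since both macros rely on the mask ~(to-1).
*/
#if 0
/* CEIL  (5000, 4096) == 8192    round up to the next multiple   */
/* FLOOR (5000, 4096) == 4096    round down to the last multiple */
/* CEIL  (8192, 4096) == 8192    exact multiples are unchanged   */
#endif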
/* sbrk for windows */
static void *sbrk (long size) {
    static long g_pagesize, g_my_pagesize;
    static long g_regionsize, g_my_regionsize;
    static region_list_entry *g_last;
    void *result = (void *) MORECORE_FAILURE;
#ifdef TRACE
    printf ("sbrk %d\n", size);
#endif
#if defined (USE_MALLOC_LOCK) && defined (NEEDED)
    /* Wait for spin lock */
    slwait (&g_sl);
#endif
    /* First time initialization */
    if (! g_pagesize) {
        g_pagesize = getpagesize ();
        g_my_pagesize = g_pagesize << SBRK_SCALE;
    }
    if (! g_regionsize) {
        g_regionsize = getregionsize ();
        g_my_regionsize = g_regionsize << SBRK_SCALE;
    }
    if (! g_last) {
        if (! region_list_append (&g_last, 0, 0))
            goto sbrk_exit;
    }
    /* Assert invariants */
    assert (g_last);
    assert ((char *) g_last->top_reserved - g_last->reserve_size <= (char *) g_last->top_allocated &&
            g_last->top_allocated <= g_last->top_committed);
    assert ((char *) g_last->top_reserved - g_last->reserve_size <= (char *) g_last->top_committed &&
            g_last->top_committed <= g_last->top_reserved &&
            (unsigned) g_last->top_committed % g_pagesize == 0);
    assert ((unsigned) g_last->top_reserved % g_regionsize == 0);
    assert ((unsigned) g_last->reserve_size % g_regionsize == 0);
    /* Allocation requested? */
    if (size >= 0) {
        /* Allocation size is the requested size */
        long allocate_size = size;
        /* Compute the size to commit */
        long to_commit = (char *) g_last->top_allocated + allocate_size - (char *) g_last->top_committed;
        /* Do we reach the commit limit? */
        if (to_commit > 0) {
            /* Round size to commit */
            long commit_size = CEIL (to_commit, g_my_pagesize);
            /* Compute the size to reserve */
            long to_reserve = (char *) g_last->top_committed + commit_size - (char *) g_last->top_reserved;
            /* Do we reach the reserve limit? */
            if (to_reserve > 0) {
                /* Compute the remaining size to commit in the current region */
                long remaining_commit_size = (char *) g_last->top_reserved - (char *) g_last->top_committed;
                if (remaining_commit_size > 0) {
                    /* Assert preconditions */
                    assert ((unsigned) g_last->top_committed % g_pagesize == 0);
                    assert (0 < remaining_commit_size && remaining_commit_size % g_pagesize == 0); {
                        /* Commit this */
                        void *base_committed = VirtualAlloc (g_last->top_committed, remaining_commit_size,
                                                             MEM_COMMIT, PAGE_READWRITE);
                        /* Check returned pointer for consistency */
                        if (base_committed != g_last->top_committed)
                            goto sbrk_exit;
                        /* Assert postconditions */
                        assert ((unsigned) base_committed % g_pagesize == 0);
#ifdef TRACE
                        printf ("Commit %p %d\n", base_committed, remaining_commit_size);
#endif
                        /* Adjust the region's commit top */
                        g_last->top_committed = (char *) base_committed + remaining_commit_size;
                    }
                } {
                    /* Now we are going to search and reserve. */
                    int contiguous = -1;
                    int found = FALSE;
                    MEMORY_BASIC_INFORMATION memory_info;
                    void *base_reserved;
                    long reserve_size;
                    do {
                        /* Assume contiguous memory */
                        contiguous = TRUE;
                        /* Round size to reserve */
                        reserve_size = CEIL (to_reserve, g_my_regionsize);
                        /* Start with the current region's top */
                        memory_info.BaseAddress = g_last->top_reserved;
                        /* Assert preconditions */
                        assert ((unsigned) memory_info.BaseAddress % g_pagesize == 0);
                        assert (0 < reserve_size && reserve_size % g_regionsize == 0);
                        while (VirtualQuery (memory_info.BaseAddress, &memory_info, sizeof (memory_info))) {
                            /* Assert postconditions */
                            assert ((unsigned) memory_info.BaseAddress % g_pagesize == 0);
#ifdef TRACE
                            printf ("Query %p %d %s\n", memory_info.BaseAddress, memory_info.RegionSize,
                                    memory_info.State == MEM_FREE ? "FREE":
                                    (memory_info.State == MEM_RESERVE ? "RESERVED":
                                     (memory_info.State == MEM_COMMIT ? "COMMITTED": "?")));
#endif
                            /* Region is free, well aligned and big enough: we are done */
                            if (memory_info.State == MEM_FREE &&
                                (unsigned) memory_info.BaseAddress % g_regionsize == 0 &&
                                memory_info.RegionSize >= (unsigned) reserve_size) {
                                found = TRUE;
                                break;
                            }
                            /* From now on we can't get contiguous memory! */
                            contiguous = FALSE;
                            /* Recompute size to reserve */
                            reserve_size = CEIL (allocate_size, g_my_regionsize);
                            memory_info.BaseAddress = (char *) memory_info.BaseAddress + memory_info.RegionSize;
                            /* Assert preconditions */
                            assert ((unsigned) memory_info.BaseAddress % g_pagesize == 0);
                            assert (0 < reserve_size && reserve_size % g_regionsize == 0);
                        }
                        /* Search failed? */
                        if (! found)
                            goto sbrk_exit;
                        /* Assert preconditions */
                        assert ((unsigned) memory_info.BaseAddress % g_regionsize == 0);
                        assert (0 < reserve_size && reserve_size % g_regionsize == 0);
                        /* Try to reserve this */
                        base_reserved = VirtualAlloc (memory_info.BaseAddress, reserve_size,
                                                      MEM_RESERVE, PAGE_NOACCESS);
                        if (! base_reserved) {
                            int rc = GetLastError ();
                            if (rc != ERROR_INVALID_ADDRESS)
                                goto sbrk_exit;
                        }
                        /* A null pointer signals (hopefully) a race condition with another thread. */
                        /* In this case, we try again. */
                    } while (! base_reserved);
                    /* Check returned pointer for consistency */
                    if (memory_info.BaseAddress && base_reserved != memory_info.BaseAddress)
                        goto sbrk_exit;
                    /* Assert postconditions */
                    assert ((unsigned) base_reserved % g_regionsize == 0);
#ifdef TRACE
                    printf ("Reserve %p %d\n", base_reserved, reserve_size);
#endif
                    /* Did we get contiguous memory? */
                    if (contiguous) {
                        long start_size = (char *) g_last->top_committed - (char *) g_last->top_allocated;
                        /* Adjust allocation size */
                        allocate_size -= start_size;
                        /* Adjust the region's allocation top */
                        g_last->top_allocated = g_last->top_committed;
                        /* Recompute the size to commit */
                        to_commit = (char *) g_last->top_allocated + allocate_size - (char *) g_last->top_committed;
                        /* Round size to commit */
                        commit_size = CEIL (to_commit, g_my_pagesize);
                    }
                    /* Append the new region to the list */
                    if (! region_list_append (&g_last, base_reserved, reserve_size))
                        goto sbrk_exit;
                    /* Didn't we get contiguous memory? */
                    if (! contiguous) {
                        /* Recompute the size to commit */
                        to_commit = (char *) g_last->top_allocated + allocate_size - (char *) g_last->top_committed;
                        /* Round size to commit */
                        commit_size = CEIL (to_commit, g_my_pagesize);
                    }
                }
            }
            /* Assert preconditions */
            assert ((unsigned) g_last->top_committed % g_pagesize == 0);
            assert (0 < commit_size && commit_size % g_pagesize == 0); {
                /* Commit this */
                void *base_committed = VirtualAlloc (g_last->top_committed, commit_size,
                                                     MEM_COMMIT, PAGE_READWRITE);
                /* Check returned pointer for consistency */
                if (base_committed != g_last->top_committed)
                    goto sbrk_exit;
                /* Assert postconditions */
                assert ((unsigned) base_committed % g_pagesize == 0);
#ifdef TRACE
                printf ("Commit %p %d\n", base_committed, commit_size);
#endif
                /* Adjust the region's commit top */
                g_last->top_committed = (char *) base_committed + commit_size;
            }
        }
        /* Adjust the region's allocation top */
        g_last->top_allocated = (char *) g_last->top_allocated + allocate_size;
        result = (char *) g_last->top_allocated - size;
    /* Deallocation requested? */
    } else if (size < 0) {
        long deallocate_size = - size;
        /* As long as we have a region to release */
        while ((char *) g_last->top_allocated - deallocate_size < (char *) g_last->top_reserved - g_last->reserve_size) {
            /* Get the size to release */
            long release_size = g_last->reserve_size;
            /* Get the base address */
            void *base_reserved = (char *) g_last->top_reserved - release_size;
            /* Assert preconditions */
            assert ((unsigned) base_reserved % g_regionsize == 0);
            assert (0 < release_size && release_size % g_regionsize == 0); {
                /* Release this */
                int rc = VirtualFree (base_reserved, 0,
                                      MEM_RELEASE);
                /* Check returned code for consistency */
                if (! rc)
                    goto sbrk_exit;
#ifdef TRACE
                printf ("Release %p %d\n", base_reserved, release_size);
#endif
            }
            /* Adjust deallocation size */
            deallocate_size -= (char *) g_last->top_allocated - (char *) base_reserved;
            /* Remove the old region from the list */
            if (! region_list_remove (&g_last))
                goto sbrk_exit;
        } {
            /* Compute the size to decommit */
            long to_decommit = (char *) g_last->top_committed - ((char *) g_last->top_allocated - deallocate_size);
            if (to_decommit >= g_my_pagesize) {
                /* Compute the size to decommit */
                long decommit_size = FLOOR (to_decommit, g_my_pagesize);
                /* Compute the base address */
                void *base_committed = (char *) g_last->top_committed - decommit_size;
                /* Assert preconditions */
                assert ((unsigned) base_committed % g_pagesize == 0);
                assert (0 < decommit_size && decommit_size % g_pagesize == 0); {
                    /* Decommit this */
                    int rc = VirtualFree ((char *) base_committed, decommit_size,
                                          MEM_DECOMMIT);
                    /* Check returned code for consistency */
                    if (! rc)
                        goto sbrk_exit;
#ifdef TRACE
                    printf ("Decommit %p %d\n", base_committed, decommit_size);
#endif
                }
                /* Adjust deallocation size and the region's commit and allocation tops */
                deallocate_size -= (char *) g_last->top_allocated - (char *) base_committed;
                g_last->top_committed = base_committed;
                g_last->top_allocated = base_committed;
            }
        }
        /* Adjust the region's allocation top */
        g_last->top_allocated = (char *) g_last->top_allocated - deallocate_size;
        /* Check for underflow */
        if ((char *) g_last->top_reserved - g_last->reserve_size > (char *) g_last->top_allocated ||
            g_last->top_allocated > g_last->top_committed) {
            /* Adjust the region's allocation top */
            g_last->top_allocated = (char *) g_last->top_reserved - g_last->reserve_size;
            goto sbrk_exit;
        }
        result = g_last->top_allocated;
    }
    /* Assert invariants */
    assert (g_last);
    assert ((char *) g_last->top_reserved - g_last->reserve_size <= (char *) g_last->top_allocated &&
            g_last->top_allocated <= g_last->top_committed);
    assert ((char *) g_last->top_reserved - g_last->reserve_size <= (char *) g_last->top_committed &&
            g_last->top_committed <= g_last->top_reserved &&
            (unsigned) g_last->top_committed % g_pagesize == 0);
    assert ((unsigned) g_last->top_reserved % g_regionsize == 0);
    assert ((unsigned) g_last->reserve_size % g_regionsize == 0);

sbrk_exit:
#if defined (USE_MALLOC_LOCK) && defined (NEEDED)
    /* Release spin lock */
    slrelease (&g_sl);
#endif
    return result;
}
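/*
  Illustrative sketch (not part of the allocator): how MORECORE exercises
  this emulation. A positive argument extends the break and returns its
  old value, sbrk(0) reports the current break, and a negative argument
  releases memory, mirroring unix sbrk up to page/region granularity.
  WIN32 only; this would have to live in this file since sbrk is static.
*/
#if 0
static void sbrk_emulation_example (void) {
  void* old_brk = sbrk (100000);          /* extend by ~100K */
  if (old_brk != (void *) MORECORE_FAILURE) {
    void* cur_brk = sbrk (0);             /* query the break */
    assert ((char *) cur_brk == (char *) old_brk + 100000);
    sbrk (-100000);                       /* give it back    */
  }
}
#endif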

/* mmap for windows */
static void *mmap (void *ptr, long size, long prot, long type, long handle, long arg) {
    static long g_pagesize;
    static long g_regionsize;
#ifdef TRACE
    printf ("mmap %d\n", size);
#endif
#if defined (USE_MALLOC_LOCK) && defined (NEEDED)
    /* Wait for spin lock */
    slwait (&g_sl);
#endif
    /* First time initialization */
    if (! g_pagesize)
        g_pagesize = getpagesize ();
    if (! g_regionsize)
        g_regionsize = getregionsize ();
    /* Assert preconditions */
    assert ((unsigned) ptr % g_regionsize == 0);
    assert (size % g_pagesize == 0);
    /* Allocate this */
    ptr = VirtualAlloc (ptr, size,
                        MEM_RESERVE | MEM_COMMIT | MEM_TOP_DOWN, PAGE_READWRITE);
    if (! ptr) {
        ptr = (void *) MORECORE_FAILURE;
        goto mmap_exit;
    }
    /* Assert postconditions */
    assert ((unsigned) ptr % g_regionsize == 0);
#ifdef TRACE
    printf ("Commit %p %d\n", ptr, size);
#endif
mmap_exit:
#if defined (USE_MALLOC_LOCK) && defined (NEEDED)
    /* Release spin lock */
    slrelease (&g_sl);
#endif
    return ptr;
}

/* munmap for windows */
static long munmap (void *ptr, long size) {
    static long g_pagesize;
    static long g_regionsize;
    int rc = MUNMAP_FAILURE;
#ifdef TRACE
    printf ("munmap %p %d\n", ptr, size);
#endif
#if defined (USE_MALLOC_LOCK) && defined (NEEDED)
    /* Wait for spin lock */
    slwait (&g_sl);
#endif
    /* First time initialization */
    if (! g_pagesize)
        g_pagesize = getpagesize ();
    if (! g_regionsize)
        g_regionsize = getregionsize ();
    /* Assert preconditions */
    assert ((unsigned) ptr % g_regionsize == 0);
    assert (size % g_pagesize == 0);
    /* Free this */
    if (! VirtualFree (ptr, 0,
                       MEM_RELEASE))
        goto munmap_exit;
    rc = 0;
#ifdef TRACE
    printf ("Release %p %d\n", ptr, size);
#endif
munmap_exit:
#if defined (USE_MALLOC_LOCK) && defined (NEEDED)
    /* Release spin lock */
    slrelease (&g_sl);
#endif
    return rc;
}
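/*
  Illustrative sketch (not part of the allocator): direct use of the
  mmap/munmap emulation above, as sYSMALLOc does for large chunks. The
  prot/type/handle/arg parameters exist only for call compatibility; the
  emulation ignores them and always commits read-write memory. WIN32
  only; this would have to live in this file since both are static.
*/
#if 0
static void mmap_emulation_example (void) {
  long pagesize = getpagesize ();
  long size = 16 * pagesize;              /* must be page-aligned    */
  void* p = mmap (0, size, 0, 0, -1, 0);
  if (p != (void *) MORECORE_FAILURE) {
    ((char *) p)[0] = 1;                  /* committed and writable  */
    munmap (p, size);                     /* returns 0 on success    */
  }
}
#endif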

static void vminfo (unsigned long *free, unsigned long *reserved, unsigned long *committed) {
    MEMORY_BASIC_INFORMATION memory_info;
    memory_info.BaseAddress = 0;
    *free = *reserved = *committed = 0;
    while (VirtualQuery (memory_info.BaseAddress, &memory_info, sizeof (memory_info))) {
        switch (memory_info.State) {
        case MEM_FREE:
            *free += memory_info.RegionSize;
            break;
        case MEM_RESERVE:
            *reserved += memory_info.RegionSize;
            break;
        case MEM_COMMIT:
            *committed += memory_info.RegionSize;
            break;
        }
        memory_info.BaseAddress = (char *) memory_info.BaseAddress + memory_info.RegionSize;
    }
}

static int cpuinfo (int whole, unsigned long *kernel, unsigned long *user) {
    if (whole) {
        __int64 creation64, exit64, kernel64, user64;
        int rc = GetProcessTimes (GetCurrentProcess (),
                                  (FILETIME *) &creation64,
                                  (FILETIME *) &exit64,
                                  (FILETIME *) &kernel64,
                                  (FILETIME *) &user64);
        if (! rc) {
            *kernel = 0;
            *user = 0;
            return FALSE;
        }
        /* FILETIME counts 100ns ticks; divide by 10000 for milliseconds */
        *kernel = (unsigned long) (kernel64 / 10000);
        *user = (unsigned long) (user64 / 10000);
        return TRUE;
    } else {
        __int64 creation64, exit64, kernel64, user64;
        int rc = GetThreadTimes (GetCurrentThread (),
                                 (FILETIME *) &creation64,
                                 (FILETIME *) &exit64,
                                 (FILETIME *) &kernel64,
                                 (FILETIME *) &user64);
        if (! rc) {
            *kernel = 0;
            *user = 0;
            return FALSE;
        }
        *kernel = (unsigned long) (kernel64 / 10000);
        *user = (unsigned long) (user64 / 10000);
        return TRUE;
    }
}

#endif /* WIN32 */

/*

History:

    V2.7.0
      * new WIN32 sbrk, mmap, munmap, lock code from <[email protected]>.
      * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for
        helping test this.)
      * memalign: check alignment arg
      * realloc: use memmove when regions may overlap.
      * Collect all cases in malloc requiring system memory into sYSMALLOc
      * Use mmap as backup to sbrk, if available; fold these mmap-related
        operations into others.
      * Place all internal state in malloc_state
      * Introduce fastbins (although similar to 2.5.1)
      * Many minor tunings and cosmetic improvements
      * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK
      * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS
        Thanks to Tony E. Bennett <[email protected]> and others.
      * Adjust request2size to fit with MALLOC_FAILURE_ACTION.
      * Include errno.h to support default failure action.
      * Further improve WIN32 'sbrk()' emulation's 'findRegion()' routine
        to avoid infinite loop when allocating initial memory larger
        than RESERVED_SIZE and/or subsequent memory larger than
        NEXT_SIZE. Thanks to Andreas Mueller <a.mueller at paradatec.de>
        for finding this one.

    V2.6.6 Sun Dec  5 07:42:19 1999  Doug Lea  (dl at gee)
      * return null for negative arguments
      * Added several WIN32 cleanups from Martin C. Fong <mcfong at yahoo.com>
         * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h'
           (e.g. WIN32 platforms)
         * Cleanup header file inclusion for WIN32 platforms
         * Cleanup code to avoid Microsoft Visual C++ compiler complaints
         * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing
           memory allocation routines
         * Set 'malloc_getpagesize' for WIN32 platforms (needs more work)
         * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to
           usage of 'assert' in non-WIN32 code
         * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to
           avoid infinite loop
      * Always call 'fREe()' rather than 'free()'

    V2.6.5 Wed Jun 17 15:57:31 1998  Doug Lea  (dl at gee)
      * Fixed ordering problem with boundary-stamping

    V2.6.3 Sun May 19 08:17:58 1996  Doug Lea  (dl at gee)
      * Added pvalloc, as recommended by H.J. Liu
      * Added 64bit pointer support mainly from Wolfram Gloger
      * Added anonymously donated WIN32 sbrk emulation
      * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen
      * malloc_extend_top: fix mask error that caused wastage after
        foreign sbrks
      * Add linux mremap support code from HJ Liu

    V2.6.2 Tue Dec  5 06:52:55 1995  Doug Lea  (dl at gee)
      * Integrated most documentation with the code.
      * Add support for mmap, with help from
        Wolfram Gloger ([email protected]).
      * Use last_remainder in more cases.
      * Pack bins using idea from [email protected]
      * Use ordered bins instead of best-fit threshold
      * Eliminate block-local decls to simplify tracing and debugging.
      * Support another case of realloc via move into top
      * Fix error occurring when initial sbrk_base not word-aligned.
      * Rely on page size for units instead of SBRK_UNIT to
        avoid surprises about sbrk alignment conventions.
      * Add mallinfo, mallopt. Thanks to Raymond Nijssen
        ([email protected]) for the suggestion.
      * Add `pad' argument to malloc_trim and top_pad mallopt parameter.
      * More precautions for cases where other routines call sbrk,
        courtesy of Wolfram Gloger ([email protected]).
      * Added macros etc., allowing use in linux libc from
        H.J. Lu ([email protected])
      * Inverted this history list

    V2.6.1 Sat Dec  2 14:10:57 1995  Doug Lea  (dl at gee)
      * Re-tuned and fixed to behave more nicely with V2.6.0 changes.
      * Removed all preallocation code since under current scheme
        the work required to undo bad preallocations exceeds
        the work saved in good cases for most test programs.
      * No longer use return list or unconsolidated bins since
        no scheme using them consistently outperforms those that don't
        given above changes.
      * Use best fit for very large chunks to prevent some worst-cases.
      * Added some support for debugging

    V2.6.0 Sat Nov  4 07:05:23 1995  Doug Lea  (dl at gee)
      * Removed footers when chunks are in use. Thanks to
        Paul Wilson ([email protected]) for the suggestion.

    V2.5.4 Wed Nov  1 07:54:51 1995  Doug Lea  (dl at gee)
      * Added malloc_trim, with help from Wolfram Gloger
        ([email protected]).

    V2.5.3 Tue Apr 26 10:16:01 1994  Doug Lea  (dl at g)

    V2.5.2 Tue Apr  5 16:20:40 1994  Doug Lea  (dl at g)
      * realloc: try to expand in both directions
      * malloc: swap order of clean-bin strategy;
      * realloc: only conditionally expand backwards
      * Try not to scavenge used bins
      * Use bin counts as a guide to preallocation
      * Occasionally bin return list chunks in first scan
      * Add a few optimizations from [email protected]

    V2.5.1 Sat Aug 14 15:40:43 1993  Doug Lea  (dl at g)
      * faster bin computation & slightly different binning
      * merged all consolidations to one part of malloc proper
        (eliminating old malloc_find_space & malloc_clean_bin)
      * Scan 2 returns chunks (not just 1)
      * Propagate failure in realloc if malloc returns 0
      * Add stuff to allow compilation on non-ANSI compilers
        from [email protected]

    V2.5 Sat Aug  7 07:41:59 1993  Doug Lea  (dl at g.oswego.edu)
      * removed potential for odd address access in prev_chunk
      * removed dependency on getpagesize.h
      * misc cosmetics and a bit more internal documentation
      * anticosmetics: mangled names in macros to evade debugger strangeness
      * tested on sparc, hp-700, dec-mips, rs6000
        with gcc & native cc (hp, dec only) allowing
        Detlefs & Zorn comparison study (in SIGPLAN Notices.)

    Trial version Fri Aug 28 13:14:29 1992  Doug Lea  (dl at g.oswego.edu)
      * Based loosely on libg++-1.2X malloc. (It retains some of the overall
        structure of old version, but most details differ.)

*/