Memory allocation is the process of reserving space in memory to store data, a crucial task in embedded systems where memory is often limited. Memory allocators are responsible for distributing that limited memory efficiently to meet the demands of the application.
Various algorithms and mechanisms are designed to optimize this process, ensuring efficient use of memory.
This post marks the start of a series of explorations of memory allocators in embedded systems.
1. Sequential Fits:
Best Fit:
– Description: The algorithm searches through the entire list of free blocks and selects the smallest block that is large enough. The remainder of the block might be split off and returned to the free list.
– Pros: Can reduce external fragmentation.
– Cons: Tends to be slow because it must search the entire list. Splitting often leaves behind tiny remainder blocks that are too small to satisfy future requests, worsening external fragmentation.
– Usage: Best for systems where allocation size is unpredictable and varies widely.
Example: Given memory blocks of sizes 5KB, 8KB, and 12KB, a request for 6KB would choose the 8KB block.
First Fit:
– Description: The algorithm selects the first block that is large enough. The search is linear and starts from the beginning each time.
– Pros: Faster than best fit because it stops as soon as it finds a suitable block.
– Cons: Can lead to external fragmentation over time.
– Usage: Useful in systems where allocation and deallocation operations are frequent.
Example: Given memory blocks of sizes 5KB, 8KB, and 12KB, a request for 6KB would choose the first block that fits; the 5KB block is skipped, so the 8KB block is selected.
Next Fit:
– Description: Similar to first fit but starts the search from the location of the last allocation.
– Pros: Can be slightly faster than first fit in certain scenarios.
– Cons: Can lead to more external fragmentation than first fit.
– Usage: Beneficial when memory allocations of similar sizes occur sequentially.
Example: If the previous allocation selected the 8KB block, the next search would start from after the 8KB block.
Good Fit:
– Description: A compromise between best fit and first fit. It often uses heuristics to determine a “good enough” block.
– Pros: Tries to balance the speed of first fit with the efficiency of best fit.
– Cons: More complex than either best or first fit.
– Usage: Suitable for systems where a balance between speed and memory efficiency is needed.
Example: It might select a block that is “good enough” but not necessarily the “best” fit.
2. Segregated Free Lists:
Simple Segregated Storage:
– Description: Memory is divided into fixed-size blocks. Each size has its own free list.
– Pros: Fast allocation and deallocation. Reduces external fragmentation.
– Cons: Can lead to internal fragmentation if block sizes aren’t well-matched to common allocation sizes.
– Usage: Effective when allocation sizes are predictable.
Example: Separate lists for 4KB, 8KB, and 16KB blocks.
Segregated Fits:
– Description: Combines the idea of segregated lists with fitting strategies. A size class might use a first fit or best fit strategy within its list.
– Pros: Offers more flexibility and can better adapt to different allocation patterns.
– Cons: More complex than simple segregated storage.
– Usage: Useful when there’s a need for more fine-tuned memory allocation strategies.
Example: For an 8KB size class, use first fit to find the first suitable block within that class.
3. Buddy System:
– Description: Memory is divided into blocks of size 2^n. Blocks can be coalesced with their “buddy” if both are free.
– Pros: Fast allocation and deallocation. Natural coalescing reduces external fragmentation.
– Cons: Can lead to internal fragmentation if requests don’t match power-of-two sizes.
– Usage: Often used in kernel memory allocators due to its speed and coalescing properties.
Example: If an 8KB block is split, two 4KB “buddies” are created.
4. Indexed Fits:
– Description: Uses an index (like a tree) to quickly find free blocks of a suitable size.
– Pros: Faster searches than sequential fits.
– Cons: The index structure consumes additional memory.
– Usage: Beneficial in systems where fast allocation is more critical than memory overhead.
Example: A binary tree where each node points to a free block of memory, with tree operations optimizing the search.
5. Bitmapped Fits:
– Description: Uses a bitmap to represent the occupation status of blocks.
– Pros: Provides a compact representation of free and occupied blocks. Fast searches.
– Cons: Not as flexible as other methods; often requires memory blocks to be of uniform size.
– Usage: Effective in systems with limited memory and fixed-size allocations.
Example: If memory is divided into 8 blocks, a bitmap like 11001010 (leftmost bit first, with 1 meaning free) would indicate that blocks 1, 2, 5, and 7 are free.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
An Article by: Yashwanth Naidu Tikkisetty
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~