Falsehoods programmers believe about memory

Thayne McCombs

Reading time: about 5 min

As a programmer, you deal with memory all the time—it’s where you store variables and data. Most of the time, you probably don’t think too much about it, except maybe to avoid using too much of it. But behind the simple interface of getting and setting variables and allocating and deallocating blocks of memory lies a surprising amount of complexity. In the tradition of other “Falsehoods Programmers Believe” lists, here is a list of assumptions programmers might make about the way memory works.
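A few of them are illustrated with short code sketches after the list.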
  1. Allocating memory never fails.
  2. If allocating memory succeeds, then writing to the allocated memory will always succeed.
  3. If your system is out of memory, then allocating will always cause your program to crash.
  4. If your system crashes due to running out of memory, it will always happen during an allocation.
  5. A single metric can tell you how much memory your process is using.
  6. It is safe to allocate memory in the child process after forking.
  7. Allocating memory is always fast.
  8. Allocating memory is always slow.
  9. The garbage collector will return memory to the system.
  10. If you free memory, it immediately returns to the operating system.
  11. Garbage collection means you can't leak memory.
  12. No other processes can directly read the memory in your process.
  13. In all environments, processes run by the same user or by root are able to inspect the memory of your process for debugging or similar purposes.
  14. If you store sensitive data in memory, it won't ever be written to disk.
  15. The address space of a process is contiguous.
  16. Dereferencing an invalid pointer will always cause a segfault.
  17. The "free" memory is the maximum amount of memory that is available for additional allocations.
  18. Accessing memory randomly is just as fast as accessing it sequentially.
  19. Writing to a location in memory doesn't require the entire page to be copied.
  20. Writing to memory doesn't increase the amount of actual memory used by the process.
  21. Single-event upsets (SEUs) never happen.
  22. Single-event upsets only happen in space.
  23. Hardware and/or the OS will protect you from single-event upsets.
  24. The RAM will never have a hardware fault.
  25. Variables are always stored in main memory.
  26. You will never run out of stack space.
  27. You will never run out of heap space.
  28. The stack is always the same size.
  29. The stack is always the same size on a given machine.
  30. All threads within a process always have the same size of stack.
  31. Frames on the stack all follow the same standard format.
  32. How data is organized in memory doesn't really matter.
  33. Using more memory is always bad.
  34. When address space layout randomization is used, it is impossible to exploit memory bugs.
  35. When a W^X policy is used, it is impossible to exploit memory bugs.
  36. Memory is never aliased.
  37. You can do a better job of caching files than the OS.
  38. The OS will do a better job of caching than you in all situations.
  39. The stack always grows down.
  40. The stack always grows up.
  41. Other threads immediately see changes to memory.
  42. Writing primitive values to memory is atomic.
  43. The garbage collector will free the memory (and call the finalizer) as soon as you are done with it.
  44. The garbage collector will free the memory (and call the finalizer) soon after it is no longer live.
  45. ECC RAM will never become corrupted.
  46. The virtual address space of a process exactly corresponds with how much physical RAM that process is using.
  47. The total memory in use is the sum of the memory used by all processes.
  48. Accessing memory won’t have to wait for disk I/O.
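
A few of these are easy to demonstrate in code. The sketches that follow are illustrative only: they assume a typical 64-bit Linux/glibc system and a C toolchain, and the exact behavior will vary with the platform, allocator, and configuration. For #1 and #2, checking the return value of malloc is necessary because the allocation really can fail, but on a system that overcommits it is not sufficient, because the failure may only surface when the memory is first written:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        /* #1: malloc can and does return NULL, so the result must be checked.
           A request far beyond RAM + swap is typically refused outright. */
        char *p = malloc((size_t)1 << 46);          /* ~64 TiB on a 64-bit machine */
        if (p == NULL)
            fprintf(stderr, "allocation failed\n");

        /* #2: on a system that overcommits (e.g. Linux's default heuristic),
           a large malloc may "succeed" because pages are only reserved, not
           backed by physical memory. Pages are committed when first written,
           and that is the point at which the OOM killer may intervene. */
        size_t big = (size_t)8 << 30;               /* 8 GiB */
        char *q = malloc(big);
        if (q != NULL) {
            memset(q, 0xAB, big);                   /* touching every page is what can hurt */
            free(q);
        }
        free(p);
        return 0;
    }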
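
For #16, a use-after-free sketch. This is undefined behavior, so any outcome is permitted; the point is that a segfault is not guaranteed, because the freed chunk usually still sits on a page that remains mapped into the process:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        char *p = malloc(32);
        if (p == NULL) return 1;
        strcpy(p, "hello");
        free(p);

        /* #16: dereferencing a dangling pointer rarely faults in practice.
           The allocator keeps the page mapped for reuse, so the load "works"
           and yields stale or recycled bytes. Tools such as AddressSanitizer
           and Valgrind exist precisely because the hardware will not reliably
           catch this. */
        printf("dangling read: %s\n", p);
        return 0;
    }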
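
For #18, a rough micro-benchmark sketch (the array size and the use of clock_gettime are assumptions; the exact ratio depends on the cache hierarchy and prefetcher, but the random pass is typically several times slower):

    #define _POSIX_C_SOURCE 200809L                 /* for clock_gettime */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N ((size_t)1 << 26)                     /* 64 Mi ints, much larger than the last-level cache */

    static double now(void) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    int main(void) {
        int *a = malloc(N * sizeof *a);
        size_t *idx = malloc(N * sizeof *idx);
        if (!a || !idx) return 1;                   /* see #1 */

        for (size_t i = 0; i < N; i++) { a[i] = (int)i; idx[i] = i; }
        /* Crude Fisher-Yates shuffle to produce a random visiting order. */
        for (size_t i = N - 1; i > 0; i--) {
            size_t j = (size_t)rand() % (i + 1);
            size_t tmp = idx[i]; idx[i] = idx[j]; idx[j] = tmp;
        }

        long long sum = 0;
        double t0 = now();
        for (size_t i = 0; i < N; i++) sum += a[i];       /* sequential: prefetcher-friendly */
        double t1 = now();
        for (size_t i = 0; i < N; i++) sum += a[idx[i]];  /* random: a cache miss on most loads */
        double t2 = now();

        printf("sequential %.3fs  random %.3fs  (sum=%lld)\n", t1 - t0, t2 - t1, sum);
        free(a); free(idx);
        return 0;
    }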
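
For #26, unbounded recursion exhausts the default stack (commonly only a few megabytes) long before the heap is anywhere near full:

    #include <stdio.h>

    /* #26: each call pushes a frame, and the local buffer inflates it, so a
       default 1-8 MiB stack runs out after a few thousand calls, usually
       reported as a segfault or stack-overflow error. */
    static long deep(long n) {
        volatile char pad[4096];           /* keep the frame large and un-optimized */
        pad[0] = (char)n;
        if (n < 0) return n;               /* never taken; recursion is effectively unbounded */
        return 1 + deep(n + 1);
    }

    int main(void) {
        printf("%ld\n", deep(0));
        return 0;
    }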
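
For #41 and #42, a C11 sketch (assuming a toolchain that provides <threads.h> and <stdatomic.h>). Without the atomic flag and its acquire/release ordering, the compiler and CPU would be free to reorder, cache, or tear these accesses, so the reader could spin forever or observe a half-written value:

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <threads.h>

    int data = 0;                          /* plain int: no visibility guarantees on its own */
    atomic_bool ready = false;

    int producer(void *arg) {
        (void)arg;
        data = 42;
        /* #41: the release store makes the write to `data` visible to any
           thread that later observes ready == true with an acquire load;
           with a plain flag, the reader could miss the update indefinitely.
           #42: "primitive" writes are not automatically atomic either; for
           example, a 64-bit store on a 32-bit platform can tear. Atomics
           make the guarantee explicit. */
        atomic_store_explicit(&ready, true, memory_order_release);
        return 0;
    }

    int consumer(void *arg) {
        (void)arg;
        while (!atomic_load_explicit(&ready, memory_order_acquire))
            ;                              /* spin until the release store is visible */
        printf("data = %d\n", data);       /* guaranteed to print 42 */
        return 0;
    }

    int main(void) {
        thrd_t p, c;
        thrd_create(&c, consumer, NULL);
        thrd_create(&p, producer, NULL);
        thrd_join(p, NULL);
        thrd_join(c, NULL);
        return 0;
    }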

Did I miss anything? Are there any assumptions that you have made about memory that ended up biting you? Is there anything on this list you disagree with? Leave them in the comments, and I’ll add them to the list.

Thanks to the following for contributing: Ella Moskun, Ben Dilts, Stephen Rollins, and Daniel James.
