


Why can a 352GB NumPy ndarray be used on an 8GB memory macOS computer?






























import numpy as np

array = np.zeros((210000, 210000)) # default numpy.float64
array.nbytes


When I run the above code on my 8 GB MacBook running macOS, no error occurs. But when I run the same code on a 16 GB Windows 10 PC, a 12 GB Ubuntu laptop, or even a 128 GB Linux supercomputer, the Python interpreter raises a MemoryError. All test environments have 64-bit Python 3.6 or 3.7 installed.
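For reference, the 352 GB figure follows directly from the shape and the 8-byte float64 itemsize; the check below is plain arithmetic and allocates nothing:

rows, cols, itemsize = 210000, 210000, 8   # numpy.float64 is 8 bytes per element
size_bytes = rows * cols * itemsize
print(size_bytes)                          # 352800000000, the same value as array.nbytes
print(size_bytes / 1024**3)                # ~328.6 GiB, i.e. about 352.8 GB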










Tags: python, macos, numpy, memory






1 Answer




















          @Martijn Pieters' answer is on the right track, but not quite right: this has nothing to do with memory compression, but instead it has to do with virtual memory.



          For example, try running the following code on your machine:



          arrays = [np.zeros((21000, 21000)) for _ in range(0, 10000)]


          This code allocates 32TiB of memory, but you won't get an error (at least I didn't, on Linux). If I check htop, I see the following:



            PID USER      PRI  NI  VIRT   RES   SHR S CPU% MEM%   TIME+  Command
          31362 user 20 0 32.1T 69216 12712 S 0.0 0.4 0:00.22 python


This is because the OS is perfectly willing to overcommit virtual memory: it won't actually assign pages to physical memory until it has to. The way it works, sketched in code after this list, is:





          • calloc asks the OS for some memory to use

• the OS looks at the process's address space and finds a chunk it is willing to assign. This is a fast operation: the OS just stores the address range in an internal data structure.

          • the program writes to one of the addresses.

• the OS receives a page fault, at which point it actually backs that page with physical memory. A page is usually a few KiB in size.

          • the OS passes control back to the program, which proceeds without noticing the interruption.
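The following is a minimal sketch of that mechanism, assuming a Unix-like system (it uses Python's mmap and resource modules; the 4 GiB size and the page count are illustrative, and ru_maxrss is reported in KiB on Linux but in bytes on macOS). It reserves a large anonymous mapping and then touches only a few pages, so the virtual size is large while the resident set barely moves:

import mmap
import resource

GiB = 1024 ** 3

def max_rss():
    # Maximum resident set size so far (KiB on Linux, bytes on macOS)
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

# Reserve 4 GiB of private anonymous memory. Passing -1 as the file
# descriptor makes the mapping anonymous; the kernel only records the
# address range and assigns no physical pages yet.
region = mmap.mmap(-1, 4 * GiB, flags=mmap.MAP_PRIVATE)
print("after reserving 4 GiB, max RSS:", max_rss())

# Touch ten pages; only these are faulted into physical memory.
page = mmap.PAGESIZE
for offset in range(0, 10 * page, page):
    region[offset] = 1
print("after touching 10 pages, max RSS:", max_rss())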


I have no idea why creating a single huge array doesn't work on Linux or Windows, but I'd expect it to have more to do with the calloc() implementation in the platform's libc, and the limits imposed there, than with the operating system itself.



For fun, try running arrays = [np.ones((21000, 21000)) for _ in range(0, 10000)]. You'll definitely get an out-of-memory error, even on macOS or Linux with swap compression. Yes, certain OSes can compress RAM, but they can't compress it to the point where you wouldn't run out of memory.
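A scaled-down way to observe that difference without actually exhausting memory (again assuming a Unix-like system; the shapes here are arbitrary) is to compare the resident set after np.zeros, whose pages are typically never touched, with np.ones, which writes every element:

import numpy as np
import resource

def max_rss():
    # KiB on Linux, bytes on macOS
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

z = np.zeros((20000, 20000))   # ~3.2 GB of virtual memory, pages usually untouched
print("after np.zeros:", max_rss())

o = np.ones((5000, 5000))      # ~200 MB, every element written, hence resident
print("after np.ones :", max_rss())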
































• I tried your first example, and Linux did indeed allocate 32 TiB of virtual memory on a 128 GB memory server. However, a MemoryError is still raised by my example array = np.zeros((210000, 210000)), which would only need 352 GB of virtual memory, far less than the 32 TiB.

            – Blaise Wang
            9 hours ago













          • @BlaiseWang Right, I addressed that in my answer "I have no idea why creating a single huge array doesn't work on Linux or Windows, but I'd expect it to have more to do with the platform's implementation of libc and the limits imposed there than the operating system." If you'd really like to know why, I'd suggest you review the code in code.woboq.org/userspace/glibc/malloc/malloc.c.html (I can't be bothered to do so)

            – user60561
            8 hours ago











• @BlaiseWang It's because NumPy is filling the array with zeroes. malloc() typically doesn't care about what's in the allocated memory when it gets it, so Unix might do the same with the pages it returns - as soon as you write zeroes, though, it gets allocated. Some versions of Unix, like the one on user60561's machine, might also guarantee that new pages are zeroed out, and not allocate memory unless the written value is not a zero (and just return zero if an unwritten page is read).

            – TheHansinator
            7 hours ago











• @TheHansinator It's a reasonable guess, but it still cannot explain why 32 TiB of virtual memory can be allocated while 350 GB cannot.

            – Blaise Wang
            6 hours ago











• Indeed, my interpretation of the compression system was wrong: it applies to inactive pages in physical memory, not to unaddressed virtual-memory pages of the current process. The VM overcommitting behaviour of the macOS kernel is very hard to track down (Apple stopped updating their Virtual Memory and kernel internals documentation in 2013), but since the behaviour is configurable on Linux, perhaps the OP should experiment with adjusting the overcommit policy.

            – Martijn Pieters
            1 hour ago
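As a follow-up to that last point: on Linux the overcommit policy is exposed through /proc/sys/vm/overcommit_memory (0 = heuristic overcommit, the default; 1 = always overcommit; 2 = strict accounting), so a quick way to check what a given test machine is doing is:

# Linux only: read the current kernel overcommit policy
with open("/proc/sys/vm/overcommit_memory") as f:
    print("vm.overcommit_memory =", f.read().strip())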











