Memory error bad allocation

Python's MemoryError exception often stumps beginner developers, yet the problem can usually be fixed within ten minutes.

I first ran into MemoryError while working with a huge array of keywords: about 40 million rows. Thrilled with my brilliant script, I hit Shift + F10 and twenty seconds later got a MemoryError.

MemoryError is the exception raised when an operation exceeds the memory the OS has allocated to the process, in a situation that can still be rescued by deleting objects. For anyone who wants to dig into the exception and its exact wording, here is the documentation. Link to the MemoryError documentation.

If you are curious how to trigger this exception yourself, try running the code below.

print('a' * 1000000000000)

Why does MemoryError occur?

Broadly, there are only a few main causes:

  • A 32-bit Python build: Windows gives a 32-bit application only 2 GB of user-mode address space (up to 4 GB with the LARGEADDRESSAWARE flag), so any serious workload ends in a MemoryError
  • Unoptimized code
  • Oversized datasets and other input files
  • Broken package installations

How do I fix a MemoryError?

The error is caused by a 32-bit build

This one is easy: follow the guide below, and in ten minutes your code will be running again.

How do I check my Python version?

Open cmd (Win + R -> cmd) and run python. You will get something like

Python 3.8.8 (tags/v3.8.8:024d805, Feb 19 2021, 13:18:16) [MSC v.1928 64 bit (AMD64)]

The part we care about is [MSC v.1928 64 bit (AMD64)]; since you are hitting MemoryError, it most likely says 32 bit in your case.

How do I install a 64-bit Python?

Go to the official Python site and download the 64-bit installer. Link to the page with official releases. The version we need shows 64-bit in parentheses. Whether to uninstall the 32-bit version is your choice; I usually remove it so I don't mix them up in the IDE. All that remains is to switch the interpreter.

In PyCharm, go to File -> Settings -> Project -> Python Interpreter -> gear icon -> Add -> New environment -> Base Interpreter and pick python.exe from the freshly installed directory. For me that is

C:/Users/Core/AppData/LocalPrograms/Python/Python38

That's it: run the script again and everything works as it should.

Optimizing your code

A couple of times my own quick hacks led to a MemoryError: redundant conditions, loops, and buffer variables that were never deleted once they stopped being needed. If you suspect this is your problem, it may be worth hacking in a few del statements to drop object references manually. But remember that the real problem is your project's architecture, and the only genuine fix is to rework the project's structure properly.

Explicitly freeing memory with the garbage collector

In about 90% of cases the problem is solved by reinstalling Python, but I still have to tell you about the gc module. The garbage collector is worth reading about separately, on authoritative resources written by professional programmers; you really should know what happens under the hood of memory management. And GC is not just a Python thing: memory management in Java and many other languages is also built on garbage collection. Here is how we can manually free memory in Python:
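A minimal sketch (the variable name big_list is just an illustration): delete the last reference to a large object, then run a full collection pass.

```python
import gc

big_list = [i for i in range(1_000_000)]  # a large temporary object
del big_list       # drop the last reference; refcounting frees the list right away
freed = gc.collect()  # a full pass also reclaims anything stuck in reference cycles
print(f"gc.collect() found {freed} unreachable objects")
```

Note that `del` alone already lets reference counting reclaim the list; `gc.collect()` matters mainly for objects caught in reference cycles.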

What is Memory Error?

A Python MemoryError is, in layman's terms, exactly what it sounds like: you have run out of memory in your RAM for your code to execute.

When this error occurs, it is likely because you have loaded the entire dataset into memory. For large datasets you will want to use batch processing: instead of loading the whole dataset into memory, keep the data on your hard drive and access it in batches.
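A stdlib-only sketch of that idea (the file path and batch size are hypothetical): read a large file a fixed number of lines at a time instead of loading it whole.

```python
from itertools import islice

def iter_batches(path, batch_size=10_000):
    """Yield lists of at most batch_size lines; only one batch is in memory."""
    with open(path) as f:
        while True:
            batch = list(islice(f, batch_size))
            if not batch:
                return
            yield batch

# usage sketch: process() stands in for your own code
# for batch in iter_batches("huge_dataset.txt", batch_size=100_000):
#     process(batch)
```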

A memory error means that your program has run out of memory: it is somehow creating too many objects. In that case, look for the parts of your algorithm that could be consuming a lot of memory.

If an operation runs out of memory it is known as memory error.

Types of Python Memory Error

Unexpected Memory Error in Python

If you get an unexpected Python MemoryError even though you think you should have plenty of RAM available, it might be because you are using a 32-bit Python installation.

The easy solution for Unexpected Python Memory Error

Your program is running out of virtual address space, most probably because you're using a 32-bit version of Python: Windows (and most other OSes as well) limits 32-bit applications to 2 GB of user-mode address space.

We Python Poolers recommend installing a 64-bit version of Python (and, if you can, I'd recommend upgrading to Python 3 for other reasons); it will use more memory, but it will also have access to far more memory space (and more physical RAM as well).

The issue is that 32-bit Python only has access to ~4 GB of address space. This can shrink even further if your operating system is itself 32-bit, because of operating-system overhead.

For example, in Python 2 the zip function takes multiple iterables and returns a full list of tuples, built up front. During a loop we need each item only once, so there is no reason to store all of them in memory; it is better to use izip, which retrieves each item lazily on each iteration. Python 3's zip behaves like izip by default.
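A quick Python 3 check of that behaviour, with toy values: zip hands back a lazy iterator, so tuples are produced one at a time during the loop rather than stored as a full list.

```python
# In Python 3, zip is lazy: it yields one tuple per iteration on demand.
pairs = zip(range(3), "xyz")
print(type(pairs).__name__)  # -> zip (an iterator, not a list)
print(next(pairs))           # -> (0, 'x'); tuples materialize one at a time
```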


Python Memory Error Due to Dataset

As with the point about 32-bit and 64-bit versions covered above, another possibility is dataset size, if you're working with a large dataset. Loading a large dataset directly into memory, performing computations on it, and saving intermediate results of those computations can quickly fill up your memory. Generator functions come in very handy if this is your problem, and many popular Python libraries such as Keras and TensorFlow have specific functions and classes for generators.
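A library-free sketch of the generator idea (the squaring step is a made-up stand-in for your real feature computation): each batch of intermediate results is computed only when the consumer asks for it, so results never pile up in memory.

```python
def feature_batches(n_rows, batch_size=1_000):
    """Yield computed batches on demand instead of building one giant list."""
    for start in range(0, n_rows, batch_size):
        end = min(start + batch_size, n_rows)
        yield [x * x for x in range(start, end)]  # placeholder "feature" computation

# the consumer drives the computation one batch at a time
total_rows = sum(len(batch) for batch in feature_batches(10_000))
print(total_rows)  # -> 10000
```

Keras and TensorFlow generators follow the same shape: the training loop pulls one batch at a time instead of receiving the whole dataset.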

Python Memory Error Due to Improper Installation of Python

Improper installation of Python packages may also lead to a MemoryError. As a matter of fact, before solving this problem we had manually installed Python 2.7 and the packages we needed on Windows; after messing around for almost two days trying to figure out what was wrong, we reinstalled everything with Conda and the problem was solved.

We guess Conda is installing packages with better memory management, and that was the main reason. So you can try installing your Python packages using Conda; it may solve the MemoryError issue.

Most platforms return an “Out of Memory” error if an attempt to allocate a block of memory fails, but the root cause of that problem very rarely has anything to do with truly being “out of memory.” That’s because, on almost every modern operating system, the memory manager will happily use your available hard disk space as a place to store pages of memory that don’t fit in RAM; your computer can usually keep allocating memory until the disk fills up, which may lead to a Python out-of-memory error (or until a swap limit is hit; in Windows, see System Properties > Performance Options > Advanced > Virtual memory).

Making matters much worse, every active allocation in the program’s address space can cause “fragmentation” that can prevent future allocations by splitting available memory into chunks that are individually too small to satisfy a new allocation with one contiguous block.

1. If a 32-bit application has the LARGEADDRESSAWARE flag set, it has access to a full 4 GB of address space when running on a 64-bit version of Windows.

2. So far, four readers have written to explain that the gcAllowVeryLargeObjects flag removes this .NET limitation. It does not. This flag allows objects which occupy more than 2 GB of memory, but it does not permit a single-dimensional array to contain more than 2^31 entries.

How can I explicitly free memory in Python?

Suppose you wrote a Python program that acts on a large input file to create a few million objects, and it is taking tons of memory. What is the best way to tell Python that you no longer need some of the data, so it can be freed?

The Simple answer to this problem is:

Force the garbage collector to release unreferenced memory with gc.collect(), as shown below:

import gc

gc.collect()


Memory error in Python when 50+GB is free and using 64bit python?

On some operating systems there are limits to how much RAM a single process can use. So even if there is enough RAM free, your single thread (= running on one core) cannot take more. I don't know whether this is valid for your Windows version, though.

How do you set the memory usage for python programs?

Python uses garbage collection and built-in memory management to ensure the program only uses as much RAM as required. So unless you expressly write your program in such a way to bloat the memory usage, e.g. making a database in RAM, Python only uses what it needs.

Which begs the question, why would you want to use more RAM? The idea for most programmers is to minimize resource usage.

If you want to limit the Python VM's memory usage, you can try this:

  • On Linux, use the ulimit command to limit memory usage for Python
  • Use the resource module to limit the program's memory usage

If you want to speed your program up by giving it more memory, you could try:

  • threading, multiprocessing
  • PyPy
  • Psyco (on Python 2.5 only)

How to put limits on Memory and CPU Usage

You can put limits on the memory or CPU use of a running program, so that it will not hit a memory error. The resource module lets you do both, as shown in the code given below:

Code #1: Restrict CPU time

# importing libraries
import signal
import resource
import os

# handler invoked when the CPU-time limit is exceeded
def time_exceeded(signo, frame):
    print("Time's up!")
    raise SystemExit(1)

def set_max_runtime(seconds):
    # setting up the resource limit (RLIMIT_CPU is measured in seconds)
    soft, hard = resource.getrlimit(resource.RLIMIT_CPU)
    resource.setrlimit(resource.RLIMIT_CPU, (seconds, hard))
    signal.signal(signal.SIGXCPU, time_exceeded)

# maximum run time of 15 seconds of CPU time
if __name__ == '__main__':
    set_max_runtime(15)
    while True:
        pass

Code #2: In order to restrict memory use, the code puts a limit on the total address space

# using resource 
import resource 

def limit_memory(maxsize): 
	soft, hard = resource.getrlimit(resource.RLIMIT_AS) 
	resource.setrlimit(resource.RLIMIT_AS, (maxsize, hard)) 
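A hedged usage sketch of limit_memory (Linux-oriented: RLIMIT_AS is unavailable on Windows, and the 2 GiB figure is arbitrary): once the cap is set, an oversized allocation raises MemoryError instead of swapping the machine to death. The function is repeated here so the snippet runs on its own.

```python
import resource

def limit_memory(maxsize):
    soft, hard = resource.getrlimit(resource.RLIMIT_AS)
    resource.setrlimit(resource.RLIMIT_AS, (maxsize, hard))

limit_memory(2 << 30)             # cap the address space at ~2 GiB
try:
    buf = bytearray(16 << 30)     # try to allocate 16 GiB
except MemoryError:
    print("allocation refused with MemoryError")
```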

Ways to Handle Python Memory Error and Large Data Files

1. Allocate More Memory

Some Python tools or libraries may be limited by a default memory configuration.

Check if you can re-configure your tool or library to allocate more memory.

Some such tools are platforms designed for handling very large datasets, which let you run data transforms and machine learning algorithms on top of them.

A good example is Weka, where you can increase the memory as a parameter when starting the application.

2. Work with a Smaller Sample

Are you sure you need to work with all of the data?

Take a random sample of your data, such as the first 1,000 or 100,000 rows. Use this smaller sample to work through your problem before fitting a final model on all of your data (using progressive data loading techniques).
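Both flavours of sampling can be sketched with the stdlib (`rows` stands in for your dataset; the sizes are arbitrary):

```python
import random
from itertools import islice

rows = (f"row-{i}" for i in range(1_000_000))  # pretend this is a huge data source

head = list(islice(rows, 1_000))       # cheap: just the first 1,000 rows
print(len(head))                       # -> 1000

random.seed(0)                         # reproducible sketch
sample = random.sample(range(100_000), 1_000)  # a true random subset
print(len(sample))                     # -> 1000
```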

I think this is a good practice in general for machine learning to give you quick spot-checks of algorithms and turnaround of results.

You may also consider performing a sensitivity analysis of the amount of data used to fit one algorithm compared to the model skill. Perhaps there is a natural point of diminishing returns that you can use as a heuristic size of your smaller sample.

3. Use a Computer with More Memory

Do you have to work on your computer?

Perhaps you can get access to a much larger computer with an order of magnitude more memory.

For example, a good option is to rent compute time on a cloud service like Amazon Web Services that offers machines with tens of gigabytes of RAM for less than a US dollar per hour.

4. Use a Relational Database

Relational databases provide a standard way of storing and accessing very large datasets.

Internally, the data is stored on disk, can be loaded progressively in batches, and can be queried using a standard query language (SQL).

Free open-source database tools like MySQL or Postgres can be used and most (all?) programming languages and many machine learning tools can connect directly to relational databases. You can also use a lightweight approach, such as SQLite.
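A minimal sqlite3 sketch (the table and column names are invented; ":memory:" keeps the demo self-contained, while a file path gives the on-disk behaviour described above): insert rows, then pull them back with fetchmany so only one batch is resident at a time.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # use e.g. "data.db" for an actual on-disk database
conn.execute("CREATE TABLE samples (id INTEGER PRIMARY KEY, value REAL)")
conn.executemany(
    "INSERT INTO samples (value) VALUES (?)",
    ((float(i),) for i in range(10_000)),
)

cursor = conn.execute("SELECT value FROM samples")
batches = 0
while True:
    batch = cursor.fetchmany(1_000)  # a batch at a time, never fetchall()
    if not batch:
        break
    batches += 1
print(batches)  # -> 10
conn.close()
```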

5. Use a Big Data Platform

In some cases, you may need to resort to a big data platform.

Summary

In this post, you discovered a number of tactics and ways that you can use when dealing with Python Memory Error.

Are there other methods that you know about or have tried?
Share them in the comments below.

Have you tried any of these methods?
Let me know in the comments.

If your problem is still not solved and you need help with a Python MemoryError, comment down below and we will try to solve your issue asap.

What exactly is a Memory Error?

Python Memory Error or, in layman’s terms, you’ve run out of Random access memory (RAM) to sustain the running of your code. This error indicates that you have loaded all of the data into memory. For large datasets, batch processing is advised. Instead of packing your complete dataset into memory, please save it to your hard disk and access it in batches.

Your software has run out of memory, resulting in a memory error. It indicates that your program generates an excessive number of items. You’ll need to check for parts of your algorithm that consume a lot of memory in your case.

A memory error occurs when an operation runs out of memory.

Python has a fallback exception, as do all programming languages, for when the interpreter runs out of memory and must abandon the current execution. Python issues a MemoryError in these (hopefully infrequent) cases, giving the script a chance to catch up and break free from the present memory dearth. However, because Python’s memory-management architecture is based on the C language’s malloc() function, it is not guaranteed that every process will recover; in some situations, a MemoryError will result in an unrecoverable crash.

A MemoryError usually signals a severe fault in the present application. A program that takes files or user data input, for example, may encounter MemoryErrors if it lacks proper sanity checks. Memory restrictions might cause problems in various situations, but we’ll stick with a simple allocation in local memory utilizing strings and arrays for our code example.

The computer architecture on which the executing system runs is the most crucial element in whether your applications are likely to incur MemoryErrors. Or, to be more particular, the architecture of the Python version you’re using. The maximum memory allocation granted to the Python process is meager if you’re running a 32-bit Python. The maximum memory allocation limit fluctuates and is dependent on your system. However, it is generally around 2 GB and never exceeds 4 GB.

64-bit Python versions, on the other hand, are essentially restricted only by the amount of memory available on your system. Thus, in practice, a 64-bit Python interpreter is unlikely to have memory problems, and if it does, the pain is much more severe because it would most likely affect the rest of the system.

To verify this, we can use psutil to get information about memory on the running system, specifically the psutil.virtual_memory() method, which returns current memory-consumption statistics when called.
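A sketch of that check, assuming the third-party psutil package is installed (pip install psutil); the print_memory_usage helper is our own name, not a psutil API:

```python
import psutil  # third-party: pip install psutil

def print_memory_usage():
    """Print system-wide memory statistics reported by psutil."""
    vm = psutil.virtual_memory()
    print(f"total:     {vm.total / 1e9:6.2f} GB")
    print(f"available: {vm.available / 1e9:6.2f} GB")
    print(f"used:      {vm.percent:6.1f} %")

print_memory_usage()
```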

There are Several Types of Python Memory Errors

An Unexpected Memory Error in Python

Even if you have enough RAM, you could get an unexpected Python MemoryError if you are using a 32-bit Python installation.

Unexpected Python Memory Error: A Simple Solution

Your software has used up all of the virtual address space available to it, most likely because you're using a 32-bit Python version: 32-bit applications are limited to 2 GB of user-mode address space in Windows (and most other operating systems).

We Python Poolers recommend installing a 64-bit version of Python (if possible, update to Python 3 for various reasons); it will use more memory, but it will also have much more memory space available (and more physical RAM as well).

The problem is that 32-bit Python only has access to about 4 GB of address space, and this can be reduced even further by operating-system overhead if your operating system is itself 32-bit.

For example, the zip function in Python 2 accepts multiple iterables and builds a full list of tuples. For looping we only require each item once, so we don't need to keep all of the tuples in memory; it is preferable to utilize izip, which retrieves each item lazily on every cycle. Python 3's zip behaves like izip by default.

Memory Error in Python Because of the Dataset

Another choice, if you’re working with a huge dataset, is dataset size. The latter has already been mentioned concerning 32-bit and 64-bit versions. Loading a vast dataset into memory and running computations on it, and preserving intermediate results of such calculations can quickly consume memory. If this is the case, generator functions can be pretty helpful. Many major Python libraries, such as Keras and TensorFlow, include dedicated generator methods and classes.

Memory Error in Python Because Python Was Installed Incorrectly

Improper Python package installation can also result in a MemoryError. In fact, before resolving the issue, we had manually installed Python 2.7 and the programs we needed on Windows. After spending nearly two days attempting to figure out what was wrong, we reinstalled everything using Conda, and the issue was resolved.

Conda is probably installing improved memory management packages, which is the main reason. So you might try installing Python Packages with Conda to see if that fixes the Memory Error.

Conda is a free and open-source package management and environment management system for Windows, Mac OS X, and Linux. Conda is a package manager that installs, runs, and updates packages and their dependencies in a matter of seconds.

Python Out of Memory Error

When an attempt to allocate a block of memory fails, most systems return an “Out of Memory” error, but the core cause of the problem rarely has anything to do with being “out of memory.” That’s because the memory manager on almost every modern operating system will gladly use your available hard disk space for storing memory pages that don’t fit in RAM. In addition, your computer can usually allocate memory until the disk fills up, which may result in a Python Out of Memory Error (or a swap limit is reached; in Windows, see System Properties > Performance Options > Advanced > Virtual memory).

To make matters worse, every current allocation in the program’s address space can result in “fragmentation,” which prevents further allocations by dividing available memory into chunks that are individually too small to satisfy a new allocation with a single contiguous block.

  • When operating on a 64bit version of Windows, a 32bit application with the LARGEADDRESSAWARE flag set has access to the entire 4GB of address space.
  • Four readers have written in to say that the gcAllowVeryLargeObjects setting removes this .NET restriction. It does not. This setting permits objects to take up more than 2 GB of memory, but it still does not allow a single-dimensional array to contain more than 2^31 entries.

In Python, how do I manually free memory?

If you’ve written a Python program that uses a large input file to generate a few million objects, and it’s eating up a lot of memory, what’s the best approach to tell Python that some of the data is no longer needed and may be freed?

This problem has a simple solution:

You can make the garbage collector release unreferenced memory by calling gc.collect().
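A short illustration (the Node class is a made-up example): plain del frees most objects via reference counting, while gc.collect() sweeps up reference cycles that refcounting alone cannot reclaim.

```python
import gc

class Node:
    """Toy object that can participate in a reference cycle."""
    def __init__(self):
        self.other = None

a, b = Node(), Node()
a.other, b.other = b, a   # a <-> b now form a reference cycle
del a, b                  # the cycle is unreachable, but not yet freed
found = gc.collect()      # the collector detects and breaks the cycle
print(f"unreachable objects found: {found}")
```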

Do you get a memory error when more than 50 GB is free and you're using 64-bit Python?

On some operating systems there are limits to how much RAM a single process can use. So, even if there is adequate RAM available, your single thread (= one core) will not be able to take more. However, we are not certain that this applies to your Windows version.

How can you make python scripts use less memory?

Python uses garbage collection and built-in memory management to ensure that the application consumes only as much memory as needed. So, unless you explicitly construct your program to balloon its memory utilization, such as by creating a RAM database, Python only utilizes what it requires.

Which raises the question of why you would want to consume more RAM in the first place. For most programmers, the goal is to use as few resources as possible.

If you wish to keep the Python virtual machine's memory use to a minimum, try this:

  • On Linux, use the ulimit command to set a memory limit for Python.
  • You can use the resource module to limit how much memory the program uses

Consider the following if you wish to speed up your software by giving it more memory:

  • Multiprocessing, threading
  • Psyco (on Python 2.5 only)

How can I set memory and CPU usage limits?

You can limit the amount of memory or CPU used by an application while it is running, so that memory problems never arise. The resource module accomplishes both tasks, as demonstrated in the code below:

Code 1: Limit CPU usage

# libraries being imported
import signal
import resource
import os

# confirm_exceed_in_time.py 
# confirm if there is an exceed in time limit
def exceeded_time(sig_number, frame):
    print("Time is finally up !")
    raise SystemExit(1)
 
def maximum_runtime(count_seconds):
    # resource limit setup 
    if_soft, if_hard = resource.getrlimit(resource.RLIMIT_CPU)
    resource.setrlimit(resource.RLIMIT_CPU, (count_seconds, if_hard))
    signal.signal(signal.SIGXCPU, exceeded_time)
 
# set a maximum running time of 25 seconds of CPU time
if __name__ == '__main__':
    maximum_runtime(25)
    while True:
        pass

Code #2: To minimize memory usage, the code restricts the total address space available.

# using resource
import resource
 
def limit_memory(maxsize):
    if_soft, if_hard = resource.getrlimit(resource.RLIMIT_AS)
    resource.setrlimit(resource.RLIMIT_AS, (maxsize, if_hard))

How to Deal with Python Memory Errors and Big Data Files

Increase the amount of memory available

A default memory setup may limit some Python tools or modules.

Check to see if your tool or library may be re-configured to allocate more RAM.

Some such tools are platforms built to handle massive datasets, with data transforms and machine learning algorithms applied on top.
Weka is a fantastic example, as you may increase its memory as a parameter when running the app.

Use a Smaller Sample Size

Are you sure you require all of the data?

Take a sample of your data, such as the first 5,000 or 100,000 rows, and use this smaller sample to work through your problem before fitting a final model on all of your data (using progressive data loading techniques).

It is an excellent practice for machine learning in general, as it allows for quick spot-checks of algorithms and results turnaround.

You may also compare, in a sensitivity analysis, the amount of data used to fit one algorithm against the resulting model skill. Perhaps there is a natural point of diminishing returns that you can use as a guideline for the size of your smaller sample.

Make use of a computer that has more memory

Is it necessary for you to use your own computer? It is often possible to lay your hands on a considerably larger machine with significantly more memory. A good example is renting computing time from a cloud provider like Amazon Web Services, which offers machines with tens of gigabytes of RAM for less than a dollar per hour.

Make use of a database that is relational

Relational databases are a standard method of storing and retrieving massive datasets.

Internally, data is saved on a disk, loaded in batches, and searched using a standard query language (SQL).

Most (all?) programming languages and many machine learning tools can connect directly to relational databases, as can free open-source database solutions like MySQL or Postgres. You can also use SQLite, which is a lightweight database.

Use a big data platform to your advantage

In some cases, you may need to use a big data platform.

Error Monitoring Software

Airbrake’s powerful error monitoring software delivers real-time error monitoring and automatic exception reporting for all of your development projects. Airbrake’s web dashboard keeps you up to date on your application’s health and error rates at all times. Airbrake integrates with all of the most common languages and frameworks, no matter what you’re working on. Furthermore, Airbrake makes it simple to adjust exception parameters while giving you complete control over the active error filter system, ensuring that only the most critical errors are collected.

Check out Airbrake’s error monitoring software for yourself and see why so many of the world’s greatest engineering teams rely on it to transform their exception handling techniques!

Summary

In this article, you learned about various strategies and methods for coping with Python Memory Error.

Would you mind letting us know in the comments section if you have used any of these methods?

Author Topic: Crash on export of a specific file  (Read 8525 times)

Trying to export Battlegrounds’ map (Link to the file: https://ufile.io/wst7r )
When trying to export the model viewer crashes with the following error:

Loading package: /TslGame/Content/UI/HUD/Erangel_Minimap.uasset Ver: 508/0 Engine: 0 [Unversioned] Names: 15 Exports: 1 Imports: 3 Game: 10000E0
Memory: allocated 754084 bytes in 5165 blocks
Loading Texture2D Erangel_Minimap from package /TslGame/Content/UI/HUD/Erangel_Minimap.uasset
Loading data for format PF_DXT1 …
Loaded in 0.22 sec, 1035 allocs, 15.39 MBytes serialized in 511 calls.
Exporting objects …
Exporting Texture2D Erangel_Minimap to C:/Users/Chaos/Desktop/UE4/UmodelExport/UI/HUD
******** /TslGame/Content/UI/HUD/Erangel_Minimap.uasset ********
*** ERROR: Memory: bad allocation size 268435456 bytes
appMalloc:size=268435456 (total=32 Mbytes) <- CTextureData::Decompress:fmt=PF_DXT1(5) <- ExportTexture <- ExportObject:Texture2D’Erangel_Minimap’ <- ExportObjects <- CUmodelApp::ShowPackageUI <- Main:umodel_version=612




UModel's code does not allow allocating memory blocks larger than or equal to 256 MB; a request that big is usually just a bad allocation. In your case it allocates exactly a 256 MB memory block. I've made some calculations, and it seems there's a 16384×16384 texture. That texture size might also be unsupported by your rendering hardware, so it is bad as well.

If UModel tries to save such a texture in uncompressed (tga) format, it will allocate 1 GB of memory for DXT decompression (16k x 16k x 4 bytes per pixel), so the allocation will fail again. Even worse, UModel is a 32-bit application, and 32-bit applications usually fail to allocate memory blocks as large as 1 GB, probably because of memory fragmentation, I don't know.

A possible solution for you:
1. Download UModel's source code.
2. Disable the check for "bad memory block size".
3. Compile it yourself for a 64-bit platform.

Or — just ignore this texture.


Logged

View Profile


Well, I need this specific texture, so I can't ignore it.
I will try your solution later today and will post again! Thanks


Logged

View Profile


I disabled the upper limit, but can’t compile

void *appMalloc(int size, int alignment)
{
	guard(appMalloc);
	if (size < 0 /* || size >= (256<<20) */) // upper limit for single allocation is 256Mb
		appError("Memory: bad allocation size %d bytes", size);

./../Libs/msvcrt/startup/crtexe2.obj : fatal error LNK1112: module machine type ‘X86’ conflicts with target machine type ‘x64’

EDIT:
I tried compiling as x86 and it was able to export my texture!

« Last Edit: August 07, 2017, 17:15 by Chaoss »


A new question though: if the texture is 16384, why does it export an 8192 tga?




I don’t know if texture should be 16k — it was just my assumption.




Hi, how do I disable the upper limit? And where is it in the source code?




I know this is a little old, but I had this issue when trying to open files for AA2. Each time I would select the root folder and try to open a file, it would crash with "ERROR: Memory: bad allocation size xx bytes". I moved the folder containing the .usx files I was trying to open to the desktop and gave it a short name (sMesh). It then opened every file without issue. I relayed this to a friend who was having the same issue, with the same result. I hope this helps anyone else experiencing this error.



The default memory allocation operator, ::operator new(std::size_t), throws a std::bad_alloc exception if the allocation fails. Therefore, you need not check whether calling ::operator new(std::size_t) results in nullptr. The nonthrowing form, ::operator new(std::size_t, const std::nothrow_t &), does not throw an exception if the allocation fails but instead returns nullptr. The same behaviors apply for the operator new[] versions of both allocation functions. Additionally, the default allocator object (std::allocator) uses ::operator new(std::size_t) to perform allocations and should be treated similarly.

T *p1 = new T; // Throws std::bad_alloc if allocation fails
T *p2 = new (std::nothrow) T; // Returns nullptr if allocation fails

T *p3 = new T[1]; // Throws std::bad_alloc if the allocation fails
T *p4 = new (std::nothrow) T[1]; // Returns nullptr if the allocation fails

Furthermore, operator new[] can throw an exception of type std::bad_array_new_length, a subclass of std::bad_alloc, if the size argument passed to new is negative or excessively large.

When using the nonthrowing form, it is imperative to check that the return value is not nullptr before accessing the resulting pointer. When using either form, be sure to comply with ERR50-CPP. Do not abruptly terminate the program.

Noncompliant Code Example

In this noncompliant code example, an array of int is created using ::operator new[](std::size_t) and the results of the allocation are not checked. The function is marked as noexcept, so the caller assumes this function does not throw any exceptions. Because ::operator new[](std::size_t) can throw an exception if the allocation fails, it could lead to abnormal termination of the program.

#include <cstring>
 
void f(const int *array, std::size_t size) noexcept {
  int *copy = new int[size];
  std::memcpy(copy, array, size * sizeof(*copy));
  // ...
  delete [] copy;
}

Compliant Solution (std::nothrow)

When using std::nothrow, the new operator returns either a null pointer or a pointer to the allocated space. Always test the returned pointer to ensure it is not nullptr before referencing the pointer. This compliant solution handles the error condition appropriately when the returned pointer is nullptr.

#include <cstring>
#include <new>
 
void f(const int *array, std::size_t size) noexcept {
  int *copy = new (std::nothrow) int[size];
  if (!copy) {
    // Handle error
    return;
  }
  std::memcpy(copy, array, size * sizeof(*copy));
  // ...
  delete [] copy;
}

Compliant Solution (std::bad_alloc)

Alternatively, you can use ::operator new[] without std::nothrow and instead catch a std::bad_alloc exception if sufficient memory cannot be allocated.

#include <cstring>
#include <new>
 
void f(const int *array, std::size_t size) noexcept {
  int *copy;
  try {
    copy = new int[size];
  } catch (const std::bad_alloc &) {
    // Handle error
    return;
  }
  // At this point, copy has been initialized to allocated memory
  std::memcpy(copy, array, size * sizeof(*copy));
  // ...
  delete [] copy;
}

Compliant Solution (noexcept(false))

If the design of the function is such that the caller is expected to handle exceptional situations, it is permissible to mark the function explicitly as one that may throw, as in this compliant solution. Marking the function is not strictly required, as any function without a noexcept specifier is presumed to allow throwing.

#include <cstring>
 
void f(const int *array, std::size_t size) noexcept(false) {
  int *copy = new int[size];
  // If the allocation fails, it will throw an exception which the caller
  // will have to handle.
  std::memcpy(copy, array, size * sizeof(*copy));
  // ...
  delete [] copy;
}
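
For illustration, a hypothetical caller of this f (call_f_safely is our name, not part of the rule) that takes on the error handling might look like this:

```cpp
#include <cstddef>
#include <cstring>
#include <new>

void f(const int *array, std::size_t size) noexcept(false) {
  int *copy = new int[size]; // may throw std::bad_alloc
  std::memcpy(copy, array, size * sizeof(*copy));
  // ...
  delete [] copy;
}

// The caller, not f, decides what an allocation failure means.
bool call_f_safely(const int *array, std::size_t size) {
  try {
    f(array, size);
    return true;
  } catch (const std::bad_alloc &) {
    return false; // handle the failure here
  }
}
```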

Noncompliant Code Example

In this noncompliant code example, two memory allocations are performed within the same expression. Because the memory allocations are passed as arguments to a function call, an exception thrown as a result of one of the calls to new could result in a memory leak.

struct A { /* ... */ };
struct B { /* ... */ }; 
 
void g(A *, B *);
void f() {
  g(new A, new B);
}

Consider the situation in which A is allocated and constructed first, and then B is allocated and throws an exception. Wrapping the call to g() in a try/catch block is insufficient because it would be impossible to free the memory allocated for A.

This noncompliant code example also violates EXP50-CPP. Do not depend on the order of evaluation for side effects, because the order in which the arguments to g() are evaluated is unspecified.

Compliant Solution (std::unique_ptr)

In this compliant solution, a std::unique_ptr is used to manage the resources for the A and B objects with RAII. In the situation described by the noncompliant code example, B throwing an exception would still result in the destruction and deallocation of the A object when the std::unique_ptr<A> is destroyed.

#include <memory>
 
struct A { /* ... */ };
struct B { /* ... */ }; 
 
void g(std::unique_ptr<A> a, std::unique_ptr<B> b);
void f() {
  g(std::make_unique<A>(), std::make_unique<B>());
}
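
A small self-contained sketch (the structs and counter are ours) showing that stack unwinding destroys the already-constructed object:

```cpp
#include <memory>
#include <stdexcept>

static int a_alive = 0;
struct A { A() { ++a_alive; } ~A() { --a_alive; } };
struct B { B() { throw std::runtime_error("B construction failed"); } };

// A is built and owned first; B's constructor throws; unwinding then
// runs ~unique_ptr<A>, so no A object leaks.
bool a_cleaned_up() {
  try {
    auto a = std::make_unique<A>();
    auto b = std::make_unique<B>(); // throws
  } catch (const std::exception &) {
    // by the time we arrive here, ~A has already run
  }
  return a_alive == 0;
}
```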

Compliant Solution (References)

When possible, the more resilient compliant solution is to remove the memory allocation entirely and pass the objects by reference instead.

struct A { /* ... */ };
struct B { /* ... */ }; 
 
void g(A &a, B &b);
void f() {
  A a;
  B b;
  g(a, b);
}

Risk Assessment

Failing to detect allocation failures can lead to abnormal program termination and denial-of-service attacks.

If the vulnerable program references memory offset from the return value, an attacker can exploit the program to read or write arbitrary memory. This vulnerability has been used to execute arbitrary code [VU#159523].

Rule       Severity  Likelihood  Remediation Cost  Priority  Level
MEM52-CPP  High      Likely      Medium            P18       L1

Automated Detection

Compass/ROSE
Coverity 7.5: CHECKED_RETURN (finds inconsistencies in how function call return values are handled)
Helix QAC 2022.4: C++3225, C++3226, C++3227, C++3228, C++3229, C++4632
Klocwork 2022.4: NPD.CHECK.CALL.MIGHT, NPD.CHECK.CALL.MUST, NPD.CHECK.MIGHT, NPD.CHECK.MUST, NPD.CONST.CALL, NPD.CONST.DEREF, NPD.FUNC.CALL.MIGHT, NPD.FUNC.CALL.MUST, NPD.FUNC.MIGHT, NPD.FUNC.MUST, NPD.GEN.CALL.MIGHT, NPD.GEN.CALL.MUST, NPD.GEN.MIGHT, NPD.GEN.MUST, RNPD.CALL, RNPD.DEREF
LDRA tool suite 9.7.1: 45 D (partially implemented)
Parasoft C/C++test 2022.2: CERT_CPP-MEM52-a (check the return value of new), CERT_CPP-MEM52-b (do not allocate resources in a function argument list, because the order of evaluation of a function's arguments is unspecified)
Parasoft Insure++: runtime detection
Polyspace Bug Finder R2022b: CERT C++: MEM52-CPP (checks for unprotected dynamic memory allocation; rule partially covered)
PRQA QA-C++ 4.4: 3225, 3226, 3227, 3228, 3229, 4632
PVS-Studio 7.23: V522, V668

The vulnerability in Adobe Flash [VU#159523] arises because Flash neglects to check the return value from calloc(). Even though calloc() returns NULL, Flash does not attempt to read or write to the return value. Instead, it attempts to write to an offset from the return value. Dereferencing NULL usually results in a program crash, but dereferencing an offset from NULL allows an exploit to succeed without crashing the program.

Search for vulnerabilities resulting from the violation of this rule on the CERT website.

Bibliography

[ISO/IEC 9899:2011] Subclause 7.20.3, "Memory Management Functions"
[ISO/IEC 14882-2014] Subclause 18.6.1.1, "Single-Object Forms"; Subclause 18.6.1.2, "Array Forms"; Subclause 20.7.9.1, "Allocator Members"
[Meyers 1996] Item 7, "Be Prepared for Out-of-Memory Conditions"
[Seacord 2013] Chapter 4, "Dynamic Memory Management"

If you are putting them into a list, you are obviously limited by the amount of memory. So obviously you should not put them into a list, but process them one at a time.

slovazap ★★★★★

(07.11.19 22:25:31 MSK)


In reply to: comment by slovazap 07.11.19 22:25:31 MSK

Can you elaborate?

How many elements can you put into a list on Python x32?
How many elements can you put into a list on Python x64?
What other nuances are there?

KRex

(07.11.19 22:40:04 MSK)


In reply to: comment by Jopich1 07.11.19 22:30:06 MSK

In reply to: "In the know" by KRex 07.11.19 22:40:35 MSK

Re: In the know

import sys

KiB = 1024
MiB = 1024 * KiB
GiB = 1024 * MiB

with open('/proc/meminfo') as f:
    ram = f.readline()  # first line of meminfo: MemTotal in KiB
    ram = ram[ram.index(' '):ram.rindex(' ')].strip()
    ram = int(ram) * KiB / GiB

required = sys.getsizeof(42) * 1_000_000_000 / GiB

print('Needed:', required, 'GiB')
print('Available:', ram, 'GiB')

if required > ram:
    print('Have you no conscience?!')

anonymous

(07.11.19 23:02:05 MSK)


In reply to: "Re: In the know" by anonymous 07.11.19 23:02:05 MSK

os.sysconf('SC_PAGE_SIZE')*os.sysconf('SC_PHYS_PAGES')

anonymous

(07.11.19 23:14:46 MSK)

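
The same quantity, for comparison, in C++ on Linux (a sketch; _SC_PHYS_PAGES is a glibc/Linux extension of sysconf, not guaranteed by POSIX):

```cpp
#include <unistd.h>

// Total physical RAM in bytes: page size multiplied by the number of
// physical pages. Returns -1 if the system does not report either value.
long long physical_ram_bytes() {
  long pages = sysconf(_SC_PHYS_PAGES);
  long page_size = sysconf(_SC_PAGE_SIZE);
  if (pages < 0 || page_size < 0) {
    return -1;
  }
  return static_cast<long long>(pages) * page_size;
}
```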

In reply to: comment by anonymous 07.11.19 23:14:46 MSK

Thanks.

anonymous

(07.11.19 23:16:03 MSK)


It is not known even approximately: it depends on the platform, the allocator, system settings, heap fragmentation, the objects you are going to put in there (which themselves also need memory), and much else.

slovazap ★★★★★

(07.11.19 23:19:59 MSK)


python: how to cope with memory error?

This is not a Python problem, it is a system problem. You need to enable memory overcommit (set it to 1) on the system in question.

And MemoryError itself you can simply handle as an exception.

anonymous

(08.11.19 03:44:04 MSK)


What other nuances are there

If physical RAM runs out, the program will hang and the OOM killer will come, or it won't... If there are too many elements, you have to work with files or delete elements... I periodically run into CSV files of 10 GB+ that, once parsed, would take 500-600 GB of RAM. So I end up working with files: read and process 1000 records, then read again. What else can you do? And yes, I know about databases, but sometimes reading a file by hand is easier than fiddling with a database.

peregrine ★★★★★

(08.11.19 03:47:44 MSK)



Last edited: peregrine 08.11.19 03:51:16 MSK (2 edits total)
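
The batch pattern peregrine describes can be sketched in C++ (names here are illustrative): keep only a fixed number of records in memory, process them, drop them, and read on.

```cpp
#include <cstddef>
#include <istream>
#include <sstream>
#include <string>
#include <vector>

// Stream records in fixed-size batches so that at most batch_size of them
// are resident in memory at any moment. Returns the number of records seen.
std::size_t process_in_batches(std::istream &in, std::size_t batch_size) {
  std::vector<std::string> batch;
  std::string line;
  std::size_t total = 0;
  while (std::getline(in, line)) {
    batch.push_back(line);
    if (batch.size() == batch_size) {
      total += batch.size(); // process the batch here (parse, aggregate, write out)
      batch.clear();         // then drop it before reading more
    }
  }
  total += batch.size(); // records left in the final partial batch
  return total;
}

// Helper so the sketch can be exercised on an in-memory "file".
std::size_t count_records(const std::string &data, std::size_t batch_size) {
  std::istringstream in(data);
  return process_in_batches(in, batch_size);
}
```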

In reply to: "Re: In the know" by anonymous 07.11.19 23:02:05 MSK

Re: In the know

with open('/proc/meminfo') as f:

In many cases this will not give a good result: meminfo does not show the amount of free virtual memory. MemoryError occurs when virtual memory cannot be allocated, not when physical memory runs out.

anonymous

(08.11.19 03:48:13 MSK)


In reply to: "Re: In the know" by anonymous 08.11.19 03:48:13 MSK

Re: In the know

In many cases this will not give a good result: meminfo does not show the amount of free virtual memory. MemoryError occurs when virtual memory cannot be allocated, not when physical memory runs out.

What "good result" are you expecting? It gives the approximate amount of physical memory, as intended. The guy is trying to allocate a 25-gigabyte list. The impossibility of allocating virtual memory is somewhat predictable as a consequence of an acute shortage of physical memory.

Whether a quantity called "free virtual memory" even exists, and how to measure it, is a separate argument.

anonymous

(08.11.19 04:58:02 MSK)


In reply to: "Re: In the know" by anonymous 08.11.19 04:58:02 MSK

Re: In the know

Whether a quantity called "free virtual memory" even exists, and how to measure it, is a separate argument.

If overcommit is disabled, it is easy to measure.

The impossibility of allocating virtual memory is somewhat predictable

Only somewhat, exactly. Physical memory can be down to almost nothing, yet MemoryError will not occur because overcommit is enabled. And conversely, with overcommit disabled, an allocation error can happen long before physical memory is exhausted.

anonymous

(08.11.19 06:31:09 MSK)


In reply to: "Re: In the know" by anonymous 08.11.19 06:31:09 MSK

Re: In the know

If overcommit is disabled, it is easy to measure.

If overcommit is disabled, then what you will be measuring easily is free physical memory.

Physical memory can be down to almost nothing, yet MemoryError will not occur because overcommit is enabled.

It will occur, because he writes to that memory.

And conversely, with overcommit disabled, an allocation error can happen long before physical memory is exhausted.

It can indeed happen long before. But that remark of yours is beside the point, because the guy is clearly exhausting physical memory, and the point of the code you set out to correct is to demonstrate exactly that.

anonymous

(08.11.19 07:34:02 MSK)


In reply to: "Re: In the know" by anonymous 08.11.19 07:34:02 MSK

Re: In the know

If overcommit is disabled, then what you will be measuring easily is free physical memory.

              CommitLimit %lu (since Linux 2.6.10)
                     This is the total amount of memory currently available
                     to be allocated on the system, expressed in kilobytes.
                     This limit is adhered to only if strict overcommit
                     accounting is enabled (mode 2 in
                     /proc/sys/vm/overcommit_memory).  The limit is
                     calculated according to the formula described under
                     /proc/sys/vm/overcommit_memory.  For further details,
                     see the kernel source file
                     Documentation/vm/overcommit-accounting.rst.

              Committed_AS %lu
                     The amount of memory presently allocated on the system.
                     The committed memory is a sum of all of the memory
                     which has been allocated by processes, even if it has
                     not been "used" by them as of yet.  A process which
                     allocates 1GB of memory (using malloc(3) or similar),
                     but touches only 300MB of that memory will show up as
                     using only 300MB of memory even if it has the address
                     space allocated for the entire 1GB.

                     This 1GB is memory which has been "committed" to by the
                     VM and can be used at any time by the allocating
                     application.  With strict overcommit enabled on the
                     system (mode 2 in /proc/sys/vm/overcommit_memory),
                     allocations which would exceed the CommitLimit will not
                     be permitted.  This is useful if one needs to guarantee
                     that processes will not fail due to lack of memory once
                     that memory has been successfully allocated.

As for how nonsensical the rest of your claims are, figure that out yourself.

anonymous

(08.11.19 07:57:41 MSK)


In reply to: "Re: In the know" by anonymous 08.11.19 07:57:41 MSK

Re: In the know

If you have found a way to write more data into memory than fits in it, please share it with humanity.

If not, I cannot be bothered to guess what is wrong in the head of yet another crank.

anonymous

(08.11.19 08:05:15 MSK)


In reply to: "Re: In the know" by anonymous 08.11.19 08:05:15 MSK

Re: In the know

write more data into memory than fits in it

That is not the point. The point is that Committed_AS (allocated memory) can be larger than physical memory (MemTotal).

You can have 4 GB of physical memory, 2 GB of swap, and 12 GB of allocated memory. For a MemoryError to occur, the process must try to allocate VIRTUAL memory when it is limited. If the process hits the physical memory limit before the virtual one, it will not fail with MemoryError: you get either a system hang or a visit from the OOM killer, and in rare cases an OSError is possible.

anonymous

(08.11.19 11:16:12 MSK)


In reply to: comment by anonymous 08.11.19 03:44:04 MSK

This is not a Python problem, it is a Python coder problem

fixed. He is being told how to do it right, and he says: no, I want to learn to write garbage code.

anonymous

(08.11.19 11:39:32 MSK)




    In this article, we discuss how to handle a failed memory allocation with new in C++. When an object of a class is created dynamically using the new operator, the object occupies memory on the heap. Two questions must be kept in mind:

    • What happens if sufficient memory is not available on the heap, and how should that be handled?
    • If memory is not allocated, how do we avoid crashing the program?

    Below is a program that requests a large amount of memory so that the problem occurs. Place the allocation in a try block and catch the exception that new throws when the allocation fails, so that the failure does not crash the program.

    Program 1:

    C++

    #include <iostream>
    #include <new>

    using namespace std;

    int main()
    {
        // Try to allocate roughly 2 GB in a single block; on many systems
        // this allocation fails
        long MEMORY_SIZE = 0x7fffffff;

        try {
            char* ptr = new char[MEMORY_SIZE];
            cout << "Memory is allocated Successfully" << endl;
            delete[] ptr; // release the block if the allocation succeeded
        }
        catch (const bad_alloc& e) {
            cout << "Memory Allocation is failed: " << e.what() << endl;
        }

        return 0;
    }

    Output (on a system where the allocation fails):

    Memory Allocation is failed: std::bad_alloc

    The above failure can also be handled without a try-catch block, by using the nothrow version of the new operator:

    • The nothrow constant is passed as an argument to operator new and operator new[] to indicate that these functions shall not throw an exception on failure but return a null pointer instead.
    • By default, when the new operator cannot allocate memory, a bad_alloc exception is thrown.
    • When nothrow is passed as an argument to new, a null pointer is returned instead.
    • The constant nothrow is simply a value of type nothrow_t, whose only purpose is to select the overloaded version of operator new (or operator new[]) that takes an argument of this type.
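
    To see this tag dispatch at work, here is a sketch of a hypothetical class (Widget and its flag are ours, not from the article) that supplies its own nothrow_t overload of operator new:

```cpp
#include <cstddef>
#include <new>

// The std::nothrow_t parameter is what routes `new (std::nothrow) Widget`
// to the second overload instead of the throwing first one.
struct Widget {
  static bool nothrow_used;

  static void *operator new(std::size_t n) {
    return ::operator new(n); // throwing form
  }
  static void *operator new(std::size_t n, const std::nothrow_t &) noexcept {
    nothrow_used = true;
    return ::operator new(n, std::nothrow); // nullptr on failure
  }
  static void operator delete(void *p) noexcept { ::operator delete(p); }
  static void operator delete(void *p, const std::nothrow_t &) noexcept {
    ::operator delete(p);
  }
};
bool Widget::nothrow_used = false;
```

    Writing `Widget *w = new (std::nothrow) Widget;` selects the nothrow_t overload; a plain `delete w;` still goes through the usual operator delete.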

    Below is an implementation of memory allocation using the nothrow form of new:

    Program 2:

    C++

    #include <iostream>
    #include <new>

    using namespace std;

    int main()
    {
        // Try to allocate roughly 2 GB in a single block; on many systems
        // this allocation fails
        long MEMORY_SIZE = 0x7fffffff;

        char* addr = new (std::nothrow) char[MEMORY_SIZE];

        if (addr) {
            cout << "Memory is allocated Successfully" << endl;
            delete[] addr; // release the block if the allocation succeeded
        }
        else {
            cout << "Memory allocation fails" << endl;
        }

        return 0;
    }

    Output (on a system where the allocation fails):

    Memory allocation fails
