By Ben Dilts

2011-02-13 08:00:34 8 Comments

I'm running pdftoppm to convert a user-provided PDF into a 300DPI image. This works great, except if the user provides a PDF with a very large page size. pdftoppm will allocate enough memory to hold a 300DPI image of that size in memory, which for a 100 inch square page is 100*300 * 100*300 * 4 bytes per pixel = 3.5GB. A malicious user could just give me a silly-large PDF and cause all kinds of problems.

So what I'd like to do is put some kind of hard limit on memory usage for a child process I'm about to run--just have the process die if it tries to allocate more than, say, 500MB of memory. Is that possible?

I don't think ulimit can be used for this, but is there a one-process equivalent?


@Hi-Angel 2019-08-17 18:02:15

On any systemd-based distro you can also use cgroups indirectly through systemd-run. E.g. for your case of limiting pdftoppm to 500M of RAM, use:

systemd-run --scope -p MemoryLimit=500M pdftoppm

Note: this will ask you for your password, but the app is launched as your user. Do not let this delude you into thinking the command needs sudo, because that would cause the command to run as root, which was hardly your intention.

If you don't want to enter the password (after all, as a user you own your memory, so why would you need a password to limit it), you can use the --user option; however, for this to work you will need cgroups v2 support enabled, which right now requires booting with the systemd.unified_cgroup_hierarchy kernel parameter:

systemd-run --user --scope -p MemoryLimit=500M pdftoppm

@Geradlus_RU 2019-10-14 16:07:18

Thank you, made my day

@Andrei Sinitson 2019-12-13 07:31:35

Short and sweet. I tried firejail before, but it seemed to be overkill, with too many side effects, for just limiting memory consumption. Thanks!

@Andrei Sinitson 2019-12-13 07:33:18

Another note: this uses cgroups under the hood, and I think it makes a lot more sense to do it with this command than to fiddle with cgroups manually as suggested in other answers. This should have more upvotes.

@user65369 2014-04-16 08:36:40

Another way to limit this is to use Linux's control groups. This is especially useful if you want to limit a process's (or group of processes') allocation of physical memory distinctly from virtual memory. For example:

cgcreate -g memory:myGroup
echo 500M > /sys/fs/cgroup/memory/myGroup/memory.limit_in_bytes
echo 5G > /sys/fs/cgroup/memory/myGroup/memory.memsw.limit_in_bytes

will create a control group named myGroup and cap the set of processes run under myGroup to 500 MB of physical memory and to 5 GB of memory plus swap combined (memory.memsw.limit_in_bytes limits the sum of memory and swap, not swap alone). To run a process under the control group:

cgexec -g memory:myGroup pdftoppm

Note that on a modern Ubuntu distribution this example requires installing the cgroup-bin package and editing /etc/default/grub to change GRUB_CMDLINE_LINUX_DEFAULT to:

GRUB_CMDLINE_LINUX_DEFAULT="cgroup_enable=memory swapaccount=1"

and then running sudo update-grub and rebooting to boot with the new kernel boot parameters.

@Ned64 2018-02-15 12:20:29

The firejail program will also let you start a process with memory limits (using cgroups and namespaces to limit more than just memory). On my systems I did not have to change the kernel command line for this to work!

@stason 2018-08-05 18:36:38

Do you need the GRUB_CMDLINE_LINUX_DEFAULT modification to make the setting persistent? I found another way to make it persistent here.

@stewbasic 2019-07-23 05:28:51

It would be useful to note in this answer that on some distributions (e.g. Ubuntu) sudo is required for cgcreate, and also for the later commands unless the current user has been given permission. This would save the reader from having to find this information elsewhere. (I suggested an edit to this effect, but it was rejected.)

@d9ngle 2019-05-02 06:51:45

I'm running Ubuntu 18.04.2 LTS and JanKanis' script doesn't quite work for me as he suggests. Running limitmem 100M script limits it to 100 MB of RAM but with unlimited swap.

Running limitmem 100M -s 100M script fails silently, as cgget -g "memory:$cgname" has no parameter named memory.memsw.limit_in_bytes.

So I disabled swap:

# create cgroup
sudo cgcreate -g "memory:$cgname"
sudo cgset -r memory.limit_in_bytes="$limit" "$cgname"
sudo cgset -r memory.swappiness=0 "$cgname"
bytes_limit=`cgget -g "memory:$cgname" | grep memory.limit_in_bytes | cut -d\  -f2`

@d9ngle 2019-05-02 12:59:24

@sourcejedi added it :)

@JanKanis 2019-05-02 13:18:34

Right, I edited my answer. To enable swap limits you need to enable swap accounting on your system. There's a small runtime overhead to that, so it isn't enabled by default on Ubuntu. See my edit.

@JanKanis 2016-04-26 12:53:52

I'm using the script below, which works great. It uses cgroups through cgmanager. Update: it now uses the commands from cgroup-tools. Name this script limitmem, put it in your $PATH, and you can use it like limitmem 100M bash. This will limit both memory and swap usage. To limit just memory, remove the line with memory.memsw.limit_in_bytes.

edit: On default Linux installations this only limits memory usage, not swap usage. To enable swap usage limiting, you need to enable swap accounting on your Linux system. Do that by setting/adding swapaccount=1 in /etc/default/grub so it looks something like

GRUB_CMDLINE_LINUX_DEFAULT="swapaccount=1"

Then run sudo update-grub and reboot.

Disclaimer: I wouldn't be surprised if cgroup-tools also breaks in the future. The correct solution would be to use the systemd APIs for cgroup management, but there are no command line tools for that at the moment.


#!/bin/sh

# This script uses commands from the cgroup-tools package. The cgroup-tools commands access the cgroup filesystem directly, which is against the (new-ish) kernel's requirement that cgroups are managed by a single entity (which will usually be systemd). Additionally, there is a v2 cgroup API in development which will probably replace the existing API at some point, so expect this script to break in the future. The correct way forward would be to use systemd's APIs to create the cgroups, but AFAIK systemd currently (Feb 2018) only exposes D-Bus APIs, for which there are no command line tools yet, and I didn't feel like writing those.

# strict mode: error if commands fail or if unset variables are used
set -eu

if [ "$#" -lt 2 ]
then
    echo Usage: `basename $0` "<limit> <command>..."
    echo or: `basename $0` "<memlimit> -s <swaplimit> <command>..."
    exit 1
fi

cgname="limitmem_$$"

# parse command line args and find limits

limit="$1"
swaplimit="$limit"
shift

if [ "$1" = "-s" ]
then
    shift
    swaplimit="$1"
    shift
fi

if [ "$1" = -- ]
then
    shift
fi

if [ "$limit" = "$swaplimit" ]
then
    memsw=0
    echo "limiting memory to $limit (cgroup $cgname) for command $@" >&2
else
    memsw=1
    echo "limiting memory to $limit and total virtual memory to $swaplimit (cgroup $cgname) for command $@" >&2
fi

# create cgroup
sudo cgcreate -g "memory:$cgname"
sudo cgset -r memory.limit_in_bytes="$limit" "$cgname"
bytes_limit=`cgget -g "memory:$cgname" | grep memory.limit_in_bytes | cut -d\  -f2`

# try also limiting swap usage, but this fails if the system has no swap
if sudo cgset -r memory.memsw.limit_in_bytes="$swaplimit" "$cgname"
then
    bytes_swap_limit=`cgget -g "memory:$cgname" | grep memory.memsw.limit_in_bytes | cut -d\  -f2`
else
    echo "failed to limit swap"
    memsw=0
fi

# create a waiting sudo'd process that will delete the cgroup once we're done. This prevents the user needing to enter their password to sudo again after the main command exits, which may take longer than sudo's timeout.
fifo="limitmem_$$_fifo"   # any unique path works
mkfifo --mode=u=rw,go= "$fifo"
sudo -b sh -c "head -c1 '$fifo' >/dev/null ; cgdelete -g 'memory:$cgname'"

# spawn subshell to run in the cgroup. If the command fails we still want to remove the cgroup so unset '-e'.
set +e
(
set -e
# move subshell into cgroup
sudo cgclassify -g "memory:$cgname" --sticky `sh -c 'echo $PPID'`  # $$ returns the main shell's pid, not this subshell's.
exec "$@"
)

# grab exit code
exitcode=$?

set -e

# show memory usage summary

peak_mem=`cgget -g "memory:$cgname" | grep memory.max_usage_in_bytes | cut -d\  -f2`
failcount=`cgget -g "memory:$cgname" | grep memory.failcnt | cut -d\  -f2`
percent=`expr "$peak_mem" / \( "$bytes_limit" / 100 \)`

echo "peak memory used: $peak_mem ($percent%); exceeded limit $failcount times" >&2

if [ "$memsw" = 1 ]
then
    peak_swap=`cgget -g "memory:$cgname" | grep memory.memsw.max_usage_in_bytes | cut -d\  -f2`
    swap_failcount=`cgget -g "memory:$cgname" | grep memory.memsw.failcnt | cut -d\  -f2`
    swap_percent=`expr "$peak_swap" / \( "$bytes_swap_limit" / 100 \)`

    echo "peak virtual memory used: $peak_swap ($swap_percent%); exceeded limit $swap_failcount times" >&2
fi

# remove cgroup by sending a byte through the pipe
echo 1 > "$fifo"
rm "$fifo"

exit $exitcode

@Aaron Franke 2017-03-12 09:58:30

I get "call to cgmanager_create_sync failed: invalid request" for every process I try to run with limitmem 100M processname. I'm on Xubuntu 16.04 LTS and that package is installed.

@R Kiselev 2018-02-15 07:19:58

Oops, I get this error message: running limitmem 400M rstudio prints "limiting memory to 400M (cgroup limitmem_24575) for command rstudio" followed by "Error org.freedesktop.DBus.Error.InvalidArgs: invalid request". Any idea?

@JanKanis 2018-02-15 11:39:23

@RKiselev cgmanager is deprecated now, and not even available in Ubuntu 17.10. The systemd api that it uses was changed at some point, so that's probably the reason. I have updated the script to use cgroup-tools commands.

@Willi Ballenthin 2018-05-23 22:46:08

If the calculation for percent results in zero, the expr status code is 1 and this script exits prematurely. I recommend changing the line to: percent=$(( "$peak_mem" / $(( "$bytes_limit" / 100 )) )) (ref:…)

@d9ngle 2019-05-01 18:29:50

How can I configure the cgroup to kill my process if it goes above the limit?

@JanKanis 2019-05-02 09:24:13

@d9ngle You don't really need to; Linux won't allow the process to go above the limit. Memory allocations will fail, or the OOM killer will kill the process. Otherwise, there are the memory.failcnt and memory.memsw.failcnt files in the cgroup directory that you could monitor to see if the count gets above 0, but that may not be what you want, as when that first happens there is usually memory somewhere in your process that the OS can reclaim without harm (e.g. mapped files).

@d9ngle 2019-05-02 13:00:01

@JanKanis I forgot to tag you. Please see my answer below.

@kvz 2011-11-30 09:42:33

There are some problems with ulimit. Here's a useful read on the topic: Limiting time and memory consumption of a program in Linux, which led to the timeout tool, which lets you cage a process (and its forks) by time or memory consumption.

The timeout tool requires Perl 5+ and the /proc filesystem mounted. After that you copy the tool to e.g. /usr/local/bin like so:

curl | \
  sudo tee /usr/local/bin/timeout && sudo chmod 755 /usr/local/bin/timeout

After that, you can 'cage' your process by memory consumption as in your question like so:

timeout -m 500 pdftoppm Sample.pdf

Alternatively you could use -t <seconds> and -x <hertz> to respectively limit the process by time or CPU constraints.

The way this tool works is by checking multiple times per second whether the spawned process has oversubscribed its set boundaries. This means there is a small window during which a process could oversubscribe its limits before timeout notices and kills it.
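That polling approach can be sketched in a few lines of plain shell. This is an illustrative sketch of the technique, not the timeout tool itself: run_limited is a made-up helper name, and the 0.2 s interval and the use of SIGKILL are arbitrary choices of this example. The same race window applies.

# run_limited KB command [args...]: run a command under a polled
# virtual-memory cap, killing it if its VmSize exceeds the cap.
run_limited() {
    limit_kb=$1; shift
    "$@" &                               # spawn the target command
    pid=$!
    while kill -0 "$pid" 2>/dev/null; do # while the child is still around
        vm=$(awk '/^VmSize:/ {print $2}' "/proc/$pid/status" 2>/dev/null)
        if [ -z "$vm" ]; then
            break                        # exited already: nothing to measure
        fi
        if [ "$vm" -gt "$limit_kb" ]; then
            kill -9 "$pid"               # over budget: kill, as timeout -m does
            break
        fi
        sleep 0.2                        # the race window lives in this sleep
    done
    wait "$pid"                          # propagate the child's exit status
}

Used as run_limited 512000 pdftoppm -r 300 input.pdf output it would cap pdftoppm at roughly 500 MB of virtual memory, subject to the polling race described above.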

A more correct approach would hence likely involve cgroups, but those are much more involved to set up, even if you use Docker or runC, which, among other things, offer a more user-friendly abstraction around cgroups.

@kvz 2017-04-27 12:32:33

Seems to be working for me now (again?) but here's the google cache version:…

@ransh 2017-10-24 12:47:24

Can we use timeout together with taskset (we need to limit both memory and cores) ?

@user1404316 2018-04-08 07:03:37

It should be noted that this answer is not referring to the standard Linux coreutils utility of the same name! The answer is thus potentially dangerous if, anywhere on your system, some package has a script expecting timeout to be the standard coreutils tool. I am unaware of this tool being packaged for distributions such as Debian.

@xxx374562 2018-11-26 02:05:09

Does -t <seconds> constraint kill the process after that many seconds?

@Daniel 2019-11-04 13:34:17

It might also be helpful to know that -m accepts kilobytes; the example above suggests it is using MB.

@Oz123 2015-07-09 12:26:58

In addition to the tools from daemontools, suggested by Mark Johnson, you can also consider chpst, which is found in runit. Runit itself is bundled in busybox, so you might already have it installed.

The man page of chpst shows the option:

-m bytes: limit memory. Limit the data segment, stack segment, locked physical pages, and total of all segments per process to bytes bytes each.

@P Shved 2011-02-13 08:11:37

If your process doesn't spawn children that are themselves the biggest memory consumers, you can use the setrlimit function. A more common user interface for that is the shell's ulimit command:

$ ulimit -Sv 500000     # Set ~500 MB limit
$ pdftoppm ...

This will only limit the "virtual" memory of your process, taking into account (and limiting) memory the invoked process shares with other processes, and memory that is mapped but not reserved (for instance, Java's large heap). Still, virtual memory is the closest approximation for processes that grow really large, making the said errors insignificant.
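Because the limit applies to the shell that sets it and to everything it spawns, a convenient pattern is to set it inside a subshell, so your interactive shell keeps its own limits. A minimal sketch, where with_vmem_cap is a made-up helper name and the pdftoppm invocation is a placeholder:

# Run a command under a virtual address space cap (in kB) without
# changing the limits of the calling shell. The subshell (parentheses)
# confines the ulimit; exec replaces the subshell with the command.
with_vmem_cap() {
    kb=$1; shift
    ( ulimit -Sv "$kb" && exec "$@" )
}

# e.g.: with_vmem_cap 500000 pdftoppm -r 300 input.pdf output

Lowering a soft limit never requires privileges, so this works for any user; the limit dies with the subshell.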

If your program spawns children, and it's they who allocate the memory, it becomes more complex, and you should write auxiliary scripts to run the processes under your control. I wrote on my blog about why and how.

@akira 2011-02-13 08:13:48

Why is setrlimit more complex for more children? man setrlimit tells me that "A child process created via fork(2) inherits its parent's resource limits. Resource limits are preserved across execve(2)"

@MarkR 2011-02-13 08:17:02

Because the kernel does not sum the vm size for all child processes; if it did it would get the answer wrong anyway. The limit is per-process, and is virtual address space, not memory usage. Memory usage is harder to measure.

@akira 2011-02-13 08:21:07

If I understand the question correctly, the OP wants the limit per subprocess (child), not in total.

@Pavel Shved 2011-02-13 08:23:51

@MarkR, anyway, virtual address space is a good approximation for the memory used, especially if you run a program that's not controlled by a virtual machine (say, Java). At least I don't know any better metric.

@MarkR 2011-02-13 08:35:22

Virtual address space is the best we really have; there isn't an easily measurable alternative. The pages measured don't need to be in core, they don't need to be private to that process, they're still counted.

@sdaau 2013-04-04 15:51:14

Just wanted to say thanks: this ulimit approach helped me with Firefox's bug 622816 – Loading a large image can "freeze" firefox, or crash the system; which on a USB boot (from RAM) tends to freeze the OS, requiring a hard restart. Now at least firefox crashes itself, leaving the OS alive. Cheers!

@Lee 2016-08-29 17:16:21

What are the soft and hard limits?

@user2167582 2016-12-01 04:10:46

do you just run the command after this?

@Totor 2017-07-28 10:25:30

On 32-bit machines, the maximum limit is 2 GiB (or unlimited); cf. RLIMIT_AS in setrlimit(2).

@nerkn 2018-02-13 12:43:12

I want to use this approach for Chrome. As far as I understand, only the child process that hits the 16G limit will fail, not the whole of Chrome?
