By Lukas


2014-01-16 13:26:37

Is it possible to specify a custom thread pool for a Java 8 parallel stream? I cannot find it mentioned anywhere.

Imagine that I have a server application and I would like to use parallel streams. But the application is large and multi-threaded, so I want to compartmentalize it: I do not want a slow-running task in one module of the application to block tasks from another module.

If I cannot use different thread pools for different modules, it means I cannot safely use parallel streams in most real-world situations.

Try the following example. Some CPU-intensive tasks are executed in separate threads, and the tasks leverage parallel streams. The first task is broken, so each step takes 1 second (simulated by thread sleep). The issue is that the other threads get stuck waiting for the broken task to finish. This is a contrived example, but imagine a servlet app where someone submits a long-running task to the shared fork-join pool.

import static java.lang.Math.sqrt;
import static java.util.stream.IntStream.range;
import static java.util.stream.LongStream.rangeClosed;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ParallelTest {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService es = Executors.newCachedThreadPool();

        es.execute(() -> runTask(1000)); // incorrect task
        es.execute(() -> runTask(0));
        es.execute(() -> runTask(0));
        es.execute(() -> runTask(0));
        es.execute(() -> runTask(0));
        es.execute(() -> runTask(0));

        es.shutdown();
        es.awaitTermination(60, TimeUnit.SECONDS);
    }

    private static void runTask(int delay) {
        range(1, 1_000_000).parallel().filter(ParallelTest::isPrime).peek(i -> sleep(delay)).max()
                .ifPresent(max -> System.out.println(Thread.currentThread() + " " + max));
    }

    private static void sleep(int delay) {
        try { Thread.sleep(delay); } catch (InterruptedException ignore) {}
    }

    public static boolean isPrime(long n) {
        return n > 1 && rangeClosed(2, (long) sqrt(n)).noneMatch(divisor -> n % divisor == 0);
    }
}


@assylias 2014-01-16 20:58:02

Parallel streams use the default ForkJoinPool.commonPool, which by default has one fewer thread than the number of processors, as returned by Runtime.getRuntime().availableProcessors() (this means that parallel streams still use all your processors, because the calling thread participates as well):

For applications that require separate or custom pools, a ForkJoinPool may be constructed with a given target parallelism level; by default, equal to the number of available processors.

This also means if you have nested parallel streams or multiple parallel streams started concurrently, they will all share the same pool. Advantage: you will never use more than the default (number of available processors). Disadvantage: you may not get "all the processors" assigned to each parallel stream you initiate (if you happen to have more than one). (Apparently you can use a ManagedBlocker to circumvent that.)
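As a sketch of the ManagedBlocker idea mentioned above (my own illustration, not from the answer; the helper names are mine): wrapping a blocking call in ForkJoinPool.managedBlock lets the pool spawn compensation threads while a worker is blocked, so other stream tasks are not starved.

```java
import java.util.concurrent.ForkJoinPool;
import java.util.stream.IntStream;

public class ManagedBlockerDemo {

    // Wraps a blocking sleep in a ManagedBlocker so the fork-join pool can
    // compensate with extra threads while workers are blocked.
    static void sleepManaged(long millis) {
        try {
            ForkJoinPool.managedBlock(new ForkJoinPool.ManagedBlocker() {
                private boolean done = false;

                @Override
                public boolean block() throws InterruptedException {
                    Thread.sleep(millis);
                    done = true;
                    return true;
                }

                @Override
                public boolean isReleasable() {
                    return done;
                }
            });
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    // Runs a parallel stream whose steps block; returns the sum 0 + 1 + ... + 15.
    static int blockingSum() {
        return IntStream.range(0, 16)
                .parallel()
                .map(i -> { sleepManaged(50); return i; })
                .sum();
    }

    public static void main(String[] args) {
        System.out.println(blockingSum()); // prints 120
    }
}
```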

To change the way parallel streams are executed, you can either

  • submit the parallel stream execution to your own ForkJoinPool: yourFJP.submit(() -> stream.parallel().forEach(doSomething)).get(); or
  • you can change the size of the common pool using system properties: System.setProperty("java.util.concurrent.ForkJoinPool.common.parallelism", "20") for a target parallelism of 20 threads. However, this no longer works after the backported patch https://bugs.openjdk.java.net/browse/JDK-8190974.

Example of the latter on my machine which has 8 processors. If I run the following program:

long start = System.currentTimeMillis();
IntStream s = IntStream.range(0, 20);
//System.setProperty("java.util.concurrent.ForkJoinPool.common.parallelism", "20");
s.parallel().forEach(i -> {
    try { Thread.sleep(100); } catch (Exception ignore) {}
    System.out.print((System.currentTimeMillis() - start) + " ");
});

The output is:

215 216 216 216 216 216 216 216 315 316 316 316 316 316 316 316 415 416 416 416

So you can see that the parallel stream processes 8 items at a time, i.e. it uses 8 threads. However, if I uncomment the commented line, the output is:

215 215 215 215 215 216 216 216 216 216 216 216 216 216 216 216 216 216 216 216

This time, the parallel stream has used 20 threads and all 20 elements in the stream have been processed concurrently.

@Marko Topolnik 2014-04-04 08:58:18

The commonPool actually has one fewer thread than availableProcessors, resulting in a total parallelism equal to availableProcessors, because the calling thread counts as one.

@assylias 2014-04-04 09:11:25

I didn't check the code to be honest.

@GKislin 2017-10-23 16:18:47

submit returns a ForkJoinTask. To imitate the blocking behavior of parallel(), get() is needed: yourFJP.submit(() -> stream.parallel().forEach(doSomething)).get();

@Frederic Leitenberger 2018-01-23 17:29:23

I am not convinced that ForkJoinPool.submit(() -> stream.forEach(...)) will run my stream actions with the given ForkJoinPool. I would expect the whole stream action to be executed in the ForkJoinPool as ONE task, while internally still using the default/common ForkJoinPool. Where did you see that ForkJoinPool.submit() does what you say it does?

@assylias 2018-01-23 17:51:36

@FredericLeitenberger You probably meant to place your comment below Lukas' answer.

@Frederic Leitenberger 2018-01-23 17:58:33

I see now that stackoverflow.com/a/34930831/1520422 shows nicely that it actually works as announced. Yet I still don't understand HOW it works. But I'm fine with "it works". Thanks!

@Tod Casasent 2016-08-26 18:15:08

The original solution (setting the ForkJoinPool common parallelism property) no longer works. Looking at the links in the original answer, an update which breaks this has been backported to Java 8. As mentioned in the linked threads, this solution was never guaranteed to work forever. Based on that, the solution is the forkJoinPool.submit(...).get() approach discussed in the accepted answer. I think the backport fixes the unreliability of that solution as well.

ForkJoinPool fjpool = new ForkJoinPool(10);
System.out.println("stream.parallel");
IntStream range = IntStream.range(0, 20);
fjpool.submit(() -> range.parallel()
        .forEach((int theInt) ->
        {
            try { Thread.sleep(100); } catch (Exception ignore) {}
            System.out.println(Thread.currentThread().getName() + " -- " + theInt);
        })).get();
System.out.println("list.parallelStream");
List<Integer> list = IntStream.range(0, 20).boxed().collect(Collectors.toList());
fjpool.submit(() -> list.parallelStream()
        .forEach((theInt) ->
        {
            try { Thread.sleep(100); } catch (Exception ignore) {}
            System.out.println(Thread.currentThread().getName() + " -- " + theInt);
        })).get();

@d-coder 2019-06-12 12:52:21

I don't see the change in parallelism when I do ForkJoinPool.commonPool().getParallelism() in debug mode.

@Tod Casasent 2019-06-13 14:09:58

Thanks. I did some testing/research and updated the answer. Looks like an update changed it, as it works in older versions.

@Rocky Li 2019-08-06 15:16:54

Why do I keep getting this: unreported exception InterruptedException; must be caught or declared to be thrown even with all the catch exceptions in the loop.

@Tod Casasent 2019-08-07 16:11:52

Rocky, I'm not seeing any errors. Knowing the Java version and the exact line will help. The "InterruptedException" suggests the try/catch around the sleep is not closed properly in your version.

@KayV 2019-02-22 06:59:02

We can change the default parallelism using the following property:

-Djava.util.concurrent.ForkJoinPool.common.parallelism=16

which can be set to use more parallelism.
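A minimal sketch of checking the effect of that property programmatically (my own illustration; note the property must be set before the common pool is first used, so the command-line -D flag is the safer option):

```java
import java.util.concurrent.ForkJoinPool;

public class CommonPoolSize {
    public static void main(String[] args) {
        // NOTE: must run before anything touches the common pool;
        // in real applications prefer -D on the command line.
        System.setProperty("java.util.concurrent.ForkJoinPool.common.parallelism", "16");
        System.out.println(ForkJoinPool.commonPool().getParallelism());
    }
}
```

If the common pool was already initialized elsewhere, the printed value will reflect the earlier configuration instead.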

@Grzegorz Piwowarek 2019-02-01 13:51:24

If you don't want to rely on implementation hacks, there's always a way to achieve the same by implementing custom collectors that will combine map and collect semantics... and you wouldn't be limited to ForkJoinPool:

list.stream()
  .collect(parallelToList(i -> fetchFromDb(i), executor))
  .join()

Luckily, it's done already here and available on Maven Central: http://github.com/pivovarit/parallel-collectors

Disclaimer: I wrote it and take responsibility for it.

@user_3380739 2016-12-02 03:26:08

Check out AbacusUtil. The number of threads can be specified for a parallel stream. Here is sample code:

LongStream.range(4, 1_000_000).parallel(threadNum)...

Disclosure: I'm the developer of AbacusUtil.

@Martin Vseticka 2018-11-01 10:10:50

If you don't need a custom ThreadPool but you rather want to limit the number of concurrent tasks, you can use:

List<Path> paths = List.of("/path/file1.csv", "/path/file2.csv", "/path/file3.csv").stream().map(e -> Paths.get(e)).collect(toList());
List<List<Path>> partitions = Lists.partition(paths, 4); // Guava method

partitions.forEach(group -> group.parallelStream().forEach(csvFilePath -> {
       // do your processing   
}));
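If Guava is not on the classpath, the partitioning step can be sketched in plain Java (my own stand-in for Lists.partition, not part of the answer):

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class PartitionDemo {

    // Splits a list into consecutive chunks of at most the given size,
    // mirroring what Guava's Lists.partition does.
    static <T> List<List<T>> partition(List<T> list, int size) {
        return IntStream.range(0, (list.size() + size - 1) / size)
                .mapToObj(i -> list.subList(i * size, Math.min(list.size(), (i + 1) * size)))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Integer> items = IntStream.range(0, 10).boxed().collect(Collectors.toList());
        // Each group is processed as its own parallel stream, limiting concurrency per batch.
        partition(items, 4).forEach(group ->
                group.parallelStream().forEach(i ->
                        System.out.println(Thread.currentThread().getName() + " " + i)));
    }
}
```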

(The duplicate question asking for this is locked, so please bear with me here.)

@Scott Langley 2018-06-13 20:09:32

Note: There appears to be a fix implemented in JDK 10 that ensures the Custom Thread Pool uses the expected number of threads.

Parallel stream execution within a custom ForkJoinPool should obey the parallelism https://bugs.openjdk.java.net/browse/JDK-8190974

@Hearen 2018-05-29 01:11:32

I tried the custom ForkJoinPool as follows to adjust the pool size:

private static Set<String> ThreadNameSet = new HashSet<>();
private static Callable<Long> getSum() {
    List<Long> aList = LongStream.rangeClosed(0, 10_000_000).boxed().collect(Collectors.toList());
    return () -> aList.parallelStream()
            .peek((i) -> {
                String threadName = Thread.currentThread().getName();
                ThreadNameSet.add(threadName);
            })
            .reduce(0L, Long::sum);
}

private static void testForkJoinPool() {
    final int parallelism = 10;

    ForkJoinPool forkJoinPool = null;
    Long result = 0L;
    try {
        forkJoinPool = new ForkJoinPool(parallelism);
        result = forkJoinPool.submit(getSum()).get(); //this makes it an overall blocking call

    } catch (InterruptedException | ExecutionException e) {
        e.printStackTrace();
    } finally {
        if (forkJoinPool != null) {
            forkJoinPool.shutdown(); //always remember to shutdown the pool
        }
    }
    out.println(result);
    out.println(ThreadNameSet);
}

Here is the output, showing that the pool uses more threads than the default of 4.

50000005000000
[ForkJoinPool-1-worker-8, ForkJoinPool-1-worker-9, ForkJoinPool-1-worker-6, ForkJoinPool-1-worker-11, ForkJoinPool-1-worker-10, ForkJoinPool-1-worker-1, ForkJoinPool-1-worker-15, ForkJoinPool-1-worker-13, ForkJoinPool-1-worker-4, ForkJoinPool-1-worker-2]

But oddly, when I tried to achieve the same result using a ThreadPoolExecutor as follows:

BlockingDeque<Runnable> blockingDeque = new LinkedBlockingDeque<>(1000);
ThreadPoolExecutor fixedSizePool = new ThreadPoolExecutor(10, 20, 60, TimeUnit.SECONDS, blockingDeque, new MyThreadFactory("my-thread"));

but I failed.

It only starts the parallel stream in a new thread; everything else stays the same, which again shows that parallelStream uses the ForkJoinPool to start its child threads.

@omjego 2019-06-07 10:45:26

What could be the possible reason behind not allowing other executors?

@Hearen 2019-06-20 05:57:48

@omjego That's a good question; perhaps you could start a new question and provide more details to elaborate your ideas ;)

@John McClean 2017-03-10 12:04:19

If you don't mind using a third-party library, with cyclops-react you can mix sequential and parallel Streams within the same pipeline and provide custom ForkJoinPools. For example

 ReactiveSeq.range(1, 1_000_000)
            .foldParallel(new ForkJoinPool(10),
                          s->s.filter(i->true)
                              .peek(i->System.out.println("Thread " + Thread.currentThread().getId()))
                              .max(Comparator.naturalOrder()));

Or if we wished to continue processing within a sequential Stream

 ReactiveSeq.range(1, 1_000_000)
            .parallel(new ForkJoinPool(10),
                      s->s.filter(i->true)
                          .peek(i->System.out.println("Thread " + Thread.currentThread().getId())))
            .map(this::processSequentially)
            .forEach(System.out::println);

[Disclosure I am the lead developer of cyclops-react]

@Stefan Ferstl 2016-08-09 20:06:57

Until now, I used the solutions described in the answers of this question. Now, I came up with a little library called Parallel Stream Support for that:

ForkJoinPool pool = new ForkJoinPool(NR_OF_THREADS);
ParallelIntStreamSupport.range(1, 1_000_000, pool)
    .filter(PrimesPrint::isPrime)
    .collect(toList())

But as @PabloMatiasGomez pointed out in the comments, there are drawbacks regarding the splitting mechanism of parallel streams, which depends heavily on the size of the common pool. See Parallel stream from a HashSet doesn't run in parallel.

I am using this solution only to have separate pools for different types of work, but I cannot set the size of the common pool to 1 even though I don't use it.

@charlie 2016-01-21 17:49:58

To measure the actual number of used threads, you can check Thread.activeCount():

    Runnable r = () -> IntStream
            .range(-42, +42)
            .parallel()
            .map(i -> Thread.activeCount())
            .max()
            .ifPresent(System.out::println);

    ForkJoinPool.commonPool().submit(r).join();
    new ForkJoinPool(42).submit(r).join();

This can produce on a 4-core CPU an output like:

5 // common pool
23 // custom pool

Without .parallel() it gives:

3 // common pool
4 // custom pool

@keyoxy 2016-09-21 08:05:33

Thread.activeCount() doesn't tell you which threads are processing your stream. Map to Thread.currentThread().getName() instead, followed by a distinct(). Then you will realize that not every thread in the pool will be used... Add a delay to your processing and all threads in the pool will be utilized.
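The suggestion above can be sketched as follows (my own illustration; the pool size of 4 and the workload are arbitrary):

```java
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ForkJoinPool;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class DistinctThreadNames {

    // Collects the distinct names of the worker threads that actually
    // process the stream elements inside a custom ForkJoinPool.
    static List<String> collectNames() throws InterruptedException, ExecutionException {
        ForkJoinPool pool = new ForkJoinPool(4);
        try {
            return pool.submit(() ->
                    IntStream.range(0, 200)
                            .parallel()
                            .mapToObj(i -> {
                                // A small delay encourages all pool threads to join in.
                                try { Thread.sleep(5); } catch (InterruptedException ignore) {}
                                return Thread.currentThread().getName();
                            })
                            .distinct()
                            .sorted()
                            .collect(Collectors.toList())
            ).get();
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(collectNames());
    }
}
```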

@Lukas 2014-03-08 13:12:23

There actually is a trick for executing a parallel operation in a specific fork-join pool: if you execute it as a task in a fork-join pool, it stays there and does not use the common one.

ForkJoinPool forkJoinPool = new ForkJoinPool(2);
forkJoinPool.submit(() ->
    //parallel task here, for example
    IntStream.range(1, 1_000_000).parallel().filter(PrimesPrint::isPrime).collect(toList())
).get();

The trick is based on ForkJoinTask.fork which specifies: "Arranges to asynchronously execute this task in the pool the current task is running in, if applicable, or using the ForkJoinPool.commonPool() if not inForkJoinPool()"

@Lukas 2014-03-29 13:48:51

Details on the solution are described here blog.krecan.net/2014/03/18/…

@Nicolai 2014-11-17 18:53:24

But is it also specified that streams use the ForkJoinPool or is that an implementation detail? A link to the documentation would be nice.

@Lukas 2014-11-18 19:52:22

@NicolaiParlog You are right, I can't find it written anywhere. There is a related question here

@jck 2015-02-18 19:31:30

@Lukas Thanks for the snippet. I will add that the ForkJoinPool instance should be shutdown() when it's not needed any longer to avoid a thread leak. (example)

@Pablo Matias Gomez 2016-04-29 23:37:21

@Lukas what about using more than 2? Why doesn't it work if I use 100 for example?

@Lukas 2016-05-01 09:24:34

@PabloMatiasGomez You mean the "2" used as the ForkJoinPool constructor argument? It works (if I remember it correctly) and actually makes sense for blocking tasks.

@Pablo Matias Gomez 2016-05-01 12:48:05

@Lukas no, it does not work. Check stackoverflow.com/q/36947336/3645944

@roborative 2016-12-02 16:58:37

There's a nice test case here as well.

@Nicole 2018-01-17 22:52:16

Using join() instead of get() will more closely match the default behavior without the custom pool. (It does not throw the checked exceptions that get() does.)

@Terran 2019-04-01 21:56:28

Note that there's a bug in Java 8: even though tasks are running on a custom pool instance, they are still coupled to the shared pool; the size of the computation remains in proportion to the common pool and not the custom pool. This was fixed in Java 10: JDK-8190974

@omjego 2019-06-07 10:39:53

Is there any way to achieve this using ThreadPoolExecutor? I found out that the first task is executed by ThreadPool but others will be executed by common fork join pool.

@Cutberto Ocampo 2019-06-12 20:03:26

@terran This issue has also been fixed for Java 8 bugs.openjdk.java.net/browse/JDK-8224620

@Terran 2019-06-14 08:32:33

@CutbertoOcampo Nice. I thought Oracle no longer backported fixes. Maybe RedHat took over.

@Mario Fusco 2015-01-03 08:05:57

As an alternative to the trick of triggering the parallel computation inside your own forkJoinPool, you can also pass that pool to the CompletableFuture.supplyAsync method, as in:

ForkJoinPool forkJoinPool = new ForkJoinPool(2);
CompletableFuture<List<Integer>> primes = CompletableFuture.supplyAsync(() ->
    //parallel task here, for example
    range(1, 1_000_000).parallel().filter(PrimesPrint::isPrime).collect(toList()), 
    forkJoinPool
);
