The reason is that we can have a huge number of virtual threads, and each virtual thread will have its own ThreadLocal. This means that the memory footprint of the application may quickly become very high. Moreover, a ThreadLocal is of little use in a thread-per-request scenario with virtual threads, since data won't be shared between different requests. So how does the JVM schedule virtual threads on their carrier threads? As we said, both projects are still evolving, so the final version of these features might differ from what we see here.
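To make the ThreadLocal concern above concrete, here is a minimal sketch (the buffer size, thread count, and class name are illustrative, not from the original text):

```java
import java.util.concurrent.Executors;

public class ThreadLocalFootprint {
    // Every thread that touches this ThreadLocal gets its own 1 MB buffer.
    private static final ThreadLocal<byte[]> BUFFER =
            ThreadLocal.withInitial(() -> new byte[1024 * 1024]);

    public static void main(String[] args) {
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < 10_000; i++) {
                executor.submit(() -> {
                    // Each request runs on its own virtual thread, so this copy
                    // is allocated per request and never shared with another one.
                    byte[] buffer = BUFFER.get();
                    return buffer.length;
                });
            }
        } // close() waits for the submitted tasks to finish
    }
}
```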
If the virtual thread makes a blocking network call from inside a synchronized block, it may also remain pinned to the platform thread. A platform thread can only execute one virtual thread at a time; while a virtual thread is being executed by a platform thread, it is said to be mounted on that thread. With a virtual thread, the request can be issued asynchronously under the hood: the virtual thread is parked and another virtual thread is scheduled in its place. Once the response is received, the virtual thread is rescheduled, and all of this happens completely transparently.
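The snippet below is a minimal sketch of the pinning scenario described above, assuming https://example.com/ as a placeholder endpoint (and note that recent JDKs have relaxed the synchronized limitation):

```java
import java.io.InputStream;
import java.net.URI;

public class PinningSketch {

    private final Object lock = new Object();

    // Blocking network I/O inside a synchronized block: while the read blocks,
    // the virtual thread may stay pinned to its carrier platform thread.
    byte[] fetchPinned() throws Exception {
        synchronized (lock) {
            try (InputStream in = URI.create("https://example.com/").toURL().openStream()) {
                return in.readAllBytes();
            }
        }
    }

    // The same call outside the synchronized block: when the read blocks,
    // the virtual thread can unmount and free its carrier for other work.
    byte[] fetchUnpinned() throws Exception {
        try (InputStream in = URI.create("https://example.com/").toURL().openStream()) {
            return in.readAllBytes();
        }
    }
}
```

On the JDK releases where this limitation applies, running with -Djdk.tracePinnedThreads=full prints a stack trace whenever a virtual thread blocks while pinned, which helps locate such blocks.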
This is great: we achieve concurrency without introducing special keywords like async and await, we don't have colored functions, and the code still reads as if it runs on a single thread. To solve our scalability problems, we often just scale out and spawn multiple nodes of the server. This works; we can now handle as many requests as we like, provided we pay our cloud vendor enough. But with cloud technologies, one of the main driving factors is reducing the cost of operation, and sometimes we can't afford the extra spending, so we end up with a slow and barely usable system.
The JVM maintains a pool of platform threads, created and maintained by a dedicated ForkJoinPool. Initially, the number of platform threads equals the number of CPU cores, and by default it cannot grow beyond 256. The way we start threads is a little different since we're using an ExecutorService: every call to the submit method takes a Runnable or a Callable instance.
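For example (a minimal sketch; the task bodies are placeholders), submitting work to a virtual-thread-per-task executor looks exactly like submitting to any other ExecutorService:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SubmitSketch {
    public static void main(String[] args) throws Exception {
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            // submit(Runnable): fire-and-forget task
            executor.submit(() -> System.out.println("Hello from " + Thread.currentThread()));

            // submit(Callable): task that produces a result
            Future<Integer> answer = executor.submit(() -> 21 * 2);
            System.out.println(answer.get()); // 42
        } // close() waits for the submitted tasks to complete
    }
}
```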
In other words, a continuation is a pointer into the progress of an execution that can be yielded and resumed later. When Java 1.0 was released in 1995, its API had about a hundred classes, among them java.lang.Thread; Java was the first mainstream programming language to directly support concurrent programming. The readAllBytes method is a bulk synchronous read operation that reads all of the response bytes. Under the hood, readAllBytes eventually bottoms out in the read method of a java.net socket input stream.
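As a rough sketch of that call path (example.com and the hand-written request line are placeholders), the blocking read sits behind an ordinary synchronous API, and on a virtual thread only the virtual thread is parked while it waits:

```java
import java.io.InputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class ReadAllBytesSketch {
    public static void main(String[] args) throws InterruptedException {
        Thread.ofVirtual().start(() -> {
            try (Socket socket = new Socket("example.com", 80)) {
                socket.getOutputStream().write(
                        "GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"
                                .getBytes(StandardCharsets.US_ASCII));
                InputStream in = socket.getInputStream();
                // Bulk synchronous read: blocks until the server closes the connection,
                // bottoming out in the socket input stream's read method.
                byte[] response = in.readAllBytes();
                System.out.println(response.length + " bytes received");
            } catch (Exception e) {
                e.printStackTrace();
            }
        }).join();
    }
}
```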
Project Loom's goal is to overhaul the concurrency model of the language. It aims to bring virtual threads, structured concurrency, and a few other smaller things (for now). There are other situations that may currently pin a virtual thread to a platform thread.
Stack size can be tuned both with command-line switches and through Thread constructors, but tuning is risky in both directions. If stacks are overprovisioned, we use even more memory; if they are underprovisioned, we risk a StackOverflowError when the wrong code is called at the wrong time. We generally lean towards overprovisioning thread stacks as the lesser of two evils, but the result is a relatively low limit on how many concurrent threads we can have for a given amount of memory.
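Both tuning knobs look roughly like this for platform threads (the sizes are illustrative, not recommendations):

```java
public class StackSizeTuning {
    public static void main(String[] args) throws InterruptedException {
        // JVM-wide default stack size can be set on the command line, e.g.:
        //   java -Xss512k StackSizeTuning

        // Per-thread stack size via the Thread constructor; the stackSize
        // argument is only a hint and may be ignored on some platforms.
        Thread worker = new Thread(
                null,                                  // thread group
                () -> System.out.println("working"),   // placeholder task
                "sized-worker",                        // thread name
                256 * 1024                             // requested stack size in bytes
        );
        worker.start();
        worker.join();
    }
}
```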
For application programmers, virtual threads represent an alternative to asynchronous-style coding such as callbacks or futures. All told, we could see virtual threads as a pendulum swing back towards a synchronous programming paradigm in Java when dealing with concurrency. This is roughly analogous in programming style (though not at all in implementation) to JavaScript's introduction of async/await.
Note that there is no way to find out which platform thread a virtual thread is executing on. These might seem like small optimizations, and indeed they are insignificant for small applications or servers with low load. But when you need to process millions of requests per day, they can be a game changer and drastically increase your throughput in some situations.
For simplicity, only tasks that complete successfully are returned. Notice how the task is now executed by two threads: the first one executes the code before the blocking call, and the second one the code after it. For example, Task5 is first executed by ForkJoinPool-1-worker-5 and then by ForkJoinPool-1-worker-1. This pool has a size equal to the number of cores and is managed by the JVM. New virtual threads are queued up until a platform thread is ready to execute them. When a platform thread becomes ready, it takes a virtual thread and starts executing it.
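A sketch of how one might observe this behaviour (thread names vary between runs; Task5 and the worker names quoted above come from the article's output, not from this snippet). It relies on the fact that, in current JDK builds, a mounted virtual thread's toString happens to include the name of its carrier:

```java
import java.util.concurrent.Executors;
import java.util.stream.IntStream;

public class CarrierSwitchSketch {
    public static void main(String[] args) {
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            IntStream.range(0, 10).forEach(i -> executor.submit(() -> {
                System.out.println("Task" + i + " before blocking: " + Thread.currentThread());
                try {
                    Thread.sleep(100); // blocking call: the virtual thread parks and unmounts
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                // After the sleep the virtual thread may be mounted on a different carrier.
                System.out.println("Task" + i + " after blocking:  " + Thread.currentThread());
            }));
        } // close() waits for all tasks to finish
    }
}
```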
Keeping the OS threads free means that many virtual threads can run their Java code on the same OS thread, effectively sharing it. For virtual threads, however, we have JVM support directly: the execution of continuations is implemented through many native calls into the JVM, which makes it harder to follow when reading the JDK code. Still, we can look at some of the concepts at the roots of virtual threads. When using threads before Java 19 and Project Loom, creating a thread through the constructor was relatively uncommon.
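A quick contrast between the two styles (a sketch; both tasks just print the current thread):

```java
public class ThreadCreationStyles {
    public static void main(String[] args) throws InterruptedException {
        // Pre-Loom style: constructing a platform thread directly.
        Thread platform = new Thread(() ->
                System.out.println("platform: " + Thread.currentThread()));
        platform.start();
        platform.join();

        // Virtual threads are created through the Thread.Builder API instead.
        Thread virtual = Thread.ofVirtual()
                .name("my-virtual-thread")
                .start(() -> System.out.println("virtual: " + Thread.currentThread()));
        virtual.join();
    }
}
```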
Synchronous APIs are for the most part easier to work with; the code is easier to write, easier to read, and easier to debug (with stack traces that make sense!). One of the compelling value propositions of Project Loom is to avoid having to make this choice: it should be possible for synchronous code to scale. I don't like reactors (I also don't like the actor model, but it at least performs better and is easier to understand). I am a huge fan of blocking code and the thread-per-request model.
So we may get scalability from this model, but we have to give up on using parts of the language and ecosystem to get it. When the virtual thread blocks on a blocking operation, the carrier thread is released, and the stack chunk of the virtual thread is copied back to the heap. This way, the carrier thread can execute any other eligible virtual thread. Once the blocked virtual thread finishes the blocking operation, the scheduler schedules it again for execution.
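A minimal sketch of why this unmounting matters (the thread count and sleep duration are arbitrary): tens of thousands of virtual threads can be blocked at the same time while only a handful of carrier threads exist.

```java
import java.util.concurrent.Executors;
import java.util.stream.IntStream;

public class ManyBlockedVirtualThreads {
    public static void main(String[] args) {
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            IntStream.range(0, 100_000).forEach(i -> executor.submit(() -> {
                Thread.sleep(1_000); // each virtual thread parks; its carrier is released
                return i;            // Callable, so the checked InterruptedException is allowed
            }));
        } // finishes in roughly a second, despite 100,000 "blocked" threads
    }
}
```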