3.2 Default threading model

Overview

In the realm of JavaScript there is one concept that stands out, some might even say has gained a bit of a notorious reputation: single-threaded execution. When we talk about JavaScript, we mean that a single thread shoulders essentially every task the program performs. Node.js popularized this design under the label "single-threaded event loop," a term that captures how one lone thread handles and cycles through events and actions: even though a single thread does all the work, it keeps things moving by scheduling its tasks efficiently.

Single-threaded execution

This one thread runs the JavaScript code and handles all necessary tasks. To make the concept concrete, consider a basic JavaScript HTTP server that echoes whatever it receives from its clients. The lone thread handles responsibilities such as:
  • Running the JavaScript code.
  • Monitoring the socket for fresh TCP connections.
  • Managing multiple incoming requests from the socket simultaneously.
  • Generating higher-level request and response objects.
  • Parsing data in JSON format.
  • Assigning a unique request-id (similar to a uuid) to each request.
  • Converting the response into JSON format.
  • Transmitting the JSON-encoded response over the TCP connection.
  • Closing the socket.
  • And so on.
It's worth highlighting that this single thread, modest as it is, carries all of these complex responsibilities. Every task that must run concurrently competes for its share of time on that one thread: as the diagram above shows, all incoming requests are processed within the confines of a single thread, each vying for a slot in its schedule.
Other programming languages, such as Java, often take a different approach: a manager with a fixed-size thread pool. The manager listens for incoming requests; upon receiving a new TCP connection, it picks an available thread from the pre-defined pool and delegates the request to it for processing. If all threads are occupied, the manager queues the request until a thread becomes free.
Comparing the two methods, the single thread versus the thread pool, one might get the impression that JavaScript is relatively inefficient. The reality isn't as grim as it seems. While JavaScript's execution does take place on a single thread, that thread copes with requests from many clients by leaning on asynchronicity and callbacks.
JavaScript employs an asynchronous programming model: instead of waiting for a task to finish before moving on to the next one, it can initiate a task, switch to another while the first is being processed in the background, and come back to handle the results once they're ready. This lets JavaScript juggle many tasks without being overwhelmed by the limits of a single thread, managing concurrency effectively even if no individual operation is lightning-fast.
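This start-a-task, move-on, come-back-later flow can be seen in a minimal sketch; the `order` array is just a device to record the sequence of events:

```typescript
// The single thread schedules a timer, keeps running, and only handles
// the callback once the current call stack is empty.
const order: string[] = [];

order.push("start");
const done = new Promise<void>((resolve) => {
  setTimeout(() => {
    order.push("timer callback"); // runs later, via the event loop
    resolve();
  }, 0);
});
order.push("end"); // reached before the timer callback fires
await done;
// order is now ["start", "end", "timer callback"]
```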
Does this imply that only one thread operates within the Deno process? No, and we'll get into why shortly.

Additional threads

Like all JavaScript runtimes, Deno runs JavaScript in a single-threaded environment: one thread is responsible for all the tasks covered in the previous section. When the situation demands it, however, there is a way to create extra application threads. These additional threads are known as web workers.
Web workers offer Deno the capability to perform multiple tasks concurrently. Imagine having separate workers that can execute different tasks independently, such as processing data or handling user interactions. These workers operate independently from the main thread, enabling more efficient use of system resources and enhanced performance. Each web worker has its own isolated execution context, preventing potential conflicts that might arise in a multi-threaded scenario.
Web workers operate independently, yet they maintain a connection to their parent thread that enables bidirectional message passing. Apart from this communication channel between the main thread and the workers, each worker is fully autonomous.
Each web worker gets its own V8 isolate, initialized from a fresh V8 snapshot that serves as the worker's starting point, and runs its own event loop, i.e. its own tokio runtime. These separate event loops let workers manage their tasks independently of the main thread.
The significance of web workers lies in their ability to manage CPU-intensive tasks effectively. These tasks, which might otherwise overwhelm the main thread and disrupt its processes, can be seamlessly delegated to web workers. This delegation ensures that the main thread can carry on with its ongoing operations without being hindered by resource-intensive tasks. This separation of duties enhances overall performance and responsiveness, making web workers a valuable tool in optimizing the utilization of computational resources.

Deno's default threading model

Deno operates on the same principles, executing JavaScript programs within a single event loop. At first glance it might seem that the Deno process has just one thread. The reality is more nuanced: while Deno does run JavaScript code on a single thread, the process itself is multithreaded beneath the surface. This duality may appear contradictory at first, but it's an essential aspect of Deno's architecture. Let's explore it in more detail.
When you initiate a JavaScript program in Deno, it enters a single-threaded environment. This means that the program's execution occurs sequentially, step by step. However, Deno doesn't stop there. It leverages its multithreaded nature to enhance efficiency and resource management.
The Deno process operates with two distinct types of threads. The primary thread executes the JavaScript code; the additional threads are V8 threads, used primarily for garbage collection (GC) and other engine work.
By default, Deno relies on the extra threads provided by the V8 engine, which executes JavaScript within the main thread's context. To speed up JavaScript execution, V8 offloads CPU-heavy engine tasks, such as garbage collection and background Just-In-Time (JIT) compilation, to these supplementary threads.
While it is technically feasible to configure V8 to run with just a single thread, that would inevitably degrade performance: the lone thread would have to absorb all of the resource-intensive engine work, leaving far less time for executing the user's program. The result would be sluggish responsiveness, longer execution times, and an overall unsatisfactory experience. V8's use of multiple threads therefore plays a pivotal role in keeping JavaScript execution within the Deno runtime efficient and smooth.
When you start a Deno process with its default settings, it runs with eight threads. This configuration applies when no web workers are present; web workers, as we already know, are scripts that run in the background without blocking the main program's execution.
Thread distribution in Deno is organized as follows:
  1. Main worker thread: at the heart of every Deno application there is a single primary worker thread, commonly referred to as the "main thread." It acts as the central coordinator, executing the JavaScript code and facilitating communication between different parts of the program.
  2. V8 threads: alongside the main thread, Deno harnesses seven V8 threads. These threads do not execute JavaScript themselves; they collaborate on the engine's supporting work, which lets Deno execute tasks more smoothly and make better use of available resources.

The main thread

This section focuses on the primary thread, which is owned and supervised by the tokio runtime and oversees the entire operation. Below is the stack trace for the main thread, offering insight into its sequence of functions and how tasks are executed and controlled within this central thread.
2731 Thread_8461543 DispatchQueue_1: com.apple.main-thread (serial)
+ 2731 start (in libdyld.dylib) + 1 [0x7fff6dd53cc9]
+ 2731 main (in deno) + 418 [0x108ecfada]
+ 2731 std::sys_common::backtrace::__rust_begin_short_backtrace::h4e8d5235f9254db6 (in deno) + 10 [0x108d368d1]
+ 2731 deno::main::h7d1b5a97f8aef853 (in deno) + 10706 [0x108ec8676]
+ 2731 tokio::runtime::Runtime::block_on::h9f5c6c3dddbd431f (in deno) + 1768 [0x108d9fa5c]
+ 2731 _$LT$tokio..park..either..Either$LT$A$C$B$GT$$u20$as$u20$tokio..park..Park$GT$::park::h123a854f0de101c3 (in deno) + 204 [0x109439078]
+ 2731 _$LT$tokio..park..either..Either$LT$A$C$B$GT$$u20$as$u20$tokio..park..Park$GT$::park_timeout::h6834b5b57b34a394 (in deno) + 78 [0x109439308]
+ 2731 tokio::io::driver::Driver::turn::h1e669b6b05307d9f (in deno) + 72 [0x1094394df]
+ 2731 mio::poll::Poll::poll::hbd211acf9552cbd4 (in deno) + 763 [0x109148978]
+ 2731 kevent (in libsystem_kernel.dylib) + 10 [0x7fff6de99766]
As evident from the stack trace, the primary thread is under the management of the tokio runtime. This thread is responsible for executing tokio's event loop, a crucial mechanism that drives the flow of operations. In this setup, the tokio runtime takes charge of coordinating tasks and ensuring efficient execution.

V8's worker threads

By default, Deno is equipped with a total of seven V8 worker threads. These specialized threads handle essential engine tasks such as garbage collection, runtime optimizations, and Just-In-Time (JIT) compilation; they do not directly execute the JavaScript code itself.
While your Deno application runs, these worker threads take on crucial behind-the-scenes responsibilities: they reclaim memory through garbage collection, apply runtime optimizations so your code performs as smoothly and quickly as possible, and carry out JIT compilation, translating your high-level JavaScript into lower-level machine code on the fly for faster execution.
Intriguingly, the stack trace of one of these worker threads provides a snapshot of its activity. This stack trace, similar among all worker threads, gives you a glimpse into the intricate processes and functions being managed by these threads.
2731 Thread_8461546: V8 DefaultWorke
+ 2731 thread_start (in libsystem_pthread.dylib) + 15 [0x7fff6df53b8b]
+ 2731 _pthread_start (in libsystem_pthread.dylib) + 148 [0x7fff6df58109]
+ 2731 v8::base::ThreadEntry(void*) (in deno) + 87 [0x10948c547]
+ 2731 v8::platform::DefaultWorkerThreadsTaskRunner::WorkerThread::Run() (in deno) + 31 [0x109491e6f]
+ 2731 v8::platform::DelayedTaskQueue::GetNext() (in deno) + 708 [0x109492644]
+ 2731 _pthread_cond_wait (in libsystem_pthread.dylib) + 698 [0x7fff6df58425]
+ 2731 __psynch_cvwait (in libsystem_kernel.dylib) + 10 [0x7fff6de97882]

Deno's threading model with web workers

If the application has created a web worker, things take a fascinating turn: the thread count rises to a grand total of nine. The threads are distributed in the following manner:
  1. A solitary main worker thread, often called the main thread.
  2. An individual web worker thread.
  3. A collective of seven V8 threads.
In sum, that gives us 9 threads in total. What's intriguing is that the quantity of V8 worker threads remains consistent even when an extra web worker is thrown into the mix. However, it's worth noting that the option to increase the number of V8 worker threads exists. But, and this is crucial, any such increase must be initiated by the user and is determined by the specific requirements of their use case.
Now, let's talk about the stack trace of this fresh new worker thread:
2718 Thread_8525355: deno-worker-0
2718 thread_start (in libsystem_pthread.dylib) + 15 [0x7fff6df53b8b]
2718 _pthread_start (in libsystem_pthread.dylib) + 148 [0x7fff6df58109]
2718 std::sys::unix::thread::Thread::new::thread_start::hd4805e9612a32deb (in deno) + 45 [0x10a2e1b9d]
2718 core::ops::function::FnOnce::call_once$u7b$$u7b$vtable.shim$u7d$$u7d$::h7e30e5a55a4af9fa (in deno) + 116 [0x109d60e85]
2718 std::sys_common::backtrace::__rust_begin_short_backtrace::h30b57608d94f7821 (in deno) + 2856 [0x109d507e6]
2718 tokio::runtime::Runtime::block_on::h9f5c6c3dddbd431f (in deno) + 1768 [0x109dbaa5c]
2718 _$LT$tokio..park..either..Either$LT$A$C$B$GT$$u20$as$u20$tokio..park..Park$GT$::park::h123a854f0de101c3 (in deno) + 204 [0x10a454078]
2718 _$LT$tokio..park..either..Either$LT$A$C$B$GT$$u20$as$u20$tokio..park..Park$GT$::park_timeout::h6834b5b57b34a394 (in deno) + 78 [0x10a454308]
2718 tokio::io::driver::Driver::turn::h1e669b6b05307d9f (in deno) + 72 [0x10a4544df]
2718 mio::poll::Poll::poll::hbd211acf9552cbd4 (in deno) + 763 [0x10a163978]
2718 kevent (in libsystem_kernel.dylib) + 10 [0x7fff6de99766]
The web worker's stack trace mirrors the main thread's, differing only at the point where the thread enters Deno's code. The main thread starts in deno::main, whereas the web worker thread is spawned with a Rust closure, which is why its trace passes through core::ops::function::FnOnce before settling into the same tokio event loop.
Now, let's delve into a final illustrative example that showcases the distribution of threads when an application generates five web worker threads:
The pattern stands out distinctly. With five web workers, the process engages a total of 13 threads, distributed in the following manner:
  • One of the threads serves as the primary worker thread or main thread.
  • Five threads are allocated to web workers.
  • Seven threads are designated for V8 tasks.
In total, this arrangement accounts for a collective of 13 threads in operation. This interplay of threads facilitates the execution of tasks and processes within the system.

More threads

In the previous sections we explored Deno's threading model, both with and without web workers. Are these the only threads Deno employs? No. The threads we've examined so far are static OS threads, created at or shortly after the start of the program.
Applications can, of course, also create web workers at runtime, though doing so can incur substantial cost. But there's more to the story: expect additional threads to appear as the process tackles asynchronous operations. tokio can spawn extra threads on demand to service such work, and on top of them it schedules its own lightweight tasks, often called green threads. Crucially, those tasks operate differently from the threads we've previously discussed: they are not orchestrated by the operating system's scheduler but are spawned and managed by tokio itself.