By Ms. Corlib


2016-05-24 16:51:31 8 Comments

Time and time again, I see it said that using async-await doesn't create any additional threads. That doesn't make sense because the only ways that a computer can appear to be doing more than 1 thing at a time is

  • Actually doing more than 1 thing at a time (executing in parallel, making use of multiple processors)
  • Simulating it by scheduling tasks and switching between them (do a little bit of A, a little bit of B, a little bit of A, etc.)

So if async-await does neither of those, then how can it make an application responsive? If there is only 1 thread, then calling any method means waiting for the method to complete before doing anything else, and the methods inside that method have to wait for the result before proceeding, and so forth.


@Calmarius 2019-08-27 21:23:53

I'll try to explain it bottom-up; maybe someone finds it helpful. Been there, done that, reinvented it, back when I made simple games in DOS in Pascal (good old times...)

So... Every event-driven application has an event loop inside that looks something like this:

while (getMessage(out message)) // pseudo-code
{
   dispatchMessage(message); // pseudo-code
}

Frameworks usually hide this detail from you, but it's there. The getMessage function reads the next event from the event queue, or waits until an event happens: mouse move, keydown, keyup, click, etc. Then dispatchMessage dispatches the event to the appropriate event handler. It then waits for the next event, and so on, until a quit event comes that exits the loop and finishes the application.

Event handlers should run fast so the event loop can poll for more events and the UI remains responsive. What happens if a button click triggers an expensive operation like this?

void expensiveOperation()
{
    for (int i = 0; i < 1000; i++)
    {
        Thread.Sleep(10);
    }
}

Well, the UI becomes unresponsive until the 10-second operation finishes, because control stays inside the function. To solve this problem you need to break the task up into small parts that can execute quickly. This means you cannot handle the whole thing in a single event. You must do a small part of the work, then post another event to the event queue to ask for continuation.

So you would change this to:

void expensiveOperation()
{
    doIteration(0);
}

void doIteration(int i)
{
    if (i >= 1000) return;
    Thread.Sleep(10); // Do a piece of work.
    postFunctionCallMessage(() => {doIteration(i + 1);}); // Pseudo code. 
}

In this case only the first iteration runs; then it posts a message to the event queue to run the next iteration, and returns. In our example, the postFunctionCallMessage pseudo-function puts a "call this function" event into the queue, so the event dispatcher will call it when it reaches it. This allows all other GUI events to be processed while the pieces of a long-running job keep executing as well.

As long as this long-running task is running, its continuation event is always in the event queue. So you have basically invented your own task scheduler, where the continuation events in the queue are the "processes" that are running. Actually, this is what operating systems do, except that sending the continuation event and returning to the scheduler loop is done via the CPU's timer interrupt, where the OS has registered the context-switching code, so you don't need to care about it. But here you are writing your own scheduler, so you do need to care about it - so far.
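The hand-rolled scheduler described above can be sketched as a toy in C# (the ToyScheduler name and its members are mine; this is an illustration of the idea, not production code):

```csharp
using System;
using System.Collections.Generic;

public static class ToyScheduler
{
    static readonly Queue<Action> queue = new Queue<Action>();
    public static readonly List<string> Log = new List<string>();

    // "Post an event" = enqueue a continuation for the loop to pick up.
    public static void Post(Action a) => queue.Enqueue(a);

    public static void DoIteration(string name, int i, int max)
    {
        if (i >= max) return;
        Log.Add($"{name}:{i}");                    // one small piece of work
        Post(() => DoIteration(name, i + 1, max)); // continuation "event"
    }

    // The "event loop": run queued pieces until the queue drains.
    public static void Run()
    {
        while (queue.Count > 0)
            queue.Dequeue()();
    }

    public static void Main()
    {
        Post(() => DoIteration("A", 0, 3));
        Post(() => DoIteration("B", 0, 3));
        Run();
        Console.WriteLine(string.Join(" ", Log)); // A:0 B:0 A:1 B:1 A:2 B:2
    }
}
```

Note how A and B interleave on a single thread purely because each piece re-queues its own continuation - exactly the scheduling trick described above.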

So we can run long-running tasks in a single thread, in parallel with the GUI, by breaking them up into small chunks and sending continuation events. This is the general idea of the Task class. It represents a piece of work, and when you call .ContinueWith on it, you define what function to call as the next piece when the current piece finishes (and its return value is passed to the continuation). The Task class uses a thread pool, where there is an event loop in each thread waiting to do pieces of work, similar to what I showed at the beginning. This way you can have millions of tasks running in parallel, but only a few threads to run them. But it would work just as well with a single thread - as long as your tasks are properly split into small pieces, each of them appears to run in parallel.

But doing all this chaining and splitting of work into small pieces manually is cumbersome and totally messes up the layout of the logic, because the entire background-task code is basically a .ContinueWith mess. So this is where the compiler helps you. It does all this chaining and continuation for you in the background. When you say await, you tell the compiler to "stop here, and add the rest of the function as a continuation task". The compiler takes care of the rest, so you don't have to.
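To make the last two paragraphs concrete, here is a small sketch contrasting manual ContinueWith chaining with the await version the compiler generates for you (the method names are invented for the example):

```csharp
using System;
using System.Threading.Tasks;

public static class Demo
{
    // Manual chaining: each piece of work is wired up with ContinueWith.
    public static Task<int> ManualStyle()
    {
        return Task.Delay(100)                 // stands in for a piece of async work
            .ContinueWith(_ => 21)             // next piece: compute something
            .ContinueWith(t => t.Result * 2);  // next piece: use the previous result
    }

    // The same logic; the compiler generates the chaining behind the scenes.
    public static async Task<int> AwaitStyle()
    {
        await Task.Delay(100); // "stop here, add the rest as a continuation task"
        int half = 21;
        return half * 2;
    }

    public static void Main()
    {
        Console.WriteLine(ManualStyle().Result); // 42
        Console.WriteLine(AwaitStyle().Result);  // 42
    }
}
```

With only two pieces the ContinueWith version is still readable; add loops, branches, and try/catch and it quickly becomes the ".ContinueWith mess" the answer describes, which is exactly what await untangles.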

@vaibhav kumar 2017-10-21 04:14:56

Summarizing other answers:

Async/await is primarily created for IO-bound tasks, since by using them one can avoid blocking the calling thread. Their main use is on UI threads, where it is not desirable for the thread to be blocked on an IO-bound operation.

Async doesn't create its own thread. The thread of the calling method is used to execute the async method until it finds an awaitable. The same thread then continues to execute the rest of the calling method beyond the async method call. Within the called async method, after returning from the awaitable, the continuation may be executed on a thread from the thread pool - the only place a separate thread comes into the picture.
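A small console sketch of this (my example; in a console app there is no UI SynchronizationContext, so the continuation typically resumes on a thread-pool thread):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

public static class Program
{
    public static async Task<(int before, int after)> IdsAsync()
    {
        int before = Thread.CurrentThread.ManagedThreadId; // caller's thread runs this part
        await Task.Delay(100); // a true awaitable: no thread is blocked while it waits
        // With no SynchronizationContext captured, the continuation typically
        // resumes on a thread-pool thread - the only place a new thread appears.
        int after = Thread.CurrentThread.ManagedThreadId;
        return (before, after);
    }

    public static void Main()
    {
        var (before, after) = IdsAsync().Result;
        Console.WriteLine($"before await: thread {before}; after await: thread {after}");
    }
}
```

Running this usually prints two different thread ids: the part before the await ran on the caller's thread, the part after it on a pool thread.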

@stojke 2018-12-23 14:05:47

Good summary, but I think it should answer 2 more questions in order to give the full picture: 1. Which thread is the awaited code executed on? 2. Who controls/configures the mentioned thread pool - the developer or the runtime environment?

@vaibhav kumar 2018-12-28 03:20:55

1. In this case, the awaited code is mostly an IO-bound operation which wouldn't use CPU threads. If it is desired to use await for a CPU-bound operation, a separate Task can be spawned. 2. The threads in the thread pool are managed by the Task scheduler, which is part of the TPL framework.

@Simon Mourier 2016-11-11 08:54:29

Here is how I view all this, it may not be super technically accurate but it helps me, at least :).

There are basically two types of processing (computation) that happen on a machine:

  • processing that happens on the CPU
  • processing that happens on other processors (GPU, network card, etc.); let's call them IO.

So, when we write a piece of source code, after compilation, depending on the object we use (and this is very important), processing will be CPU bound, or IO bound, and in fact, it can be bound to a combination of both.

Some examples:

  • if I use the Write method of the FileStream object (which is a Stream), processing will be say, 1% CPU bound, and 99% IO bound.
  • if I use the Write method of the NetworkStream object (which is a Stream), processing will be say, 1% CPU bound, and 99% IO bound.
  • if I use the Write method of the MemoryStream object (which is a Stream), processing will be 100% CPU bound.

So, as you see, from an object-oriented programmer point-of-view, although I'm always accessing a Stream object, what happens beneath may depend heavily on the ultimate type of the object.

Now, to optimize things, it's sometimes useful to be able to run code in parallel (note I don't use the word asynchronous) if it's possible and/or necessary.

Some examples:

  • In a desktop app, I want to print a document, but I don't want to wait for it.
  • My web server serves many clients at the same time, each one getting their pages in parallel (not serialized).

Before async / await, we essentially had two solutions to this:

  • Threads. They were relatively easy to use, with the Thread and ThreadPool classes. Threads are CPU bound only.
  • The "old" Begin/End/AsyncCallback asynchronous programming model. It's just a model, it doesn't tell you if you'll be CPU or IO bound. If you take a look at the Socket or FileStream classes, it's IO bound, which is cool, but we rarely use it.

Async / await is only a common programming model, based on the Task concept. It's a bit easier to use than threads or thread pools for CPU-bound tasks, and much easier to use than the old Begin/End model. Under the covers, however, it's "just" a super sophisticated, feature-full wrapper on both.

So, the real win is mostly on IO-bound tasks, tasks that don't use the CPU, but async/await is still only a programming model; it doesn't help you determine how/where processing will happen in the end.

It means that just because a class has a "DoSomethingAsync" method returning a Task object, you cannot presume it will be CPU bound (which means it may be quite useless, especially if it doesn't have a cancellation token parameter), or IO bound (which means it's probably a must), or a combination of both (since the model is quite viral, the binding and potential benefits can be, in the end, super mixed and not so obvious).

So, coming back to my examples, doing my Write operations using async/await on a MemoryStream will stay CPU bound (I will probably not benefit from it), although I will surely benefit from it with file and network streams.
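A quick way to observe this difference (my sketch; the file name is made up, and whether the small file write completes synchronously varies with OS caching):

```csharp
using System;
using System.IO;
using System.Threading.Tasks;

public static class Program
{
    public static bool MemoryWriteCompletesSynchronously()
    {
        var buffer = new byte[1024];
        using (var ms = new MemoryStream())
        {
            // Pure CPU/memory work: there is nothing to wait for, so the
            // returned task is already completed when WriteAsync returns.
            Task t = ms.WriteAsync(buffer, 0, buffer.Length);
            return t.IsCompleted;
        }
    }

    public static void Main()
    {
        Console.WriteLine($"MemoryStream.WriteAsync already completed: {MemoryWriteCompletesSynchronously()}");

        // A file opened with useAsync: true may genuinely hand the write to the
        // device and complete later (small writes often still finish quickly
        // thanks to OS caching, so the observed value can vary).
        var buffer = new byte[1024];
        using (var fs = new FileStream("demo.tmp", FileMode.Create, FileAccess.Write,
                                       FileShare.None, 4096, useAsync: true))
        {
            Task t = fs.WriteAsync(buffer, 0, buffer.Length);
            Console.WriteLine($"FileStream.WriteAsync already completed:  {t.IsCompleted}");
            t.Wait();
        }
        File.Delete("demo.tmp");
    }
}
```

The MemoryStream call has nothing asynchronous about it at all: awaiting an already-completed task just continues on the same thread, which is why async/await buys nothing there.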

@davidcarr 2018-12-11 14:31:30

This is quite a good answer. Using the thread pool for CPU-bound work is poor in the sense that TP threads should be used to offload IO operations. CPU-bound work, imo, should be blocking, with caveats of course, and nothing precludes the use of multiple threads.

@angry person 2016-05-24 17:07:10

Actually, async/await is not that magical. The full topic is quite broad but for a quick yet complete enough answer to your question I think we can manage.

Let's tackle a simple button click event in a Windows Forms application:

public async void button1_Click(object sender, EventArgs e)
{
    Console.WriteLine("before awaiting");
    await GetSomethingAsync();
    Console.WriteLine("after awaiting");
}

I'm going to explicitly not talk about whatever it is GetSomethingAsync is returning for now. Let's just say this is something that will complete after, say, 2 seconds.

In a traditional, non-asynchronous, world, your button click event handler would look something like this:

public void button1_Click(object sender, EventArgs e)
{
    Console.WriteLine("before waiting");
    DoSomethingThatTakes2Seconds();
    Console.WriteLine("after waiting");
}

When you click the button in the form, the application will appear to freeze for around 2 seconds, while we wait for this method to complete. What happens is that the "message pump", basically a loop, is blocked.

This loop continuously asks windows "Has anyone done something, like moved the mouse, clicked on something? Do I need to repaint something? If so, tell me!" and then processes that "something". This loop got a message that the user clicked on "button1" (or the equivalent type of message from Windows), and ended up calling our button1_Click method above. Until this method returns, this loop is now stuck waiting. This takes 2 seconds and during this, no messages are being processed.

Most things that deal with windows are done using messages, which means that if the message loop stops pumping messages, even for just a second, it is quickly noticeable by the user. For instance, if you move notepad or any other program on top of your own program, and then away again, a flurry of paint messages are sent to your program indicating which region of the window that now suddenly became visible again. If the message loop that processes these messages is waiting for something, blocked, then no painting is done.

So, if in the first example, async/await doesn't create new threads, how does it do it?

Well, what happens is that your method is split into two. This is one of those broad topic type of things so I won't go into too much detail but suffice to say the method is split into these two things:

  1. All the code leading up to await, including the call to GetSomethingAsync
  2. All the code following await

Illustration:

code... code... code... await X(); ... code... code... code...

Rearranged:

code... code... code... var x = X(); await x; code... code... code...
^                                  ^          ^                     ^
+---- portion 1 -------------------+          +---- portion 2 ------+

Basically the method executes like this:

  1. It executes everything up to await
  2. It calls the GetSomethingAsync method, which does its thing, and returns something that will complete 2 seconds in the future

    So far we're still inside the original call to button1_Click, happening on the main thread, called from the message loop. If the code leading up to await takes a lot of time, the UI will still freeze. In our example, not so much

  3. What the await keyword, together with some clever compiler magic, does is basically something like: "OK, you know what, I'm going to simply return from the button click event handler here. When you (as in, the thing we're waiting for) get around to completing, let me know, because I still have some code left to execute".

    Actually it will let the SynchronizationContext class know that it is done, which, depending on the actual synchronization context that is in play right now, will queue it up for execution. The context class used in a Windows Forms program will queue it using the queue that the message loop is pumping.

  4. So it returns back to the message loop, which is now free to continue pumping messages, like moving the window, resizing it, or clicking other buttons.

    For the user, the UI is now responsive again, processing other button clicks, resizing and most importantly, redrawing, so it doesn't appear to freeze.

  5. 2 seconds later, the thing we're waiting for completes and what happens now is that it (well, the synchronization context) places a message into the queue that the message loop is looking at, saying "Hey, I got some more code for you to execute", and this code is all the code after the await.
  6. When the message loop gets to that message, it will basically "re-enter" that method where it left off, just after await and continue executing the rest of the method. Note that this code is again called from the message loop so if this code happens to do something lengthy without using async/await properly, it will again block the message loop
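The mechanism in steps 3-6 can be imitated with a toy message loop in plain C#. This is a sketch only - the ToyMessageLoop class and the 500 ms Task.Delay standing in for GetSomethingAsync are my inventions, not what WinForms actually uses - but it shows why the continuation resumes on the same thread that ran the code before the await:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

// A toy single-threaded "message loop", similar in spirit to the WinForms one.
public class ToyMessageLoop : SynchronizationContext
{
    readonly BlockingCollection<(SendOrPostCallback, object)> queue =
        new BlockingCollection<(SendOrPostCallback, object)>();

    // await uses this to hand "the rest of the method" back to the loop.
    public override void Post(SendOrPostCallback d, object state) => queue.Add((d, state));

    public void RunUntil(Task done)
    {
        // Pump queued "messages" until the awaited work has fully completed.
        while (!done.IsCompleted)
        {
            var (callback, state) = queue.Take();
            callback(state);
        }
    }
}

public static class Program
{
    static async Task Handler()
    {
        Console.WriteLine($"before awaiting on thread {Environment.CurrentManagedThreadId}");
        await Task.Delay(500); // stands in for GetSomethingAsync()
        // Because a SynchronizationContext was captured at the await, this line
        // was Post()ed to the toy loop: same thread as before, no new thread here.
        Console.WriteLine($"after awaiting on thread {Environment.CurrentManagedThreadId}");
    }

    public static void Main()
    {
        var loop = new ToyMessageLoop();
        SynchronizationContext.SetSynchronizationContext(loop);
        loop.RunUntil(Handler());
    }
}
```

While the delay is pending, the loop thread is free to pump any other "messages" that get posted - which is exactly how the real UI stays responsive during the 2-second wait.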

There are many moving parts under the hood here, so here are some links to more information. I was going to say "should you need it", but this topic is quite broad and it is fairly important to know some of those moving parts. Invariably you're going to discover that async/await is still a leaky abstraction. Some of the underlying limitations and problems still leak up into the surrounding code, and if they don't, you usually end up having to debug an application that breaks randomly for seemingly no good reason.


OK, so what if GetSomethingAsync spins up a thread that will complete in 2 seconds? Yes, then obviously there is a new thread in play. This thread, however, is not there because of the async-ness of this method; it is there because the programmer of this method chose a thread to implement asynchronous code. Almost all asynchronous I/O doesn't use a thread; it uses different things. async/await by themselves do not spin up new threads, but obviously the "things we wait for" may be implemented using threads.

There are many things in .NET that do not necessarily spin up a thread on their own but are still asynchronous:

  • Web requests (and many other network-related things that take time)
  • Asynchronous file reading and writing
  • and many more, a good sign is if the class/interface in question has methods named SomethingSomethingAsync or BeginSomething and EndSomething and there's an IAsyncResult involved.

Usually these things do not use a thread under the hood.


OK, so you want some of that "broad topic stuff"?

Well, let's ask Try Roslyn about our button click:

Try Roslyn

I'm not going to link in the full generated class here but it's pretty gory stuff.

@Bergi 2016-05-24 20:11:13

So it's basically what the OP described as "Simulating parallel execution by scheduling tasks and switching between them", isn't it?

@user4650451 2016-05-24 20:30:55

Seems like it, but instead of scheduling and switching back and forth, seeing if it can proceed, it waits until it's flagged and KNOWS that it can proceed.

@Luaan 2016-05-25 08:48:20

@Bergi Not quite. The execution is truly parallel - the asynchronous I/O task is ongoing, and requires no threads to proceed (this is something that has been used long before Windows came around - MS DOS also used asynchronous I/O, even though it didn't have multi-threading!). Of course, await can be used the way you describe it as well, but generally isn't. Only the callbacks are scheduled (on the thread pool) - between the callback and the request, no thread is needed.

@Bergi 2016-05-25 12:30:07

@Luaan: Ah, yes, that's the implementation of GetSomethingAsync, which I didn't consider. Only the code in the async function is scheduled like a task.

@angry person 2016-05-25 14:13:25

That's why I wanted to explicitly avoid talking too much about what that method did, as the question was about async/await specifically, which does not create its own threads. Obviously, they can be used to wait for threads to complete.

@rory.ap 2016-12-08 15:39:34

@LasseV.Karlsen -- I'm ingesting your great answer, but I'm still hung up on one detail. I understand that the event handler exists, as in step 4, which allows the message pump to continue pumping, but when and where does the "thing that takes two seconds" continue to execute if not on a separate thread? If it were to execute on the UI thread, then it would block the message pump anyway while it's executing because it has to execute some time on the same thread..[continued]...

@rory.ap 2016-12-08 15:41:08

If "the thing that takes 2 seconds" is one of the operations that Stephen Cleary describes in his answer (and referenced blog post, namely an IO operation, then I understand. But is that what your answer assumes?

@angry person 2016-12-08 15:59:50

It depends entirely on what kind of task this is but simplified here's what happens. Some code wakes up in response to the task completing. It may be that this code executes as part of whatever that just completed, it may be that it executes as part of an interrupt handling on the processor level. This code posts a message to the message queue of the UI by calling into the synchronization context tied to the task (which is then tied to the message queue), then the code exits.

@angry person 2016-12-08 16:01:02

The message loop is still pumping messages, and at some point it gets to the message posted by that task completion thingamajig. To process this message, it calls into the task system which then calls the continuation that represents the rest of your task, after the call to await. And yes, this will indeed "block" the message loop, but if the code is written properly, only for a very short time.

@angry person 2016-12-08 16:02:30

In fact, almost every message the message pump is processing is a "blocking" call to some code to process the message, like button clicks, mouse movement, window resizing, etc. The important part is that if this code is written properly it only takes a very short time to execute before returning to the message pump code. We only talk about blocking the ui when such code takes a long time. "long time" here is relative, it may be anywhere from 16ms to hours (16ms is about what you need to stay under to get 60fps refresh rate in games/similar).

@KevinBui 2018-03-18 05:22:55

@LasseVågsætherKarlsen if you said Web Request don't spin up a new thread, how do you explain about my question stackoverflow.com/questions/48366871/… ?

@angry person 2018-03-18 10:21:25

That I don't know enough about the actual implementation of these classes and methods, they don't use threads for their main work, but apparently they do use threads on the threadpool to schedule their continuation or similar. The important part is that this is not due to async/await, but rather just how they decided to implement these things.

@Puchacz 2018-07-10 13:44:22

I like your explanation with the message pump. How does your explanation differ when there is no message pump, like in a console application or a web server? How is the re-entrance of a method achieved?

@Soufien Hajji 2019-02-21 10:53:46

@LasseVågsætherKarlsen if I understand well, in all cases the main thread (in this case the UI thread) will never execute the 2-second Task, right? As if the 2-second Task is being delegated to another component?

@angry person 2019-02-21 14:34:59

Exactly what the 2 second task does is up to the programmer of that task. It can be executed on the main thread, or a secondary thread, or may not even need to execute at all, it might be an I/O event that is pending.

@Switch386 2019-08-29 19:15:32

Curious what is responsible for notifying or listening...how is that nonblocking?

@Eike 2019-10-29 14:20:06

What IMHO is missing in your answer (which, apart from that, did help me) is to state clearly that all this is futile if GetSomethingAsync() doesn't do something really asynchronous, like starting a thread, doing asynchronous I/O or the like. Everything will be executed in order if not; no responsiveness would be gained. (If I understood all that correctly, that is.)

@Andrew Savinykh 2016-05-26 07:25:28

I'm not going to compete with Eric Lippert or Lasse V. Karlsen, and others, I just would like to draw attention to another facet of this question, that I think was not explicitly mentioned.

Using await on its own does not make your app magically responsive. If whatever you do in the method you are awaiting from the UI thread blocks, it will still block your UI the same way the non-awaitable version would.

You have to write your awaitable method specifically so that it either spawns a new thread or uses something like a completion port (which will return execution to the current thread and call something else for continuation whenever the completion port gets signaled). But this part is well explained in other answers.
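A minimal sketch of the pitfall (the NotReallyAsync name is mine): the method is marked async, but all its work happens synchronously before the first await, so the caller is blocked anyway:

```csharp
using System;
using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks;

public static class Program
{
    // Marked async, yet every bit of work runs synchronously on the caller's
    // thread before the first await - so calling it blocks the caller anyway.
    public static async Task NotReallyAsync()
    {
        Thread.Sleep(2000); // stands in for blocking work (e.g. a sync DB call)
        await Task.CompletedTask;
    }

    public static void Main()
    {
        var sw = Stopwatch.StartNew();
        Task t = NotReallyAsync(); // only returns after the Sleep finishes!
        Console.WriteLine($"call returned after {sw.ElapsedMilliseconds} ms; completed = {t.IsCompleted}");
    }
}
```

If this were invoked from a UI event handler, the UI would freeze for the full two seconds despite the async keyword and the await at the call site.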

@Eric Lippert 2016-05-26 17:43:16

It's not a competition in the first place; it's a collaboration!

@Steve Fan 2016-05-28 14:14:50

Actually, async/await chains are state machines generated by the C# compiler.

async/await, however, does use threads: the TPL uses the thread pool to execute tasks.

The reason the application is not blocked is that the state machine can decide which co-routine to execute, repeat, check, and decide again.

Further reading:

What does async & await generate?

Async Await and the Generated StateMachine

Asynchronous C# and F# (III.): How does it work? - Tomas Petricek

Edit:

Okay. It seems my elaboration is incorrect. However, I do have to point out that state machines are important assets for async/await. Even if you take asynchronous I/O into account, you still need a helper to check whether the operation is complete; therefore we still need a state machine to determine which routine can be executed asynchronously together.

@Eric Lippert 2016-05-24 17:31:47

the only ways that a computer can appear to be doing more than 1 thing at a time is (1) Actually doing more than 1 thing at a time, (2) simulating it by scheduling tasks and switching between them. So if async-await does neither of those

It's not that await does neither of those. Remember, the purpose of await is not to make synchronous code magically asynchronous. It's to enable using the same techniques we use for writing synchronous code when calling into asynchronous code. Await is about making the code that uses high latency operations look like code that uses low latency operations. Those high latency operations might be on threads, they might be on special purpose hardware, they might be tearing their work up into little pieces and putting it in the message queue for processing by the UI thread later. They're doing something to achieve asynchrony, but they are the ones that are doing it. Await just lets you take advantage of that asynchrony.

Also, I think you are missing a third option. We old people -- kids today with their rap music should get off my lawn, etc -- remember the world of Windows in the early 1990s. There were no multi-CPU machines and no thread schedulers. You wanted to run two Windows apps at the same time, you had to yield. Multitasking was cooperative. The OS tells a process that it gets to run, and if it is ill-behaved, it starves all the other processes from being served. It runs until it yields, and somehow it has to know how to pick up where it left off the next time the OS hands control back to it. Single-threaded asynchronous code is a lot like that, with "await" instead of "yield". Awaiting means "I'm going to remember where I left off here, and let someone else run for a while; call me back when the task I'm waiting on is complete, and I'll pick up where I left off." I think you can see how that makes apps more responsive, just as it did in the Windows 3 days.

calling any method means waiting for the method to complete

There is the key that you are missing. A method can return before its work is complete. That is the essence of asynchrony right there. A method returns, it returns a task that means "this work is in progress; tell me what to do when it is complete". The work of the method is not done, even though it has returned.

Before the await operator, you had to write code that looked like spaghetti threaded through swiss cheese to deal with the fact that we have work to do after completion, but with the return and the completion desynchronized. Await allows you to write code that looks like the return and the completion are synchronized, without them actually being synchronized.
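This "returns before its work is complete" behaviour is easy to observe in a console sketch (the method name and delay are invented for the example):

```csharp
using System;
using System.Threading.Tasks;

public static class Program
{
    public static async Task<string> GetGreetingAsync()
    {
        await Task.Delay(500); // high-latency work still in flight when we return
        return "hello";
    }

    public static void Main()
    {
        Task<string> task = GetGreetingAsync();
        // The method has already returned, but its work is not done yet.
        Console.WriteLine($"returned; completed = {task.IsCompleted}"); // almost certainly False
        // Block (for demo purposes only) until the work actually completes.
        Console.WriteLine(task.Result); // hello
    }
}
```

The Task handed back is exactly the "this work is in progress; tell me what to do when it is complete" object described above: return and completion are two separate events.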

@JAB 2016-05-24 20:49:04

Other modern high-level languages support similar explicitly cooperative behavior, too (i.e. function does some stuff, yields [possibly sending some value/object to the caller], continues where it left off when control is handed back [possibly with additional input supplied]). Generators are plenty big in Python, for one thing.

@Eric Lippert 2016-05-24 20:53:31

@JAB: Absolutely. Generators are called "iterator blocks" in C# and use the yield keyword. Both async methods and iterators in C# are a form of coroutine, which is the general term for a function that knows how to suspend its current operation for resumption later. A number of languages have coroutines or coroutine-like control flows these days.

@user253751 2016-05-24 21:26:15

The analogy to yield is a good one - it's cooperative multitasking within one process. (and thereby avoiding the system stability issues of system-wide cooperative multitasking)

@Ian Ringrose 2016-05-25 09:03:16

I think the concept of CPU interrupts being used for IO is not known by a lot of modern "programmers", hence they think a thread needs to wait for each bit of IO.

@KevinBui 2018-03-18 05:29:15

@EricLippert Async method of WebClient actually creates additional thread, see here stackoverflow.com/questions/48366871/…

@user469104 2019-05-14 13:14:00

I think the sentence 'A method can return before its work is complete.' is misleading. The method does not return in the traditional sense of 'reaching the end of the method scope and returning a value'. The method yields execution, essentially saying 'you can put me on hold, I will let you know when I want to continue executing'. I saw someone suggesting in another article that a better name for the 'await' keyword would have been 'yieldUntil' and I wholeheartedly agree, it would have been much more descriptive of what 'await' does

@Eric Lippert 2019-05-14 13:54:06

@user469104: The sentence is misleading when taken out of context, but the context is that it is embedded inside two paragraphs that explain what you just explained again in your comment, so I'm not sure how your critique is actionable. As for the choice of keyword: we considered variations on yield and many, many more alternatives; we rejected the yield forms as too likely to be confused with yield return in the minds of people new to the feature. It's rather late to complain now; the time to make that particular critique was 2011.

@user469104 2019-05-14 16:17:54

@EricLippert I added my comment as a hint for other people who like me have been thrown off by language of 'await allows the method to return early' when in reality no 'return' occurs (in the traditional sense of a method execution reaching its end of scope) but rather a yield of execution to later be resumed. I personally found your answer to be unclear on this particular point why I added the comment as a hint for future readers who might be in the same boat. Not a criticism of you or the answer as such, just that particular part / wording.

@Eric Lippert 2019-05-14 16:50:41

@user469104: The entire point of my answer's final paragraphs is to contrast completion of a workflow, which is a fact about the state of the workflow, with return which is a fact about flow of control. As you note, there is no requirement in general that a workflow be completed before it returns; in C# 2, yield return gave us workflows that returned before they completed. async workflows are the same; they return before they are complete.

@Margaret Bloom 2016-05-25 07:19:57

await and async use Tasks, not Threads.

The framework has a pool of threads ready to execute some work in the form of Task objects; submitting a Task to the pool means selecting a free, already existing1, thread to call the task's action method.
Creating a Task is a matter of creating a new object, far faster than creating a new thread.

Given a Task, it is possible to attach a continuation to it: a new Task object to be executed once the first task ends.

Since async/await use Tasks they don't create a new thread.


While interrupt-programming techniques are widely used in every modern OS, I don't think they are relevant here.
You can have two CPU-bound tasks executing in parallel (interleaved, actually) on a single CPU using async/await.
That could not be explained simply by the fact that the OS supports queuing IORPs.


Last time I checked, the compiler transforms async methods into a DFA; the work is divided into steps, each one terminating with an await instruction.
The await starts its Task and attaches a continuation to it, to execute the next step.

As a concept example, here is some pseudo-code.
Things are simplified for the sake of clarity and because I don't remember all the details exactly.

method:
   instr1                  
   instr2
   await task1
   instr3
   instr4
   await task2
   instr5
   return value

It gets transformed into something like this:

int state = 0;

Task nextStep()
{
  switch (state)
  {
     case 0:
        instr1;
        instr2;
        state = 1;

        task1.addContinuation(nextStep); // pass the continuation, don't invoke it
        task1.start();

        return task1;

     case 1:
        instr3;
        instr4;
        state = 2;

        task2.addContinuation(nextStep); // pass the continuation, don't invoke it
        task2.start();

        return task2;

     case 2:
        instr5;
        state = 0;

        task3 = new Task();
        task3.setResult(value);
        task3.setCompleted();

        return task3;
   }
}

method:
   nextStep();

1 Actually a pool can have its own task-creation policy.

@gardenhead 2016-05-24 23:44:37

I am really glad someone asked this question, because for the longest time I also believed threads were necessary to concurrency. When I first saw event loops, I thought they were a lie. I thought to myself "there's no way this code can be concurrent if it runs in a single thread". Keep in mind this is after I already had gone through the struggle of understanding the difference between concurrency and parallelism.

After research of my own, I finally found the missing piece: select(). Specifically, IO multiplexing, implemented by various kernels under different names: select(), poll(), epoll(), kqueue(). These are system calls that, while the implementation details differ, allow you to pass in a set of file descriptors to watch. Then you can make another call that blocks until one of the watched file descriptors changes.

Thus, one can wait on a set of IO events (the main event loop), handle the first event that completes, and then yield control back to the event loop. Rinse and repeat.
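That wait-then-handle cycle fits in a few lines. A toy sketch in Python, using a socketpair in place of a real network connection: select() blocks until a watched descriptor has data, then we handle that event and would loop back for the next one.

```python
# Minimal IO-multiplexing sketch: select() blocks until one of the watched
# descriptors is ready to read, then we handle that event.
import select
import socket

# A socketpair gives us two connected endpoints to simulate an IO source.
reader, writer = socket.socketpair()
writer.send(b"hello")  # pretend some device/peer produced data

events = []
# Watch the reader; select() returns as soon as it has data (1s timeout).
readable, _, _ = select.select([reader], [], [], 1.0)
for sock in readable:
    events.append(sock.recv(1024))

reader.close()
writer.close()
print(events)  # [b'hello']
```

A real event loop would call select() repeatedly over many descriptors, dispatching each ready one to its handler, all on a single thread.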

How does this work? Well, the short answer is that it's kernel and hardware-level magic. There are many components in a computer besides the CPU, and these components can work in parallel. The kernel can control these devices and communicate directly with them to receive certain signals.

These IO multiplexing system calls are the fundamental building block of single-threaded event loops like node.js or Tornado. When you await a function, you are watching for a certain event (that function's completion), and then yielding control back to the main event loop. When the event you are watching completes, the function (eventually) picks up from where it left off. Functions that allow you to suspend and resume computation like this are called coroutines.
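The suspend-and-resume behavior described above is easy to observe directly. A small Python example (asyncio stands in for any single-threaded event loop): two coroutines interleave on one thread because each await hands control back to the loop.

```python
# Two coroutines interleaving on a single thread: each await yields control
# back to the event loop, which resumes whichever coroutine is ready next.
import asyncio

order = []

async def task(name):
    for i in range(2):
        order.append((name, i))
        await asyncio.sleep(0)  # suspend; let the other coroutine run

async def main():
    await asyncio.gather(task("A"), task("B"))

asyncio.run(main())
print(order)  # [('A', 0), ('B', 0), ('A', 1), ('B', 1)]
```

No thread is created; the loop simply alternates between the two suspended frames.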

@Stephen Cleary 2016-05-24 17:06:53

I explain it in full in my blog post There Is No Thread.

In summary, modern I/O systems make heavy use of DMA (Direct Memory Access). There are special, dedicated processors on network cards, video cards, HDD controllers, serial/parallel ports, etc. These processors have direct access to the memory bus, and handle reading/writing completely independently of the CPU. The CPU just needs to notify the device of the location in memory containing the data, and then can do its own thing until the device raises an interrupt notifying the CPU that the read/write is complete.

Once the operation is in flight, there is no work for the CPU to do, and thus no thread.
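This claim is checkable. A small Python sketch (asyncio as a stand-in for any async runtime): while an awaited timer is "in flight", the thread count does not change, because no thread is doing the waiting.

```python
# While an awaited timer is "in flight", no extra thread does the waiting:
# the event loop just has nothing scheduled until the timer's deadline.
import asyncio
import threading

async def main():
    before = threading.active_count()
    await asyncio.sleep(0.05)  # operation "in flight"; no thread waits here
    after = threading.active_count()
    return before, after

before, after = asyncio.run(main())
print(before, after)  # same count before and after the await
```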

@Yonatan Nir 2017-07-10 14:00:06

Just to make it clear... I understand the high level of what happens when using async-await. Regarding the "no thread creation" - is there no thread only for I/O requests to devices which, like you said, have their own processors that handle the request themselves? Can we assume ALL I/O requests are handled on such independent processors, meaning we should use Task.Run ONLY on CPU-bound actions?

@Stephen Cleary 2017-07-10 14:04:54

@YonatanNir: It's not just about separate processors; any kind of event-driven response is naturally asynchronous. Task.Run is most appropriate for CPU-bound actions, but it has a handful of other uses as well.

@Yonatan Nir 2017-07-10 14:31:35

I finished reading your article and there is still something basic I don't understand, since I'm not really familiar with the lower-level implementation of the OS. I got what you wrote up to: "The write operation is now 'in flight'. How many threads are processing it? None." So if there are no threads, then how is the operation itself done, if not on a thread?

@Stephen Cleary 2017-07-10 14:35:52

@YonatanNir: In the example of writing to a disk, the bytes are being written to disk by the disk controller using DMA. But there are other examples where there isn't a separate processor. E.g., timers, or waiting for an HTTP response.

@Yonatan Nir 2017-07-10 14:43:03

OK, so what happens when there is no separate processor? How is the action itself performed such that there is no need for a thread?

@Stephen Cleary 2017-07-10 14:44:12

@YonatanNir: It just registers a callback.

@the_dark_destructor 2017-11-10 11:29:03

This is the missing piece in a thousand explanations! There is actually someone doing the work in the background with I/O operations. It is not a thread but another dedicated hardware component doing its job!

@Prabu 2018-01-25 23:40:38

@StephenCleary Where is the state of the state machine in an async method stored before it is resumed?

@Stephen Cleary 2018-01-26 15:31:37

@PrabuWeerasinghe: The compiler creates a struct that holds the state and local variables. If an await needs to yield (i.e., return to its caller), that struct is boxed and lives on the heap.

@KevinBui 2018-03-18 05:25:25

@StephenCleary So how do you explain my question stackoverflow.com/questions/48366871/… where a WebClient async method creates a new thread?

@Stephen Cleary 2018-03-18 23:31:18

@KevinBui: Asynchronous work depends on the presence of thread pool threads (both worker threads and I/O threads). In particular, I/O Completion Ports require dedicated I/O threads to handle completion requests from the OS. All asynchronous I/O requires this, but the benefit of asynchrony is that you don't need a thread per request.

@Marcelo Flores 2019-06-03 20:08:31

Hi, one question: how do we know how many I/O threads are running in parallel on the separate I/O processor? And how does this work for async HTTP requests? Is there also another processor in charge of the request threads?

@Stephen Cleary 2019-06-03 21:17:41

@MarceloFlores: There isn't a separate I/O processor. I/O threads are special thread pool threads that only handle completion notifications. HTTP async requests are just like any other I/O; the request is made and when the response comes in, an I/O thread handles that completion notification. There are no request threads.
