Let's imagine a process that downloads a file from the Internet. Each time the process detects that it has downloaded another 20% of the file, it prints a message. The output from this process could look like this:
processA downloaded 20% Wed Apr 16 10:19:46 2025
processA downloaded 40% Wed Apr 16 10:19:47 2025
processA downloaded 60% Wed Apr 16 10:19:51 2025
processA downloaded 80% Wed Apr 16 10:19:54 2025
processA downloaded 100% Wed Apr 16 10:19:59 2025
Formally speaking, this output is called a trace.
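As an illustration, here is a minimal sequential sketch (not the asyncio program discussed later) that produces a trace of this shape. The name processA and the 1-5 second delays are assumptions made just for this example:

import time
from random import choice

def download(name: str) -> None:
    # Simulate the download arriving in five 20% chunks, each taking a random time.
    for k in range(1, 6):
        time.sleep(choice([1, 3, 5]))
        print(f'{name} downloaded {k * 20}% {time.ctime()}')

download('processA')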
Now let's imagine two such processes working concurrently. Each downloads a different file (possibly from a different source), so the time it takes to download each file will differ. When both processes print their progress, we get something like this:
processA downloaded 20% Wed Apr 16 10:19:46 2025
processB downloaded 20% Wed Apr 16 10:19:46 2025
processA downloaded 40% Wed Apr 16 10:19:47 2025
processB downloaded 40% Wed Apr 16 10:19:49 2025
processA downloaded 60% Wed Apr 16 10:19:51 2025
processA downloaded 80% Wed Apr 16 10:19:54 2025
processB downloaded 60% Wed Apr 16 10:19:54 2025
processA downloaded 100% Wed Apr 16 10:19:59 2025
processB downloaded 80% Wed Apr 16 10:20:00 2025
processB downloaded 100% Wed Apr 16 10:20:05 2025
Note that the above is just one possible trace of the concurrent execution of these two processes. There could be many traces, depending on how the download progresses in each process; with five progress messages from each process there are 252 possible traces.
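The count follows from a simple combinatorial argument: a trace is determined by choosing which five of the ten output positions belong to processA, giving C(10, 5) = 252 interleavings. A quick check in Python:

from math import comb

# Interleavings of 5 messages from processA with 5 from processB:
# choose the 5 positions (out of 10) taken by processA's messages.
print(comb(10, 5))   # 252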
This behavior can be demonstrated nicely with a simple Python program using asyncio. It is a toy program in the sense that it does not really download anything; instead, it simulates the time needed to download each part of the data by sleeping for a random interval. More importantly, it actually runs two concurrent tasks that execute together and produce a similar trace.
The complete program:
import asyncio
from random import choice
from time import ctime

async def download(name: str) -> None:
    # Download the file in five 20% chunks; each chunk takes a random time.
    for k in range(1, 6):
        await asyncio.sleep(choice([1, 3, 5]))
        print(f'{name} downloaded {k * 20}% {ctime()}')

async def main():
    # Spawn both tasks; create_task returns immediately.
    taskA = asyncio.create_task(download('processA'))
    taskB = asyncio.create_task(download('processB'))
    # Wait until both tasks have completed.
    await taskA
    await taskB

if __name__ == '__main__':
    asyncio.run(main())
Explanatory notes:
The download function is the actual body of each task. It simulates a random wait time for downloading a chunk of data and then prints the current download status. Note that it accepts a parameter, in this case the task name, and that its definition is prefixed with async.
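One detail worth illustrating: calling an async function does not run its body; it only creates a coroutine object, which executes when it is awaited or wrapped in a task. A minimal sketch (the demo function is hypothetical, not part of the program above):

import asyncio

async def demo(name: str) -> None:
    print(f'{name} running')

coro = demo('processA')    # nothing runs yet - this only creates a coroutine object
print(type(coro))          # <class 'coroutine'>
asyncio.run(coro)          # only now does the body execute and print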
Inside main we spawn two tasks. Both run the download function and do exactly what it does, but they are given different names. The create_task calls are non-blocking: each spawns a new asynchronous task and lets the program continue.
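To see that create_task does not block, you can inspect the Task object it returns right after spawning it. A small sketch (the work coroutine is hypothetical):

import asyncio

async def work(name: str) -> None:
    await asyncio.sleep(1)

async def main():
    task = asyncio.create_task(work('taskA'))
    print('spawned, done?', task.done())   # False - main continues immediately
    await task
    print('awaited, done?', task.done())   # True - the task has finished

asyncio.run(main())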
Once both tasks are started, the main function needs to wait until both have completed. Try running the program without these two awaits and see what happens. Also try commenting out just one of the awaits and running the program a few times: you should notice that the program can terminate before the non-awaited task has completed.
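An equivalent way to wait for both tasks is asyncio.gather, which wraps the coroutines in tasks and waits until all of them have completed. A sketch of how main could be written instead, reusing the download coroutine from the program above:

import asyncio

async def main():
    # gather schedules both coroutines as tasks and returns once both are done.
    await asyncio.gather(download('processA'), download('processB'))

if __name__ == '__main__':
    asyncio.run(main())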
This program aims to be a clear and concise example. It could be expanded to do something that genuinely requires a non-deterministic wait time, such as downloading files or calling REST APIs, and also to use more than two tasks (see the sketch below). However, the basic idea remains the same: with asyncio you achieve asynchronous processing through tasks, and it can all be expressed in a way that is much more readable than in most other programming languages that bolted asynchronous processing onto what they originally had.
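For example, on Python 3.11+ more than two tasks can be spawned with asyncio.TaskGroup, which awaits all of its tasks when the with block exits. A sketch reusing the download coroutine from above; the five process names are purely illustrative:

import asyncio

async def main():
    # The TaskGroup awaits every task it created when the block exits.
    async with asyncio.TaskGroup() as tg:
        for c in 'ABCDE':
            tg.create_task(download(f'process{c}'))

if __name__ == '__main__':
    asyncio.run(main())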