Exploring Async, Concurrency, and Starlette

Let us take a tour of the Starlette framework as we delve into the world of Python concurrency and asynchronous programming. If you have ever wondered how to make your web apps faster and more responsive, this is the place to be!

Async and Await: A Quick Intro

Starlette is a cool player in the world of Python web frameworks; it focuses entirely on creating async web services. But what does “async” mean, and why should you care?

Asynchronous programming, or async programming, allows you to execute multiple tasks concurrently without waiting for each one to complete before starting the next. The async and await keywords, added in Python 3.5 (and made reserved keywords in 3.7), are where the magic happens.

Picture yourself at a party, telling a joke whose punchline needs a brief moment of tension. With async and await, your code can get on with other useful work during that pause instead of making everyone wait awkwardly. It’s a bit like multitasking for your code!

Starlette: Your Async Companion

Starlette, created by Tom Christie, is a lightweight ASGI framework/toolkit that is ideal for building async web services in Python; it is also the foundation that FastAPI is built upon. ASGI stands for Asynchronous Server Gateway Interface, the modern Python standard for handling async web requests. Why does it matter? Because it makes your web applications fast, with performance rivaling even Go and Node.js.
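
To make ASGI concrete, here is a minimal sketch of a raw ASGI application: just an async callable with the (scope, receive, send) signature that ASGI servers expect. Frameworks like Starlette wrap this low-level interface so you rarely write it by hand.

# A bare ASGI app: an async callable, no framework involved
async def app(scope, receive, send):
    assert scope["type"] == "http"  # only handle plain HTTP requests here
    await send({
        "type": "http.response.start",
        "status": 200,
        "headers": [(b"content-type", b"text/plain")],
    })
    await send({"type": "http.response.body", "body": b"Hello, ASGI!"})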

Like any other web framework, Starlette handles all the usual HTTP request parsing and response generation. It’s similar to Werkzeug, the package that underlies Flask.
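
For a taste of Starlette on its own, without FastAPI on top, here is a minimal app that closely follows the pattern from Starlette’s documentation:

from starlette.applications import Starlette
from starlette.responses import JSONResponse
from starlette.routing import Route

# each Route maps a URL path to an async handler function
async def homepage(request):
    return JSONResponse({"hello": "world"})

app = Starlette(routes=[Route("/", homepage)])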

Types of Concurrency

Before diving into the async world, let’s understand the different ways we can implement concurrency:

  • Parallel Computing: Tasks are spread across multiple dedicated CPUs, common in number-crunching applications like graphics and machine learning.
  • Concurrent Computing: CPUs switch among multiple tasks, useful when some tasks take longer than others. Web applications often deal with this type of concurrency.

Distributed and Parallel Computing 

When dealing with a substantial application that strains the capabilities of a single CPU, a viable solution is to divide it into manageable pieces and run these pieces on separate CPUs within a single machine or across multiple machines. Various strategies exist for achieving this distributed and parallel computing, and if you’re working on such an application, you likely have familiarity with several of these approaches. However, it’s crucial to acknowledge that managing these distributed components introduces complexity and additional costs compared to handling a single server.
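
As a small illustration of the parallel side, here is a sketch using Python’s standard multiprocessing module to spread a CPU-bound function across several processes on one machine; the crunch function and its workload are made up for demonstration:

from multiprocessing import Pool

def crunch(n):
    # a stand-in for real number-crunching work
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    # Pool() starts one worker process per CPU core by default
    with Pool() as pool:
        results = pool.map(crunch, [10**6] * 4)
    print(results)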

Operating System Processes and Threads

In the world of operating systems, processes and threads are like workers handling different tasks. Processes are more heavyweight, while threads are lighter but trickier to work with.

Threads are often recommended when your program is I/O bound, and multiple processes when it is CPU bound; but threads are tricky to program and can cause errors that are hard to find. Then there are the mysterious green threads (greenlet, gevent, Eventlet): thread-like constructs that run in your program rather than in the OS kernel. They’re cooperative, giving up control when waiting for I/O.
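
Here is a minimal sketch of the “threads for I/O-bound work” rule of thumb, using the standard concurrent.futures module; the URLs are just placeholders:

from concurrent.futures import ThreadPoolExecutor
import urllib.request

def fetch(url):
    # each thread blocks on network I/O, so the others can run meanwhile
    with urllib.request.urlopen(url) as resp:
        return url, resp.status

urls = ["https://example.com", "https://example.org"]
with ThreadPoolExecutor(max_workers=2) as executor:
    for url, status in executor.map(fetch, urls):
        print(url, status)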

Green Threads

Green threads, exemplified by libraries like greenlet, gevent, and Eventlet, operate in a cooperative multitasking manner. Unlike preemptive threads, green threads run in user space (within your program) instead of the OS kernel. These green threads are more lightweight than traditional OS threads, which, in turn, are lighter than OS processes. In some benchmarks, asynchronous methods, including green threads, have demonstrated superior performance compared to their synchronous counterparts.
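
Here is a minimal sketch with gevent (a third-party package), where gevent.sleep stands in for a cooperative wait on I/O:

import gevent

def task(name, delay):
    # gevent.sleep yields control cooperatively, like waiting on I/O
    gevent.sleep(delay)
    print(f"{name} finished after {delay}s")

# spawn() schedules green threads; joinall() waits for them all
jobs = [gevent.spawn(task, "first", 1), gevent.spawn(task, "second", 1)]
gevent.joinall(jobs)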

Callbacks

Developers familiar with interactive applications, such as games and graphical user interfaces, often encounter callbacks: functions written in advance and associated with specific events, such as mouse clicks, keypresses, or time-related triggers. A prominent Python package in this category is Twisted, designed for event-driven networking.
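
To show the shape of the idea without pulling in Twisted, here is a tiny, hypothetical event-dispatch sketch in plain Python; the on/fire names are made up for illustration:

# registry mapping event names to lists of callback functions
handlers = {}

def on(event, func):
    handlers.setdefault(event, []).append(func)

def fire(event, *args):
    for func in handlers.get(event, []):
        func(*args)

on("click", lambda x, y: print(f"clicked at ({x}, {y})"))
fire("click", 10, 20)  # prints: clicked at (10, 20)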

Python Generators

In Python, code typically executes sequentially, line by line, when a function is called. Generator functions, built on the yield keyword, break that pattern: instead of using return to hand back a whole list, a generator can pause and return a value from any point within the function, resuming from that point later. This is handy for handling large amounts of data without hogging memory.

Let’s explore these concepts with examples:

Example 1: Using return

def doh():
    return ["Homer: D'oh!", "Marge: A deer!", "Lisa: A female deer!"]

for line in doh():
    print(line)

Output:

Homer: D'oh!
Marge: A deer!
Lisa: A female deer!

This works perfectly when lists are relatively small.

Example 2: Using yield for Efficient Memory

def doh2():
    yield "Homer: D'oh!"
    yield "Marge: A deer!"
    yield "Lisa: A female deer!"

for line in doh2():
    print(line)

Output:

Homer: D'oh!
Marge: A deer!
Lisa: A female deer!

In Example 2, the generator function doh2 provides lines one at a time instead of building the whole list up front, keeping memory use flat and making it suitable for scenarios where memory is a concern.
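
The same idea scales up nicely. For instance, here is a sketch of lazily reading a large file one line at a time (big_log.txt is a placeholder name):

def read_lines(path):
    # the file is read lazily; only one line is in memory at a time
    with open(path) as f:
        for line in f:
            yield line.rstrip("\n")

for line in read_lines("big_log.txt"):
    print(line)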

Python Async, Await, and asyncio

Now, let’s get back to Python’s async features. The asyncio module, in the standard library since Python 3.4 (with the convenient asyncio.run() arriving in 3.7), lets you write asynchronous code using async and await. The real power comes when dealing with tasks that involve waiting, like accessing a database or downloading a web page.

Here’s a quick example:

import asyncio

async def joke():
    print("Why can't programmers tell jokes?")
    await asyncio.sleep(3)
    print("Timing!")

async def main():
    await asyncio.gather(joke())

asyncio.run(main())

The question prints, there is a three-second pause, and only then does the punchline land: a classic programmer move!
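
To see the concurrency pay off, here is a sketch extending the example: two coroutines that each sleep for three seconds finish in about three seconds total, not six, because asyncio.gather runs them concurrently:

import asyncio
import time

async def nap(name):
    await asyncio.sleep(3)
    print(f"{name} woke up")

async def main():
    start = time.perf_counter()
    # both naps overlap instead of running back to back
    await asyncio.gather(nap("first"), nap("second"))
    print(f"elapsed: {time.perf_counter() - start:.1f}s")  # ~3.0s

asyncio.run(main())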

FastAPI and Async

FastAPI, built on Starlette, seamlessly integrates async capabilities. By making your web endpoints async, you let the server handle other requests while waiting for time-consuming tasks. Here’s a simple async endpoint in FastAPI:

from fastapi import FastAPI
import asyncio

app = FastAPI()

@app.get("/hi")
async def greet():
    await asyncio.sleep(1)
    return "Hello? World?"

To run that chunk of web code, you need a web server such as Uvicorn.

The first way is to run Uvicorn on the command line: 

$ uvicorn greet_async:app 

The second is to call Uvicorn from inside your program:

from fastapi import FastAPI
import asyncio
import uvicorn

app = FastAPI()

@app.get("/hi")
async def greet():
    await asyncio.sleep(1)
    return "Hello? World?"

if __name__ == "__main__":
    uvicorn.run("greet_async_uvicorn:app")

This endpoint pauses for one second without blocking other requests.
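
Either way, with Uvicorn listening on its default address (127.0.0.1:8000), you can try the endpoint from another terminal:

$ curl http://127.0.0.1:8000/hi
"Hello? World?"

FastAPI JSON-encodes the return value, which is why the response body includes the quotes.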

Async and concurrency are tools that can help your applications run faster and more efficiently. Starlette and FastAPI bring these tools into your web development toolkit, letting you build responsive applications that handle multiple tasks at the same time.

So, the next time you are developing a web application and want to wow your users with speed and responsiveness, think about the power of async and the awesomeness of frameworks like Starlette and FastAPI.

 
