I love to understand how things work. I especially love when I peel back the curtain and have that “aha” moment. When the core ideas become clear and the implementation lets them shine—that’s delightful.
Client sessions in SDKs are like plumbing for most engineers—boring but essential. You just want to know it’s there when you need it, that the pipes go where they should, and that it’s reliable. You shouldn’t have to invest much energy in understanding what’s going on.
Background on sessions
MCP is about equipping LLMs with tools and resources. Those resources live on a server. The client session is the low-level worker responsible for exchanging messages with the server and deciding what to do with each message type. It forwards requests from the LLM, parses responses, logs notifications, and so on.
I peeled back the curtain on the official SDK and found something like this:
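(Abbreviated and paraphrased here to show the shape; it is not a verbatim excerpt, and the exact signature varies by version.)

```python
from datetime import timedelta
from typing import Generic, TypeVar

from anyio.streams.memory import MemoryObjectReceiveStream, MemoryObjectSendStream

# The real type variables are constrained to the client/server request, result,
# and notification models; simplified here.
SendRequestT = TypeVar("SendRequestT")
SendNotificationT = TypeVar("SendNotificationT")
SendResultT = TypeVar("SendResultT")
ReceiveRequestT = TypeVar("ReceiveRequestT")
ReceiveNotificationT = TypeVar("ReceiveNotificationT")


class BaseSession(
    Generic[
        SendRequestT,
        SendNotificationT,
        SendResultT,
        ReceiveRequestT,
        ReceiveNotificationT,
    ],
):
    def __init__(
        self,
        read_stream: MemoryObjectReceiveStream,
        write_stream: MemoryObjectSendStream,
        receive_request_type: type[ReceiveRequestT],
        receive_notification_type: type[ReceiveNotificationT],
        read_timeout_seconds: timedelta | None = None,
    ) -> None: ...
```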
I recoiled. This is their base session, the one the client session inherits from: a bunch of generics, type variables, a RequestResponder context manager, and memory streams. A hairball.

I know it works. Thousands of people rely on it. And I know the maintainers are smart. All that said, BaseSession is convoluted. It's a burden to think about and a distraction from higher-level work.
I wanted to build something simpler.
A Simpler Session
Our insight was:
Separate how messages get from A to B (transport) from what to do with messages (session logic)
This separation means we:
- Have a clean conceptual model—the implementation lets the protocol shine.
- Can test our session code extensively (60 tests vs ~10 in the official SDK).
Diving into the code
Our session looks like this:
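(A simplified sketch: the names and the bookkeeping here are illustrative, not a verbatim copy of the repo.)

```python
from typing import Any, Protocol

import anyio


class Transport(Protocol):
    """How messages get from A to B: stdio, HTTP, in-memory -- the session doesn't care."""

    async def send(self, message: dict[str, Any]) -> None: ...
    async def receive(self) -> dict[str, Any]: ...


class ClientSession:
    """What to do with messages: match responses to requests, route everything else."""

    def __init__(self, transport: Transport) -> None:
        self._transport = transport
        self._next_id = 0
        self._pending: dict[int, anyio.Event] = {}       # request id -> "response arrived"
        self._responses: dict[int, dict[str, Any]] = {}  # request id -> raw response

    async def send_request(
        self, method: str, params: dict[str, Any] | None = None
    ) -> dict[str, Any]:
        """Send a JSON-RPC request over the transport and wait for its matching response."""
        self._next_id += 1
        request_id = self._next_id
        self._pending[request_id] = anyio.Event()
        await self._transport.send(
            {"jsonrpc": "2.0", "id": request_id, "method": method, "params": params or {}}
        )
        await self._pending[request_id].wait()
        del self._pending[request_id]
        return self._responses.pop(request_id)
```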
We listen for messages from the transport in a small loop:
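(Continuing the sketch above; the method name and the end-of-stream convention are illustrative.)

```python
    async def _listen(self) -> None:
        """Pull messages off the transport until it closes, handing each one to the session logic."""
        while True:
            try:
                message = await self._transport.receive()
            except anyio.EndOfStream:  # convention in this sketch: transports raise this when the peer closes
                break
            await self._handle_message(message)
```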
and handle the session logic in one place:
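(Same caveat: an illustrative sketch, with the error handling trimmed.)

```python
    async def _handle_message(self, message: dict[str, Any]) -> None:
        """Session logic: responses resolve pending requests, everything else gets dispatched."""
        if "id" in message and ("result" in message or "error" in message):
            # A response: wake whoever is waiting on this request id.
            request_id = message["id"]
            if request_id in self._pending:
                self._responses[request_id] = message
                self._pending[request_id].set()
        elif "id" not in message:
            # A notification: no reply expected, so just log it or fire a callback.
            print(f"notification: {message.get('method')}")
        else:
            # A server-initiated request (ping, sampling, ...): reply, or decline politely.
            await self._transport.send(
                {
                    "jsonrpc": "2.0",
                    "id": message["id"],
                    "error": {"code": -32601, "message": "method not supported"},
                }
            )
```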
Compare the basic setup. The official SDK:
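(Roughly the quickstart shape from the official SDK's docs; check the current README for the exact incantation.)

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Launch whatever server you want to talk to over stdio.
server_params = StdioServerParameters(command="python", args=["server.py"])


async def main() -> None:
    async with stdio_client(server_params) as (read_stream, write_stream):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()


asyncio.run(main())
```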
vs ours:
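(A sketch of the transport-first equivalent; the import path, StdioTransport, and the surrounding API are illustrative names, not necessarily the final ones.)

```python
import asyncio

# Hypothetical module and names -- the final package layout may differ.
from simple_mcp import ClientSession, StdioTransport


async def main() -> None:
    # The transport decides how bytes move; the session decides what the messages mean.
    transport = StdioTransport(command="python", args=["server.py"])
    async with ClientSession(transport) as session:
        await session.initialize()


asyncio.run(main())
```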
The high-level APIs look similar, but take a step below the surface and you feel a big difference.
The Testing Win
We can argue about aesthetics. But tests tell a clearer story. Separating transport from session means it’s easy to write a fake transport and test all failure modes we can think of. So far we have 60 tests of the client session. The official SDK has about 10. We tested the session lifecycle, initialization flow, request-response matching, message handlers, transport failures, and so on.
For example, with a little mocking, we can do things like this:
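(A sketch in the spirit of the real suite, written against the session sketch above; the fake transport and the assertions are illustrative.)

```python
import anyio
import pytest


class FakeTransport:
    """Scripted transport: records what the session sends and answers the first request."""

    def __init__(self) -> None:
        self.sent: list[dict] = []
        self._request_seen = anyio.Event()
        self._replied = False

    async def send(self, message: dict) -> None:
        self.sent.append(message)
        self._request_seen.set()

    async def receive(self) -> dict:
        if self._replied:
            raise anyio.EndOfStream  # nothing more to say: the "server" hangs up
        await self._request_seen.wait()
        self._replied = True
        return {"jsonrpc": "2.0", "id": self.sent[0]["id"], "result": {"tools": []}}


@pytest.mark.anyio
async def test_response_is_matched_to_its_request() -> None:
    transport = FakeTransport()
    session = ClientSession(transport)

    async with anyio.create_task_group() as tg:
        tg.start_soon(session._listen)  # run the receive loop alongside the request
        response = await session.send_request("tools/list")

    assert response["result"] == {"tools": []}
    assert transport.sent[0]["method"] == "tools/list"
```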
I haven't found a similar test in the official SDK suite. The closest I could find is a roughly 80-line test verifying that a cancelled request raises an error. It's a bear to set up and follow, and it's no surprise the official SDK doesn't have as many tests.
The insight is:
Clean architecture enables thorough testing
As I flesh out the SDK, I may find I'm wrong about the architecture, but I feel good knowing I can test exactly where my current ideas go wrong. I hope users will feel the same way.
Wrapping up
Going forward I want to keep the same focus on simplicity. My bet is that the returns to clarity will compound. We see hints of it in how easy it is to write tests. Those tests reveal what’s broken or where usage doesn’t feel right. That leads to further clarity and a better design. A virtuous cycle.
We've got to build the server session (nearly a mirror image), the transports, and then the high-level APIs. Daunting—but the path is clear.
MCP has a lot of simple, powerful ideas built into it. A beautiful SDK can help us find out whether MCP encapsulates the right ideas for an AI-driven future. Let's build one and see!