Hey there! So I’ve been diving into architectural patterns lately, and it’s been quite the journey. I wanted to share some thoughts on how the design decisions we make can have these ripple effects throughout our apps – especially when it comes to performance.
Disclaimer: This article is not a review of TCA’s current state or implementation. The TCA framework has evolved significantly since some of the issues discussed here were first identified, with many improvements already released and more in development. Rather, this is an analysis of how architectural decisions influence performance over time, using TCA’s evolution as a case study. My goal is to examine the relationship between design principles and performance outcomes, and extract lessons that apply to any architecture we might create or adopt.
Why I’m Looking at TCA
Full disclosure: I haven’t personally used The Composable Architecture (TCA) in production, and honestly, I probably won’t in the future. But that’s not because it’s bad – it’s just not aligned with my preferred approach to building apps.
That said, TCA makes for a fascinating case study. After 5 years of SwiftUI, we’ve all learned that performance isn’t something that just happens automatically. You really have to work for it, right? And architectural choices can either make that easier or… well, much harder.
So I thought it would be interesting to look at what TCA users have been reporting, see where they’ve struggled with performance, and connect those struggles back to the core architectural decisions. Not to bash TCA, but to learn from it – because these lessons apply to any architecture we might design or adopt.
During Krzysztof Zabłocki’s talk at Swift Heroes 2023, he showed that in a large-scale application, processing just five lines of text took 9 seconds of CPU time in vanilla TCA. That’s… not great. He noted that this was for a macOS app, the Arc browser, but added that people have reported similar problems in iOS apps too.
It’s worth noting that Krzysztof isn’t just any developer criticizing TCA – he’s one of the most experienced TCA developers in the community. He’s built five full applications with TCA, including working at The Browser Company on what was likely the largest TCA codebase in production. He’s also consulted with the Point-Free team to improve TCA’s ergonomics. When someone with this level of expertise reports performance issues, it’s particularly worth examining.
But why is that happening? Let’s break down how TCA’s design principles translate to performance challenges:
The Foundation: Struct-Based Global State
First, let’s understand TCA’s fundamental approach to state. In TCA, all application state is represented as a single Swift struct:
struct AppState: Equatable {
    var settings: SettingsState
    var profile: ProfileState
    var feed: FeedState
    // ... more state properties
}
This struct contains everything – all user data, UI state, feature states, everything. And because it’s a Swift struct (a value type), it gets copied whenever it’s modified.
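To make that concrete, here’s a tiny plain-Swift snippet (no TCA, illustrative names only) showing the value semantics at work: assigning a struct produces an independent copy, so mutating one value never touches the other.

```swift
// Plain-Swift illustration of value semantics (no TCA involved).
struct SettingsState: Equatable { var darkMode = false }

struct AppState: Equatable {
    var settings = SettingsState()
    var username = "anna"
}

var original = AppState()
var copy = original            // the whole struct is copied here
copy.settings.darkMode = true  // mutating the copy...

// ...leaves the original untouched: the two values are independent.
print(original.settings.darkMode)  // false
print(copy.settings.darkMode)      // true
print(original == copy)            // false
```

That independence is exactly what makes value types predictable – and exactly why every mutation of a big state tree implies fresh copies.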
The Critical Rule: Global State Flows Down the Entire View Hierarchy
This is where TCA makes a critical architectural decision: the entire global state struct is passed down through the entire view hierarchy. This is a fundamental rule of TCA’s design.
In practice, it looks like this:
struct RootView: View {
    let store: Store<AppState, AppAction>

    var body: some View {
        HomeView(store: store) // Passing the ENTIRE store down
    }
}

struct HomeView: View {
    let store: Store<AppState, AppAction> // Gets the ENTIRE store

    var body: some View {
        VStack {
            Text("Hello, \(store.state.userProfile.name)")
            SettingsButton(store: store) // Passes the ENTIRE store down again
            FeedView(store: store) // And again...
        }
    }
}

struct FeedView: View {
    let store: Store<AppState, AppAction> // Gets the ENTIRE store

    var body: some View {
        // Even though this view only cares about feed data,
        // it receives the entire application state
        List(store.state.feed.items) { item in
            FeedItemView(store: store, item: item) // Passes entire store down again
        }
    }
}
Every view in your hierarchy, no matter how deeply nested, receives the entire application state struct. This is by design in TCA – it ensures every component has access to the full state.

Why This Causes Performance Problems
This approach directly conflicts with how SwiftUI is designed to work efficiently – and it contradicts one of SwiftUI’s core performance recommendations. Remember the WWDC session “Demystify SwiftUI”? The presenters explicitly warned against passing down more data than a view needs, emphasizing that we should “only pass down information that the view really needs” to avoid unnecessary view updates and evaluations.
When SwiftUI decides whether to redraw a view, it evaluates what data that view depends on. The problem is that when you pass the entire state struct to a view:
- SwiftUI sees a dependency on the entire state: from SwiftUI’s perspective, the view depends on everything in that struct
- Any state change can trigger reevaluation: when any property in the state changes, SwiftUI has to check if the view needs redrawing
- Cascading reevaluations: this happens for every view in your hierarchy that received the state
In a large application where:
- The state struct might contain hundreds of properties
- The view hierarchy might be dozens of levels deep
- Multiple state changes happen in quick succession
…this creates a perfect storm of performance problems. When a single property changes, potentially hundreds of views need to reevaluate whether they should redraw, even if most of them don’t actually display anything related to that property.
TCA’s approach initially caused massive redraw performance problems because any state change would potentially cause all views to be evaluated. It’s like if changing your profile picture somehow made your settings screen recalculate – totally unnecessary work!
The ViewStore Partial Solution
To work around this fundamental issue, TCA introduced ViewStore wrappers that only expose certain slices of state to views:
struct ContentView: View {
    let store: Store<AppState, AppAction>

    var body: some View {
        WithViewStore(store, observe: { $0.profile }) { viewStore in
            // This view only "sees" the profile part of state
            ProfileView(name: viewStore.name)
        }
    }
}
ViewStore tries to solve the problem by:
- Still receiving the entire state (following TCA’s rule)
- But only extracting and observing a specific slice of it
- Only triggering view updates when that specific slice changes
While this helps, it’s essentially a workaround for a problem created by the architecture itself. It adds boilerplate and complexity to solve an issue that wouldn’t exist with a more granular state management approach. And as we’ll see, even these ViewStores become performance bottlenecks at scale.
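Here’s a toy, non-TCA model of what a ViewStore-style observer does (all names are my own, not TCA API): the projection runs on every state change, but the “view” is only notified when its slice actually differs.

```swift
// Toy slice observer illustrating the ViewStore idea outside of TCA.
struct AppState: Equatable {
    var profileName = "anna"
    var feedCount = 0
}

final class SliceObserver<Slice: Equatable> {
    private let project: (AppState) -> Slice
    private var lastSlice: Slice?
    private(set) var projectionRuns = 0  // how often the projection executed
    private(set) var viewUpdates = 0     // how often the "view" was told to redraw

    init(project: @escaping (AppState) -> Slice) { self.project = project }

    func stateDidChange(_ state: AppState) {
        projectionRuns += 1              // runs on EVERY state change...
        let slice = project(state)
        if slice != lastSlice {          // ...but only a distinct slice updates the view
            lastSlice = slice
            viewUpdates += 1
        }
    }
}

let profileObserver = SliceObserver { $0.profileName }
var state = AppState()

profileObserver.stateDidChange(state)  // first delivery: view updates
state.feedCount += 1
profileObserver.stateDidChange(state)  // unrelated change: projection runs, no update
state.profileName = "ben"
profileObserver.stateDidChange(state)  // related change: view updates

print(profileObserver.projectionRuns)  // 3
print(profileObserver.viewUpdates)     // 2
```

The asymmetry between those two counters is the whole story: the slicing saves redraws, but the projection work itself still scales with the number of state changes.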
The Double-Diffing Problem
This “entire state flows down” rule creates another significant performance issue: double diffing. Let me explain what happens:
1. SwiftUI’s Built-in Diffing
SwiftUI already has its own diffing mechanism. When state changes, SwiftUI compares the old and new values to determine if a view needs to be redrawn. This is a core part of SwiftUI’s performance optimization strategy.
For example, when you write:
struct ProfileView: View {
    let name: String

    var body: some View {
        Text("Hello, \(name)")
    }
}
SwiftUI tracks that this view depends on the `name` property. When `name` changes, SwiftUI knows to update the view; when it doesn’t, SwiftUI skips redrawing the view entirely. This is efficient and happens automatically.
2. TCA’s Required Diffing
But in TCA, we have another layer of diffing happening. Because the entire state struct is passed everywhere, TCA needs its own diffing mechanism to determine what actually changed:
// Inside TCA's implementation (simplified)
func send(_ action: Action) {
    let oldState = self.state
    self.state = self.reducer(self.state, action)
    // TCA has to diff the entire state to see what changed
    if oldState != self.state {
        // Notify observers about the new state
        self.observers.forEach { $0(self.state) }
    }
}
This is why TCA requires all state to conform to `Equatable` – it needs to compare old and new states. For large state structs, this comparison can be expensive.
The Compounding Performance Hit
So now we have two expensive operations happening for every state change:
- TCA diffing the entire state struct to determine if anything changed
- SwiftUI diffing its dependencies to determine which views need updating
And because the entire state flows down to every view, these costs multiply across your view hierarchy. In a large application, this creates a cascade of diffing operations:
Action → State Change → TCA Diffing → SwiftUI Diffing in View 1 → SwiftUI Diffing in View 2 → ... → SwiftUI Diffing in View N
Each step adds overhead, and this happens for every action processed by the system.
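The send-reduce-diff loop above can be made concrete as a runnable toy store – a deliberately simplified sketch, not TCA’s actual implementation: every `send` runs the reducer, diffs the whole state with `==`, and notifies observers only on change.

```swift
// Minimal toy store mirroring the send-reduce-diff loop (not TCA's real code).
struct State: Equatable {
    var count = 0
    var name = "anna"
}

enum Action {
    case increment
    case noop  // changes nothing, but still pays the diffing cost
}

final class ToyStore {
    private(set) var state = State()
    private var observers: [(State) -> Void] = []
    private(set) var diffCount = 0  // how many whole-state comparisons ran

    func subscribe(_ observer: @escaping (State) -> Void) {
        observers.append(observer)
    }

    func send(_ action: Action) {
        let oldState = state
        switch action {
        case .increment: state.count += 1
        case .noop: break
        }
        diffCount += 1           // every action diffs the entire state...
        if oldState != state {   // ...even when nothing changed
            observers.forEach { $0(state) }
        }
    }
}

let store = ToyStore()
var notifications = 0
store.subscribe { _ in notifications += 1 }

store.send(.increment)  // state changes: observers notified
store.send(.noop)       // no change: diffed anyway, no notification

print(store.diffCount)    // 2
print(notifications)      // 1
```

Note that the no-op action still triggered a full-state comparison – multiply that by the real size of an app’s state struct and the cost Zabłocki measured starts to make sense.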
The ViewStore Overhead
ViewStore helps with the SwiftUI diffing part by limiting what each view observes, but it adds its own overhead:
WithViewStore(store, observe: { state in
    // This projection function runs on EVERY state change
    return SomeProjectedState(
        name: state.profile.name,
        isActive: state.settings.isActive
    )
}) { viewStore in
    // View body using viewStore
}
That projection function in `observe:` runs on every state update, even for updates completely unrelated to what it’s extracting. In a large application with dozens or hundreds of ViewStores, this creates significant CPU work just to determine that most views don’t need to update.
The Action Processing Bottleneck
Every state change in TCA must go through an action, which creates another bottleneck:
enum AppAction {
    case settings(SettingsAction)
    case profile(ProfileAction)
    case feed(FeedAction)
    // ... potentially hundreds more actions
}
Each action goes through the entire reducer hierarchy, even if it only affects a small part of your state. This serializes all state updates through a single pipeline, which becomes a performance bottleneck as your application grows.
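A toy illustration of this (hypothetical names, and far simpler than TCA’s real reducer composition): a combined reducer calls each child reducer for every action, and children simply ignore actions that aren’t theirs – but they’re still invoked.

```swift
// Toy reducer composition (not TCA's API): every action visits every child.
struct AppState {
    var settingsToggles = 0
    var feedItems = 0
}

enum AppAction { case settings, feed }

// Each child reducer records that it was invoked, even for foreign actions.
var invocations: [String] = []

func settingsReducer(_ state: inout AppState, _ action: AppAction) {
    invocations.append("settings")
    if case .settings = action { state.settingsToggles += 1 }
}

func feedReducer(_ state: inout AppState, _ action: AppAction) {
    invocations.append("feed")
    if case .feed = action { state.feedItems += 1 }
}

// The combined reducer runs every child for every action.
func appReducer(_ state: inout AppState, _ action: AppAction) {
    settingsReducer(&state, action)
    feedReducer(&state, action)
}

var state = AppState()
appReducer(&state, .feed)     // only feed state changes...
print(invocations)            // ...but both reducers ran: ["settings", "feed"]
print(state.feedItems)        // 1
print(state.settingsToggles)  // 0
```

With two reducers this is negligible; with hundreds of features composed into one pipeline, the per-action traversal cost adds up.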
As Zabłocki noted in his talk, even a no-op action (one that doesn’t change state) had a cost of around 6ms in a large application. At 60fps, you only have about 16ms per frame, so spending 6ms just to process an action that does nothing is a huge performance hit.
The Struct Copying Overhead
Remember that Swift structs are value types, so when any property changes, the entire struct gets copied. In a large state tree, this means:
// When changing one tiny property
state.deeplyNested.verySpecific.someFlag = true
// A new copy is created for:
// - someFlag's parent struct
// - verySpecific's parent struct
// - deeplyNested's parent struct
// - The entire AppState
Each level of nesting requires a new struct to be created. For deeply nested state, this creates a cascade of allocations and copies that adds significant overhead.
Real-World Impact
These design decisions create a compounding effect. As your application grows:
- Larger state structs = more expensive diffing and copying
- More views = more places where the entire state flows
- More actions = more processing through the central bottleneck
- More ViewStores = more projection functions running on every update
This is why TCA can work beautifully for small to medium apps but hit a performance wall in larger applications. The architectural decisions that make TCA clean and predictable for small apps create significant performance challenges at scale.
The Elegant Solution: The Observation Feature
Apple’s introduction of the Observation framework in Swift 5.9 feels almost like a deus ex machina for TCA’s performance woes. This new feature provides fine-grained state tracking that elegantly solves many of the problems we’ve discussed.
While Observation elegantly solves many of TCA’s performance issues, it’s worth noting that TCA got lucky here. The TCA framework was designed around capabilities that Swift simply didn’t have at the time, betting that the language would eventually evolve in that direction.
Observation-based tracking works beautifully, but it wasn’t available when TCA was designed and adopted by many developers.
What Can We Learn From This?
So what are the takeaways for us as developers thinking about architecture?
Consider scale from the beginning: Architectural patterns that work beautifully for small apps might become performance bottlenecks at scale.
Be pragmatic about principles: Sometimes you need to bend the rules (like the single store principle) to make things work in the real world.
Measure, don’t assume: Performance issues often come from unexpected places. Zabłocki built custom tools to identify exactly where the bottlenecks were.
Design with the platform: Architectures should work with the grain of the platform, not against it. TCA’s Redux-inspired approach sometimes fights against SwiftUI’s natural patterns.
Anticipate evolution: The best architectures can adapt as the platform evolves, incorporating new capabilities (like Observation) without requiring complete rewrites.
Final Thoughts
To be fair, the TCA team hasn’t been standing still. Recent versions have introduced significant improvements like the new reducer protocol, and they’re actively working on Observation framework integration that should address many of the performance challenges discussed here. These improvements show that the framework is evolving – though the fundamental architectural decisions we’ve examined will continue to shape how TCA performs at scale.
I find it fascinating how architectural decisions made with the best intentions can have such significant performance implications. TCA isn’t “bad” – it’s just making specific trade-offs that prioritize certain qualities (predictability, testability) over others (raw performance).
The question for us as developers isn’t “Is TCA good or bad?” but rather “What trade-offs am I willing to make for my specific application?” Understanding these trade-offs helps us make more informed decisions about the architectures we adopt or design.
What about you? Have you used TCA or other architectures that made interesting performance trade-offs? I’d love to hear about your experiences in the comments!
> Recent versions have introduced significant improvements like the new reducer protocol, and they’re actively working on Observation framework integration
The reducer protocol was announced in October 2022: https://www.pointfree.co/blog/posts/81-announcing-the-reducer-protocol and Observation in January 2024: https://www.pointfree.co/blog/posts/130-observation-comes-to-the-composable-architecture
WithViewStore has been deprecated since 1.7: https://pointfreeco.github.io/swift-composable-architecture/main/documentation/composablearchitecture/withviewstore/
> Every view in your hierarchy, no matter how deeply nested, receives the entire application state struct. This is by design in TCA – it ensures every component has access to the full state.
that is so wrong have you heard of scoping ?
I appreciate your comment, but I think you might be misunderstanding how TCA’s architecture actually works under the hood.
Yes, TCA has scoping – but scoping doesn’t change the fundamental dependency flow in the architecture. Let me explain the difference:
What Actually Happens in TCA
1. Dependency Flow: The `Store` is indeed passed down through the entire view hierarchy as a dependency. Look at how views are structured in TCA:
struct RootView: View {
    let store: Store

    var body: some View {
        HomeView(store: store) // Passing the entire store down
    }
}

struct HomeView: View { // Gets the ENTIRE store as a dependency
    let store: Store

    var body: some View {
        WithViewStore(store, observe: { state in
            // Only observe the specific parts we need
            return state.userProfile
        }) { viewStore in
            Text("Hello, \(viewStore.name)")
            ProfileView(store: store) // Continues passing the entire store down
        }
    }
}

struct ProfileView: View { // Again, the ENTIRE store
    let store: Store

    var body: some View {
        // More UI and potentially more views that receive the store
    }
}
The store containing the entire global state is passed down through initializers at each level. This is not something I made up – it’s directly from TCA’s design and standard usage pattern.
2. ViewStore as an Optimization: The `WithViewStore` wrapper is precisely the workaround TCA introduced to deal with the performance problems of having the entire state available everywhere.
The confusion comes from conflating two different things:
– The actual dependency (the Store with full state that flows through initializers)
– The optimization layer (ViewStore that projects specific parts for SwiftUI updates)
Scoping Doesn’t Change the Dependency Flow
When you use `scope()` to create a child store:
let profileStore = store.scope(
    state: { $0.profile },
    action: { AppAction.profile($0) }
)
You’re creating a lens into the parent store. But the parent store (with the full state) still exists, and the child store is still connected to it. Actions sent to the child store flow back to the parent store and go through the entire reducer hierarchy.
Scoping is, in effect, working around the view-update system: it’s an optimization layer on top of an architecture that fundamentally passes the entire state tree through the system.
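Here’s a stripped-down, hypothetical sketch of the lens idea (not TCA’s real implementation): the “child store” just projects the parent’s state down and lifts its actions back up, so the parent reducer still processes everything.

```swift
// Hypothetical lens-style scoping (not TCA's actual code).
struct ProfileState: Equatable { var name = "anna" }
struct AppState: Equatable {
    var profile = ProfileState()
    var feedCount = 0
}

enum ProfileAction { case rename(String) }
enum AppAction { case profile(ProfileAction) }

final class ParentStore {
    private(set) var state = AppState()
    private(set) var actionsProcessed = 0

    func send(_ action: AppAction) {
        actionsProcessed += 1  // every action, scoped or not, lands here
        switch action {
        case .profile(.rename(let name)):
            state.profile.name = name
        }
    }
}

// A scoped "child store" is just a view on the parent: it projects state
// down and embeds actions back up. The parent still owns everything.
struct ScopedStore {
    let parent: ParentStore
    var state: ProfileState { parent.state.profile }

    func send(_ action: ProfileAction) {
        parent.send(.profile(action))  // flows back through the parent
    }
}

let parent = ParentStore()
let profileStore = ScopedStore(parent: parent)

profileStore.send(.rename("ben"))
print(profileStore.state.name)   // "ben"
print(parent.actionsProcessed)   // 1: the parent handled the scoped action
```

The child has no state of its own – it’s a lens, which is precisely why scoping changes what a view observes but not where the work happens.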
Why This Matters
This distinction is important because the performance issues reported by many developers stem from this exact architectural decision. Even with scoping, TCA still:
1. Processes all actions through the entire reducer hierarchy
2. Maintains a single global state struct
3. Requires diffing the entire state to detect changes
Scoping helps with view updates, but it doesn’t change these fundamental aspects of the architecture that create performance bottlenecks at scale.
Not a Criticism, Just Architecture Analysis
This isn’t about criticizing TCA – it’s about understanding the trade-offs in its design. Every architecture makes trade-offs, and TCA prioritizes predictability and testability over raw performance. That’s a valid choice, but one developers should understand when adopting it.
A lot of this article is based on things that have long since been updated in TCA or things that just aren’t true and never have been.
It is never the case that the “entire app state” is passed to every view and every child view.
Nor is it true that the state has to hold everything in it. It is quite common (and good practice) for views (reducers/state) to get state from dependencies – whether that’s an in-memory cache, or Core Data, or the network, etc. It doesn’t need to be stored globally for the whole stack to access. TBF that was an initial misunderstanding I had when I first used TCA several years ago, before I used TCA in anything more than a tutorial-sized app.
Can you give us better architecture implementation?
My personal preference is vanilla SwiftUI: use MVVM with the Repository pattern to separate out logic and get testing capabilities.
For larger, more complex flows, Coordinators are great because they can take care of the navigation logic. This enables things like programmatic navigation and deep linking.
I prefer a simple approach when starting a project and then adapt and introduce more design patterns as the app grows.
These topics are quite vast, and I will for sure write and make videos about them (including more complex demo projects).
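As a rough sketch of the kind of setup I mean (all names illustrative, and the SwiftUI view layer omitted so the logic stays self-contained): a repository protocol hides the data source, and the view model exposes only what its view needs.

```swift
// Illustrative MVVM + Repository split; the SwiftUI view layer is omitted.
protocol UserRepository {
    func fetchUserName() -> String
}

// A production type might wrap the network or Core Data; tests use a stub.
struct StubUserRepository: UserRepository {
    func fetchUserName() -> String { "anna" }
}

final class ProfileViewModel {
    private let repository: UserRepository
    private(set) var greeting = ""

    init(repository: UserRepository) {
        self.repository = repository
    }

    func load() {
        // Only this view model's own state updates; no global store involved.
        greeting = "Hello, \(repository.fetchUserName())"
    }
}

let viewModel = ProfileViewModel(repository: StubUserRepository())
viewModel.load()
print(viewModel.greeting)  // "Hello, anna"
```

Because the repository is injected behind a protocol, the view model is trivially testable, and each screen only ever observes its own state.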
Nice insights wow
Good article, and ties in with my experience. I used TCA in a major project at a former employer, and on top of a significant—and constantly changing—learning curve, very little documentation from third parties, Point Free videos that spent far too long talking about *how* they built rather than how to *use* it, we hit major scaling issues with large state maxing out stack memory. This caused serious out of memory errors that we struggled to fix. What documentation there was, was contradictory, and the constant rewrites and alterations on ‘best practice’ made it hard to know what was the correct way to use it. Going forward, I have vowed never to work for a company using it again, as it felt like it was fighting SwiftUI and Apple standards, rather than working with them. Technically brilliant, but for all its good points we found one or more downsides that sucked up any speed advantage we gained.
I’ve used TCA on a mid-to-large project, and I have found a lot of problems:
– Performance Issues
– When the compiler doesn’t understand something, the error is not meaningful
– I hate all the 3rd party dependencies that it introduces
– Developer learning curve is really long
– Additional compile time that adds up over time
– Not really straightforward rules (i.e. when to share a reducer vs when to add a child reducer)
– Child reducers introduced a lot of extra complexity
In my experience, I’d never pick TCA again, I’d go with vanilla MVVM and only add complexity as needed