TypeScript at Scale

Managing large TypeScript codebases without drowning in complexity — modular architecture, state management, and type system patterns that actually work.

TypeScript · Frontend · Architecture

The Problem: When Your Codebase Fights Back

You start a TypeScript project and it’s a joy. Types catch bugs, autocomplete saves time, refactoring is safe. Then the codebase hits 200+ components, three teams are committing daily, and the state management layer has become a dependency graph that nobody fully understands. Build times creep up. A change in one feature breaks something unrelated. New developers take weeks to become productive.

I’ve worked on TypeScript codebases that stayed maintainable at scale and ones that didn’t. The difference came down to a few architectural decisions made early — and the discipline to enforce them. This post covers those decisions.

Modular Architecture: The Foundation

Feature-Based Organization

Stop organizing by technical layer (components/, services/, utils/). It forces developers to scatter a single feature across half a dozen directories. Instead, organize by business feature:

src/
├── features/
│   ├── authentication/
│   │   ├── components/
│   │   ├── hooks/
│   │   ├── services/
│   │   ├── types/
│   │   ├── utils/
│   │   └── index.ts
│   ├── user-management/
│   │   ├── components/
│   │   ├── hooks/
│   │   ├── services/
│   │   ├── types/
│   │   ├── utils/
│   │   └── index.ts
│   └── reporting/
│       ├── components/
│       ├── hooks/
│       ├── services/
│       ├── types/
│       ├── utils/
│       └── index.ts
├── shared/
│   ├── components/
│   ├── hooks/
│   ├── services/
│   ├── types/
│   └── utils/
└── core/
    ├── api/
    ├── auth/
    ├── config/
    ├── router/
    └── store/

The win here is ownership. Each feature contains everything it needs. A team owns authentication/ end to end. A new developer can understand the feature by reading one directory. And when you need to delete a feature (it happens), you delete a folder instead of hunting across the entire tree.

Explicit Module Boundaries

Barrel files (index.ts) are how you enforce boundaries. Every feature exports its public API explicitly, and nothing else leaves the module:

// features/authentication/index.ts
export { LoginForm } from './components/LoginForm';
export { useAuth } from './hooks/useAuth';
export type { User, Credentials } from './types';
export { AuthProvider } from './context/AuthContext';

// Do not export internal implementation details

This is critical: internal components stay private. You can refactor the guts of authentication/ without breaking consumers, because they only depend on what’s exported. Without this discipline, every file becomes a de facto public API and refactoring becomes archaeology.
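To make the boundary mechanical rather than aspirational, you can ban deep imports past the barrel with ESLint's built-in no-restricted-imports rule. A sketch — the glob patterns are assumptions and need adjusting to your path aliases:

```javascript
// .eslintrc.js — sketch: forbid reaching past a feature's barrel file.
// The path patterns below are assumptions; adapt them to your alias setup.
module.exports = {
  rules: {
    'no-restricted-imports': [
      'error',
      {
        patterns: [
          {
            // '…/features/authentication' is fine;
            // '…/features/authentication/components/LoginForm' is not.
            group: ['**/features/*/*'],
            message:
              'Import from the feature barrel (features/<name>), not its internals.',
          },
        ],
      },
    ],
  },
};
```

With this in place, "private by convention" becomes "private by CI": a deep import fails the lint step instead of quietly becoming a dependency.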

A related discipline that sits inside the authentication/ feature and never leaks out: token storage. If useAuth returns an access token and a handful of components shove it into localStorage “just for convenience”, any XSS vulnerability anywhere in the app reads it — there is no such thing as a “small” XSS when tokens live in localStorage. Keep access tokens in memory inside the feature module (or behind a Backend-for-Frontend with HttpOnly; Secure; SameSite=Lax cookies) and never export the raw token from the barrel file. Export useAuth(), not getAccessToken(). I cover the server side of this pattern in Modern Authentication Patterns with Go and TypeScript.
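A minimal sketch of what "keep the token in memory" can look like inside the feature. The file and function names are hypothetical; a real app would wire this into useAuth and the API client:

```typescript
// features/authentication/token-store.ts (hypothetical) — the token lives in a
// module-private variable. Nothing here re-exports the raw string, and the
// barrel file never touches it.
let accessToken: string | null = null;

export function setAccessToken(token: string | null): void {
  accessToken = token;
}

// Consumers get request headers, never the token itself.
export function buildAuthHeaders(): Headers {
  const headers = new Headers();
  if (accessToken !== null) {
    headers.set('Authorization', `Bearer ${accessToken}`);
  }
  return headers;
}
```

On logout the token vanishes with the module state; surviving a page reload is the job of the HttpOnly refresh cookie, not JavaScript-readable storage.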

Dependency Management

Barrel files aren’t enough on their own — you also need to control which modules can depend on which. I define the intended dependency graph explicitly:

// core/dependency-graph.ts
export const dependencyGraph = {
  'features/authentication': ['core/api', 'core/auth', 'shared/components'],
  'features/user-management': ['core/api', 'features/authentication', 'shared/components'],
  'features/reporting': ['core/api', 'features/user-management', 'shared/components'],
};

You can enforce this with ESLint via eslint-plugin-import's import/no-restricted-paths rule:

// .eslintrc.js
module.exports = {
  // ...other config
  rules: {
    'import/no-restricted-paths': [
      'error',
      {
        zones: [
          {
            target: './src/features/authentication',
            from: './src/features/user-management',
            message: 'Authentication cannot depend on user-management',
          },
          {
            target: './src/features/authentication',
            from: './src/features/reporting',
            message: 'Authentication cannot depend on reporting',
          },
          // Add more restrictions based on your dependency graph
        ],
      },
    ],
  },
};

Type System Patterns That Pay Off

TypeScript’s type system is the single best tool you have for managing complexity at scale. But most teams barely scratch the surface — they type their props and call it a day. Here are the patterns I’ve seen deliver the most value.

Domain-Driven Type Design

Your types should model your business domain, not mirror API responses or UI components. This is the most common mistake I see:

// Bad: Types based on API structure
interface UserApiResponse {
  id: number;
  first_name: string;
  last_name: string;
  email: string;
  role_id: number;
  created_at: string;
  updated_at: string;
}

// Good: Types based on domain concepts
interface User {
  id: string;
  name: {
    first: string;
    last: string;
  };
  email: string;
  role: UserRole;
  timestamps: {
    created: Date;
    updated: Date;
  };
}

enum UserRole {
  Admin = 'ADMIN',
  Manager = 'MANAGER',
  User = 'USER',
}

The domain type uses proper nested structures, an enum for roles, and Date objects instead of raw strings. When the backend team renames first_name to firstName, you change one transformer function instead of touching fifty components.

Type Transformation Layers

This is the glue between your API layer and your domain types. I create explicit transformer functions at the boundary:

// api/transformers/user.ts
import { UserApiResponse } from '../types';
import { User, UserRole } from '../../features/user-management/types';

export function transformUserFromApi(response: UserApiResponse): User {
  return {
    id: String(response.id),
    name: {
      first: response.first_name,
      last: response.last_name,
    },
    email: response.email,
    role: mapRoleIdToUserRole(response.role_id),
    timestamps: {
      created: new Date(response.created_at),
      updated: new Date(response.updated_at),
    },
  };
}

function mapRoleIdToUserRole(roleId: number): UserRole {
  switch (roleId) {
    case 1:
      return UserRole.Admin;
    case 2:
      return UserRole.Manager;
    case 3:
      return UserRole.User;
    default:
      // Fail closed on unknown roles. A new backend role must be an explicit
      // code change, never a silent downgrade. And never rely on this client
      // mapping for authorization — the server is the source of truth.
      throw new Error(`Unknown role_id from API: ${roleId}`);
  }
}

The key insight: transformation logic lives in one place. When the API changes (and it will), you update one function and the compiler tells you if you missed anything. Without this layer, API shape assumptions leak into every component and hook in your codebase.

Progressive Type Refinement

Type guards let you narrow types based on runtime checks, and they’re one of TypeScript’s most underused features:

// Type guard for checking if a user is an admin
function isAdmin(user: User): user is User & { role: UserRole.Admin } {
  return user.role === UserRole.Admin;
}

// Usage
function renderUserActions(user: User) {
  if (isAdmin(user)) {
    // TypeScript knows user.role is UserRole.Admin here
    return <AdminActions user={user} />;
  }
  
  return <StandardActions user={user} />;
}

After the isAdmin check, TypeScript knows user.role is UserRole.Admin. No type casting, no as assertions. This is vastly preferable to sprinkling user as AdminUser throughout your codebase — those assertions lie to the compiler and will bite you.

Branded Types for Type Safety

This is my favorite advanced pattern. Branded types prevent you from accidentally passing a UserId where an OrderId is expected, even though both are strings at runtime:

// Branded type for user IDs
type UserId = string & { readonly __brand: unique symbol };

// Branded type for order IDs
type OrderId = string & { readonly __brand: unique symbol };

// Create branded types from raw values — validate at the construction site
function createUserId(id: string): UserId {
  if (!/^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i.test(id)) {
    throw new Error('createUserId: expected UUID');
  }
  return id as UserId;
}

function createOrderId(id: string): OrderId {
  if (!/^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i.test(id)) {
    throw new Error('createOrderId: expected UUID');
  }
  return id as OrderId;
}

// Usage
declare function getUserById(id: UserId): User; // implementation elided

// This will fail at compile time
getUserById(createOrderId('123')); // Error: Argument of type 'OrderId' is not assignable to parameter of type 'UserId'

The return id as UserId cast inside createUserId looks like a hole in the type system, and it is — but a deliberate one. Branded types are a pure compile-time construct; there’s no runtime marker to attach, so the only way to mint a branded value is a type assertion. The discipline is to confine those assertions to a small set of construction sites (this factory function, a database row decoder, an API response parser) and treat them as the validation boundary. Inside those functions, you prove the invariant (valid UUID, non-empty string, parsed from a trusted source) before the cast — otherwise the branded type is a lie. A factory that just casts without checking is worse than no brand at all, because it gives you false confidence. Outside the factory, the rest of the codebase consumes UserId and OrderId as opaque types and the compiler enforces that they never cross.

The rule: as UserId at the factory is fine. as UserId at the call site is a bug waiting to ship. Lint for it, or put the branded type factories in a file that’s the only place as casts on branded types are allowed.
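One way to lint for it, assuming @typescript-eslint/parser — the AST selector syntax is the part to verify against your own setup, and the factory file path is hypothetical:

```javascript
// .eslintrc.js — sketch: flag `as UserId` / `as OrderId` outside the factory file.
module.exports = {
  overrides: [
    {
      files: ['src/**/*.ts', 'src/**/*.tsx'],
      // Hypothetical location of the only file allowed to mint branded values.
      excludedFiles: ['src/shared/types/branded.ts'],
      rules: {
        'no-restricted-syntax': [
          'error',
          {
            selector:
              "TSAsExpression > TSTypeReference > Identifier[name=/^(UserId|OrderId)$/]",
            message:
              'Mint branded IDs through their factory functions, not ad-hoc casts.',
          },
        ],
      },
    },
  ],
};
```

The brand names are hard-coded here; a naming convention (every branded type ends in `Id`, say) lets you collapse the regex to one pattern.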

I’ve seen this catch real bugs in production codebases — someone passes a customer ID to a function expecting a product ID, and without branded types, TypeScript is perfectly happy to let that through. The overhead is minimal and the safety is worth it.

State Management at Scale

State management is where large TypeScript apps most often go wrong. The mistake is putting everything in global state. Most state is local. Here’s the hierarchy I use:

Hierarchical State Management

The rule is simple: state lives at the lowest level that needs it.

  1. Application State - Global state shared across the application
  2. Feature State - State specific to a feature
  3. Component State - Local state within a component

// Application state (using Redux Toolkit)
import { configureStore } from '@reduxjs/toolkit';
import authReducer from '../features/authentication/slice';
import userReducer from '../features/user-management/slice';

export const store = configureStore({
  reducer: {
    auth: authReducer,
    users: userReducer,
  },
});

export type RootState = ReturnType<typeof store.getState>;
export type AppDispatch = typeof store.dispatch;

// Feature state (using Redux Toolkit)
import { createSlice, PayloadAction } from '@reduxjs/toolkit';
import { User } from './types';

interface UserState {
  users: User[];
  selectedUserId: string | null;
  isLoading: boolean;
  error: string | null;
}

const initialState: UserState = {
  users: [],
  selectedUserId: null,
  isLoading: false,
  error: null,
};

const userSlice = createSlice({
  name: 'users',
  initialState,
  reducers: {
    setUsers: (state, action: PayloadAction<User[]>) => {
      state.users = action.payload;
    },
    selectUser: (state, action: PayloadAction<string>) => {
      state.selectedUserId = action.payload;
    },
    // Other reducers
  },
});

export const { setUsers, selectUser } = userSlice.actions;
export default userSlice.reducer;

Typed hooks are what make this usable from components. Define useAppSelector and useAppDispatch once so every consumer gets full type inference without importing RootState everywhere:

// core/hooks.ts
import { useDispatch, useSelector, type TypedUseSelectorHook } from 'react-redux';
import type { RootState, AppDispatch } from './store';

export const useAppDispatch: () => AppDispatch = useDispatch;
export const useAppSelector: TypedUseSelectorHook<RootState> = useSelector;

// features/user-management/UserList.tsx
import { useAppSelector, useAppDispatch } from '../../core/hooks';
import { selectUser } from './slice';

export function UserList() {
  const users = useAppSelector((state) => state.users.users);
  const selectedUserId = useAppSelector((state) => state.users.selectedUserId);
  const dispatch = useAppDispatch();

  return (
    <ul>
      {users.map((u) => (
        <li
          key={u.id}
          onClick={() => dispatch(selectUser(u.id))}
          aria-selected={u.id === selectedUserId}
        >
          {u.name.first} {u.name.last}
        </li>
      ))}
    </ul>
  );
}

The selector function is typed against RootState automatically — hover state.users.users and you get the full shape. This is the payoff for the RootState export in the store file above.

// Component state (using React hooks)
import { useState } from 'react';

function UserForm() {
  const [formData, setFormData] = useState({
    firstName: '',
    lastName: '',
    email: '',
  });
  
  // Component logic
}

Most state is component-local — form data, toggle states, animation flags. Feature state (selected user, loading flags) belongs in feature-scoped stores. Only truly global state (auth, theme, feature flags) goes in the root store. When teams dump everything into Redux, they get a monolithic state object that every component subscribes to and every action potentially affects.

State Selectors with Memoization

Selectors are the right way to derive data from state. Never compute derived data in components:

// features/user-management/selectors.ts
import { createSelector } from '@reduxjs/toolkit';
import { RootState } from '../../core/store';
import { User, UserRole } from './types';

export const selectUsers = (state: RootState) => state.users.users;
export const selectSelectedUserId = (state: RootState) => state.users.selectedUserId;

export const selectSelectedUser = createSelector(
  [selectUsers, selectSelectedUserId],
  (users, selectedUserId): User | undefined => {
    if (!selectedUserId) return undefined;
    return users.find(user => user.id === selectedUserId);
  }
);

export const selectAdminUsers = createSelector(
  [selectUsers],
  (users): User[] => {
    return users.filter(user => user.role === UserRole.Admin);
  }
);

createSelector from RTK memoizes automatically — if selectUsers hasn’t changed, selectAdminUsers won’t recompute. This matters when you have lists of thousands of items and multiple components subscribing to derived data. Selectors are also pure functions, which makes them trivial to unit test.

State Machines for Complex Workflows

For anything with more than three states and non-trivial transitions, I reach for a state machine. Boolean flags like isLoading, isError, isSubmitting are the road to impossible states (isLoading && isError — what does that even mean?). XState makes valid states explicit:

// features/order-processing/state-machine.ts
import { setup, assign } from 'xstate';

const orderMachine = setup({
  types: {
    // Keep machine context minimal: it tracks only the rejection reason.
    // The Order entity itself stays in the feature's store.
    context: {} as { rejectionReason?: string },
    events: {} as
      | { type: 'SUBMIT' }
      | { type: 'APPROVE' }
      | { type: 'REJECT'; reason: string }
      | { type: 'SHIP' }
      | { type: 'DELIVER' }
      | { type: 'CANCEL' },
  },
  actions: {
    setRejectionReason: assign({
      rejectionReason: ({ event }) =>
        event.type === 'REJECT' ? event.reason : undefined,
    }),
  },
}).createMachine({
  id: 'order',
  initial: 'draft',
  context: {},
  states: {
    draft: {
      on: { SUBMIT: { target: 'pending' } },
    },
    pending: {
      on: {
        APPROVE: { target: 'approved' },
        REJECT: { target: 'rejected', actions: 'setRejectionReason' },
      },
    },
    approved: {
      on: {
        SHIP: { target: 'shipped' },
        CANCEL: { target: 'cancelled' },
      },
    },
    shipped: {
      on: { DELIVER: { target: 'delivered' } },
    },
    delivered: { type: 'final' },
    rejected: {
      on: { SUBMIT: { target: 'pending' } },
    },
    cancelled: { type: 'final' },
  },
});

The machine definition makes it impossible to ship a cancelled order or approve a delivered one. You can look at the state chart and immediately see every valid transition. Compare that to a switch statement with boolean flags scattered across three hooks — the state machine wins every time.
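The guarantee doesn't depend on the library, either. The same "absent transition = impossible transition" property can be sketched as a plain transition table, which is a useful mental model for what XState enforces:

```typescript
// Toy sketch of the order machine as a transition table: entries that don't
// exist are transitions that can't happen. Not a replacement for XState's
// actions/guards — just the core invariant, stated without the library.
type OrderState =
  | 'draft' | 'pending' | 'approved' | 'shipped'
  | 'delivered' | 'rejected' | 'cancelled';
type OrderEvent = 'SUBMIT' | 'APPROVE' | 'REJECT' | 'SHIP' | 'DELIVER' | 'CANCEL';

const transitions: Partial<Record<OrderState, Partial<Record<OrderEvent, OrderState>>>> = {
  draft: { SUBMIT: 'pending' },
  pending: { APPROVE: 'approved', REJECT: 'rejected' },
  approved: { SHIP: 'shipped', CANCEL: 'cancelled' },
  shipped: { DELIVER: 'delivered' },
  rejected: { SUBMIT: 'pending' },
  // 'delivered' and 'cancelled' have no entries: they are final.
};

// Returns the next state, or null when the event is invalid in this state.
export function next(state: OrderState, event: OrderEvent): OrderState | null {
  return transitions[state]?.[event] ?? null;
}
```

With boolean flags, shipping a cancelled order is a missing `if`; with a table (or a machine), it's a lookup that can't succeed.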

Performance Optimization

Architecture isn’t just about code organization — it directly affects performance. The feature-based structure we set up earlier makes these optimizations almost trivial.

Code Splitting

With feature-based organization, code splitting is just lazy-loading at the route level:

// App.tsx
import React, { lazy, Suspense } from 'react';
import { BrowserRouter, Routes, Route } from 'react-router-dom';
import { Loading } from './shared/components';

// Lazy-loaded features. React.lazy expects the module's default export to be a
// component, so each feature barrel also default-exports its route component
// (or map a named export: import(...).then((m) => ({ default: m.Routes }))).
const Authentication = lazy(() => import('./features/authentication'));
const UserManagement = lazy(() => import('./features/user-management'));
const Reporting = lazy(() => import('./features/reporting'));

function App() {
  return (
    <BrowserRouter>
      <Suspense fallback={<Loading />}>
        <Routes>
          <Route path="/login" element={<Authentication />} />
          <Route path="/users/*" element={<UserManagement />} />
          <Route path="/reports/*" element={<Reporting />} />
        </Routes>
      </Suspense>
    </BrowserRouter>
  );
}

Because each feature is a self-contained module with a single entry point, the bundler can split cleanly along feature boundaries. No circular cross-feature imports means no unexpected chunks pulling in half the app.

Virtualization for Large Lists

If you’re rendering more than ~100 items in a list, you need virtualization. Full stop. Here’s the pattern with react-window:

// features/user-management/components/UserList.tsx
import React from 'react';
import { FixedSizeList } from 'react-window';
import { User } from '../types';

interface UserListProps {
  users: User[];
  onSelectUser: (userId: string) => void;
}

export function UserList({ users, onSelectUser }: UserListProps) {
  const Row = ({ index, style }: { index: number; style: React.CSSProperties }) => {
    const user = users[index];
    return (
      <div
        style={style}
        onClick={() => onSelectUser(user.id)}
        className="user-list-item"
      >
        <div className="user-name">{user.name.first} {user.name.last}</div>
        <div className="user-email">{user.email}</div>
        <div className="user-role">{user.role}</div>
      </div>
    );
  };

  return (
    <FixedSizeList
      height={500}
      width="100%"
      itemCount={users.length}
      itemSize={60}
    >
      {Row}
    </FixedSizeList>
  );
}

This renders only the visible rows. A list of 10,000 users creates ~10 DOM nodes instead of 10,000. I’ve seen this single change take a page from unusable to instant.

Memoization for Expensive Calculations

useMemo is not premature optimization when you’re filtering or aggregating large datasets on every render:

// features/reporting/hooks/useReportData.ts
import { useMemo } from 'react';
import { ReportData, ReportFilters } from '../types';

export function useReportData(data: ReportData[], filters: ReportFilters) {
  const filteredData = useMemo(() => {
    return data.filter(item => {
      if (filters.startDate && new Date(item.date) < filters.startDate) {
        return false;
      }
      
      if (filters.endDate && new Date(item.date) > filters.endDate) {
        return false;
      }
      
      if (filters.category && item.category !== filters.category) {
        return false;
      }
      
      return true;
    });
  }, [data, filters.startDate, filters.endDate, filters.category]);
  
  const summary = useMemo(() => {
    return {
      total: filteredData.reduce((sum, item) => sum + item.value, 0),
      average: filteredData.length > 0
        ? filteredData.reduce((sum, item) => sum + item.value, 0) / filteredData.length
        : 0,
      count: filteredData.length,
    };
  }, [filteredData]);
  
  return { filteredData, summary };
}

The dependency arrays are explicit: filteredData only recomputes when data or the filter values change, and summary only recomputes when filteredData changes. For a reporting page with thousands of rows and complex filters, this is the difference between a responsive UI and a janky one.

One warning that I can’t emphasize enough: stale-closure bugs in dependency arrays are the single most common React bug I see in code review. If you reference a variable inside the useMemo callback but forget to list it in the dependency array, the callback captures the value from the first render and never updates. The memoized result is silently wrong, and because the output is often “close enough” (yesterday’s filter shape applied to today’s data) nobody notices until a user files a ticket.

Turn on react-hooks/exhaustive-deps as an error, not a warning, in your ESLint config, and don’t let PRs ship with the lint disabled at the line level. If you find yourself wanting to disable it, the real fix is almost always to pull the stale value out into state or a ref, not to lie to the linter. The same rule applies to useEffect and useCallback — every hook with a dependency array is a stale-closure trap if you’re sloppy.
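The config change itself is small — eslint-plugin-react-hooks ships the rule; the only decision is severity:

```javascript
// .eslintrc.js — sketch; assumes eslint-plugin-react-hooks is installed.
module.exports = {
  plugins: ['react-hooks'],
  rules: {
    'react-hooks/rules-of-hooks': 'error',
    // Common presets set this to 'warn', and warnings get ignored.
    // Promote it so a stale dependency array fails CI.
    'react-hooks/exhaustive-deps': 'error',
  },
};
```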

Testing Strategies

Testing at scale isn’t about coverage percentages — it’s about testing the right things at the right level. I use three layers.

Type Testing with tsd

Your types are part of your public API. Test them the same way you test your functions:

// types/user.test-d.ts
import { expectType } from 'tsd';
import { User, UserRole, isAdmin } from './user';

// Test the User type
const user: User = {
  id: '123',
  name: {
    first: 'John',
    last: 'Doe',
  },
  email: 'john@example.com',
  role: UserRole.User,
  timestamps: {
    created: new Date(),
    updated: new Date(),
  },
};

// Test the isAdmin type guard
const adminUser: User = { ...user, role: UserRole.Admin };
if (isAdmin(adminUser)) {
  expectType<UserRole.Admin>(adminUser.role);
}

// This should error: after isAdmin narrows, role is Admin, not User
if (isAdmin(adminUser)) {
  // @ts-expect-error — role is narrowed to Admin, assigning to User must fail
  const wrong: UserRole.User = adminUser.role;
  void wrong;
}

Type tests catch regressions that unit tests miss entirely. When someone changes a type guard or narrows a union type differently, these tests break before your users do. I add them for every public type export.

Component Testing with React Testing Library

Test what users see and do, not implementation details:

// features/user-management/components/UserForm.test.tsx
import React from 'react';
import { render, screen, fireEvent } from '@testing-library/react';
import { UserForm } from './UserForm';
import { UserRole } from '../types';

describe('UserForm', () => {
  const mockSubmit = jest.fn();
  
  beforeEach(() => {
    mockSubmit.mockClear();
  });
  
  it('renders the form correctly', () => {
    render(<UserForm onSubmit={mockSubmit} />);
    
    expect(screen.getByLabelText(/first name/i)).toBeInTheDocument();
    expect(screen.getByLabelText(/last name/i)).toBeInTheDocument();
    expect(screen.getByLabelText(/email/i)).toBeInTheDocument();
    expect(screen.getByLabelText(/role/i)).toBeInTheDocument();
    expect(screen.getByRole('button', { name: /submit/i })).toBeInTheDocument();
  });
  
  it('submits the form with valid data', () => {
    render(<UserForm onSubmit={mockSubmit} />);
    
    fireEvent.change(screen.getByLabelText(/first name/i), { target: { value: 'John' } });
    fireEvent.change(screen.getByLabelText(/last name/i), { target: { value: 'Doe' } });
    fireEvent.change(screen.getByLabelText(/email/i), { target: { value: 'john@example.com' } });
    fireEvent.change(screen.getByLabelText(/role/i), { target: { value: UserRole.User } });
    
    fireEvent.click(screen.getByRole('button', { name: /submit/i }));
    
    expect(mockSubmit).toHaveBeenCalledWith({
      firstName: 'John',
      lastName: 'Doe',
      email: 'john@example.com',
      role: UserRole.User,
    });
  });
  
  it('validates required fields', () => {
    render(<UserForm onSubmit={mockSubmit} />);
    
    fireEvent.click(screen.getByRole('button', { name: /submit/i }));
    
    expect(mockSubmit).not.toHaveBeenCalled();
    expect(screen.getByText(/first name is required/i)).toBeInTheDocument();
    expect(screen.getByText(/last name is required/i)).toBeInTheDocument();
    expect(screen.getByText(/email is required/i)).toBeInTheDocument();
  });
});

Notice there’s no testing of internal state or implementation details — we test what the user sees and does. These tests survive refactoring because they don’t care how the form works, only that it works.

Integration Testing with Playwright

For end-to-end workflows, I use Playwright (I switched from Cypress years ago — better performance, better TypeScript support, and real multi-tab testing):

// e2e/user-management.spec.ts
import { test, expect } from '@playwright/test';

test.describe('User Management', () => {
  test.beforeEach(async ({ page }) => {
    await page.goto('/login');
    await page.getByLabel('Email').fill('admin@example.com');
    await page.getByLabel('Password').fill('password');
    await page.getByRole('button', { name: 'Sign in' }).click();
    await page.waitForURL('/dashboard');
    await page.goto('/users');
  });

  test('displays the user list', async ({ page }) => {
    await expect(page.getByTestId('user-list')).toBeVisible();
    await expect(page.getByTestId('user-list-item')).not.toHaveCount(0);
  });

  test('can create a new user', async ({ page }) => {
    const email = `test-${Date.now()}@example.com`;

    await page.getByTestId('add-user-button').click();
    await expect(page.getByTestId('user-form')).toBeVisible();

    await page.getByTestId('first-name-input').fill('Test');
    await page.getByTestId('last-name-input').fill('User');
    await page.getByTestId('email-input').fill(email);
    await page.getByTestId('role-select').selectOption('User');

    await page.getByTestId('submit-button').click();

    await expect(page.getByTestId('success-message')).toBeVisible();
    await expect(page.getByText(email)).toBeVisible();
  });
});

These tests verify complete workflows from the user’s perspective. They’re slower than unit tests, so I run them against critical paths only — user creation, checkout, auth flows — not every minor interaction.

The Decisions That Matter

If I had to distill this entire post into the decisions that make or break a large TypeScript codebase:

  1. Organize by feature, not by layer. This is the single highest-leverage structural decision. It determines whether teams can work independently, whether code splitting is clean, and whether new developers can get productive in days instead of weeks.

  2. Enforce module boundaries with tooling, not discipline. Barrel files plus ESLint import restrictions. If it’s not enforced by CI, it doesn’t exist.

  3. Put your types to work. Domain-driven types, transformation layers at boundaries, branded types for IDs, type guards instead of type assertions. The type system is your most powerful tool for preventing bugs at scale — use it aggressively.

  4. State belongs at the lowest level that needs it. Global state is a last resort, not a default. Most state is local. Fight the urge to throw everything into Redux.

  5. Test at the right level. Type tests for your public type API. Component tests with Testing Library for behavior. Playwright for critical user flows. Skip the unit tests for implementation details that will change next sprint.

None of these are revolutionary. But I’ve watched teams skip them in the name of shipping faster, and every one of them paid the price six months later when the codebase ground to a halt. The best time to make these decisions is at the start. The second best time is now.
