
API Design Best Practices: REST, GraphQL, and the Patterns That Scale

A practical guide to designing APIs that last, covering REST conventions, GraphQL fundamentals, tRPC, authentication patterns, rate limiting, error handling, and testing tools with Node.js examples.

Priya Patel

Why API Design Deserves More Thought Than It Gets

I have seen the same pattern play out multiple times across different teams. A developer builds an API quickly to support a frontend feature. It works. It ships. Then six months later, a mobile app needs the same data but in a different shape. Another team wants to integrate. Suddenly that quick API becomes a bottleneck, and refactoring it means breaking every client that depends on it.

Bad API design is technical debt that compounds faster than almost anything else in software. Once external clients depend on your API, changing it becomes a coordination nightmare. Version 1 sticks around far longer than anyone planned, and you end up maintaining two or three versions simultaneously because nobody wants to migrate.

Good API design, on the other hand, ages gracefully. The Stripe API, the GitHub API, the Twilio API — these are lauded not because they use some secret technology, but because they follow consistent conventions that make them predictable and easy to work with. A developer who has used one Stripe endpoint can guess how the next one will work.

So here is what I have learned about designing APIs that do not become liabilities — whether you are building REST, GraphQL, or something in between.

REST Conventions That Matter

REST is not a protocol or a specification. It is a set of architectural constraints that, when followed, lead to APIs that are predictable and cacheable. Most "REST" APIs in the wild are actually just JSON-over-HTTP with varying degrees of adherence to REST principles, and that is fine. Perfect REST is not the goal — consistency is.

Resource Naming

URLs should represent resources (nouns), not actions (verbs). The HTTP method conveys the action.

# Good
GET    /api/users          → List users
POST   /api/users          → Create a user
GET    /api/users/123      → Get a specific user
PUT    /api/users/123      → Replace a user
PATCH  /api/users/123      → Partially update a user
DELETE /api/users/123      → Delete a user

# Bad
GET    /api/getUsers
POST   /api/createUser
POST   /api/deleteUser/123
GET    /api/getUserById?id=123

Use plural nouns for collections (/users, not /user). Use kebab-case for multi-word resources (/order-items, not /orderItems or /order_items). Be consistent — pick a convention and stick to it across every endpoint.

For nested resources, think about whether the nesting is necessary:

# Nested — use when the child resource does not make sense without the parent
GET /api/users/123/orders          → Orders belonging to user 123

# Flat — use when the child resource has its own identity
GET /api/orders?user_id=123        → Orders filtered by user
GET /api/orders/456                → A specific order (regardless of user)

I generally prefer flat structures with query parameters for filtering. Deep nesting (/api/users/123/orders/456/items/789) becomes unwieldy fast.

HTTP Methods and Status Codes

Use the right HTTP method for each operation:

| Method | Purpose | Idempotent? | Request Body? |
|--------|---------|-------------|---------------|
| GET    | Retrieve data | Yes | No |
| POST   | Create new resource | No | Yes |
| PUT    | Replace entire resource | Yes | Yes |
| PATCH  | Partial update | Yes* | Yes |
| DELETE | Remove resource | Yes | Optional |

*PATCH is technically not required to be idempotent, but designing it to be idempotent avoids a category of bugs.
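That footnote is easy to make concrete. If a PATCH body describes a desired end state (set these fields to these values) rather than an operation (increment this counter), replaying it is harmless. A minimal sketch:

```javascript
// Idempotent PATCH: the body states target values, so applying the same
// patch twice yields the same resource state as applying it once.
function applyPatch(resource, patch) {
  return { ...resource, ...patch };
}

const user = { id: 123, name: 'Ada', role: 'USER' };
const patch = { role: 'ADMIN' };

const once = applyPatch(user, patch);
const twice = applyPatch(once, patch);
// `once` and `twice` are identical, so a client retry causes no harm.
```

A non-idempotent design, such as a body like `{ "incrementLoginCount": 1 }`, breaks as soon as a client retries a timed-out request.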

And please, use appropriate status codes. Nothing frustrates API consumers more than getting 200 OK with a body that says {"error": "User not found"}.

// Express.js example — proper status codes
app.get('/api/users/:id', async (req, res) => {
  try {
    const user = await db.users.findById(req.params.id);

    if (!user) {
      return res.status(404).json({
        error: 'NOT_FOUND',
        message: `User with id ${req.params.id} does not exist`,
      });
    }

    res.status(200).json({ data: user });
  } catch (err) {
    console.error('Error fetching user:', err);
    res.status(500).json({
      error: 'INTERNAL_SERVER_ERROR',
      message: 'An unexpected error occurred',
    });
  }
});

Here are the status codes you should actually use:

| Code | Meaning | When to Use |
|------|---------|-------------|
| 200 | OK | Successful GET, PUT, PATCH |
| 201 | Created | Successful POST that creates a resource |
| 204 | No Content | Successful DELETE |
| 400 | Bad Request | Invalid input, validation failure |
| 401 | Unauthorized | Missing or invalid authentication |
| 403 | Forbidden | Authenticated but insufficient permissions |
| 404 | Not Found | Resource does not exist |
| 409 | Conflict | Duplicate resource, conflicting update |
| 422 | Unprocessable Entity | Valid syntax but semantic errors |
| 429 | Too Many Requests | Rate limit exceeded |
| 500 | Internal Server Error | Unexpected server failure |

Versioning

Your API will change. Plan for it from the start.

URL versioning is the most common and most practical approach:

GET /api/v1/users
GET /api/v2/users

Header versioning is cleaner but harder for clients to implement:

GET /api/users
Accept: application/vnd.myapi.v2+json

I recommend URL versioning for its simplicity. Every tutorial, every tool, every developer understands it immediately. Start at v1 and increment only when you make breaking changes. Non-breaking additions (new fields, new optional parameters) do not require a version bump.
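Middleware often needs to know which version a request targeted, for logging deprecation warnings for instance. A hypothetical helper for extracting it from a URL-versioned path:

```javascript
// Hypothetical helper: pull the major version out of a URL-versioned path.
// Returns null for unversioned paths so callers can apply a default.
function parseApiVersion(path) {
  const match = path.match(/^\/api\/v(\d+)(\/|$)/);
  return match ? Number(match[1]) : null;
}

// parseApiVersion('/api/v2/users') -> 2
// parseApiVersion('/api/users')    -> null
```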

Pagination

Never return unbounded lists. An endpoint that returns 10,000 records because nobody added pagination will eventually bring down your server or your client.

Offset-based pagination:

GET /api/products?page=2&limit=20
{
  "data": [...],
  "pagination": {
    "page": 2,
    "limit": 20,
    "total": 156,
    "totalPages": 8
  }
}

Cursor-based pagination (better for large datasets and real-time data):

GET /api/products?cursor=eyJpZCI6MTAwfQ&limit=20
{
  "data": [...],
  "pagination": {
    "nextCursor": "eyJpZCI6MTIwfQ",
    "hasMore": true
  }
}

Cursor-based pagination is more performant (no OFFSET scans in the database) and handles insertions/deletions between pages correctly. For any dataset that might exceed a few thousand records, use cursors.
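The opaque-looking cursors above are typically just base64url-encoded JSON carrying the last-seen sort key. A sketch of hypothetical encode/decode helpers:

```javascript
// Hypothetical cursor helpers: the cursor is base64url-encoded JSON holding
// the last id of the previous page. The next page's query then becomes
// "WHERE id > :lastId ORDER BY id LIMIT :limit" (no OFFSET scan).
function encodeCursor(lastId) {
  return Buffer.from(JSON.stringify({ id: lastId })).toString('base64url');
}

function decodeCursor(cursor) {
  return JSON.parse(Buffer.from(cursor, 'base64url').toString('utf8')).id;
}

// encodeCursor(100) -> 'eyJpZCI6MTAwfQ'
```

Keep the cursor opaque from the client's point of view: it is a bookmark, not a contract, so you stay free to change what it encodes.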

When REST Breaks Down

REST works beautifully when your data access patterns are straightforward — CRUD operations on well-defined resources. It starts showing cracks when:

  1. Clients need different shapes of the same data. A mobile app might want a user's name and avatar. The admin dashboard wants the user's name, email, role, created date, last login, and order count. With REST, you either return everything (over-fetching) or create separate endpoints for each client (endpoint explosion).

  2. Related data requires multiple round trips. To display a user's profile with their latest orders and shipping addresses, a REST client might need to hit /users/123, /users/123/orders?limit=5, and /users/123/addresses. Three HTTP requests for one screen.

  3. Real-time updates are needed. REST is request-response. For live dashboards, chat, or notifications, you need WebSockets or SSE on top of REST.

This is where GraphQL enters the picture.

GraphQL Fundamentals

GraphQL lets the client specify exactly what data it needs. The server exposes a schema (a type system describing all available data), and clients query against that schema.

Schema Definition

type User {
  id: ID!
  name: String!
  email: String!
  role: Role!
  orders(limit: Int = 10): [Order!]!
  addresses: [Address!]!
  createdAt: DateTime!
}

type Order {
  id: ID!
  total: Float!
  status: OrderStatus!
  items: [OrderItem!]!
  createdAt: DateTime!
}

enum Role {
  USER
  ADMIN
  MODERATOR
}

enum OrderStatus {
  PENDING
  PROCESSING
  SHIPPED
  DELIVERED
  CANCELLED
}

type Query {
  user(id: ID!): User
  users(page: Int, limit: Int): UserConnection!
  order(id: ID!): Order
}

type Mutation {
  createUser(input: CreateUserInput!): User!
  updateUser(id: ID!, input: UpdateUserInput!): User!
  deleteUser(id: ID!): Boolean!
}

Queries

A client fetches exactly what it needs:

# Mobile app — minimal data
query {
  user(id: "123") {
    name
    avatarUrl
  }
}

# Admin dashboard — detailed data
query {
  user(id: "123") {
    name
    email
    role
    createdAt
    orders(limit: 5) {
      id
      total
      status
    }
    addresses {
      city
      state
    }
  }
}

Both queries hit the same endpoint (POST /graphql). The server returns exactly the fields requested — nothing more, nothing less. No over-fetching, no under-fetching, one network request.

Resolvers

// Node.js with Apollo Server
const resolvers = {
  Query: {
    user: async (_, { id }, context) => {
      return context.db.users.findById(id);
    },
    users: async (_, { page = 1, limit = 20 }, context) => {
      return context.db.users.findMany({ page, limit });
    },
  },
  User: {
    orders: async (parent, { limit }, context) => {
      return context.db.orders.findByUserId(parent.id, { limit });
    },
    addresses: async (parent, _, context) => {
      return context.db.addresses.findByUserId(parent.id);
    },
  },
  Mutation: {
    createUser: async (_, { input }, context) => {
      return context.db.users.create(input);
    },
  },
};

The N+1 Problem and DataLoader

GraphQL's biggest performance trap is the N+1 query problem. If you fetch a list of 20 users, and each user resolver triggers a database query for their orders, that is 1 + 20 = 21 queries. With nested relationships, this multiplies further.

DataLoader solves this by batching and caching database queries within a single request:

import DataLoader from 'dataloader';

// Create a loader that batches user IDs
const ordersByUserLoader = new DataLoader(async (userIds) => {
  // One query instead of N queries
  const orders = await db.orders.findByUserIds(userIds);

  // Return results in the same order as the input IDs
  return userIds.map(id => orders.filter(order => order.userId === id));
});

// In the resolver
const resolvers = {
  User: {
    orders: (parent) => ordersByUserLoader.load(parent.id),
  },
};

DataLoader batches all .load() calls within a single tick of the event loop into one database query. For the 20-user example, instead of 21 queries, you get 2 — one for users and one for all their orders.

Every production GraphQL server should use DataLoader or an equivalent batching layer. One caveat the example glosses over: create loader instances per request (typically in the GraphQL context factory), so the loader's cache cannot leak data between users.
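One subtle part of the batch function is its contract: results must come back in the same order as the input keys, with an empty array for any user who has no orders. That grouping step is worth isolating. A plain-JS sketch of a hypothetical helper:

```javascript
// Hypothetical helper implementing DataLoader's batch contract: map a flat
// list of rows back to one bucket per requested key, preserving key order
// and producing an empty bucket for keys with no matching rows.
function groupByKey(keys, rows, getKey) {
  const buckets = new Map(keys.map((key) => [key, []]));
  for (const row of rows) {
    const bucket = buckets.get(getKey(row));
    if (bucket) bucket.push(row);
  }
  return keys.map((key) => buckets.get(key));
}

// groupByKey([1, 2], [{ userId: 1, total: 50 }], (r) => r.userId)
// -> [[{ userId: 1, total: 50 }], []]
```

Returning results out of order (or skipping empty buckets) silently attaches the wrong orders to the wrong users, which is why this deserves its own tested function.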

tRPC for Full-Stack TypeScript

If both your frontend and backend are TypeScript (Next.js, for example), tRPC offers something neither REST nor GraphQL can: end-to-end type safety without code generation.

// Server — define your API
import { initTRPC, TRPCError } from '@trpc/server';
import { z } from 'zod';

const t = initTRPC.create();

export const appRouter = t.router({
  user: t.router({
    getById: t.procedure
      .input(z.object({ id: z.string() }))
      .query(async ({ input }) => {
        const user = await db.users.findById(input.id);
        if (!user) throw new TRPCError({ code: 'NOT_FOUND' });
        return user;
      }),

    create: t.procedure
      .input(z.object({
        name: z.string().min(2),
        email: z.string().email(),
      }))
      .mutation(async ({ input }) => {
        return db.users.create(input);
      }),
  }),
});

export type AppRouter = typeof appRouter;

// Client — fully typed, autocompletions everywhere
import { trpc } from '@/utils/trpc';

function UserProfile({ userId }: { userId: string }) {
  const { data, isLoading } = trpc.user.getById.useQuery({ id: userId });

  // TypeScript knows `data` has `name`, `email`, etc.
  // Autocomplete works. Type errors are caught at build time.
  return <div>{data?.name}</div>;
}

Change a field name on the server, and your IDE immediately highlights every client that references the old name. No API documentation to keep in sync. No types to generate. The router definition is the contract.

tRPC is not suitable for public APIs (it requires a TypeScript client), but for full-stack applications where you control both ends, it eliminates an entire category of bugs and busywork.

API Authentication Patterns

JWT (JSON Web Tokens)

The most common approach for SPAs and mobile apps. The server issues a signed token after login, and the client includes it in every request.

// Login endpoint (assumes the `jsonwebtoken` package)
import jwt from 'jsonwebtoken';

app.post('/api/auth/login', async (req, res) => {
  const { email, password } = req.body;
  const user = await verifyCredentials(email, password);

  if (!user) {
    return res.status(401).json({ error: 'Invalid credentials' });
  }

  const token = jwt.sign(
    { userId: user.id, role: user.role },
    process.env.JWT_SECRET,
    { expiresIn: '15m' }
  );

  const refreshToken = jwt.sign(
    { userId: user.id },
    process.env.REFRESH_SECRET,
    { expiresIn: '7d' }
  );

  // Set refresh token as httpOnly cookie
  res.cookie('refreshToken', refreshToken, {
    httpOnly: true,
    secure: true,
    sameSite: 'strict',
    maxAge: 7 * 24 * 60 * 60 * 1000,
  });

  res.json({ token });
});

// Auth middleware
function authenticate(req, res, next) {
  const authHeader = req.headers.authorization;
  if (!authHeader?.startsWith('Bearer ')) {
    return res.status(401).json({ error: 'Missing token' });
  }

  try {
    const token = authHeader.split(' ')[1];
    const payload = jwt.verify(token, process.env.JWT_SECRET);
    req.user = payload;
    next();
  } catch {
    res.status(401).json({ error: 'Invalid or expired token' });
  }
}

Key practices:

  • Short-lived access tokens (15-30 minutes)
  • Long-lived refresh tokens stored in httpOnly cookies (not localStorage)
  • Include minimal data in the JWT payload (user ID, role — not the entire user object)

Session-Based Authentication

The traditional approach. The server creates a session, stores it (in memory, Redis, or database), and sends a session ID cookie to the client.

import session from 'express-session';
import RedisStore from 'connect-redis';

app.use(session({
  store: new RedisStore({ client: redisClient }),
  secret: process.env.SESSION_SECRET,
  resave: false,
  saveUninitialized: false,
  cookie: {
    secure: true,
    httpOnly: true,
    maxAge: 24 * 60 * 60 * 1000, // 24 hours
    sameSite: 'strict',
  },
}));

Sessions are simpler and more secure than JWTs for traditional web apps. The server has full control — you can revoke a session instantly, which is not possible with JWTs (you have to wait for them to expire or maintain a blocklist).
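If you need early revocation with JWTs anyway, the common workaround is a server-side blocklist keyed on the token's `jti` claim and checked in the auth middleware. A sketch with an in-memory Map; in production this would typically live in Redis with a TTL:

```javascript
// Hypothetical blocklist: remember revoked token IDs only until the token
// would have expired on its own, then let the entry age out.
const revokedTokens = new Map(); // jti -> token expiry (epoch seconds)

function revokeToken(jti, expiresAt) {
  revokedTokens.set(jti, expiresAt);
}

function isRevoked(jti, now) {
  const expiresAt = revokedTokens.get(jti);
  if (expiresAt === undefined) return false;
  if (expiresAt <= now) {
    revokedTokens.delete(jti); // token already expired; stop tracking it
    return false;
  }
  return true;
}
```

Note that this reintroduces server-side state on every request, which is exactly the trade-off sessions make up front.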

API Keys

For server-to-server communication and third-party integrations. API keys are simple but less secure than tokens — they are long-lived and grant access until revoked.

// The middleware awaits a database lookup, so it must be declared async
async function authenticateApiKey(req, res, next) {
  const apiKey = req.headers['x-api-key'];
  if (!apiKey) return res.status(401).json({ error: 'Missing API key' });

  const client = await db.apiKeys.findByKey(apiKey);
  if (!client || client.revokedAt) {
    return res.status(401).json({ error: 'Invalid API key' });
  }

  req.client = client;
  next();
}

| Method | Best For | Statefulness | Revocation |
|--------|----------|--------------|------------|
| JWT | SPAs, mobile apps | Stateless | Difficult (wait for expiry) |
| Sessions | Traditional web apps | Stateful (server-side) | Instant |
| API Keys | Server-to-server, integrations | Stateless | Instant (database check) |

Rate Limiting

Without rate limiting, a single misbehaving client can bring down your API for everyone. Implement it early, not as an afterthought.

import rateLimit from 'express-rate-limit';

// Global rate limit
const globalLimiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 100, // 100 requests per window
  standardHeaders: true, // Return rate limit info in headers
  message: {
    error: 'RATE_LIMIT_EXCEEDED',
    message: 'Too many requests, please try again later',
    retryAfter: 900, // seconds
  },
});

// Stricter limit for auth endpoints
const authLimiter = rateLimit({
  windowMs: 15 * 60 * 1000,
  max: 10,
  message: { error: 'Too many login attempts' },
});

app.use('/api/', globalLimiter);
app.use('/api/auth/', authLimiter);

Return 429 Too Many Requests with a Retry-After header. Use the standard RateLimit-* headers so clients can implement backoff:

RateLimit-Limit: 100
RateLimit-Remaining: 23
RateLimit-Reset: 1708200000
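On the client side, those headers are what make polite retries possible. A hypothetical helper that decides how long to wait, following the epoch-seconds `RateLimit-Reset` convention shown above:

```javascript
// Hypothetical client-side helper: returns 0 when there is remaining budget,
// otherwise the milliseconds to wait until the rate-limit window resets.
function retryDelayMs(headers, nowEpochSeconds) {
  const remaining = Number(headers['ratelimit-remaining']);
  if (Number.isNaN(remaining) || remaining > 0) return 0;
  const reset = Number(headers['ratelimit-reset']);
  return Math.max(0, (reset - nowEpochSeconds) * 1000);
}

// retryDelayMs({ 'ratelimit-remaining': '23' }, now) -> 0
// retryDelayMs({ 'ratelimit-remaining': '0',
//                'ratelimit-reset': '1708200000' }, 1708199990) -> 10000
```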

Error Handling Patterns

Consistent error responses make life easier for every client developer. Define a standard error format and use it everywhere.

// Error response format
{
  "error": {
    "code": "VALIDATION_ERROR",
    "message": "Invalid request data",
    "details": [
      {
        "field": "email",
        "message": "Must be a valid email address"
      },
      {
        "field": "name",
        "message": "Must be at least 2 characters"
      }
    ]
  }
}

// Centralized error handler in Express
app.use((err, req, res, next) => {
  console.error(`[${req.method}] ${req.path}:`, err);

  if (err.name === 'ValidationError') {
    return res.status(400).json({
      error: {
        code: 'VALIDATION_ERROR',
        message: 'Invalid request data',
        details: err.details,
      },
    });
  }

  if (err.name === 'UnauthorizedError') {
    return res.status(401).json({
      error: {
        code: 'UNAUTHORIZED',
        message: 'Authentication required',
      },
    });
  }

  // Default — never expose internal error details in production
  res.status(500).json({
    error: {
      code: 'INTERNAL_ERROR',
      message: process.env.NODE_ENV === 'production'
        ? 'An unexpected error occurred'
        : err.message,
    },
  });
});
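The handler above gets much easier to feed if route handlers throw a small typed error instead of hand-building responses. A hypothetical `ApiError` class matching the response format:

```javascript
// Hypothetical error type carrying an HTTP status, a machine-readable code,
// and optional field-level details: everything the centralized handler needs.
class ApiError extends Error {
  constructor(status, code, message, details) {
    super(message);
    this.name = 'ApiError';
    this.status = status;
    this.code = code;
    this.details = details;
  }
}

// Maps an ApiError to the standard response body shown earlier.
function toErrorBody(err) {
  const error = { code: err.code, message: err.message };
  if (err.details) error.details = err.details;
  return { error };
}

// Usage in a route handler:
//   throw new ApiError(404, 'NOT_FOUND', 'User does not exist');
// The centralized handler can then respond with
//   res.status(err.status).json(toErrorBody(err))
```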

Documentation with OpenAPI / Swagger

Your API is only as good as its documentation. OpenAPI (formerly Swagger) is the industry standard for API documentation.

openapi: 3.0.3
info:
  title: E-Commerce API
  version: 1.0.0
  description: API for managing products, orders, and users
paths:
  /api/v1/products:
    get:
      summary: List products
      parameters:
        - name: category
          in: query
          schema:
            type: string
        - name: limit
          in: query
          schema:
            type: integer
            default: 20
      responses:
        '200':
          description: List of products
          content:
            application/json:
              schema:
                type: object
                properties:
                  data:
                    type: array
                    items:
                      $ref: '#/components/schemas/Product'

Use swagger-ui-express to serve interactive documentation from your API server:

import swaggerUi from 'swagger-ui-express';
import swaggerDocument from './openapi.json';

app.use('/api/docs', swaggerUi.serve, swaggerUi.setup(swaggerDocument));

Testing APIs

Postman

The most popular API testing tool. Create collections of requests, set up environments (development, staging, production), write test scripts, and share collections with your team. The free tier is generous.

Bruno

A newer, open-source alternative to Postman that stores collections as plain files on your filesystem. No cloud sync, no accounts, and collections can be version-controlled with Git. If Postman's increasing focus on cloud features and team plans bothers you, Bruno is a refreshing alternative.

httpie

A command-line HTTP client that is far more readable than curl:

# GET request
http GET localhost:3000/api/users

# POST request with JSON body
http POST localhost:3000/api/users name="Priya" email="[email protected]"

# With authentication
http GET localhost:3000/api/users Authorization:"Bearer token123"

For quick testing and scripting, httpie is faster than opening Postman or Bruno.

Choosing Between REST, GraphQL, and tRPC

| Consideration | REST | GraphQL | tRPC |
|---------------|------|---------|------|
| Public API | Best choice | Good | Not suitable |
| Internal API (full-stack TS) | Good | Good | Best choice |
| Multiple clients (web, mobile, third-party) | Good | Best choice | Not suitable |
| Simple CRUD | Best choice | Overkill | Good |
| Complex data relationships | Acceptable | Best choice | Good |
| Learning curve | Low | Medium | Low (if you know TS) |
| Caching | Excellent (HTTP caching) | Requires effort | Relies on React Query |
| Tooling ecosystem | Massive | Large | Growing |

My default recommendation: start with REST. It is the simplest, most widely understood, and best-supported approach. Move to GraphQL when you have multiple clients with different data needs, or when REST's over-fetching/under-fetching becomes a genuine problem. Use tRPC when you control both ends and both are TypeScript.

Most importantly, whichever style you choose, be consistent. A well-designed REST API will outperform a poorly designed GraphQL API every time. The principles — clear naming, proper error handling, authentication, documentation, testing — transcend any specific technology choice.

Design for the developer who will use your API at 2 AM with a deadline. Make it predictable, make it documented, and make it forgiving of mistakes. That is what separates APIs that people love from APIs that people tolerate.

Priya Patel

Senior Tech Writer

Covers AI, machine learning, and emerging technologies. Previously at TechCrunch India.
