API Design: REST and GraphQL Patterns That Scale
API design guide covering REST, GraphQL, tRPC, authentication, rate limiting, error handling, and testing with Node.js examples.

Bad API Design Costs More Time Than Bad Code
I mean it. You can refactor bad code in an afternoon. You can rewrite a messy function during lunch. But a bad API? Once external clients depend on it, changing it becomes a coordination nightmare across teams, apps, and third-party integrations. Version 1 sticks around far longer than anyone planned, and suddenly you're maintaining two or three versions simultaneously because nobody wants to migrate.
That's my pet peeve. I've watched it happen multiple times across different teams. A developer builds an API quickly to support a frontend feature. Works fine. Ships on time. Then six months later, a mobile app needs the same data but in a different shape. Another team wants to integrate. What was "quick and dirty" is now a bottleneck, and refactoring means breaking every client that depends on it.
Good API design ages gracefully. Look at Stripe. Look at GitHub's API. Look at Twilio. They aren't lauded because they use some secret technology — they follow consistent conventions that make them predictable and easy to work with. A developer who's used one Stripe endpoint can guess how the next one will work. That's the bar.
So here's what I've learned about designing APIs that don't become liabilities — whether you're building REST, GraphQL, or something in between.
REST Conventions That Matter
REST isn't a protocol or a specification. It's a set of architectural constraints that, when followed, lead to APIs that are predictable and cacheable. Most "REST" APIs in the wild are actually just JSON-over-HTTP with varying degrees of adherence to REST principles, and that's fine. Perfect REST isn't the goal. Consistency is.
Resource Naming
URLs should represent resources (nouns), not actions (verbs). The HTTP method conveys the action.
# Good
GET /api/users → List users
POST /api/users → Create a user
GET /api/users/123 → Get a specific user
PUT /api/users/123 → Replace a user
PATCH /api/users/123 → Partially update a user
DELETE /api/users/123 → Delete a user
# Bad
GET /api/getUsers
POST /api/createUser
POST /api/deleteUser/123
GET /api/getUserById?id=123
Use plural nouns for collections (/users, not /user). Use kebab-case for multi-word resources (/order-items, not /orderItems or /order_items). Pick a convention and stick to it across every endpoint. Don't mix styles. Ever.
For nested resources, think about whether the nesting is necessary:
# Nested — use when the child resource doesn't make sense without the parent
GET /api/users/123/orders → Orders belonging to user 123
# Flat — use when the child resource has its own identity
GET /api/orders?user_id=123 → Orders filtered by user
GET /api/orders/456 → A specific order (regardless of user)
I generally prefer flat structures with query parameters for filtering. Deep nesting (/api/users/123/orders/456/items/789) becomes unwieldy fast. Three levels deep and your frontend developer is already cursing your name.
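To make the flat style concrete, here's a minimal sketch of how a handler might translate whitelisted query parameters into a database filter. The names (`buildOrderFilter`, `FILTERABLE`) are hypothetical; it assumes an Express-style `req.query` object:

```javascript
// Hypothetical helper: turn ?user_id=123&status=shipped into a DB filter,
// silently ignoring any parameter not on the allow-list.
const FILTERABLE = ['user_id', 'status'];

function buildOrderFilter(query) {
  const filter = {};
  for (const key of FILTERABLE) {
    if (query[key] !== undefined) filter[key] = query[key];
  }
  return filter;
}

// In an Express route this would be used roughly like:
//   app.get('/api/orders', async (req, res) => {
//     const orders = await db.orders.findMany(buildOrderFilter(req.query));
//     res.json({ data: orders });
//   });
```

The allow-list matters: passing `req.query` straight into a query builder lets clients filter on columns you never intended to expose.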
HTTP Methods and Status Codes
Use the right HTTP method for each operation:
| Method | Purpose | Idempotent? | Request Body? |
|---|---|---|---|
| GET | Retrieve data | Yes | No |
| POST | Create new resource | No | Yes |
| PUT | Replace entire resource | Yes | Yes |
| PATCH | Partial update | Yes* | Yes |
| DELETE | Remove resource | Yes | Optional |
*PATCH is technically not required to be idempotent, but designing it to be idempotent avoids a category of bugs.
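A quick way to see what an idempotent PATCH means in practice: if the server applies the patch as a plain merge of the supplied fields, sending the same request twice leaves the resource in the same state. A toy sketch (`applyPatch` is a hypothetical name):

```javascript
// Merge-style PATCH: only the supplied fields change. Applying the same
// patch twice produces the same result, so a client retry is harmless.
function applyPatch(resource, patch) {
  return { ...resource, ...patch };
}

const user = { id: 123, name: 'Priya', email: 'old@example.com' };
const once = applyPatch(user, { email: 'new@example.com' });
const twice = applyPatch(once, { email: 'new@example.com' });
// once and twice are identical, so a retried PATCH can't corrupt state.
```

By contrast, a patch whose semantics are "increment this counter" or "append to this list" is not idempotent, and that's exactly the category of retry bugs the footnote warns about.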
And please, use appropriate status codes. Nothing frustrates API consumers more than getting 200 OK with a body that says {"error": "User not found"}. I've seen this pattern in production. It's maddening.
// Express.js example — proper status codes
app.get('/api/users/:id', async (req, res) => {
try {
const user = await db.users.findById(req.params.id);
if (!user) {
return res.status(404).json({
error: 'NOT_FOUND',
message: `User with id ${req.params.id} does not exist`,
});
}
res.status(200).json({ data: user });
} catch (err) {
console.error('Error fetching user:', err);
res.status(500).json({
error: 'INTERNAL_SERVER_ERROR',
message: 'An unexpected error occurred',
});
}
});
Here are the status codes you should actually be using:
| Code | Meaning | When to Use |
|---|---|---|
| 200 | OK | Successful GET, PUT, PATCH |
| 201 | Created | Successful POST that creates a resource |
| 204 | No Content | Successful DELETE |
| 400 | Bad Request | Invalid input, validation failure |
| 401 | Unauthorized | Missing or invalid authentication |
| 403 | Forbidden | Authenticated but insufficient permissions |
| 404 | Not Found | Resource doesn't exist |
| 409 | Conflict | Duplicate resource, conflicting update |
| 422 | Unprocessable Entity | Valid syntax but semantic errors |
| 429 | Too Many Requests | Rate limit exceeded |
| 500 | Internal Server Error | Unexpected server failure |
Versioning
Your API will change. Plan for it from day one.
URL versioning is the most common and most practical approach:
GET /api/v1/users
GET /api/v2/users
Header versioning is cleaner but harder for clients to implement:
GET /api/users
Accept: application/vnd.myapi.v2+json
I recommend URL versioning. Every tutorial, every tool, every developer understands it immediately. Start at v1 and increment only when you make breaking changes. Non-breaking additions (new fields, new optional parameters) don't require a version bump. Simple.
Pagination
Never return unbounded lists. An endpoint that returns 10,000 records because nobody added pagination will eventually bring down your server or your client. Probably both.
Offset-based pagination:
GET /api/products?page=2&limit=20
{
  "data": [...],
  "pagination": {
    "page": 2,
    "limit": 20,
    "total": 156,
    "totalPages": 8
  }
}
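Building that envelope is simple enough to centralize in one shared helper (a hypothetical `offsetPage`), so the `totalPages` math doesn't get reimplemented slightly differently on every endpoint:

```javascript
// Wrap a page of rows in the standard offset-pagination envelope.
function offsetPage(rows, { page, limit, total }) {
  return {
    data: rows,
    pagination: { page, limit, total, totalPages: Math.ceil(total / limit) },
  };
}
```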
Cursor-based pagination (better for large datasets and real-time data):
GET /api/products?cursor=eyJpZCI6MTAwfQ&limit=20
{
  "data": [...],
  "pagination": {
    "nextCursor": "eyJpZCI6MTIwfQ",
    "hasMore": true
  }
}
Cursor-based pagination is more performant (no OFFSET scans in the database) and handles insertions/deletions between pages correctly. For any dataset that might exceed a few thousand records, use cursors. I suspect most teams default to offset pagination out of habit, but cursors aren't much harder to implement and they scale way better.
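Here's a sketch of how opaque cursors like the ones above can be implemented, assuming rows are ordered by ascending `id`. The function names (`encodeCursor`, `decodeCursor`, `paginate`) are hypothetical; the cursor is just base64url-encoded JSON of the last row's sort key (Node 16+ for the `base64url` encoding):

```javascript
// Encode the sort key of the last row on the page as an opaque token.
function encodeCursor(lastRow) {
  return Buffer.from(JSON.stringify({ id: lastRow.id })).toString('base64url');
}

function decodeCursor(cursor) {
  return JSON.parse(Buffer.from(cursor, 'base64url').toString('utf8'));
}

// The query fetches limit + 1 rows (WHERE id > decoded.id ORDER BY id);
// the extra row only tells us whether another page exists.
function paginate(rows, limit) {
  const hasMore = rows.length > limit;
  const page = rows.slice(0, limit);
  return {
    data: page,
    pagination: {
      nextCursor: page.length ? encodeCursor(page[page.length - 1]) : null,
      hasMore,
    },
  };
}
```

Keeping the cursor opaque (rather than exposing raw ids) leaves you free to change the sort key later without breaking clients that stored a cursor.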
When REST Breaks Down
REST works beautifully when your data access patterns are straightforward — CRUD operations on well-defined resources. It starts showing cracks when:
- Clients need different shapes of the same data. A mobile app might want a user's name and avatar. An admin dashboard wants name, email, role, created date, last login, and order count. With REST, you either return everything (over-fetching) or create separate endpoints for each client (endpoint explosion).
- Related data requires multiple round trips. To display a user's profile with their latest orders and shipping addresses, a REST client might need to hit /users/123, /users/123/orders?limit=5, and /users/123/addresses. Three HTTP requests for one screen. On a flaky mobile connection in tier-2 India, that's painful.
- Real-time updates are needed. REST is request-response. For live dashboards, chat, or notifications, you need WebSockets or SSE layered on top.
Here's where GraphQL enters the picture.
GraphQL Fundamentals
GraphQL lets the client specify exactly what data it needs. The server exposes a schema (a type system describing all available data), and clients query against that schema. No more guessing what fields an endpoint returns. No more over-fetching.
Schema Definition
type User {
  id: ID!
  name: String!
  email: String!
  role: Role!
  orders(limit: Int = 10): [Order!]!
  addresses: [Address!]!
  createdAt: DateTime!
}

type Order {
  id: ID!
  total: Float!
  status: OrderStatus!
  items: [OrderItem!]!
  createdAt: DateTime!
}

enum Role {
  USER
  ADMIN
  MODERATOR
}

enum OrderStatus {
  PENDING
  PROCESSING
  SHIPPED
  DELIVERED
  CANCELLED
}

type Query {
  user(id: ID!): User
  users(page: Int, limit: Int): UserConnection!
  order(id: ID!): Order
}

type Mutation {
  createUser(input: CreateUserInput!): User!
  updateUser(id: ID!, input: UpdateUserInput!): User!
  deleteUser(id: ID!): Boolean!
}
Queries
A client fetches exactly what it needs:
# Mobile app — minimal data
query {
  user(id: "123") {
    name
    avatarUrl
  }
}

# Admin dashboard — detailed data
query {
  user(id: "123") {
    name
    email
    role
    createdAt
    orders(limit: 5) {
      id
      total
      status
    }
    addresses {
      city
      state
    }
  }
}
Both queries hit the same endpoint (POST /graphql). Server returns exactly the fields requested — nothing more, nothing less. No over-fetching. No under-fetching. One network request. Beautiful, honestly.
Resolvers
// Node.js with Apollo Server
const resolvers = {
  Query: {
    user: async (_, { id }, context) => {
      return context.db.users.findById(id);
    },
    users: async (_, { page = 1, limit = 20 }, context) => {
      return context.db.users.findMany({ page, limit });
    },
  },
  User: {
    orders: async (parent, { limit }, context) => {
      return context.db.orders.findByUserId(parent.id, { limit });
    },
    addresses: async (parent, _, context) => {
      return context.db.addresses.findByUserId(parent.id);
    },
  },
  Mutation: {
    createUser: async (_, { input }, context) => {
      return context.db.users.create(input);
    },
  },
};
Watch Out: The N+1 Problem and DataLoader
GraphQL's biggest performance trap. If you fetch a list of 20 users, and each user resolver triggers a database query for their orders, that's 1 + 20 = 21 queries. With nested relationships, it multiplies further. Gets ugly fast.
DataLoader solves this by batching and caching database queries within a single request:
import DataLoader from 'dataloader';

// Create a loader that batches user IDs
const ordersByUserLoader = new DataLoader(async (userIds) => {
  // One query instead of N queries
  const orders = await db.orders.findByUserIds(userIds);
  // Return results in the same order as the input IDs
  return userIds.map(id => orders.filter(order => order.userId === id));
});

// In the resolver
const resolvers = {
  User: {
    orders: (parent) => ordersByUserLoader.load(parent.id),
  },
};
DataLoader batches all .load() calls within a single tick of the event loop into one database query. For the 20-user example, instead of 21 queries, you get 2 — one for users and one for all their orders.
Every production GraphQL server should use DataLoader. No exceptions. I've seen servers grind to a halt because someone skipped this step. Don't be that person.
tRPC for Full-Stack TypeScript
If both your frontend and backend are TypeScript (Next.js, for example), tRPC offers something neither REST nor GraphQL can: end-to-end type safety without code generation. If you want to go deeper into TypeScript's type system to make the most of tRPC, our advanced TypeScript patterns guide covers the techniques that make this possible.
// Server — define your API
import { initTRPC, TRPCError } from '@trpc/server';
import { z } from 'zod';

const t = initTRPC.create();

export const appRouter = t.router({
  user: t.router({
    getById: t.procedure
      .input(z.object({ id: z.string() }))
      .query(async ({ input }) => {
        const user = await db.users.findById(input.id);
        if (!user) throw new TRPCError({ code: 'NOT_FOUND' });
        return user;
      }),
    create: t.procedure
      .input(z.object({
        name: z.string().min(2),
        email: z.string().email(),
      }))
      .mutation(async ({ input }) => {
        return db.users.create(input);
      }),
  }),
});
export type AppRouter = typeof appRouter;
// Client — fully typed, autocompletions everywhere
import { trpc } from '@/utils/trpc';

function UserProfile({ userId }: { userId: string }) {
  const { data, isLoading } = trpc.user.getById.useQuery({ id: userId });
  // TypeScript knows `data` has `name`, `email`, etc.
  // Autocomplete works. Type errors are caught at build time.
  return <div>{data?.name}</div>;
}
Change a field name on the server, and your IDE immediately highlights every client that references the old name. No API documentation to keep in sync. No types to generate. The router definition is the contract. Honestly, it's probably the best developer experience I've had with any API approach.
tRPC isn't suitable for public APIs (it requires a TypeScript client), but for full-stack applications where you control both ends, it eliminates an entire category of bugs and busywork.
API Authentication Patterns
JWT (JSON Web Tokens)
Most common approach for SPAs and mobile apps. Server issues a signed token after login, and the client includes it in every request.
// Login endpoint
app.post('/api/auth/login', async (req, res) => {
  const { email, password } = req.body;
  const user = await verifyCredentials(email, password);
  if (!user) {
    return res.status(401).json({ error: 'Invalid credentials' });
  }
  const token = jwt.sign(
    { userId: user.id, role: user.role },
    process.env.JWT_SECRET,
    { expiresIn: '15m' }
  );
  const refreshToken = jwt.sign(
    { userId: user.id },
    process.env.REFRESH_SECRET,
    { expiresIn: '7d' }
  );
  // Set refresh token as httpOnly cookie
  res.cookie('refreshToken', refreshToken, {
    httpOnly: true,
    secure: true,
    sameSite: 'strict',
    maxAge: 7 * 24 * 60 * 60 * 1000,
  });
  res.json({ token });
});

// Auth middleware
function authenticate(req, res, next) {
  const authHeader = req.headers.authorization;
  if (!authHeader?.startsWith('Bearer ')) {
    return res.status(401).json({ error: 'Missing token' });
  }
  try {
    const token = authHeader.split(' ')[1];
    const payload = jwt.verify(token, process.env.JWT_SECRET);
    req.user = payload;
    next();
  } catch {
    res.status(401).json({ error: 'Invalid or expired token' });
  }
}
Key practices:
- Short-lived access tokens (15-30 minutes)
- Long-lived refresh tokens stored in httpOnly cookies (not localStorage — never localStorage)
- Include minimal data in the JWT payload (user ID, role — not the entire user object)
Session-Based Authentication
Old school. Still works great. Server creates a session, stores it (in memory, Redis, or database), and sends a session ID cookie to the client.
import session from 'express-session';
import RedisStore from 'connect-redis';

app.use(session({
  store: new RedisStore({ client: redisClient }),
  secret: process.env.SESSION_SECRET,
  resave: false,
  saveUninitialized: false,
  cookie: {
    secure: true,
    httpOnly: true,
    maxAge: 24 * 60 * 60 * 1000, // 24 hours
    sameSite: 'strict',
  },
}));
Sessions are simpler and more secure than JWTs for traditional web apps. Server has full control — you can revoke a session instantly, which isn't possible with JWTs (you have to wait for them to expire or maintain a blocklist). Seems like teams often default to JWTs when sessions would've been a better fit, probably because JWTs get more blog posts written about them.
API Keys
For server-to-server communication and third-party integrations. Simple but less secure than tokens — they're long-lived and grant access until revoked.
async function authenticateApiKey(req, res, next) {
  const apiKey = req.headers['x-api-key'];
  if (!apiKey) return res.status(401).json({ error: 'Missing API key' });
  const client = await db.apiKeys.findByKey(apiKey);
  if (!client || client.revokedAt) {
    return res.status(401).json({ error: 'Invalid API key' });
  }
  req.client = client;
  next();
}
| Method | Best For | Statefulness | Revocation |
|---|---|---|---|
| JWT | SPAs, mobile apps | Stateless | Difficult (wait for expiry) |
| Sessions | Traditional web apps | Stateful (server-side) | Instant |
| API Keys | Server-to-server, integrations | Stateless | Instant (database check) |
Rate Limiting
Without rate limiting, a single misbehaving client can bring down your API for everyone. Implement it early, not as an afterthought. I've seen this bite teams who thought "we'll add it later." Later never came, and then a bot hammered their endpoint for three hours.
import rateLimit from 'express-rate-limit';

// Global rate limit
const globalLimiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 100, // 100 requests per window
  standardHeaders: true, // Return rate limit info in headers
  message: {
    error: 'RATE_LIMIT_EXCEEDED',
    message: 'Too many requests, please try again later',
    retryAfter: 900, // seconds
  },
});

// Stricter limit for auth endpoints
const authLimiter = rateLimit({
  windowMs: 15 * 60 * 1000,
  max: 10,
  message: { error: 'Too many login attempts' },
});

app.use('/api/', globalLimiter);
app.use('/api/auth/', authLimiter);
Return 429 Too Many Requests with a Retry-After header. Use the standard RateLimit-* headers so clients can implement backoff:
RateLimit-Limit: 100
RateLimit-Remaining: 23
RateLimit-Reset: 1708200000
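On the client side, those headers are what drive backoff. A small sketch with a hypothetical `backoffMs` helper, assuming RateLimit-Reset carries a Unix timestamp as in the example above (some APIs send seconds-until-reset instead, so check the docs of whatever you're calling):

```javascript
// Given a response status and lowercased headers, decide how many
// milliseconds to wait before retrying. 0 means no backoff needed.
function backoffMs(status, headers, nowSec = Math.floor(Date.now() / 1000)) {
  if (status !== 429) return 0;
  // Retry-After (seconds) takes precedence when present.
  if (headers['retry-after']) return Number(headers['retry-after']) * 1000;
  // Otherwise wait until the RateLimit window resets (epoch seconds).
  const reset = Number(headers['ratelimit-reset']);
  return Number.isFinite(reset) ? Math.max(0, (reset - nowSec) * 1000) : 0;
}
```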
Error Handling Patterns
Consistent error responses make life easier for every client developer. Define a standard error format and use it everywhere. Everywhere. Not "most places." Everywhere.
// Error response format
{
  "error": {
    "code": "VALIDATION_ERROR",
    "message": "Invalid request data",
    "details": [
      {
        "field": "email",
        "message": "Must be a valid email address"
      },
      {
        "field": "name",
        "message": "Must be at least 2 characters"
      }
    ]
  }
}
// Centralized error handler in Express
app.use((err, req, res, next) => {
  console.error(`[${req.method}] ${req.path}:`, err);
  if (err.name === 'ValidationError') {
    return res.status(400).json({
      error: {
        code: 'VALIDATION_ERROR',
        message: 'Invalid request data',
        details: err.details,
      },
    });
  }
  if (err.name === 'UnauthorizedError') {
    return res.status(401).json({
      error: {
        code: 'UNAUTHORIZED',
        message: 'Authentication required',
      },
    });
  }
  // Default — never expose internal error details in production
  res.status(500).json({
    error: {
      code: 'INTERNAL_ERROR',
      message: process.env.NODE_ENV === 'production'
        ? 'An unexpected error occurred'
        : err.message,
    },
  });
});
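For a handler like the centralized one above to take its ValidationError branch, something upstream has to throw an error with that name and a details array. One lightweight way to do that, sketched with a hypothetical `validateUser` (in practice a schema library like Zod or Joi would produce the details list for you):

```javascript
// An error the centralized handler can recognize by name and unpack
// via .details to build the standard error response.
class ValidationError extends Error {
  constructor(details) {
    super('Invalid request data');
    this.name = 'ValidationError';
    this.details = details; // [{ field, message }, ...]
  }
}

// Hypothetical validator: collect every problem, then throw once so the
// client sees all failures in a single response.
function validateUser(input) {
  const details = [];
  if (!/^\S+@\S+\.\S+$/.test(input.email ?? '')) {
    details.push({ field: 'email', message: 'Must be a valid email address' });
  }
  if ((input.name ?? '').length < 2) {
    details.push({ field: 'name', message: 'Must be at least 2 characters' });
  }
  if (details.length) throw new ValidationError(details);
}

// In a route: validateUser(req.body) throws, next(err) carries it to the
// centralized handler, which responds 400 with the details array.
```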
Documentation with OpenAPI / Swagger
Your API is only as good as its documentation. OpenAPI (formerly Swagger) is the industry standard.
openapi: 3.0.3
info:
  title: E-Commerce API
  version: 1.0.0
  description: API for managing products, orders, and users
paths:
  /api/v1/products:
    get:
      summary: List products
      parameters:
        - name: category
          in: query
          schema:
            type: string
        - name: limit
          in: query
          schema:
            type: integer
            default: 20
      responses:
        '200':
          description: List of products
          content:
            application/json:
              schema:
                type: object
                properties:
                  data:
                    type: array
                    items:
                      $ref: '#/components/schemas/Product'
Use swagger-ui-express to serve interactive documentation from your API server (and if you're deploying the API in containers, our Docker and Kubernetes beginner's guide covers how to containerize Node.js services like this):
import swaggerUi from 'swagger-ui-express';
import swaggerDocument from './openapi.json';
app.use('/api/docs', swaggerUi.serve, swaggerUi.setup(swaggerDocument));
Testing APIs
Postman
Most popular API testing tool. Create collections of requests, set up environments (development, staging, production), write test scripts, and share collections with your team. Free tier is generous.
Bruno
A newer, open-source alternative to Postman that stores collections as plain files on your filesystem. No cloud sync, no accounts, and collections can be version-controlled with Git. If Postman's increasing focus on cloud features and team plans bothers you, Bruno is a refreshing alternative. I think more teams should give it a look.
httpie
A command-line HTTP client that's far more readable than curl:
# GET request
http GET localhost:3000/api/users
# POST request with JSON body
http POST localhost:3000/api/users name="Priya" email="priya@example.com"
# With authentication
http GET localhost:3000/api/users Authorization:"Bearer token123"
For quick testing and scripting, httpie is faster than opening Postman or Bruno. Probably my go-to for anything ad-hoc.
Choosing Between REST, GraphQL, and tRPC
| Consideration | REST | GraphQL | tRPC |
|---|---|---|---|
| Public API | Best choice | Good | Not suitable |
| Internal API (full-stack TS) | Good | Good | Best choice |
| Multiple clients (web, mobile, third-party) | Good | Best choice | Not suitable |
| Simple CRUD | Best choice | Overkill | Good |
| Complex data relationships | Acceptable | Best choice | Good |
| Learning curve | Low | Medium | Low (if you know TS) |
| Caching | Excellent (HTTP caching) | Requires effort | Relies on React Query |
| Tooling ecosystem | Massive | Large | Growing |
My default recommendation: start with REST. Simplest. Most widely understood. Best-supported. Move to GraphQL when you have multiple clients with different data needs, or when REST's over-fetching/under-fetching becomes a genuine problem — not a theoretical one. Use tRPC when you control both ends and both are TypeScript.
Consistency Beats Perfection
Here's what I'd leave you with after a decade of building and consuming APIs: consistency matters more than picking the "right" approach. A well-designed REST API will outperform a poorly designed GraphQL API every single time. And a GraphQL API with sloppy schema design won't save you from the same problems REST had.
Pick your conventions. Document them. Stick to them. Clear naming, proper error handling, authentication, rate limiting, documentation, testing — these principles transcend any specific technology choice.
Design for the developer who'll use your API at 2 AM with a deadline bearing down on them. Make it predictable. Make it documented. Make it forgiving of mistakes. A perfectly "correct" API that's inconsistent across endpoints will frustrate people more than a slightly unconventional API that behaves the same way everywhere.
If you're preparing for technical interviews, understanding API design is a key part of system design interview preparation — interviewers love to dig into how you'd design scalable APIs. But even outside of interviews, this stuff matters daily. Arguably more than most things you'll learn.
Consistency over perfection. Every time.
Priya Patel
Senior Tech Writer
AI and machine learning specialist with 6 years covering emerging technologies. Previously a senior tech correspondent at TechCrunch India, she now writes in-depth analyses of AI tools, LLM developments, and their real-world applications for Indian businesses.