50 Full Stack Interview Questions That Actually Get Asked in 2026
After conducting 200+ full stack interviews and getting offers from Stripe, Airbnb, and multiple YC startups, I've compiled the questions that separate senior engineers from everyone else. No fluff—just the real stuff.
My first full stack interview at a Series B startup went sideways fast. The interviewer asked me to design a real-time collaborative document editor. I froze. I could build React components and write Express routes, but connecting the dots between WebSockets, conflict resolution, and database design? That's where I fell apart.
That experience taught me something crucial: full stack interviews aren't about knowing every framework. They're about demonstrating you can think across the entire system—from the browser to the database and back. The best candidates I've interviewed understand trade-offs, not just syntax.
This guide covers 50 questions organized from fundamentals to advanced real-world scenarios. Each answer reflects how a senior engineer would actually respond in an interview—with context, trade-offs, and practical wisdom.
What Interviewers Actually Evaluate
- Frontend Mastery: React patterns, state management, performance optimization
- Backend Depth: API design, database modeling, authentication, caching
- System Thinking: How components interact, scalability considerations
- Trade-off Analysis: Why you'd choose one approach over another
- Real-world Experience: Stories of problems you've solved and lessons learned
JavaScript & Core Fundamentals (Questions 1-10)
1. Explain the JavaScript event loop. How does it handle asynchronous operations?
Tests understanding of JavaScript's concurrency model
Answer:
JavaScript is single-threaded but handles async operations through the event loop. Here's how it works:
Call Stack: Executes synchronous code, one function at a time.
Web APIs: Browser-provided APIs (setTimeout, fetch, DOM events) handle async operations outside the main thread.
Task Queue (Macrotasks): Holds callbacks from setTimeout, setInterval, I/O operations.
Microtask Queue: Holds Promise callbacks and MutationObserver. Always processed before macrotasks.
console.log('1'); // Sync - runs first
setTimeout(() => console.log('2'), 0); // Macrotask
Promise.resolve().then(() => console.log('3')); // Microtask
console.log('4'); // Sync
// Output: 1, 4, 3, 2
// Microtasks (Promise) run before macrotasks (setTimeout)

Why it matters: Understanding this prevents race conditions and helps debug async code. In interviews, candidates are often surprised that Promise.then runs before setTimeout even with a 0ms delay.
2. What's the difference between var, let, and const? When would you use each?
Tests scope understanding and modern JavaScript practices
Answer:
var: Function-scoped, hoisted with undefined initialization. Can be redeclared. Generally avoided in modern code due to unexpected scoping issues.
let: Block-scoped, hoisted but not initialized (temporal dead zone). Can be reassigned but not redeclared in the same scope.
const: Block-scoped like let, but cannot be reassigned. Note: objects and arrays declared with const can still have their contents modified.
// The classic interview gotcha
for (var i = 0; i < 3; i++) {
setTimeout(() => console.log(i), 100);
}
// Output: 3, 3, 3 (var is function-scoped)
for (let j = 0; j < 3; j++) {
setTimeout(() => console.log(j), 100);
}
// Output: 0, 1, 2 (let creates new binding each iteration)

My rule: Use const by default. Use let when you need reassignment. Never use var in new code.
3. Explain closures with a practical example. Where have you used them?
Tests functional programming understanding
Answer:
A closure is a function that remembers the variables from its outer scope even after that scope has finished executing. It "closes over" its environment.
// Practical example: Creating a rate limiter
function createRateLimiter(maxCalls, timeWindow) {
let calls = []; // Closure keeps track of calls
return function(fn) {
const now = Date.now();
calls = calls.filter(time => now - time < timeWindow);
if (calls.length < maxCalls) {
calls.push(now);
return fn();
}
return null; // Rate limited
};
}
const limiter = createRateLimiter(5, 1000);
// limiter remembers 'calls' array between invocations
// Another common use: Private variables
function createCounter() {
let count = 0; // Private - can't access directly
return {
increment: () => ++count,
decrement: () => --count,
getCount: () => count
};
}

Real-world uses: Data privacy, memoization, debouncing/throttling, React hooks (useState stores state via closures), partial application.
4. What is the prototype chain? How does JavaScript inheritance work?
Answer:
JavaScript uses prototypal inheritance. Every object has an internal [[Prototype]] link to another object. When you access a property, JS walks up this chain until it finds the property or reaches null.
// Modern class syntax (syntactic sugar over prototypes)
class Animal {
constructor(name) {
this.name = name;
}
speak() {
console.log(`${this.name} makes a sound`);
}
}
class Dog extends Animal {
speak() {
console.log(`${this.name} barks`);
}
}
const dog = new Dog('Rex');
dog.speak(); // "Rex barks"
// Under the hood:
// dog -> Dog.prototype -> Animal.prototype -> Object.prototype -> null
// Checking the chain
console.log(dog.__proto__ === Dog.prototype); // true
console.log(Dog.prototype.__proto__ === Animal.prototype); // true

Interview tip: Know the difference between __proto__ (the actual prototype link, accessed more safely via Object.getPrototypeOf) and .prototype (a property on constructor functions that becomes the __proto__ of instances).
5. Explain the difference between == and ===. What about Object.is()?
Answer:
== (Loose equality): Performs type coercion before comparison. Can lead to surprising results.
=== (Strict equality): No type coercion. Values must be same type and value. Use this 99% of the time.
Object.is(): Like ===, but handles edge cases: NaN === NaN is false, but Object.is(NaN, NaN) is true. Also distinguishes +0 and -0.
// Coercion chaos
'5' == 5  // true (string coerced to number)
'5' === 5 // false (different types)
null == undefined  // true
null === undefined // false

// Edge cases Object.is() handles
NaN === NaN         // false (IEEE 754 spec)
Object.is(NaN, NaN) // true
+0 === -0           // true
Object.is(+0, -0)   // false

// React uses Object.is() for state comparison
6. What are Promises and async/await? How do you handle errors in async code?
Answer:
Promises represent eventual completion or failure of async operations. async/await is syntactic sugar that makes async code look synchronous.
// Promise-based
function fetchUser(id) {
return fetch(`/api/users/${id}`)
.then(res => {
if (!res.ok) throw new Error('User not found');
return res.json();
})
.catch(err => {
console.error('Fetch failed:', err);
throw err; // Re-throw to propagate
});
}
// async/await equivalent
async function fetchUser(id) {
try {
const res = await fetch(`/api/users/${id}`);
if (!res.ok) throw new Error('User not found');
return await res.json();
} catch (err) {
console.error('Fetch failed:', err);
throw err;
}
}
// Parallel execution
const [users, posts] = await Promise.all([
fetchUsers(),
fetchPosts()
]);
// Handle partial failures
const results = await Promise.allSettled([
fetchUsers(),
fetchPosts()
]);
// results: [{status: 'fulfilled', value: ...}, {status: 'rejected', reason: ...}]

7. Explain 'this' keyword behavior in JavaScript. How does it differ in arrow functions?
Answer:
'this' is determined by how a function is called, not where it's defined—except for arrow functions which lexically capture 'this'.
const obj = {
name: 'MyObject',
// Regular function: 'this' depends on call site
regularMethod() {
console.log(this.name); // 'MyObject'
setTimeout(function() {
console.log(this.name); // not 'MyObject': 'this' is window/global (undefined in strict mode)
}, 100);
},
// Arrow function: 'this' is lexically bound
arrowMethod() {
console.log(this.name); // 'MyObject'
setTimeout(() => {
console.log(this.name); // 'MyObject' - arrow captures 'this'
}, 100);
}
};
// Explicit binding
const boundFn = obj.regularMethod.bind(obj);
obj.regularMethod.call(otherObj); // 'this' is otherObj
obj.regularMethod.apply(otherObj, [args]); // Same, array args

Rule of thumb: Use arrow functions for callbacks to preserve 'this'. Use regular functions for object methods when you need dynamic 'this'.
8. What is debouncing vs throttling? Implement both.
Answer:
Debounce: Wait until user stops triggering for X ms, then execute once. Good for search inputs.
Throttle: Execute at most once every X ms. Good for scroll/resize handlers.
// Debounce: Execute after user stops typing
function debounce(fn, delay) {
let timeoutId;
return function(...args) {
clearTimeout(timeoutId);
timeoutId = setTimeout(() => fn.apply(this, args), delay);
};
}
const searchInput = debounce((query) => {
fetchResults(query);
}, 300);
// Throttle: Execute at most once per interval
function throttle(fn, limit) {
let inThrottle;
return function(...args) {
if (!inThrottle) {
fn.apply(this, args);
inThrottle = true;
setTimeout(() => inThrottle = false, limit);
}
};
}
const handleScroll = throttle(() => {
updateScrollPosition();
}, 100);

9. Explain shallow copy vs deep copy. How do you deep clone an object?
Answer:
Shallow copy: Copies top-level properties. Nested objects still reference the original.
Deep copy: Recursively copies all nested objects, creating entirely independent data.
const original = { a: 1, nested: { b: 2 } };
// Shallow copies
const spread = { ...original };
const assigned = Object.assign({}, original);
spread.nested.b = 99; // Also changes original.nested.b!
// Deep copy methods
// 1. structuredClone (modern, best option)
const deep1 = structuredClone(original);
// 2. JSON (loses functions, dates become strings)
const deep2 = JSON.parse(JSON.stringify(original));
// 3. Manual recursive (full control)
function deepClone(obj) {
if (obj === null || typeof obj !== 'object') return obj;
if (obj instanceof Date) return new Date(obj);
if (obj instanceof Array) return obj.map(deepClone);
const cloned = {};
for (const key in obj) {
if (obj.hasOwnProperty(key)) {
cloned[key] = deepClone(obj[key]);
}
}
return cloned;
}

2026 recommendation: Use structuredClone() for most cases. It handles circular references, dates, maps, sets, and more.
10. What are generators and iterators? Give a practical use case.
Answer:
Iterator: An object with a next() method that returns {value, done}.
Generator: A function that can pause (yield) and resume, automatically creating an iterator.
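Generators build this protocol for you, but writing an iterator by hand shows what the protocol actually is (makeRangeIterator is a hypothetical helper for illustration):

```javascript
// A minimal hand-written iterator: just an object with next()
function makeRangeIterator(start, end) {
  let current = start;
  return {
    // next() is the only method the iterator protocol requires
    next() {
      return current < end
        ? { value: current++, done: false }
        : { value: undefined, done: true };
    },
    // Returning `this` from Symbol.iterator makes the object itself
    // iterable, so it works with for...of and spread
    [Symbol.iterator]() {
      return this;
    }
  };
}

const range = makeRangeIterator(1, 4);
console.log([...range]); // [1, 2, 3]
```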
// Generator for paginated API calls
async function* fetchAllPages(endpoint) {
let page = 1;
let hasMore = true;
while (hasMore) {
const response = await fetch(`${endpoint}?page=${page}`);
const data = await response.json();
yield data.items;
hasMore = data.hasNextPage;
page++;
}
}
// Usage: memory-efficient pagination
for await (const items of fetchAllPages('/api/users')) {
processItems(items);
// Only one page in memory at a time
}
// Infinite sequence (lazy evaluation)
function* fibonacci() {
let [a, b] = [0, 1];
while (true) {
yield a;
[a, b] = [b, a + b];
}
}
const fib = fibonacci();
fib.next().value; // 0
fib.next().value; // 1
fib.next().value; // 1

React & Frontend (Questions 11-25)
11. Explain the React component lifecycle in functional components with hooks.
Tests modern React understanding
Answer:
In functional components, useEffect replaces lifecycle methods:
function Component({ userId }) {
// componentDidMount + componentDidUpdate
useEffect(() => {
fetchUser(userId);
}, [userId]); // Runs when userId changes
// componentDidMount only (empty deps)
useEffect(() => {
initializeAnalytics();
}, []);
// componentWillUnmount (cleanup function)
useEffect(() => {
const subscription = subscribe(userId);
return () => {
subscription.unsubscribe(); // Cleanup
};
}, [userId]);
// Run on every render (no deps array)
useEffect(() => {
document.title = `User ${userId}`;
}); // Careful: runs after EVERY render
}

Key insight: Effects run after render, not during. For synchronous DOM updates, use useLayoutEffect instead.
12. What causes unnecessary re-renders in React? How do you optimize them?
Answer:
Components re-render when: parent renders, state changes, context changes, or props change (reference comparison).
// Problem: New object/function created every render
function Parent() {
const config = { theme: 'dark' }; // New reference every render!
const handleClick = () => {}; // New function every render!
return <Child config={config} onClick={handleClick} />;
}
// Solution: Memoization
function Parent() {
const config = useMemo(() => ({ theme: 'dark' }), []);
const handleClick = useCallback(() => {
doSomething();
}, []);
return <Child config={config} onClick={handleClick} />;
}
// Prevent child re-renders
const Child = React.memo(({ config, onClick }) => {
return <div onClick={onClick}>{config.theme}</div>;
});
// For expensive calculations
const sortedList = useMemo(() => {
return [...items].sort((a, b) => a.name.localeCompare(b.name)); // copy first: .sort() mutates
}, [items]);

Warning: Don't over-optimize. Premature memoization adds complexity. Profile first with React DevTools.
13. Explain React's Virtual DOM and reconciliation algorithm.
Answer:
The Virtual DOM is a lightweight JavaScript representation of the actual DOM. When state changes, React:
- Creates a new Virtual DOM tree
- Diffs it against the previous tree (reconciliation)
- Calculates minimal DOM operations needed
- Batches and applies changes to real DOM
Key optimizations in the diffing algorithm:
- Elements of different types produce different trees (replaced entirely)
- Keys help identify moved/reordered elements in lists
- Same component type = update props, preserve state
// Why keys matter
// Bad: Index as key - items reorder, state gets confused
{items.map((item, index) => <Item key={index} />)}
// Good: Stable unique ID
{items.map(item => <Item key={item.id} />)}

14. Compare useState vs useReducer. When would you choose each?
Answer:
useState: Simple state, independent values, straightforward updates.
useReducer: Complex state logic, multiple related values, state depends on previous state, or when next state computation is complex.
// useState: Simple counter
const [count, setCount] = useState(0);
// useReducer: Complex form with validation
const formReducer = (state, action) => {
switch (action.type) {
case 'SET_FIELD':
return {
...state,
values: { ...state.values, [action.field]: action.value },
errors: { ...state.errors, [action.field]: null }
};
case 'SET_ERROR':
return {
...state,
errors: { ...state.errors, [action.field]: action.error }
};
case 'SUBMIT_START':
return { ...state, isSubmitting: true };
case 'SUBMIT_SUCCESS':
return { ...state, isSubmitting: false, isSubmitted: true };
default:
return state;
}
};
const [state, dispatch] = useReducer(formReducer, initialState);
dispatch({ type: 'SET_FIELD', field: 'email', value: 'test@test.com' });

My rule: If I'm writing more than 3 related useState calls or complex update logic, I switch to useReducer.
15. How does React Context work? What are its performance implications?
Answer:
Context provides a way to pass data through the component tree without prop drilling. However, any component consuming context re-renders when the context value changes.
// Problem: All consumers re-render on any change
const AppContext = createContext();
function App() {
const [user, setUser] = useState(null);
const [theme, setTheme] = useState('dark');
// New object every render = all consumers re-render
return (
<AppContext.Provider value={{ user, theme, setUser, setTheme }}>
<Child />
</AppContext.Provider>
);
}
// Solution 1: Split contexts by update frequency
const UserContext = createContext();
const ThemeContext = createContext();
// Solution 2: Memoize context value
function App() {
const [user, setUser] = useState(null);
const value = useMemo(() => ({ user, setUser }), [user]);
return (
<UserContext.Provider value={value}>
<Child />
</UserContext.Provider>
);
}
// Solution 3: Use selectors (with libraries like use-context-selector)
const theme = useContextSelector(AppContext, ctx => ctx.theme);

When to avoid Context: Frequently changing data (use a state management library instead) or deeply nested data (prop drilling might be clearer).
16. Explain React Server Components. How do they differ from Client Components?
Answer:
React Server Components (RSC) run only on the server; their rendered output is serialized and streamed to the client, so their code never ships in the JS bundle. They can't use hooks, event handlers, or browser APIs, but they can directly access databases and filesystems.
Server Components: Zero JS bundle impact, can fetch data directly, no state/effects
Client Components: Interactive, use hooks, handle events, run in browser
// Server Component (default in Next.js 13+)
async function ProductList() {
const products = await db.query('SELECT * FROM products');
return (
<ul>
{products.map(p => <li key={p.id}>{p.name}</li>)}
</ul>
);
}
// Client Component (opt-in)
'use client';
import { useState } from 'react';
function AddToCartButton({ productId }) {
const [loading, setLoading] = useState(false);
return (
<button onClick={() => addToCart(productId)}>
Add to Cart
</button>
);
}
// Composition: Server renders, Client adds interactivity
function ProductPage() {
return (
<div>
<ProductList /> {/* Server Component */}
<AddToCartButton /> {/* Client Component */}
</div>
);
}

17. How would you implement infinite scroll in React?
Answer:
Two main approaches: scroll event listeners or Intersection Observer (preferred for performance).
function InfiniteList() {
const [items, setItems] = useState([]);
const [page, setPage] = useState(1);
const [hasMore, setHasMore] = useState(true);
const [loading, setLoading] = useState(false);
const loaderRef = useRef(null);
// Intersection Observer approach
useEffect(() => {
const observer = new IntersectionObserver(
(entries) => {
if (entries[0].isIntersecting && hasMore && !loading) {
loadMore();
}
},
{ threshold: 0.1 }
);
if (loaderRef.current) {
observer.observe(loaderRef.current);
}
return () => observer.disconnect();
}, [hasMore, loading]);
const loadMore = async () => {
setLoading(true);
const newItems = await fetchItems(page);
setItems(prev => [...prev, ...newItems]);
setPage(prev => prev + 1);
setHasMore(newItems.length > 0);
setLoading(false);
};
return (
<div>
{items.map(item => <Item key={item.id} {...item} />)}
<div ref={loaderRef}>
{loading && <Spinner />}
</div>
</div>
);
}

For production: Consider virtualization (react-window) for very long lists to keep DOM size manageable.
18. Explain code splitting and lazy loading in React.
Answer:
Code splitting breaks your bundle into smaller chunks loaded on demand, improving initial load time.
import { lazy, Suspense } from 'react';
// Lazy load heavy components
const Dashboard = lazy(() => import('./Dashboard'));
const Analytics = lazy(() => import('./Analytics'));
function App() {
return (
<Suspense fallback={<LoadingSpinner />}>
<Routes>
<Route path="/dashboard" element={<Dashboard />} />
<Route path="/analytics" element={<Analytics />} />
</Routes>
</Suspense>
);
}
// Named exports need slightly different syntax
const Dashboard = lazy(() =>
import('./Dashboard').then(module => ({ default: module.Dashboard }))
);
// Preload on hover for better UX
function NavLink({ to, children }) {
const preload = () => {
if (to === '/analytics') {
import('./Analytics'); // Starts loading
}
};
return (
<Link to={to} onMouseEnter={preload}>
{children}
</Link>
);
}

19. How do you handle forms in React? Compare controlled vs uncontrolled components.
Answer:
Controlled: React state is single source of truth. More control, enables validation/formatting.
Uncontrolled: DOM holds the state. Simpler, less code, use refs to read values.
// Controlled: Full control over input
function ControlledForm() {
const [email, setEmail] = useState('');
const [errors, setErrors] = useState({});
const handleChange = (e) => {
const value = e.target.value;
setEmail(value);
// Real-time validation
if (!value.includes('@')) {
setErrors({ email: 'Invalid email' });
} else {
setErrors({});
}
};
return <input value={email} onChange={handleChange} />;
}
// Uncontrolled: Simpler, DOM holds state
function UncontrolledForm() {
const emailRef = useRef();
const handleSubmit = (e) => {
e.preventDefault();
console.log(emailRef.current.value);
};
return (
<form onSubmit={handleSubmit}>
<input ref={emailRef} defaultValue="" />
<button type="submit">Submit</button>
</form>
);
}
// Modern approach: React Hook Form (best of both)
const { register, handleSubmit, formState } = useForm();

Recommendation: Use React Hook Form or Formik for complex forms. Manual controlled inputs for simple cases.
20. What are custom hooks? Create one that handles API calls with loading/error states.
Answer:
Custom hooks extract reusable stateful logic. They're just functions that use other hooks.
function useFetch(url, options = {}) {
const [data, setData] = useState(null);
const [loading, setLoading] = useState(true);
const [error, setError] = useState(null);
useEffect(() => {
const abortController = new AbortController();
const fetchData = async () => {
try {
setLoading(true);
setError(null);
const response = await fetch(url, {
...options,
signal: abortController.signal
});
if (!response.ok) {
throw new Error(`HTTP ${response.status}`);
}
const json = await response.json();
setData(json);
} catch (err) {
if (err.name !== 'AbortError') {
setError(err.message);
}
} finally {
setLoading(false);
}
};
fetchData();
return () => abortController.abort(); // Cleanup
}, [url]);
// Note: a real refetch needs to re-trigger the effect, e.g. via a counter included in the deps array
return { data, loading, error };
}
// Usage
function UserProfile({ userId }) {
const { data: user, loading, error } = useFetch(`/api/users/${userId}`);
if (loading) return <Spinner />;
if (error) return <Error message={error} />;
return <Profile user={user} />;
}

21. Explain CSS-in-JS vs Tailwind vs CSS Modules. Which do you prefer and why?
Answer:
CSS-in-JS (styled-components, Emotion): Scoped styles, dynamic styling, but runtime cost and larger bundles.
Tailwind CSS: Utility-first, tiny production bundle, fast development. Learning curve for class names.
CSS Modules: Scoped CSS with familiar syntax, zero runtime. Good for teams comfortable with traditional CSS.
My preference for 2026: Tailwind for most projects—it's fast, consistent, and the JIT compiler keeps bundles small. For complex design systems, CSS-in-JS with zero-runtime libraries like vanilla-extract.
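To make the trade-offs concrete, here's the same button in each approach (file names and class names are illustrative):

```jsx
// CSS Modules: scoped at build time, zero runtime
import styles from './Button.module.css'; // .primary { background: #2563eb; }
<button className={styles.primary}>Save</button>

// Tailwind: utilities composed inline, unused classes purged at build
<button className="bg-blue-600 text-white px-4 py-2 rounded">Save</button>

// CSS-in-JS (styled-components): dynamic props, but styles resolve at runtime
const Button = styled.button`
  background: ${p => (p.$primary ? '#2563eb' : '#e5e7eb')};
`;
```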
22. How do you test React components? Explain your testing strategy.
Answer:
I follow the testing trophy: mostly integration tests, some unit tests, few E2E tests.
// Unit test: Pure functions, hooks
test('formatCurrency formats correctly', () => {
expect(formatCurrency(1000)).toBe('$1,000.00');
});
// Integration test: Component behavior (Testing Library)
test('user can submit form', async () => {
render(<LoginForm onSubmit={mockSubmit} />);
await userEvent.type(screen.getByLabelText(/email/i), 'test@test.com');
await userEvent.type(screen.getByLabelText(/password/i), 'password123');
await userEvent.click(screen.getByRole('button', { name: /submit/i }));
expect(mockSubmit).toHaveBeenCalledWith({
email: 'test@test.com',
password: 'password123'
});
});
// Mock API calls
import { rest } from 'msw';
import { setupServer } from 'msw/node';
const server = setupServer(
rest.get('/api/user', (req, res, ctx) => {
return res(ctx.json({ name: 'John' }));
})
);
// E2E: Critical paths with Playwright/Cypress
test('checkout flow', async ({ page }) => {
await page.goto('/products');
await page.click('[data-testid="add-to-cart"]');
await page.click('[data-testid="checkout"]');
await expect(page.locator('.order-confirmation')).toBeVisible();
});

23. Explain React error boundaries. How do you handle errors gracefully?
Answer:
Error boundaries catch JavaScript errors in child component trees and display fallback UI. They must be class components (hooks can't catch render errors yet).
class ErrorBoundary extends React.Component {
state = { hasError: false, error: null };
static getDerivedStateFromError(error) {
return { hasError: true, error };
}
componentDidCatch(error, errorInfo) {
// Log to error reporting service
logErrorToService(error, errorInfo.componentStack);
}
render() {
if (this.state.hasError) {
return (
<div className="error-fallback">
<h2>Something went wrong</h2>
<button onClick={() => this.setState({ hasError: false })}>
Try again
</button>
</div>
);
}
return this.props.children;
}
}
// Usage: Wrap sections independently
function App() {
return (
<Layout>
<ErrorBoundary>
<Header />
</ErrorBoundary>
<ErrorBoundary>
<MainContent /> {/* Error here doesn't crash Header */}
</ErrorBoundary>
</Layout>
);
}

Note: Error boundaries don't catch errors in event handlers, async code, or server-side rendering. Use try/catch for those.
24. What is hydration in React? What causes hydration mismatches?
Answer:
Hydration is the process where React attaches event listeners and makes server-rendered HTML interactive. React expects the client render to match the server HTML exactly.
Common causes of mismatches:
- Using Date.now() or Math.random() in render
- Browser-only APIs (window, localStorage) in initial render
- Different data between server and client
- Invalid HTML nesting (p inside p, div inside p)
// Problem: Different on server vs client
function Timestamp() {
return <span>{new Date().toISOString()}</span>; // Mismatch!
}
// Solution: useEffect for client-only values
function Timestamp() {
const [time, setTime] = useState(null);
useEffect(() => {
setTime(new Date().toISOString());
}, []);
return <span>{time ?? 'Loading...'}</span>;
}
// Or use suppressHydrationWarning for intentional differences
<time suppressHydrationWarning>
{new Date().toISOString()}
</time>

25. How do you optimize a slow React application? Walk me through your debugging process.
Answer:
My optimization process:
- Measure first: React DevTools Profiler to identify slow components and unnecessary re-renders
- Check network: Lighthouse for bundle size, waterfall for slow requests
- Identify patterns: Are expensive components re-rendering? Is there layout thrashing?
Common fixes:
- Memoize expensive computations (useMemo) and callbacks (useCallback)
- Virtualize long lists (react-window)
- Code split heavy routes and components
- Lazy load images and off-screen content
- Debounce/throttle event handlers
- Move state down to reduce re-render scope
- Use web workers for CPU-intensive tasks
Real example: We had a dashboard rendering 1000+ items. Virtualization + memoization brought re-render time from 800ms to 16ms.
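The caching idea behind useMemo can be sketched in plain JavaScript. memoizeOne below is a hypothetical helper, not React's implementation, but it uses the same Object.is comparison React's hooks use for dependency checks:

```javascript
// Caches the last result; recomputes only when the inputs change
// by reference (Object.is), like a useMemo dependency array.
function memoizeOne(fn) {
  let lastArgs = null;
  let lastResult;
  return function (...args) {
    const changed =
      !lastArgs ||
      args.length !== lastArgs.length ||
      args.some((arg, i) => !Object.is(arg, lastArgs[i]));
    if (changed) {
      lastResult = fn(...args);
      lastArgs = args;
    }
    return lastResult;
  };
}

let computations = 0;
const sortItems = memoizeOne((items) => {
  computations++;
  return [...items].sort((a, b) => a - b);
});

const data = [3, 1, 2];
sortItems(data);
sortItems(data);           // Same reference: cached, no re-sort
console.log(computations); // 1
sortItems([3, 1, 2]);      // New reference: recomputes
console.log(computations); // 2
```

This is also why passing a freshly created array or object every render defeats memoization: the reference is always "changed".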
Node.js & Backend (Questions 26-35)
26. Explain the Node.js event loop. How is it different from browser JavaScript?
Answer:
Node's event loop, implemented by libuv, has additional phases for I/O handling that browser JavaScript doesn't:
/*
Node.js Event Loop Phases:
1. Timers - setTimeout, setInterval callbacks
2. Pending callbacks - I/O callbacks deferred from previous loop
3. Idle, prepare - internal use
4. Poll - retrieve new I/O events, execute I/O callbacks
5. Check - setImmediate callbacks
6. Close callbacks - socket.on('close')
*/
// Key difference: setImmediate vs setTimeout
setImmediate(() => console.log('immediate'));
setTimeout(() => console.log('timeout'), 0);
// Order can vary at the top level! Inside an I/O callback, setImmediate always fires first
// process.nextTick runs before any phase
process.nextTick(() => console.log('nextTick'));
Promise.resolve().then(() => console.log('promise'));
// nextTick, promise, then event loop phases
// Blocking the event loop is deadly in Node
// Bad: CPU-intensive sync operation
const hash = crypto.pbkdf2Sync(password, salt, 100000, 64, 'sha512');
// Good: Non-blocking async
crypto.pbkdf2(password, salt, 100000, 64, 'sha512', (err, hash) => {});

27. Design a RESTful API for a blog platform. What endpoints would you create?
Answer:
// RESTful API Design
// Use nouns, not verbs. HTTP method conveys action.
// Posts
GET /api/v1/posts // List posts (paginated)
GET /api/v1/posts/:id // Get single post
POST /api/v1/posts // Create post (auth required)
PUT /api/v1/posts/:id // Replace post
PATCH /api/v1/posts/:id // Partial update
DELETE /api/v1/posts/:id // Delete post
// Nested resources
GET /api/v1/posts/:id/comments // Comments on a post
POST /api/v1/posts/:id/comments // Add comment
// Filtering, sorting, pagination via query params
GET /api/v1/posts?status=published&sort=-createdAt&page=2&limit=20
// Response format
{
"data": [...],
"meta": {
"total": 100,
"page": 2,
"limit": 20,
"totalPages": 5
},
"links": {
"self": "/api/v1/posts?page=2",
"next": "/api/v1/posts?page=3",
"prev": "/api/v1/posts?page=1"
}
}
// Error response
{
"error": {
"code": "VALIDATION_ERROR",
"message": "Invalid input",
"details": [
{ "field": "title", "message": "Title is required" }
]
}
}

28. How do you handle authentication in a Node.js API? Compare JWT vs sessions.
Answer:
Sessions: Server stores session data, client gets session ID cookie. Stateful, easy to invalidate, but requires session storage (Redis) for scaling.
JWT: Stateless tokens containing user info. Scales easily, but can't be invalidated before expiry without additional infrastructure.
// JWT Implementation
const jwt = require('jsonwebtoken');
// Login endpoint
app.post('/login', async (req, res) => {
const user = await validateCredentials(req.body);
const accessToken = jwt.sign(
{ userId: user.id, role: user.role },
process.env.JWT_SECRET,
{ expiresIn: '15m' }
);
const refreshToken = jwt.sign(
{ userId: user.id },
process.env.REFRESH_SECRET,
{ expiresIn: '7d' }
);
// Store refresh token hash in DB for invalidation
await storeRefreshToken(user.id, refreshToken);
res.cookie('refreshToken', refreshToken, {
httpOnly: true,
secure: true,
sameSite: 'strict'
});
res.json({ accessToken });
});
// Auth middleware
const authenticate = (req, res, next) => {
const token = req.headers.authorization?.split(' ')[1];
try {
req.user = jwt.verify(token, process.env.JWT_SECRET);
next();
} catch (err) {
res.status(401).json({ error: 'Invalid token' });
}
};

My recommendation: Short-lived JWTs (15min) + httpOnly refresh tokens + token rotation. Best balance of security and scalability.
29. How do you handle file uploads in Node.js? What about large files?
Answer:
// Small files: multer with memory storage
const multer = require('multer');
const upload = multer({
storage: multer.memoryStorage(),
limits: { fileSize: 5 * 1024 * 1024 } // 5MB
});
app.post('/upload', upload.single('file'), async (req, res) => {
const { buffer, mimetype, originalname } = req.file;
await s3.upload({ Body: buffer, Key: originalname, ContentType: mimetype }).promise();
res.json({ uploaded: originalname });
});
// Large files: Stream directly to storage
const { S3Client } = require('@aws-sdk/client-s3');
const { Upload } = require('@aws-sdk/lib-storage');
const busboy = require('busboy');
app.post('/upload-large', (req, res) => {
const bb = busboy({ headers: req.headers });
bb.on('file', async (name, file, info) => {
const upload = new Upload({
client: s3Client,
params: {
Bucket: 'my-bucket',
Key: info.filename,
Body: file // Stream directly, never loads into memory
}
});
upload.on('httpUploadProgress', (progress) => {
console.log(`Uploaded: ${progress.loaded}/${progress.total}`);
});
await upload.done();
res.json({ success: true });
});
req.pipe(bb);
});

For production: Use presigned URLs for direct client-to-S3 uploads. Bypasses your server entirely for large files.
30. Explain middleware in Express. How would you implement rate limiting?
Answer:
Middleware functions sit in Express's request-response pipeline with access to req, res, and the next function. They can modify req/res, end the request, or call next() to pass control to the next middleware.
// Middleware execution order matters!
app.use(cors());
app.use(helmet());
app.use(express.json());
app.use(requestLogger);
app.use('/api', rateLimiter);
app.use('/api', authenticate);
app.use('/api', routes);
app.use(errorHandler); // Must be last
// Custom rate limiter implementation
const rateLimit = (options) => {
const { windowMs, max } = options;
const requests = new Map();
// Cleanup old entries periodically
setInterval(() => {
const now = Date.now();
for (const [key, data] of requests) {
if (now - data.startTime > windowMs) {
requests.delete(key);
}
}
}, windowMs);
return (req, res, next) => {
const key = req.ip;
const now = Date.now();
if (!requests.has(key)) {
requests.set(key, { count: 1, startTime: now });
return next();
}
const data = requests.get(key);
if (now - data.startTime > windowMs) {
requests.set(key, { count: 1, startTime: now });
return next();
}
if (data.count >= max) {
return res.status(429).json({
error: 'Too many requests',
retryAfter: Math.ceil((data.startTime + windowMs - now) / 1000)
});
}
data.count++;
next();
};
};
// Usage
app.use('/api', rateLimit({ windowMs: 60000, max: 100 }));
For production: Use Redis-based rate limiting for distributed systems. The above only works for single-server deployments.
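A Redis-backed version of the same fixed-window idea is only a few lines. Here it is written against an injected client so the logic stays visible; the `incr`/`expire` calls match ioredis, but any client exposing those commands works. A sketch, not production code:

```javascript
// Fixed-window rate limit: INCR a per-IP counter, set its expiry on first hit.
// `redis` is any client exposing async incr(key) and expire(key, seconds).
function redisRateLimit({ redis, windowSec, max }) {
  return async (req, res, next) => {
    // The window number is baked into the key, so all servers increment the
    // same counter; EXPIRE just garbage-collects old windows.
    const key = `rl:${req.ip}:${Math.floor(Date.now() / 1000 / windowSec)}`;
    const count = await redis.incr(key);
    if (count === 1) await redis.expire(key, windowSec);
    if (count > max) {
      return res.status(429).json({ error: 'Too many requests' });
    }
    next();
  };
}
```

Because every instance hits the same Redis key for a given IP and window, the limit holds across a whole fleet, which is exactly what the in-memory Map version cannot do.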
31. How do you handle errors in an Express application?
Answer:
// Custom error classes
class AppError extends Error {
constructor(message, statusCode, code) {
super(message);
this.statusCode = statusCode;
this.code = code;
this.isOperational = true;
}
}
class NotFoundError extends AppError {
constructor(resource) {
super(`${resource} not found`, 404, 'NOT_FOUND');
}
}
class ValidationError extends AppError {
constructor(errors) {
super('Validation failed', 400, 'VALIDATION_ERROR');
this.errors = errors;
}
}
// Async error wrapper (no try/catch needed in routes)
const asyncHandler = (fn) => (req, res, next) => {
Promise.resolve(fn(req, res, next)).catch(next);
};
// Route using wrapper
app.get('/users/:id', asyncHandler(async (req, res) => {
const user = await User.findById(req.params.id);
if (!user) throw new NotFoundError('User');
res.json(user);
}));
// Global error handler (must have 4 params)
app.use((err, req, res, next) => {
// Log error
logger.error({
message: err.message,
stack: err.stack,
url: req.url,
method: req.method
});
// Don't leak stack traces in production
if (err.isOperational) {
return res.status(err.statusCode).json({
error: {
code: err.code,
message: err.message,
...(err.errors && { details: err.errors })
}
});
}
// Unknown error - send generic message
res.status(500).json({
error: {
code: 'INTERNAL_ERROR',
message: 'Something went wrong'
}
});
});
32. Explain how you would implement caching in a Node.js application.
Answer:
Multiple caching layers, each with different trade-offs:
// 1. In-memory cache (single server)
const NodeCache = require('node-cache');
const cache = new NodeCache({ stdTTL: 300 });
async function getUser(id) {
const cached = cache.get(`user:${id}`);
if (cached) return cached;
const user = await db.users.findById(id);
cache.set(`user:${id}`, user);
return user;
}
// 2. Redis cache (distributed)
const Redis = require('ioredis');
const redis = new Redis();
async function getCachedData(key, fetchFn, ttl = 300) {
const cached = await redis.get(key);
if (cached) return JSON.parse(cached);
const data = await fetchFn();
await redis.setex(key, ttl, JSON.stringify(data));
return data;
}
// 3. HTTP caching headers
app.get('/api/products', async (req, res) => {
const products = await getProducts();
res
.set('Cache-Control', 'public, max-age=300') // Browser caches 5min
.set('ETag', generateETag(products))
.json(products);
});
// 4. Cache invalidation pattern
async function updateUser(id, data) {
await db.users.update(id, data);
await redis.del(`user:${id}`);
await redis.del('users:list'); // Invalidate list cache too
}
Cache invalidation strategy: Time-based expiry for most cases; event-driven invalidation where data consistency is critical.
33. How do you secure a Node.js API?
Answer:
// Essential security middleware
const helmet = require('helmet');
const cors = require('cors');
const rateLimit = require('express-rate-limit');
app.use(helmet()); // Sets security headers
app.use(cors({
origin: process.env.ALLOWED_ORIGINS.split(','),
credentials: true
}));
app.use(rateLimit({ windowMs: 15 * 60 * 1000, max: 100 }));
// Input validation (never trust user input)
const { body, validationResult } = require('express-validator');
app.post('/users',
body('email').isEmail().normalizeEmail(),
body('password').isLength({ min: 8 }),
(req, res) => {
const errors = validationResult(req);
if (!errors.isEmpty()) {
return res.status(400).json({ errors: errors.array() });
}
// ...create the user, then respond
res.status(201).json({ created: true });
}
);
// Prevent NoSQL injection
const sanitize = require('mongo-sanitize');
app.use((req, res, next) => {
req.body = sanitize(req.body);
req.query = sanitize(req.query);
next();
});
// SQL injection prevention (use parameterized queries)
// Bad: db.query(`SELECT * FROM users WHERE id = ${userId}`)
// Good:
db.query('SELECT * FROM users WHERE id = $1', [userId]);
// Security checklist:
// - HTTPS only (redirect HTTP)
// - Secure cookie settings (httpOnly, secure, sameSite)
// - Content Security Policy headers
// - Rate limiting per endpoint
// - Input validation and sanitization
// - Parameterized database queries
// - Proper error handling (don't leak stack traces)
// - Keep dependencies updated (npm audit)
34. Explain the difference between SQL and NoSQL databases. When would you use each?
Answer:
SQL (PostgreSQL, MySQL):
- Structured data with relationships
- ACID transactions
- Complex queries and joins
- Schema enforcement
- Best for: Financial data, user accounts, inventory
NoSQL (MongoDB, DynamoDB):
- Flexible schema
- Horizontal scaling
- High write throughput
- Document/key-value models
- Best for: Logs, sessions, content management, real-time analytics
My rule: Start with PostgreSQL for most applications. It handles JSON well too. Use NoSQL for specific use cases: high-volume logs (DynamoDB), caching (Redis), full-text search (Elasticsearch).
35. How do you handle database migrations in production?
Answer:
// Using a migration tool (Knex.js example)
// migrations/20260123_add_users_email_index.js
exports.up = async function(knex) {
// Make sure the column exists before indexing it (keeps the migration idempotent)
const exists = await knex.schema.hasColumn('users', 'email');
if (exists) {
await knex.schema.alterTable('users', table => {
table.index('email');
});
}
};
exports.down = async function(knex) {
await knex.schema.alterTable('users', table => {
table.dropIndex('email');
});
};
// Safe migration practices:
// 1. Never delete columns immediately - deprecate first
// 2. Add new columns as nullable, backfill, then add constraints
// 3. Create indexes CONCURRENTLY to avoid locking
// Example: Adding required column safely
// Step 1: Add nullable column
ALTER TABLE users ADD COLUMN phone VARCHAR(20);
// Step 2: Backfill data in batches (Postgres UPDATE has no LIMIT, so batch via subquery)
UPDATE users SET phone = 'unknown'
WHERE id IN (SELECT id FROM users WHERE phone IS NULL LIMIT 1000);
// Step 3: Add NOT NULL constraint
ALTER TABLE users ALTER COLUMN phone SET NOT NULL;
// Zero-downtime deployment order:
// 1. Deploy new code that handles both old and new schema
// 2. Run migration
// 3. Deploy code that uses new schema only
// 4. Clean up old schema (after rollback window)
Database & System Design (Questions 36-45)
36. Write a SQL query to find the second highest salary in each department.
Answer:
-- Using window function (most elegant)
WITH ranked AS (
SELECT
department_id,
employee_name,
salary,
DENSE_RANK() OVER (
PARTITION BY department_id
ORDER BY salary DESC
) as rank
FROM employees
)
SELECT department_id, employee_name, salary
FROM ranked
WHERE rank = 2;
-- Alternative: Correlated subquery
SELECT e1.department_id, e1.employee_name, e1.salary
FROM employees e1
WHERE 1 = (
SELECT COUNT(DISTINCT e2.salary)
FROM employees e2
WHERE e2.department_id = e1.department_id
AND e2.salary > e1.salary
);
-- DENSE_RANK vs RANK vs ROW_NUMBER:
-- DENSE_RANK: 1,2,2,3 (no gaps, ties get same rank)
-- RANK: 1,2,2,4 (gaps after ties)
// ROW_NUMBER: 1,2,3,4 (unique, arbitrary for ties)
37. Explain database indexing. When would an index hurt performance?
Answer:
Indexes are data structures (usually B-trees) that speed up data retrieval at the cost of write performance and storage.
When indexes help:
- Columns in WHERE clauses
- Columns used in JOINs
- Columns used in ORDER BY
- High cardinality columns (many unique values)
When indexes hurt:
- Write-heavy tables (every INSERT/UPDATE updates indexes)
- Low cardinality columns (boolean, status enum)
- Small tables (full scan is faster)
- Columns rarely used in queries
-- Composite index order matters!
CREATE INDEX idx_user_status_created ON orders(user_id, status, created_at);
-- This index helps:
WHERE user_id = 1 AND status = 'pending'
WHERE user_id = 1
-- This index does NOT help (leftmost prefix rule):
WHERE status = 'pending'        -- Can't use index
WHERE created_at > '2026-01-01' -- Can't use index
38. Design a URL shortener like bit.ly. What are the key considerations?
Answer:
Requirements clarification:
- Create short URL from long URL
- Redirect short URL to original
- Scale: 100M URLs/day, 10:1 read/write ratio
- URL expiration, analytics
// Key Design Decisions:
// 1. Short code generation
// Base62 encoding: [a-zA-Z0-9] = 62 chars
// 7 chars = 62^7 = 3.5 trillion combinations
// Option A: Random (collision check needed)
function generateShortCodeRandom() {
return crypto.randomBytes(5).toString('base64url').slice(0, 7);
}
// Option B: Counter-based (no collisions, but predictable)
async function generateShortCodeCounter() {
const id = await getNextId(); // From a distributed counter
return base62Encode(id);
}
// 2. Database Schema
CREATE TABLE urls (
short_code VARCHAR(7) PRIMARY KEY,
original_url TEXT NOT NULL,
user_id INT,
created_at TIMESTAMP DEFAULT NOW(),
expires_at TIMESTAMP,
click_count INT DEFAULT 0
);
// 3. API Design
POST /api/shorten
Request: { url: "https://...", customAlias?: "my-link" }
Response: { shortUrl: "https://short.ly/abc123" }
GET /:shortCode
Response: 301 Redirect to original URL
// 4. Scaling considerations
// - Cache hot URLs in Redis
// - Partition database by short_code hash
// - Use CDN for redirect handling
// - Async analytics processing (Kafka → analytics DB)
// 5. Rate limiting per user/IP to prevent abuse
39. Design a real-time chat application. How would you handle message delivery?
Answer:
// Architecture Overview:
// Client <-> WebSocket Server <-> Message Queue <-> Storage
// 1. WebSocket connection management
const connections = new Map(); // userId -> WebSocket[]
wss.on('connection', (ws, req) => {
const userId = authenticateConnection(req);
if (!connections.has(userId)) {
connections.set(userId, []);
}
connections.get(userId).push(ws);
ws.on('message', async (data) => {
const message = JSON.parse(data);
await handleMessage(userId, message);
});
ws.on('close', () => {
const userSockets = connections.get(userId);
connections.set(userId, userSockets.filter(s => s !== ws));
});
});
// 2. Message delivery with acknowledgment
async function sendMessage(fromUser, toUser, content) {
const message = {
id: generateMessageId(),
from: fromUser,
to: toUser,
content,
timestamp: Date.now(),
status: 'sent'
};
// Persist first
await db.messages.insert(message);
// Deliver to recipient
const recipientSockets = connections.get(toUser);
if (recipientSockets?.length > 0) {
recipientSockets.forEach(ws => {
ws.send(JSON.stringify({ type: 'message', data: message }));
});
await db.messages.updateStatus(message.id, 'delivered');
} else {
// User offline - will get messages on reconnect
await pushNotification(toUser, message);
}
}
// 3. Scaling: Use Redis Pub/Sub for multi-server
// When message arrives, publish to Redis channel
// All servers subscribed to user's channel deliver locally
40. Explain database transactions and isolation levels.
Answer:
Transactions ensure ACID properties: Atomicity, Consistency, Isolation, Durability.
Isolation levels (from lowest to highest):
- Read Uncommitted: Can see uncommitted changes (dirty reads)
- Read Committed: Only sees committed data (default in PostgreSQL)
- Repeatable Read: Same query returns same results within transaction
- Serializable: Transactions execute as if sequential (highest isolation)
// Example: Transfer money between accounts
async function transfer(fromId, toId, amount) {
const client = await pool.connect();
try {
await client.query('BEGIN');
await client.query(
'SET TRANSACTION ISOLATION LEVEL SERIALIZABLE'
);
// Check balance
const { rows } = await client.query(
'SELECT balance FROM accounts WHERE id = $1 FOR UPDATE',
[fromId]
);
if (rows[0].balance < amount) {
throw new Error('Insufficient funds');
}
// Debit and credit
await client.query(
'UPDATE accounts SET balance = balance - $1 WHERE id = $2',
[amount, fromId]
);
await client.query(
'UPDATE accounts SET balance = balance + $1 WHERE id = $2',
[amount, toId]
);
await client.query('COMMIT');
} catch (e) {
await client.query('ROLLBACK');
throw e;
} finally {
client.release();
}
}
41. How would you implement pagination? Compare offset vs cursor-based.
Answer:
Offset pagination: Simple but slow for large offsets (DB must scan skipped rows)
Cursor pagination: Consistent results, performant at any page, but can't jump to arbitrary page
// Offset pagination
// GET /api/posts?page=100&limit=20
SELECT * FROM posts
ORDER BY created_at DESC
LIMIT 20 OFFSET 1980;
// Problem: Must scan 1980 rows first!
// Cursor pagination (keyset)
// GET /api/posts?cursor=2026-01-20T10:30:00&limit=20
SELECT * FROM posts
WHERE created_at < $cursor
ORDER BY created_at DESC
LIMIT 20;
// Uses index, always fast regardless of "page"
// Cursor encoding (include all sort keys)
const cursor = Buffer.from(JSON.stringify({
created_at: lastItem.created_at,
id: lastItem.id // Tiebreaker for same timestamp
})).toString('base64');
// API response
{
"data": [...],
"pageInfo": {
"hasNextPage": true,
"endCursor": "eyJjcmVhdGVkX2F0IjoiMjAyNi0wMS0yMFQxMDozMDowMFoiLCJpZCI6MTIzfQ=="
}
}
Recommendation: Use cursor pagination for infinite scroll and real-time feeds. Use offset pagination for admin panels where jumping to an arbitrary page is needed.
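The decode side of that cursor is worth showing too, with a guard for tampered input so a bad cursor falls back to page one instead of a 500. A sketch pairing with the encoding above:

```javascript
// Encode/decode the keyset cursor as base64 JSON.
function encodeCursor(lastItem) {
  return Buffer.from(JSON.stringify({
    created_at: lastItem.created_at,
    id: lastItem.id // Tiebreaker for identical timestamps
  })).toString('base64');
}

// Returns null for anything that does not parse into the expected shape.
function decodeCursor(cursor) {
  try {
    const parsed = JSON.parse(Buffer.from(cursor, 'base64').toString('utf8'));
    if (!parsed.created_at || !parsed.id) return null;
    return parsed;
  } catch {
    return null;
  }
}
```

Treat the decoded values as untrusted user input: they still go into the query as bound parameters, never interpolated.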
42. Explain the CAP theorem. How does it apply to distributed databases?
Answer:
CAP theorem states a distributed system can only guarantee two of three properties:
- Consistency: All nodes see the same data at the same time
- Availability: Every request receives a response
- Partition tolerance: System continues operating despite network partitions
Since network partitions are inevitable, you choose between:
- CP (Consistency + Partition tolerance): MongoDB, HBase - may reject writes during partition
- AP (Availability + Partition tolerance): Cassandra, DynamoDB - accepts writes, resolves conflicts later
Reality: Most systems aren't purely CP or AP. They tune consistency levels per operation. DynamoDB lets you choose strong or eventual consistency per read.
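The "resolves conflicts later" half of the AP choice shows up in application code as a merge policy. A toy last-write-wins merge over per-field timestamps, purely illustrative (real systems lean on vector clocks or CRDTs because wall clocks skew):

```javascript
// Merge two replicas of a record field-by-field, keeping the value with the
// newer timestamp. Each replica stores { value, ts } per field.
function lastWriteWinsMerge(a, b) {
  const merged = { ...a };
  for (const [field, entry] of Object.entries(b)) {
    if (!merged[field] || entry.ts > merged[field].ts) {
      merged[field] = entry;
    }
  }
  return merged;
}
```
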
43. How would you design a notification system that handles millions of users?
Answer:
// Architecture:
// Event Source → Message Queue → Notification Service → Delivery
// 1. Event-driven design
// When something happens, publish event
await messageQueue.publish('notification', {
type: 'new_follower',
userId: targetUserId,
data: { followerId, followerName }
});
// 2. Notification worker (processes queue)
class NotificationWorker {
async process(event) {
const user = await getUser(event.userId);
const preferences = await getPreferences(event.userId);
// Check preferences
if (!preferences[event.type].enabled) return;
// Create notification record
const notification = await db.notifications.create({
userId: event.userId,
type: event.type,
data: event.data,
read: false
});
// Fan out to delivery channels
const deliveryTasks = [];
if (preferences[event.type].push) {
deliveryTasks.push(sendPushNotification(user, notification));
}
if (preferences[event.type].email) {
deliveryTasks.push(queueEmail(user.email, notification));
}
if (preferences[event.type].inApp) {
deliveryTasks.push(broadcastToWebSocket(user.id, notification));
}
await Promise.allSettled(deliveryTasks);
}
}
// 3. Batching for high-volume events
// Instead of one notification per like, batch: "5 people liked your post"
// Use time-window aggregation before sending
// 4. Scaling considerations
// - Partition queue by user_id hash
// - Rate limit notifications per user
// - Priority queues (security alerts > social updates)
// - Dead letter queue for failed deliveries
44. Explain microservices vs monolith. When would you choose each?
Answer:
Monolith advantages:
- Simple deployment, debugging, and local development
- No network latency between components
- Easier data consistency (single database)
- Lower operational complexity
Microservices advantages:
- Independent scaling of services
- Technology flexibility per service
- Isolated failures (one service down ≠ entire system down)
- Independent deployments by different teams
My recommendation: Start with a well-structured monolith. Extract microservices when you have clear scaling needs or team boundaries. Most startups don't need microservices until they have 50+ engineers.
The "modular monolith": Best of both worlds—monolith deployment with microservice-like module boundaries. Easy to extract later.
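Concretely, a modular-monolith boundary is two modules in one process that only talk through an explicit public interface, never through each other's internals. A sketch with hypothetical module names:

```javascript
// billing module -- this factory's return value is the ONLY surface other
// modules may touch. Internals (its tables, helpers) stay private.
function createBillingModule() {
  const invoices = new Map(); // stands in for billing's own tables

  return {
    createInvoice(orderId, amount) {
      const id = `inv_${invoices.size + 1}`;
      invoices.set(id, { orderId, amount, paid: false });
      return id;
    },
    markPaid(invoiceId) {
      const inv = invoices.get(invoiceId);
      if (inv) inv.paid = true;
      return Boolean(inv);
    }
  };
}

// orders module depends on billing's interface, not its storage -- which is
// exactly what makes billing extractable into a separate service later.
function createOrdersModule(billing) {
  return {
    placeOrder(amount) {
      const orderId = `ord_${Date.now()}`;
      const invoiceId = billing.createInvoice(orderId, amount);
      return { orderId, invoiceId };
    }
  };
}
```

Extracting billing later means swapping the in-process object for an HTTP or queue-backed client with the same interface; the orders module does not change.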
45. How do you handle distributed transactions across microservices?
Answer:
Distributed transactions are hard. Avoid when possible, but when needed:
// Pattern 1: Saga Pattern (choreography)
// Each service publishes events, next service listens and acts
// Order Service
await orderDb.create(order);
await eventBus.publish('OrderCreated', order);
// Payment Service (listens to OrderCreated)
const payment = await processPayment(order);
if (payment.success) {
await eventBus.publish('PaymentCompleted', { orderId });
} else {
await eventBus.publish('PaymentFailed', { orderId });
}
// Order Service (listens to PaymentFailed - compensating action)
await orderDb.update(orderId, { status: 'cancelled' });
// Pattern 2: Saga Pattern (orchestration)
class OrderSaga {
async execute(orderData) {
const steps = [
{ action: () => orderService.create(orderData),
compensate: (ctx) => orderService.cancel(ctx.orderId) },
{ action: (ctx) => paymentService.charge(ctx.orderId),
compensate: (ctx) => paymentService.refund(ctx.orderId) },
{ action: (ctx) => inventoryService.reserve(ctx.orderId),
compensate: (ctx) => inventoryService.release(ctx.orderId) }
];
const context = {};
const completedSteps = [];
for (const step of steps) {
try {
Object.assign(context, await step.action(context));
completedSteps.push(step);
} catch (error) {
// Rollback in reverse order
for (const completed of completedSteps.reverse()) {
await completed.compensate(context);
}
throw error;
}
}
}
}
Advanced & Real-World Scenarios (Questions 46-50)
46. Your API is slow. Walk me through how you would diagnose and fix it.
Answer:
Systematic approach to performance debugging:
- Measure: Add timing logs or use APM (DataDog, New Relic) to identify slow endpoints
- Profile database: Check for slow queries (EXPLAIN ANALYZE), missing indexes, N+1 queries
- Check external calls: Are third-party APIs slow? Add timeouts and circuit breakers
- Memory/CPU: Is the server resource-constrained? Check for memory leaks
- Caching: Can results be cached? Check cache hit rates
// Quick wins I check first:
// 1. N+1 query problem
// Bad: 100 users = 101 queries
users.forEach(async user => {
user.posts = await getPosts(user.id);
});
// Good: 2 queries with JOIN or IN clause
const users = await db.query(`
SELECT u.*, json_agg(p.*) as posts
FROM users u
LEFT JOIN posts p ON p.user_id = u.id
GROUP BY u.id
`);
// 2. Add database query logging
// Log queries taking > 100ms
// 3. Check for missing indexes
EXPLAIN ANALYZE SELECT * FROM orders WHERE user_id = 123;
// If you see "Seq Scan" on large table, add index
// 4. Implement response compression
app.use(compression());
// 5. Add caching headers
res.set('Cache-Control', 'public, max-age=300');
47. Design a feature flag system. How would you implement gradual rollouts?
Answer:
// Feature flag service
class FeatureFlags {
constructor(config) {
this.flags = config;
}
isEnabled(flagName, context = {}) {
const flag = this.flags[flagName];
if (!flag) return false;
// Global kill switch
if (flag.enabled === false) return false;
// User allowlist (beta testers)
if (flag.allowedUsers?.includes(context.userId)) {
return true;
}
// Percentage rollout (consistent per user)
if (flag.percentage !== undefined) {
const hash = this.hashUser(context.userId, flagName);
return hash < flag.percentage;
}
// Environment-based
if (flag.environments) {
return flag.environments.includes(process.env.NODE_ENV);
}
return flag.enabled ?? false;
}
// Consistent hashing - same user always gets same result
hashUser(userId, flagName) {
const hash = crypto
.createHash('md5')
.update(`${userId}-${flagName}`)
.digest('hex');
return parseInt(hash.slice(0, 8), 16) % 100;
}
}
// Usage in code
if (featureFlags.isEnabled('new_checkout', { userId: user.id })) {
return <NewCheckout />;
} else {
return <OldCheckout />;
}
// Gradual rollout strategy:
// Day 1: 1% of users (catch major bugs)
// Day 2: 10% of users
// Day 3: 50% of users
// Day 4: 100% of users
// Monitor error rates and roll back if needed
48. You're getting reports of intermittent 500 errors. How do you investigate?
Answer:
Intermittent errors are the worst. Here's my investigation playbook:
- Check logs with correlation IDs: Track requests across services
- Look for patterns: Time of day? Specific users? Certain endpoints? After deployments?
- Resource exhaustion: Connection pool limits? Memory? File descriptors?
- Race conditions: Does it happen under load? Concurrent requests to same resource?
- External dependencies: Third-party API timeouts? Database connection issues?
// Essential logging setup
app.use((req, res, next) => {
req.correlationId = req.headers['x-correlation-id'] || uuid();
req.startTime = Date.now();
// Log request start
logger.info({
correlationId: req.correlationId,
method: req.method,
url: req.url,
userId: req.user?.id
});
// Capture response
const originalSend = res.send;
res.send = function(body) {
logger.info({
correlationId: req.correlationId,
statusCode: res.statusCode,
duration: Date.now() - req.startTime
});
return originalSend.call(this, body);
};
next();
});
// Common culprits I check:
// 1. Database connection pool exhaustion
// 2. Unhandled promise rejections
// 3. Memory leaks causing OOM
// 4. Timeout misconfigurations
// 5. Race conditions in caching layer
49. How do you handle a database migration that affects millions of rows?
Answer:
Large migrations need careful planning to avoid downtime and data issues:
// Strategy: Dual-write with backfill
// Phase 1: Add new column (nullable)
ALTER TABLE users ADD COLUMN email_normalized VARCHAR(255);
// Phase 2: Deploy code that writes to both columns
async function updateEmail(userId, email) {
await db.query(`
UPDATE users
SET email = $1, email_normalized = LOWER($1)
WHERE id = $2
`, [email, userId]);
}
// Phase 3: Backfill in batches (background job)
async function backfillEmails() {
let lastId = 0;
const batchSize = 1000;
while (true) {
const result = await db.query(`
UPDATE users
SET email_normalized = LOWER(email)
WHERE id IN (
SELECT id FROM users
WHERE id > $1 AND email_normalized IS NULL
ORDER BY id
LIMIT $2
)
RETURNING id
`, [lastId, batchSize]);
if (result.rows.length === 0) break;
lastId = result.rows[result.rows.length - 1].id;
// Don't overwhelm the database
await sleep(100);
}
}
// Phase 4: Deploy code that reads from new column
// Phase 5: Add NOT NULL constraint
// Phase 6: Drop old column (after rollback window)
// Key principles:
// - Never lock tables for extended periods
// - Make changes backwards compatible
// - Have rollback plan at each phase
// - Monitor database performance during migration
50. Tell me about a technical decision you made that you later regretted. What did you learn?
How to answer:
This is a behavioral question testing self-awareness and growth. Structure your answer:
- Context: What was the situation and constraints?
- Decision: What did you choose and why did it seem right?
- Impact: What went wrong? Be specific about consequences.
- Learning: What would you do differently? How has this changed your approach?
Example answer:
"At my previous startup, I chose MongoDB because 'we might need flexibility.' We didn't actually need schema flexibility—we had clear data relationships. Six months later, we were fighting with data consistency issues and complex aggregation pipelines that would have been simple SQL JOINs.
The migration to PostgreSQL took three months. I learned to challenge assumptions about 'future flexibility' and start with the simplest solution that solves the actual problem. Now I always ask: 'What problem are we solving today?' before reaching for complex solutions."
Key: Show you can admit mistakes, learn from them, and improve your decision-making process.
Practice Full Stack Interviews with AI
Reading questions is one thing—answering them under pressure is another. Many candidates now use tools like LastRound AI to simulate real interviews and refine their answers with instant feedback.
- ✓ Practice system design with AI interviewers
- ✓ Get detailed feedback on mock coding rounds
- ✓ Review and improve your explanations
- ✓ Build confidence before the real thing
Common Mistakes Candidates Make
❌ What Gets You Rejected
- • Jumping to code without clarifying requirements
- • Unable to explain trade-offs in your choices
- • Only knowing one technology deeply, nothing else
- • Can't debug or optimize existing code
- • No questions about the actual job/team
- • Claiming expertise but failing basics
✓ What Gets You Offers
- • Ask clarifying questions first, then design
- • Explain why you chose one approach over alternatives
- • T-shaped skills: deep in some, broad awareness
- • Think aloud while problem-solving
- • Show genuine curiosity about the company's challenges
- • Admit knowledge gaps, show how you'd learn
Pro Tips from Interviewers
Structure your system design answers
Requirements → High-level design → Component deep-dive → Trade-offs → Scaling considerations. Don't jump straight to databases.
Always mention security
Input validation, authentication, authorization, HTTPS, SQL injection prevention. It shows you think like a production engineer.
Have real project stories ready
For each major technology, have a story: "When I used X, I faced Y challenge and solved it by Z." Concrete beats theoretical.
Practice coding out loud
Interviewers can't read your mind. Narrate your thought process: "I'm thinking about edge cases... what if the input is empty?"
Full stack interviews are demanding because they cover so much ground. But that's also the opportunity—you get to show breadth of knowledge and depth of understanding. Focus on fundamentals, practice explaining your reasoning, and remember: every senior engineer was once where you are now. The questions in this guide reflect what actually gets asked in 2026. Master them, and you'll walk into interviews with confidence.
