    January 23, 2026 · 45 min read · QA Automation

    40+ QA Automation Interview Questions for SDET & Test Engineers

    QA automation interviews test both testing fundamentals and coding skills. Here's what companies ask when hiring SDETs and automation engineers.


    The SDET role has grown beyond manual testing. Modern QA automation engineers write production-quality code, design test architectures, and integrate testing into CI/CD pipelines. These questions reflect what top tech companies actually ask.

    Key Skills Assessed

    • Test Strategy: When to automate, test pyramid, risk-based testing
    • Automation Skills: Selenium, Cypress, Playwright, API testing
    • Programming: Clean code, design patterns, debugging
    • CI/CD: Pipeline integration, parallel execution, reporting
    • Quality Mindset: Finding edge cases, thinking like a user

    Testing Fundamentals (Questions 1-12)

    1. Explain the test automation pyramid.

    Bottom to top:

    • Unit tests (70%): Fast, isolated, test single functions/classes
    • Integration tests (20%): Test component interactions, APIs
    • E2E tests (10%): Full user flows, slowest, most brittle

    More tests at bottom = faster feedback, easier maintenance.

    2. When should you automate a test vs keep it manual?

    Automate when: The test runs frequently, is repetitive, covers critical paths, requires exact data comparison, or belongs in the regression suite.

    Keep manual when: Exploratory testing, UX/usability evaluation, one-time tests, rapidly changing features, visual checks (though visual testing is increasingly automatable).

    3. What's the difference between verification and validation?

    Verification: "Are we building the product right?" Checks against specifications.

    Validation: "Are we building the right product?" Checks against user needs.

    4. Explain black box vs white box vs gray box testing.

    Black box: No knowledge of internals, test based on requirements/behavior

    White box: Full code access, test internal paths/logic

    Gray box: Partial knowledge, common in integration testing

    5. What is regression testing? How do you optimize it?

    Regression testing ensures new changes don't break existing functionality.

    Optimization: Prioritize by risk, parallelize execution, use test impact analysis, maintain test suite (remove flaky/redundant tests), run smoke tests first.

    6. What are flaky tests? How do you handle them?

    Tests that pass/fail inconsistently without code changes.

    Common causes: Timing issues, test order dependencies, shared state, external dependencies.

    Solutions: Proper waits (not sleep), isolated test data, mock external services, quarantine flaky tests while fixing.

    7. Explain boundary value analysis and equivalence partitioning.

    Equivalence partitioning: Divide inputs into groups that should behave similarly, test one from each group.

    Boundary value: Test at edges of partitions (min, max, just inside, just outside). Bugs often occur at boundaries.
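    Both techniques in practice, using a hypothetical age validator (the valid range 18-65 is an assumption made up for this example):

```python
# Hypothetical validator: valid ages are 18-65 inclusive (one
# equivalence partition); below and above are two invalid partitions.
def validate_age(age: int) -> bool:
    return 18 <= age <= 65

# Equivalence partitioning: one representative value per partition.
assert validate_age(40) is True    # valid partition
assert validate_age(5) is False    # below-range partition
assert validate_age(90) is False   # above-range partition

# Boundary value analysis: test at and just outside each edge.
assert validate_age(17) is False   # just below min
assert validate_age(18) is True    # min
assert validate_age(65) is True    # max
assert validate_age(66) is False   # just above max
```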

    8. What is test coverage? What metrics do you track?

    • Code coverage: Line, branch, function coverage
    • Requirement coverage: Tests mapped to requirements
    • Risk coverage: Critical paths tested

    Note: High coverage doesn't guarantee quality—you can have 100% coverage with poor tests.

    9. What's the difference between smoke testing and sanity testing?

    Smoke testing: Broad, shallow tests to verify basic functionality works. "Does the build even run?"

    Sanity testing: Narrow, deep tests on specific functionality after changes. "Does this feature work?"

    10. How do you write a good test case?

    • Clear, descriptive name explaining what it tests
    • Arrange-Act-Assert structure
    • One assertion per test (ideally)
    • Independent—no reliance on other tests
    • Repeatable—same result every time
    • Fast execution
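    A sketch of these principles in pytest style, using a hypothetical Cart class:

```python
# Minimal Arrange-Act-Assert example with a made-up Cart class.
class Cart:
    def __init__(self):
        self.items = []

    def add(self, name: str, price: float) -> None:
        self.items.append((name, price))

    def total(self) -> float:
        return sum(price for _, price in self.items)

def test_cart_total_sums_item_prices():
    # Arrange: build the object under test with known data
    cart = Cart()
    cart.add("book", 10.0)
    cart.add("pen", 2.5)
    # Act: perform the single behavior being verified
    total = cart.total()
    # Assert: one logical assertion on the outcome
    assert total == 12.5

test_cart_total_sums_item_prices()  # pytest would discover this by name
```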

    11. Explain shift-left testing.

    Testing earlier in the development cycle. Include QA in design discussions, review requirements for testability, write tests alongside code (TDD), catch bugs before they're expensive to fix.

    12. What is mutation testing?

    Introducing small code changes (mutations) to verify tests catch them. If a mutation survives (tests still pass), your tests aren't thorough enough. Measures test quality, not just coverage.
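    A toy illustration of the idea; real tools (mutmut for Python, PIT for Java) generate and run mutants automatically:

```python
# What a mutation tool does under the hood: apply one small change,
# then check whether the test suite notices.
def is_adult(age: int) -> bool:          # original code
    return age >= 18

def is_adult_mutant(age: int) -> bool:   # mutation: >= became >
    return age > 18

def run_suite(fn) -> bool:
    """Return True if every test passes against the given implementation."""
    try:
        assert fn(20) is True
        assert fn(18) is True   # boundary test -- this kills the mutant
        assert fn(10) is False
        return True
    except AssertionError:
        return False

assert run_suite(is_adult) is True          # suite passes on real code
assert run_suite(is_adult_mutant) is False  # mutant is "killed"
```

    Drop the `fn(18)` boundary test and the mutant survives, revealing a gap that line coverage alone would never show.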

    UI Automation (Questions 13-24)

    13. Compare Selenium, Cypress, and Playwright.

    Selenium: Multi-language, multi-browser, mature ecosystem. Slower, more setup.

    Cypress: JavaScript/TypeScript only, fast, great DX, built-in waits and retries. Historically limited to one origin per test (relaxed in newer versions via cy.origin).

    Playwright: Microsoft-backed, multi-browser, multi-language, modern API, good for cross-browser.

    14. Explain the Page Object Model pattern.

    Design pattern where each page/component has a class containing its elements and actions.

    Benefits: Reusability, maintainability (locator changes in one place), readable tests. Tests call page methods, not raw selectors.
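    A minimal sketch of the pattern. The driver is injected and only assumed to expose a Selenium-style find_element; the page name and locators are made up for the example:

```python
# Page Object sketch: all selectors for the login page live in one
# class; tests call login() and never touch raw locators.
class LoginPage:
    USERNAME = ("css selector", "[data-testid='username']")
    PASSWORD = ("css selector", "[data-testid='password']")
    SUBMIT = ("css selector", "[data-testid='login-submit']")

    def __init__(self, driver):
        self.driver = driver  # any object with find_element(by, value)

    def login(self, user: str, password: str) -> None:
        self.driver.find_element(*self.USERNAME).send_keys(user)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()
```

    A test then reads as intent, not plumbing: `LoginPage(driver).login("alice", "s3cret")`. If the username locator changes, only the class constant changes.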

    15. What locator strategies do you use? Which is most reliable?

    Preference order:

    1. data-testid (explicit, stable)
    2. ID (if unique and stable)
    3. Name/placeholder (semantic)
    4. CSS selector (flexible)
    5. XPath (powerful but brittle)

    Avoid: Index-based, complex XPath, class names that change.

    16. How do you handle dynamic elements and waits?

    • Explicit waits: Wait for a specific condition (visibility, clickability)
    • Implicit waits: Global timeout applied when locating elements
    • Avoid: Thread.sleep/hard waits, and mixing implicit with explicit waits (the Selenium docs warn this causes unpredictable timeouts)

    Cypress/Playwright have built-in auto-waiting, reducing flakiness.
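    The idea behind an explicit wait can be sketched as a polling helper; Selenium's real API for this is `WebDriverWait(driver, timeout).until(...)` with an expected condition:

```python
import time

# Minimal explicit-wait sketch: poll a condition until it returns a
# truthy value or the timeout expires. Not a library API.
def wait_until(condition, timeout: float = 10.0, poll: float = 0.2):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result       # return whatever the condition produced
        time.sleep(poll)
    raise TimeoutError(f"condition not met within {timeout}s")
```

    Unlike a hard sleep, this returns as soon as the condition holds and fails loudly when it never does.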

    17. How do you handle file uploads and downloads in automation?

    Uploads: Send file path to input element (don't interact with OS dialog)

    Downloads: Configure download directory, verify file exists after action, use headless browser settings

    18. How do you handle popups, alerts, and iframes?

    Alerts: Switch to alert, accept/dismiss, get text

    Windows/tabs: Switch to window by handle

    iframes: Switch to frame before interacting with elements inside

    19. How do you run tests in parallel?

    • Use test runner features (pytest-xdist, TestNG parallel)
    • Selenium Grid or cloud services (BrowserStack, Sauce Labs)
    • Ensure tests are independent (no shared state)
    • Isolated test data per thread

    20. What is Selenium Grid?

    Distributed test execution system. A hub routes tests to nodes running different browser/OS combinations, enabling parallel execution across multiple machines. Selenium Grid 4 adds a fully distributed mode and official Docker images for easier setup.

    21. How do you handle authentication in automated tests?

    • API login and inject session cookie (faster)
    • UI login in setup, reuse session
    • Mock authentication for unit tests
    • Store credentials securely (env vars, secrets manager)
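    The first pattern, sketched with the HTTP call abstracted away. The cookie name and the login callable are placeholders; a real suite would POST credentials with requests and pass the resulting cookie through:

```python
# API-login-then-inject pattern: authenticate once over HTTP, then
# hand the session cookie to the browser so UI tests skip the form.
def inject_session(driver, api_login, base_url: str) -> None:
    """api_login is any callable returning {"name": ..., "value": ...};
    driver only needs Selenium-style get() and add_cookie()."""
    cookie = api_login()
    driver.get(base_url)  # must be on the target domain before adding cookies
    driver.add_cookie({"name": cookie["name"], "value": cookie["value"]})
```

    Every UI test then starts already logged in, shaving the login flow off each run and removing one flaky step.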

    22. How do you implement cross-browser testing?

    Parameterize browser in tests, use cloud providers for browser farm, prioritize browsers by user analytics, handle browser-specific quirks with conditional logic or separate tests.

    23. What is visual regression testing?

    Comparing screenshots to detect unintended visual changes. Tools: Percy, Chromatic, Applitools.

    Challenges: Dynamic content, anti-aliasing differences, maintaining baselines.

    24. How do you debug a failing test?

    1. Check error message and stack trace
    2. Review screenshots/videos from failed run
    3. Run locally in headed mode
    4. Add debugging breakpoints
    5. Check if it's a test issue or application bug
    6. Verify test data and environment

    API & Performance Testing (Questions 25-35)

    25. What tools do you use for API testing?

    • Manual: Postman, Insomnia
    • Automation: REST Assured, requests (Python), SuperTest
    • Contract: Pact for consumer-driven contracts
    • Performance: k6, JMeter, Gatling

    26. What do you test in an API?

    • Status codes (200, 400, 401, 404, 500)
    • Response body structure and data types
    • Response time
    • Error handling and messages
    • Authentication/authorization
    • Input validation (invalid data, edge cases)
    • Idempotency for PUT/DELETE (POST is not idempotent by design; test idempotency keys if the API supports them)

    27. Explain REST API methods and when to use each.

    GET: Retrieve data (safe and idempotent)

    POST: Create new resource

    PUT: Update/replace resource (idempotent)

    PATCH: Partial update

    DELETE: Remove resource (idempotent)

    28. How do you validate API response schemas?

    Use JSON Schema validation. Define expected schema (types, required fields, formats), validate response against it. Tools: Ajv, jsonschema. Catches structural changes before they break clients.
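    A hand-rolled sketch of what schema validation checks; in a real suite you would feed an actual JSON Schema document to the jsonschema package rather than roll your own:

```python
# Toy validator: checks required fields and Python types against a
# schema dict. Illustrates the idea, not the JSON Schema spec.
def validate(payload: dict, schema: dict) -> list[str]:
    """Return a list of violations (empty list means valid)."""
    errors = []
    for field, expected_type in schema["properties"].items():
        if field not in payload:
            if field in schema.get("required", []):
                errors.append(f"missing required field: {field}")
        elif not isinstance(payload[field], expected_type):
            errors.append(f"{field}: expected {expected_type.__name__}")
    return errors

schema = {"required": ["id", "email"],
          "properties": {"id": int, "email": str, "active": bool}}

assert validate({"id": 1, "email": "a@b.test"}, schema) == []
assert validate({"id": "1", "email": "a@b.test"}, schema) == ["id: expected int"]
```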

    29. What is contract testing?

    Ensures API provider and consumer agree on interface. Consumer defines expectations (contract), provider verifies it meets them. Pact is popular tool. Catches integration issues without full E2E tests.

    30. How do you handle API authentication in tests?

    • Store tokens/keys in environment variables
    • Generate tokens programmatically in test setup
    • Handle token refresh for long test suites
    • Test both authenticated and unauthenticated scenarios

    31. What's the difference between load, stress, and soak testing?

    Load: Test under expected user load—can system handle normal traffic?

    Stress: Test beyond capacity—when does it break?

    Soak/Endurance: Test over extended time—any memory leaks or degradation?

    32. How do you identify performance bottlenecks?

    • Monitor response times per endpoint
    • Check database query times
    • Monitor server resources (CPU, memory, I/O)
    • Use APM tools (New Relic, Datadog)
    • Profile slow endpoints

    33. What metrics do you track in performance testing?

    • Response time (average, p50, p95, p99)
    • Throughput (requests/second)
    • Error rate
    • Concurrent users supported
    • Resource utilization
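    What a percentile like p95 actually measures, computed with the nearest-rank method (load tools such as k6 and JMeter report these for you):

```python
import math

# Nearest-rank percentile over raw response times in milliseconds.
def percentile(samples: list[float], p: float) -> float:
    ordered = sorted(samples)
    rank = math.ceil(p / 100 * len(ordered))  # nearest-rank method
    return ordered[max(rank - 1, 0)]

latencies = list(range(1, 101))   # 1..100 ms, uniform for the example
assert percentile(latencies, 50) == 50   # median
assert percentile(latencies, 95) == 95   # 95% of requests were at or below this
assert percentile(latencies, 99) == 99   # tail latency
```

    Averages hide tails: a fine-looking mean can coexist with a p99 that makes one request in a hundred painfully slow, which is why SLAs are usually written against p95/p99.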

    34. How do you test microservices?

    • Unit tests for individual services
    • Contract tests between services
    • Integration tests with mocked dependencies
    • E2E tests for critical paths (fewer, slower)
    • Chaos engineering for resilience

    35. How do you integrate tests into CI/CD?

    • Unit tests on every commit
    • Integration tests on PR merge
    • E2E tests on staging deployment
    • Fail build on test failure
    • Publish reports and coverage
    • Alert on test degradation

    Advanced & Scenario Questions (Questions 36-40)

    36. You're given a new feature to test. Walk me through your approach.

    1. Review requirements and acceptance criteria
    2. Identify test scenarios (happy path, edge cases, errors)
    3. Determine what to automate vs test manually
    4. Write test cases before testing
    5. Execute tests, log bugs with clear repro steps
    6. Add automated tests to regression suite

    37. How do you handle test data management?

    • Create test data in setup, clean up in teardown
    • Use factories/builders for data generation
    • Avoid production data (privacy concerns)
    • Database seeding for consistent states
    • Isolated data per test to prevent conflicts
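    A minimal factory sketch for the second point; libraries like factory_boy formalize the same pattern with more features. The field names here are hypothetical:

```python
import itertools

# Test-data factory: sensible defaults, unique values where collisions
# would cause cross-test conflicts, overrides for the one field a
# given test actually cares about.
_seq = itertools.count(1)

def make_user(**overrides) -> dict:
    n = next(_seq)
    user = {
        "username": f"user{n}",             # unique per call
        "email": f"user{n}@example.test",   # reserved test domain
        "role": "member",
        "active": True,
    }
    user.update(overrides)
    return user

admin = make_user(role="admin")
assert admin["role"] == "admin"
assert make_user()["username"] != make_user()["username"]
```

    Each test states only the data it depends on; everything else is generated, so tests stay readable and never collide on shared records.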

    38. How do you prioritize which bugs to fix?

    Consider: Severity (impact on users), frequency (how many affected), workaround availability, business impact, fix complexity. Critical/blocking bugs first, then high-impact issues. Communicate priorities with team.

    39. How do you maintain a large test automation suite?

    • Regular review and cleanup of obsolete tests
    • Fix or quarantine flaky tests immediately
    • Modular, reusable test components
    • Clear naming and organization
    • Documentation and onboarding for new team members
    • Monitor test suite health metrics

    40. Tell me about a bug you found that others missed.

    Structure: Describe the context, how you found it (exploratory testing, edge case thinking, user perspective), the impact, how you reported it. Shows your testing mindset and attention to detail.

    Practice QA Automation Interviews

    SDET interviews often include live coding and test design exercises. LastRound AI helps you practice explaining your testing approach and writing automation code under pressure.