Bringing Chrome DevTools to the terminal is not new. Puppeteer, Playwright, chrome --remote-debugging-port… tools that work well but require writing code, managing dependencies, and maintaining scripts. The chrome-devtools-mcp CLI takes a different approach: a persistent daemon and a single binary that exposes some of DevTools’ capabilities as shell subcommands.
It’s experimental, but already covers enough use cases to be worth installing.
Installation
npm i chrome-devtools-mcp@latest -g
chrome-devtools status
The daemon starts automatically the first time you run any command. From that point, the same Chrome process is reused across all subsequent calls: open pages, cookies, and session state persist between commands.
# Stop the daemon when you're done
chrome-devtools stop
By default it runs in headless mode with an isolated context. To use a real user profile:
chrome-devtools start --no-headless --userDataDir ~/Library/Application\ Support/Google/Chrome
Navigation
Navigation commands are the entry point for any automated workflow.
# Open a new tab
chrome-devtools new_page "https://joanleon.dev"
# Navigate in the active tab
chrome-devtools navigate_page --type url --url "https://web.dev"
# Reload ignoring cache
chrome-devtools navigate_page --type reload --ignoreCache
# History: back and forward
chrome-devtools navigate_page --type back
chrome-devtools navigate_page --type forward
# List all open tabs
chrome-devtools list_pages
# Switch active tab
chrome-devtools select_page 2
# Close a tab
chrome-devtools close_page 2
navigate_page accepts an initScript: JavaScript injected into each new document before any other scripts load. Useful for intercepting APIs or modifying the environment before the page’s own code runs.
chrome-devtools navigate_page --type url --url "https://joanleon.dev" \
--initScript "window.__PERF_MARK = Date.now()"
Emulation
Emulation lets you change page conditions without touching the browser manually.
# Emulate slow network
chrome-devtools emulate --networkConditions "Slow 3G"
# CPU throttling (4x factor)
chrome-devtools emulate --cpuThrottlingRate 4
# Emulate mobile device (viewport + touch)
chrome-devtools emulate --viewport "390x844x3,mobile,touch"
# Dark mode
chrome-devtools emulate --colorScheme dark
# Geolocation (Madrid)
chrome-devtools emulate --geolocation "40.4168x-3.7038"
# Custom user agent
chrome-devtools emulate --userAgent "Mozilla/5.0 (iPhone; ...)"
# Reset all emulations
chrome-devtools emulate --networkConditions ""
# Resize the window
chrome-devtools resize_page --width 1440 --height 900
Combining network and CPU throttling before running an audit more accurately reproduces the experience on mid-range devices, where most real-world performance issues appear.
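That combination can be captured in a small reusable function. The function name and default report path are my own; the flags are the ones documented in this post (the `lighthouse_audit` options appear in the Lighthouse section below):

```shell
# Apply mid-range device conditions, run a Lighthouse audit, then reset.
audit_midrange() {
  local report_dir=${1:-./reports}
  chrome-devtools emulate --networkConditions "Slow 3G"
  chrome-devtools emulate --cpuThrottlingRate 4
  chrome-devtools lighthouse_audit --device mobile --outputDirPath "$report_dir"
  chrome-devtools emulate --networkConditions ""   # clear emulation when done
}
```

Running `audit_midrange ./reports/mid-range` before and after a change gives comparable audits under the same throttled conditions.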
Interaction and input automation
To interact with the page, you first need element UIDs. take_snapshot returns the accessibility tree with a unique identifier for each element.
# Get the accessibility tree with UIDs
chrome-devtools take_snapshot
# Click an element
chrome-devtools click "element-uid-123"
# Double click
chrome-devtools click "element-uid-123" --dblClick
# Hover (useful for menus and tooltips)
chrome-devtools hover "element-uid-456"
# Fill an input
chrome-devtools fill "input-uid-789" "search text"
# Press Enter after filling
chrome-devtools press_key "Enter"
# Keyboard shortcut
chrome-devtools press_key "Control+Shift+R"
# Type text in a focused input
chrome-devtools type_text "query" --submitKey "Enter"
# Upload a file
chrome-devtools upload_file --filePath "/path/to/file.png" --uid "input-file-uid"
# Accept or dismiss browser dialogs
chrome-devtools handle_dialog "accept"
take_snapshot is preferable to take_screenshot when the goal is interaction: the accessibility tree is lighter and gives UIDs directly. take_screenshot makes sense for visual captures or state comparisons.
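The usual loop — snapshot first, then act on UIDs — wraps naturally in a small helper. The function name is illustrative, and the UID is a placeholder you would read from the take_snapshot output:

```shell
# Fill a form field by UID and submit with Enter.
fill_and_submit() {
  local uid=$1 text=$2
  chrome-devtools fill "$uid" "$text"
  chrome-devtools press_key "Enter"
}

# Usage (UID comes from a previous take_snapshot):
# fill_and_submit "input-uid-789" "web performance"
```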
Debugging
Console
list_console_messages exposes Console panel messages without opening the browser. Useful for catching JavaScript errors, third-party warnings, or performance logs after an automated navigation.
# List all console messages
chrome-devtools list_console_messages
# Errors only
chrome-devtools list_console_messages --types '["error"]'
# Get a specific message by ID
chrome-devtools get_console_message 42
# Include messages from the last 3 navigations
chrome-devtools list_console_messages --includePreservedMessages
Run JavaScript
evaluate_script runs a function in the page context and returns the result as JSON:
# Page title
chrome-devtools evaluate_script "() => document.title"
# Number of images without alt attribute
chrome-devtools evaluate_script "() => document.querySelectorAll('img:not([alt])').length"
# DOM size
chrome-devtools evaluate_script "() => document.querySelectorAll('*').length"
Inline scripts work for simple cases, but don’t scale. The CLI doesn’t include a native --file flag yet (issue #1775), but shell command substitution solves exactly that:
# Run a script from a local file
chrome-devtools evaluate_script "() => { $(cat scripts/measure-lcp.js) }"
# Run any script from the WebPerf Snippets repository
chrome-devtools evaluate_script "() => { $(cat ~/.claude/skills/webperf-core-web-vitals/scripts/LCP.js) }"
evaluate_script expects a function, not a loose code block. The () => { ... } wrapper is required when the script contains multiple statements.
This is the command that connects directly to WebPerf Snippets. Any snippet from the repository can be read with $(cat) and passed to evaluate_script without copying its content or opening DevTools.
The workflow described in WebPerf Snippets and Agent SKILLs uses exactly this approach: the agent reads the script from the SKILL (for example scripts/LCP.js) and runs it via evaluate_script on the page it’s analyzing. The difference between doing it from an agent and doing it from the shell is minimal.
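Until a native `--file` flag lands, the substitution pattern can live in a small helper. `run_snippet` is my own name, not a CLI command; it just builds the required arrow-function wrapper around a file's contents:

```shell
# Wrap a snippet file's body in the function form evaluate_script expects.
wrap_snippet() {
  printf '() => { %s }' "$(cat "$1")"
}

# Run the wrapped snippet in the active page.
run_snippet() {
  chrome-devtools evaluate_script "$(wrap_snippet "$1")"
}

# Usage:
# run_snippet ~/.claude/skills/webperf-core-web-vitals/scripts/LCP.js
```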
Screenshots
take_screenshot saves the visual state of the page to disk. Useful for documenting audit results, comparing before and after states, or capturing specific elements by UID.
# Screenshot of the viewport
chrome-devtools take_screenshot --filePath screenshot.png
# Full page
chrome-devtools take_screenshot --filePath full.png --fullPage
# Single element (by UID)
chrome-devtools take_screenshot --filePath element.png --uid "hero-image-uid"
# Format and quality
chrome-devtools take_screenshot --filePath compressed.webp --format webp --quality 80
Lighthouse
lighthouse_audit runs a full audit from the terminal. By default it reloads the page in navigation mode, though it can also analyze the current state without reloading.
# Audit in navigation mode (reloads the page)
chrome-devtools lighthouse_audit
# Audit the current state without reloading
chrome-devtools lighthouse_audit --mode snapshot
# On mobile
chrome-devtools lighthouse_audit --device mobile
# Save the report to a folder
chrome-devtools lighthouse_audit --outputDirPath ./reports
Note:
lighthouse_audit covers accessibility, SEO, and best practices. For performance metrics (Core Web Vitals: LCP, INP, and CLS), use the Performance tools.
Network
Analyzing network requests from the terminal is useful for detecting uncached resources, unexpected third parties, or slow responses, without opening the DevTools Network panel.
# List all network requests since the last navigation
chrome-devtools list_network_requests
# Images only
chrome-devtools list_network_requests --resourceTypes '["image"]'
# Scripts and stylesheets only
chrome-devtools list_network_requests --resourceTypes '["script","stylesheet"]'
# Pagination (first 100)
chrome-devtools list_network_requests --pageSize 100 --pageIdx 0
# Include requests from the last 3 navigations
chrome-devtools list_network_requests --includePreservedRequests
# Details of a specific request (by reqid)
chrome-devtools get_network_request 15
# Save the response to disk
chrome-devtools get_network_request 15 --responseFilePath response.json
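With JSON output (covered later in this post) the request list composes with jq. The `resourceType` field name here is an assumption about the JSON shape; verify it against `--output-format=json` on your own install before relying on it:

```shell
# Count requests per resource type from JSON output.
requests_by_type() {
  jq -r 'group_by(.resourceType)
         | map({type: .[0].resourceType, count: length})
         | .[] | "\(.type)\t\(.count)"'
}

# Usage:
# chrome-devtools list_network_requests --output-format=json | requests_by_type
```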
Performance
Performance is the most in-depth analysis area: full trace, automatic insights, and Core Web Vitals metrics from the terminal.
# Navigate to the URL you want to analyze
chrome-devtools navigate_page --type url --url "https://joanleon.dev"
# Start the trace (with automatic reload)
chrome-devtools performance_start_trace --reload --autoStop
# Or control manually when to stop
chrome-devtools performance_start_trace --reload
# ... interact with the page ...
chrome-devtools performance_stop_trace
# Save the trace to disk
chrome-devtools performance_stop_trace --filePath trace.json.gz
Once the trace is complete, performance_analyze_insight provides detail on each detected insight:
# The result of performance_stop_trace includes the available insight sets
# Analyze a specific insight
chrome-devtools performance_analyze_insight "LCPBreakdown" --insightSetId "set-id-from-result"
chrome-devtools performance_analyze_insight "DocumentLatency" --insightSetId "set-id-from-result"
The insights include metrics like LCPBreakdown, DocumentLatency, RenderBlocking, SlowCSSSelector, and others that correspond exactly to what Chrome DevTools shows in the Performance Insights panel.
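Checking several insights against the same trace is a short loop. `analyze_insights` is an illustrative wrapper; the set id still comes from the performance_stop_trace output:

```shell
# Analyze a list of insights against one insight set.
analyze_insights() {
  local set_id=$1
  shift
  for insight in "$@"; do
    chrome-devtools performance_analyze_insight "$insight" --insightSetId "$set_id"
  done
}

# Usage:
# analyze_insights "set-id-from-result" LCPBreakdown DocumentLatency RenderBlocking
```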
Memory
take_memory_snapshot generates a JavaScript heap snapshot on disk. The resulting file can be opened directly in Chrome DevTools (Memory panel > Load) to analyze object distribution and detect memory leaks, even when the session ran headless and no browser window was ever visible.
# JavaScript heap snapshot
chrome-devtools take_memory_snapshot --filePath heap.heapsnapshot
Connecting with WebPerf Snippets and the WebPerf Skill
The chrome-devtools-mcp CLI and WebPerf Snippets solve the same problem from different angles.
In WebPerf Snippets + WebMCP, I described how snippets can be exposed as structured tools for AI agents via WebMCP. In WebPerf Snippets and Agent SKILLs, I detailed how SKILLs turn those same scripts into autonomous capabilities that an agent runs in the browser with Chrome DevTools MCP.
The CLI closes the loop for manual use: the same scripts an agent reads from the SKILL can be run directly from the shell with evaluate_script, without an LLM in the middle.
# Example: measure LCP from the terminal using the SKILL script
chrome-devtools navigate_page --type url --url "https://joanleon.dev"
chrome-devtools evaluate_script "() => { $(cat ~/.claude/skills/webperf-core-web-vitals/scripts/LCP.js) }"
The WebPerf Skill acts as the orchestration layer when you have an agent. The CLI acts as the direct execution layer when working without one. The measurement logic, thresholds, and scripts are the same in both cases.
JSON output and composition
Most commands return formatted text by default. With --output-format=json the output is parseable, letting you pipe it into jq or feed it to other analysis scripts.
chrome-devtools list_pages --output-format=json
chrome-devtools list_network_requests --output-format=json | jq '.[] | select(.status >= 400)'
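That composability supports simple CI gates. The sketch below fails when the console holds any errors; it assumes the JSON output is an array of message objects, which you should confirm locally before wiring it into a pipeline:

```shell
# Exit nonzero when the page logged console errors.
fail_on_console_errors() {
  local count
  count=$(chrome-devtools list_console_messages --types '["error"]' \
            --output-format=json | jq 'length')
  if [ "$count" -gt 0 ]; then
    echo "Found $count console errors" >&2
    return 1
  fi
}
```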
Debugging the CLI itself
If the CLI hangs or fails to connect to the daemon:
chrome-devtools stop
chrome-devtools status
For detailed logs:
DEBUG=* chrome-devtools list_pages
Conclusion
The chrome-devtools-mcp CLI doesn’t replace Puppeteer or Playwright for complex automation workflows. But for quick inspection, one-off audits, and lightweight CI scripts, it eliminates almost all the boilerplate: a persistent daemon, direct subcommands, and JSON output when you need it.
Combined with WebPerf Snippets and the WebPerf Skill, you have three access layers to the same set of measurement tools: the CLI for manual use from the terminal, SKILLs for autonomous agent workflows, and WebMCP for when the W3C standard matures in browsers.