Connect Claude Desktop, Cursor, Antigravity, or any MCP-enabled IDE to the JSON Recon engine. One tool call extracts clean, Google-compliant JSON-LD from any publicly accessible URL. $0.01 per extraction.
Structured data is the foundation of modern SEO: Rich Results, FAQ snippets, local map packs, product carousels, and Generative Engine Optimization (GEO) all demand valid JSON-LD. JSON Recon turns that from a manual, error-prone process into an automated conversation with your AI.
Point your AI at a competitor's URL and instantly get their full structured data profile. See exactly which schema types they're using, what properties they populate, and where they're winning Rich Results you're missing. No browser extensions, no copy-pasting. Just ask: "Analyze the schema on this competitor's product page and compare it to ours."
For local businesses, we extract the deepest schema.org subtype. Dentist, not MedicalBusiness. ItalianRestaurant, not FoodEstablishment. Our taxonomy engine selects from 200+ subtypes to ensure your structured data matches Google's entity expectations, improving your visibility in local map packs and AI-powered "near me" queries.
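For illustration, a subtype-specific block of the kind described above might look like the following. All values here are hypothetical placeholders, not real extraction output; the point is the deep @type (Dentist, not the generic MedicalBusiness).

```json
{
  "@context": "https://schema.org",
  "@type": "Dentist",
  "name": "Example Dental Studio",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "123 Example Ave",
    "addressLocality": "Springfield",
    "postalCode": "00000"
  },
  "openingHours": "Mo-Fr 09:00-17:00"
}
```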
AI Overviews and conversational search engines rely on structured signals to rank and cite sources. Pages with comprehensive, valid JSON-LD are significantly more likely to be referenced in AI-generated answers. JSON Recon generates deeply populated schema with sameAs, additionalProperty, and full entity linking, giving AI engines the structured context they need to prefer your content.
Ask your coding AI: "extract the LocalBusiness schema from this URL and inject it into my header.tsx". It does everything: reads the page, identifies the entity, builds a comprehensive JSON-LD block with address, hours, geo coordinates, and reviews, then writes it directly into your code. The target URL must be live and publicly accessible (not behind authentication or a VPN). One sentence. Done.
Without JSON Recon, your AI assistant has to build a scraper from scratch, download HTML, parse the DOM, figure out what entity type the page represents, look up the schema.org spec, and assemble the JSON-LD, all inside your context window. Here's what that actually costs:
The DIY route burns thousands of tokens and often still lands on a generic Organization type instead of the correct subtype. A single JSON Recon call replaces all of that: a 90% reduction in context window usage and a 10–50x cost saving per page.
Schema extraction isn't reliable when your only tool is a single HTTP request. Websites use JavaScript rendering, bot detection, CAPTCHAs, and lazily loaded JSON-LD. JSON Recon handles all of this automatically, so you don't have to think about it.
Our engine automatically handles whatever a page throws at it. Static HTML, JavaScript-rendered SPAs, and heavily protected enterprise sites are all covered seamlessly, with no configuration required from you.
Many extraction tools fail on bot-protected sites. JSON Recon successfully extracts structured data from sites that most tools can't reach, giving you reliable results across the web.
When no JSON-LD exists on a page, our engine reads the visible content and generates comprehensive schema.org from scratch, selecting the most specific entity type from 200+ options.
Auto-refund on failure: If a website's firewall blocks extraction entirely, your credit is automatically refunded to your account. You never pay for data we couldn't retrieve.
JSON Recon gives your AI two tools. The primary tool handles everything intelligently in a single call. The second is a free pre-flight check to assess extraction difficulty before committing credits.
extract_schema
One intelligent tool that handles everything. It first checks for existing JSON-LD on the page. If that markup is found and comprehensive, it returns immediately. If the page has no schema or only generic markup, it automatically escalates to AI-powered generation. A built-in advisory engine detects when native schema describes a page section (like an FAQ) rather than the page's primary purpose, and suggests a better type with ready-to-use JSON-LD. Flat rate: $0.01
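As a sketch, an extract_schema response could carry the extracted JSON-LD alongside the metadata described here. Field names below are illustrative assumptions, not the documented response shape:

```json
{
  "url": "https://example.com/services",
  "source": "native",
  "confidence": 0.94,
  "jsonld": {
    "@context": "https://schema.org",
    "@type": "Service",
    "name": "Example Service"
  }
}
```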
check_extractability
A free pre-flight check that tells your AI whether a URL is extractable before spending credits. Returns the domain's difficulty rating, expected schema type, known WAF issues, and estimated confidence. Smart agents use this to avoid wasting credits on sites known to block scrapers.
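A pre-flight result might look something like this. The keys are hypothetical stand-ins for the four signals listed above (difficulty, expected type, WAF issues, confidence):

```json
{
  "domain": "example.com",
  "difficulty": "easy",
  "expected_schema_type": "LocalBusiness",
  "known_waf_issues": [],
  "estimated_confidence": 0.9
}
```

An agent can gate on this result: skip or warn when difficulty is high, call extract_schema when it isn't.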
Adaptive pipeline: automatically checks for native JSON-LD and escalates to AI-powered entity generation when none exists. Same price regardless of complexity.
Generic AI can write JSON-LD, but it doesn't know what Google validates. JSON Recon does.
We don't stop at LocalBusiness. Our engine maps to 200+ schema.org subtypes: Dentist, AutoRepair, Brewery, NailSalon, and more, matching exactly what Google's Rich Results Test expects.
Our AI is allowed to upgrade taxonomy (choosing the right @type), but strictly forbidden from inventing addresses, phone numbers, prices, or hours. If data isn't on the page, it's not in the schema. Period.
Missing a required property? Instead of omitting it, the AI inserts instructional placeholders like "REQUIRED: Add product price" so the developer knows exactly what to fill in for Google compliance.
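Here is a sketch of how such placeholders might appear in a Product block. The "price" string echoes the example above; the currency placeholder is a hypothetical addition of ours:

```json
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Widget",
  "offers": {
    "@type": "Offer",
    "price": "REQUIRED: Add product price",
    "priceCurrency": "REQUIRED: Add currency code"
  }
}
```

The developer replaces each REQUIRED string with real data before shipping, so nothing invented ever reaches production markup.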
Every response includes proper @context, @type, confidence scoring, and provenance tracking. Output is ready to paste into your <head> or validate in Google's Rich Results Test immediately.
Most websites implement schema.org for only one purpose — FAQ markup, breadcrumbs, or a generic organization block. But a single page often serves a deeper purpose that native markup doesn't capture. JSON Recon now detects this gap automatically.
A Laravel development agency's service page has native FAQPage markup — but the page's actual purpose is describing a service offering. The FAQ is just one section. Without advisory, your AI returns the FAQ data and moves on, missing the Service schema that would actually drive rich results for the page's core value proposition.
JSON Recon's advisory engine detects when native markup represents a section rather than the page's primary entity. It automatically suggests a better schema type, explains why, and returns the full extracted JSON-LD for the suggested type — ready to implement. One extraction, two schemas.
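Sketching the Laravel-agency scenario, an advisory result could pair the native finding with the suggested alternative. The structure and names here are our illustrative assumptions:

```json
{
  "native_type": "FAQPage",
  "advisory": {
    "suggested_type": "Service",
    "reason": "Native FAQPage markup covers one section; the page's primary entity is a service offering.",
    "suggested_jsonld": {
      "@context": "https://schema.org",
      "@type": "Service",
      "name": "Laravel Development",
      "provider": { "@type": "Organization", "name": "Example Agency" }
    }
  }
}
```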
A page with only FAQPage markup misses Rich Result opportunities for its core entity. The advisory engine catches these gaps that even expert SEOs can overlook during manual audits, ensuring every page has the schema type that best represents its primary purpose.
Your AI coding assistant doesn't just return data — it returns advice. When building a new site, the suggested schema gives you the correct type to implement from the start. No more shipping FAQPage when you should have shipped Service.
The advisory engine is included in the standard $0.01 extraction fee for all MCP requests. There's no upsell. When your AI calls extract_schema, it automatically gets the advisory analysis whenever it could add value.
1. Generate an API key from the developer dashboard.
2. Top up your balance via Stripe ($10, $25, or $50).
3. Add the JSON block to your IDE's MCP config.
4. Ask your AI: "Use JSON Recon to extract the schema from example.com and write it into my layout."
Works with Claude Desktop, Cursor, Antigravity, Windsurf, and any IDE or platform that supports the Model Context Protocol.
Generate API Key →

{
  "mcpServers": {
    "jsonrecon": {
      "command": "npx",
      "args": ["-y", "@jsonrecon/mcp-server"],
      "env": {
        "JSONRECON_API_KEY": "jr_liv_xxxxx"
      }
    }
  }
}