
AI SEO Tool That Works With Every LLM: A Developer's Guide in 2026

April 5, 2026

AI SEO Tool for Programmatic SEO

I've been writing React and Next.js apps long enough to remember when SEO meant dropping a `<Helmet>` tag and calling it a day. Today, success depends on semantic HTML, structured data, performance, and content aligned with real user intent. Modern search engines prioritize context, accessibility, and meaningful information architecture, and AI-powered SEO tools are reshaping workflows, though many come with trade-offs: vendor lock-in, lack of transparency, or rigid ecosystems. What developers increasingly need is flexibility, control over their stack, and visibility into how optimizations are made, all while following a practical SEO checklist that ensures technical health, content quality, and discoverability without relying on opaque systems.

That changed for me when I stumbled onto `@power-seo/ai`, an AI SEO tool that takes a fundamentally different approach. It's part of the Power SEO ecosystem from CyberCraft Bangladesh, and it's the most developer-native take on AI-powered SEO I've seen in 2026. In this post, I'm going to walk you through exactly what it does, why its architecture is clever, and how to integrate it into a real Next.js workflow, from meta description generation to SERP eligibility checking in CI. Let's go.

The Problem With AI SEO Today

Before I show you the code, let me articulate the problem I kept running into. Every LLM provider (OpenAI, Anthropic Claude, Google Gemini, Mistral, local Ollama) has its own SDK, its own authentication dance, and its own API shape. If you want to build an SEO content pipeline that uses GPT-4o today but might switch to Claude Opus tomorrow, you end up rewriting your prompting logic every time. Worse, the prompting logic for SEO tasks (meta descriptions, title tags, content gaps) is identical regardless of which model you're talking to. The only thing that changes is the transport layer.

That's the gap `@power-seo/ai` fills. It extracts the SEO-specific prompting and parsing logic into reusable, provider-agnostic building blocks and then gets out of your way.

What @power-seo/ai Actually Is

The package is a set of prompt builders and response parsers for common SEO tasks. Prompt builders return a plain `{ system, user, maxTokens }` object. You take that object and pass it directly to whatever LLM client you already have. No SDK bundled, no API keys managed, no network calls made by `@power-seo/ai` itself.

The features break down into five main areas:

Meta description generation: `buildMetaDescriptionPrompt` produces prompts targeting 120–158 character descriptions that include the focus keyphrase and a compelling call-to-action. The parser extracts a single optimized candidate, including its validation status, character count, and estimated pixel width.

SEO title generation: `buildTitlePrompt` produces prompts for 5 optimized title tag variants. `parseTitleResponse` returns title candidates with character counts.

Content improvement suggestions: `buildContentSuggestionsPrompt` analyzes existing content for gaps and returns typed suggestions for headings, paragraphs, keywords, and links, each with a priority level.

SERP feature prediction: `buildSerpPredictionPrompt` and `parseSerpPredictionResponse` predict which SERP features (Featured Snippet, FAQ, HowTo, Product, Review, Video, etc.) a page is likely to appear in, with likelihood scores and requirement breakdowns.

Rule-based SERP eligibility: `analyzeSerpEligibility` is the standout. It's fully deterministic, requires no LLM at all, costs nothing, and runs instantly. It inspects schema markup and content structure to determine eligibility for FAQ, HowTo, Product, and Article rich results. This is the function you run in CI.

Installing It

```bash
npm install @power-seo/ai
# or
yarn add @power-seo/ai
# or
pnpm add @power-seo/ai
```

Zero runtime dependencies. Nothing else gets pulled in. The package is pure TypeScript, tree-shakable, ships both ESM and CJS formats, and is safe for SSR, Edge runtimes (Cloudflare Workers, Vercel Edge, Deno), and Node.js environments.

Meta Descriptions in Under 20 Lines

Here's the quick start example straight from the package docs. I want to show you the pattern first before we go deeper:

```typescript
import { buildMetaDescriptionPrompt, parseMetaDescriptionResponse } from '@power-seo/ai';

// 1. Build the prompt
const prompt = buildMetaDescriptionPrompt({
  title: 'Best Coffee Shops in New York City',
  content: 'Explore the top 15 coffee shops in NYC, from specialty espresso bars in Brooklyn...',
  focusKeyphrase: 'coffee shops nyc',
});

// 2. Send to your LLM of choice (example uses OpenAI)
import OpenAI from 'openai';
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [
    { role: 'system', content: prompt.system },
    { role: 'user', content: prompt.user },
  ],
  max_tokens: prompt.maxTokens,
});

// 3. Parse the raw text response
const result = parseMetaDescriptionResponse(response.choices[0].message.content ?? '');
console.log(`"${result.description}" - ${result.charCount} chars, ~${result.pixelWidth}px`);
console.log(`Valid: ${result.isValid}`);
```

Notice what happened there. The `buildMetaDescriptionPrompt` call returns a plain object: `{ system, user, maxTokens }`. You decide what to do with it. The OpenAI client is something you already have. `@power-seo/ai` never touches your API keys.

That's the design philosophy in one code block.

Provider Flexibility: The Real Differentiator

The thing I find most compelling about `@power-seo/ai` as an AI-powered SEO tool is that you can swap providers on the same prompt object with zero friction. Here are all three major providers using the identical prompt builder:

```typescript
// OpenAI
import OpenAI from 'openai';
import { buildMetaDescriptionPrompt, parseMetaDescriptionResponse } from '@power-seo/ai';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const prompt = buildMetaDescriptionPrompt({
  title: 'My Article',
  content: '...',
  focusKeyphrase: 'my topic',
});
const openaiResponse = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [
    { role: 'system', content: prompt.system },
    { role: 'user', content: prompt.user },
  ],
  max_tokens: prompt.maxTokens,
});
const result = parseMetaDescriptionResponse(openaiResponse.choices[0].message.content ?? '');

// Anthropic Claude
import Anthropic from '@anthropic-ai/sdk';

const anthropic = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });
const claudeResponse = await anthropic.messages.create({
  model: 'claude-opus-4-6',
  system: prompt.system,
  messages: [{ role: 'user', content: prompt.user }],
  max_tokens: prompt.maxTokens,
});
const result2 = parseMetaDescriptionResponse(
  claudeResponse.content[0].type === 'text' ? claudeResponse.content[0].text : '',
);

// Google Gemini
import { GoogleGenerativeAI } from '@google/generative-ai';

const genai = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);
const model = genai.getGenerativeModel({ model: 'gemini-1.5-pro' });
const geminiResponse = await model.generateContent(`${prompt.system}\n\n${prompt.user}`);
const result3 = parseMetaDescriptionResponse(geminiResponse.response.text());
```


The `prompt` object is identical across all three; the only thing changing is the transport. This means you can genuinely run multi-LLM A/B tests: compare OpenAI, Claude, and Gemini output quality using identical prompt builders and measure the SERP impact. That's a use case most AI SEO tools can't support because they lock you into their model.

SEO Title Generation: Five Variants, Type-Safe

Let's look at title generation. You probably don't want to ship the first title an LLM spits out; you want options. `buildTitlePrompt` asks the model to return 5 optimized variants, and `parseTitleResponse` gives you back a typed array:

```typescript
import { buildTitlePrompt, parseTitleResponse } from '@power-seo/ai';
import type { TitleInput, TitleResult } from '@power-seo/ai';
const input: TitleInput = {
  content: 'Article about the best tools for keyword research in 2026...',
  focusKeyphrase: 'keyword research tools',
  tone: 'informative',
};
const prompt = buildTitlePrompt(input);
const rawResponse = await yourLLM.complete(prompt.system, prompt.user, prompt.maxTokens);
const results: TitleResult[] = parseTitleResponse(rawResponse);
results.forEach(({ title, charCount, pixelWidth }, i) => {
  const status = charCount <= 60 ? 'OK' : 'TOO LONG';
  console.log(`${i + 1}. "${title}" - ${charCount} chars, ~${pixelWidth}px [${status}]`);
});
```
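The examples from here on use a generic `yourLLM.complete(system, user, maxTokens)` helper so they stay provider-neutral. That helper is not part of the package; here's one way you might sketch it over an OpenAI-style chat client (the interface and adapter are my own illustration, not `@power-seo/ai` API):

```typescript
// Minimal provider-neutral interface matching the `yourLLM.complete` calls
// used in this post (hypothetical helper, not part of @power-seo/ai).
interface LLMClient {
  complete(system: string, user: string, maxTokens?: number): Promise<string>;
}

// Adapter over an OpenAI-style chat client. Structural typing keeps it
// testable without the real SDK; an Anthropic or Gemini adapter would
// implement the same interface.
function makeChatClient(
  chatClient: {
    chat: {
      completions: {
        create: (args: object) => Promise<{ choices: { message: { content: string | null } }[] }>;
      };
    };
  },
  model = 'gpt-4o',
): LLMClient {
  return {
    async complete(system, user, maxTokens) {
      const res = await chatClient.chat.completions.create({
        model,
        messages: [
          { role: 'system', content: system },
          { role: 'user', content: user },
        ],
        max_tokens: maxTokens,
      });
      // Normalize to a raw string, which is what the parsers expect.
      return res.choices[0]?.message?.content ?? '';
    },
  };
}
```

Because the adapter only depends on the shape of the client, you can unit-test your pipeline with a fake client and never touch the network.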

The `pixelWidth` field is something I haven't seen in other libraries. Google's search results truncate based on rendered pixel width, not character count. Having that number in the response object means you can build a preview component that accurately reflects what the SERP will show. The `@power-seo/preview` package in the broader ecosystem does exactly this if you want a pre-built solution.

Content Improvement Suggestions: Where AI Enhances SEO at Scale

This is the feature I'd reach for in a headless CMS pipeline. `buildContentSuggestionsPrompt` takes your page content, focus keyphrase, and optionally your current analysis score, then asks the LLM to identify gaps. The parser returns typed `ContentSuggestion[]` objects, each with a type, a suggestion string, and a numeric priority:

```typescript
import { buildContentSuggestionsPrompt, parseContentSuggestionsResponse } from '@power-seo/ai';
import type { ContentSuggestionInput, ContentSuggestion } from '@power-seo/ai';
const input: ContentSuggestionInput = {
  title: 'React SEO Best Practices',
  content: '<h1>React SEO</h1><p>React is a JavaScript library...</p>',
  focusKeyphrase: 'react seo best practices',
  analysisResults: 'Current score: 58/100. Missing headings structure.',
};
const prompt = buildContentSuggestionsPrompt(input);
const rawResponse = await yourLLM.complete(prompt.system, prompt.user, prompt.maxTokens);
const suggestions: ContentSuggestion[] = parseContentSuggestionsResponse(rawResponse);
suggestions.forEach(({ type, suggestion, priority }) => {
  console.log(`[Priority ${priority}] ${type}: ${suggestion}`);
});

// Example output:
// [Priority 5] heading: Add h2 subheadings for "Server-Side Rendering" topic
// [Priority 4] paragraph: Expand "React Hooks" section with more examples
// [Priority 3] keyword: Increase usage of "react seo best practices" by 1-2 more mentions
// [Priority 2] link: Link to related article on Next.js SSR optimization
```

The `ContentSuggestionType` union (`'heading' | 'paragraph' | 'keyword' | 'link'`) means you can filter, sort, and route suggestions programmatically. Priority 5 heading suggestions go to a content editor. Priority 2 link suggestions go to an internal linking audit queue. This is what it looks like when AI enhances SEO with actual structure behind it.
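Routing logic over the parsed suggestions might look like this (a sketch; the local type declarations mirror the documented shapes so the snippet stands alone, and the queue names and priority threshold are my own choices):

```typescript
// Local mirrors of the documented @power-seo/ai types, for a standalone sketch.
type ContentSuggestionType = 'heading' | 'paragraph' | 'keyword' | 'link';
interface ContentSuggestion {
  type: ContentSuggestionType;
  suggestion: string;
  priority: number;
}

// Split parsed suggestions into work queues by type and priority.
function routeSuggestions(suggestions: ContentSuggestion[]) {
  const byPriority = [...suggestions].sort((a, b) => b.priority - a.priority);
  // High-priority structural fixes go straight to a content editor.
  const editorQueue = byPriority.filter((s) => s.type === 'heading' && s.priority >= 4);
  // Link suggestions feed an internal-linking audit queue.
  const linkAuditQueue = byPriority.filter((s) => s.type === 'link');
  // Everything not routed above lands in a backlog.
  const routed = new Set<ContentSuggestion>([...editorQueue, ...linkAuditQueue]);
  const backlog = byPriority.filter((s) => !routed.has(s));
  return { editorQueue, linkAuditQueue, backlog };
}
```

Because the suggestions are plain typed objects, this kind of routing is trivially unit-testable with no LLM in the loop.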

SERP Feature Prediction: LLM-Powered and Deterministic Variants

There are two ways to predict SERP eligibility in `@power-seo/ai`, and understanding the difference is important.

The LLM-powered version uses `buildSerpPredictionPrompt` and `parseSerpPredictionResponse`. You send your page content and schema types to the model, and it returns predictions with likelihood scores for featured snippets, FAQ rich results, HowTo, Product, Review, Video carousels, image packs, local packs, and sitelinks:

```typescript
import { buildSerpPredictionPrompt, parseSerpPredictionResponse } from '@power-seo/ai';
import type { SerpFeatureInput, SerpFeaturePrediction } from '@power-seo/ai';
const input: SerpFeatureInput = {
  title: 'How to Make Cold Brew Coffee at Home',
  content: '<h1>Cold Brew Coffee</h1><h2>Step 1: Grind the Coffee</h2><p>...</p>',
  schema: ['HowTo', 'Recipe'],
  contentType: 'guide',
};
const prompt = buildSerpPredictionPrompt(input);
const rawResponse = await yourLLM.complete(prompt.system, prompt.user, prompt.maxTokens);
const predictions: SerpFeaturePrediction[] = parseSerpPredictionResponse(rawResponse);
predictions.forEach(({ feature, likelihood, requirements, met }) => {
  console.log(`${feature}: ${(likelihood * 100).toFixed(0)}% likelihood`);
  console.log(`  Requirements: ${requirements.join(', ')}`);
  console.log(`  Met: ${met.join(', ')}`);
});
```

The deterministic version is `analyzeSerpEligibility`. No LLM, no API call, no cost, instant execution. It inspects your page's schema markup and content structure using rule-based logic:

```typescript
import { analyzeSerpEligibility } from '@power-seo/ai';

// HowTo: detected by step-structured headings and HowTo schema
const result = analyzeSerpEligibility({
  title: 'How to Install Node.js on Ubuntu',
  content: '<h2>Step 1: Update apt</h2><p>...</p><h2>Step 2: Install nvm</h2><p>...</p>',
  schema: ['HowTo'],
});
// Returns an array of SerpFeaturePrediction objects, including:
// { feature: 'how-to', likelihood: 0.8, requirements: [...], met: [...] }

// FAQ: detected by FAQPage schema
const faqResult = analyzeSerpEligibility({
  title: 'React SEO FAQ',
  content: '<h2>Does React hurt SEO?</h2><p>...</p><h2>How do I add meta tags?</h2><p>...</p>',
  schema: ['FAQPage'],
});
// Returns array including:
// { feature: 'faq-rich-result', likelihood: 0.9, requirements: [...], met: [...] }
```


This is the function to run in CI. After every deploy, run `analyzeSerpEligibility` against your critical pages to confirm that schema markup is still intact and rich result eligibility hasn't regressed. Because it's fully deterministic, you can write Jest/Vitest assertions against the output. This is the kind of AI tooling purpose-built for SEO that actually fits into a modern engineering workflow.
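A guard like the following could back those assertions (a sketch: the prediction type is mirrored locally from the docs, and the 0.7 threshold is an arbitrary choice; in your suite you'd feed it real `analyzeSerpEligibility` output):

```typescript
// Shape documented by @power-seo/ai, mirrored locally for a standalone sketch.
interface SerpFeaturePrediction {
  feature: string;
  likelihood: number;
  requirements: string[];
  met: string[];
}

// Throw if a critical page lost eligibility for a rich result feature.
// Intended to run against analyzeSerpEligibility(...) output after every deploy.
function assertRichResultEligible(
  predictions: SerpFeaturePrediction[],
  feature: string,
  minLikelihood = 0.7,
): void {
  const match = predictions.find((p) => p.feature === feature);
  if (!match) {
    throw new Error(`Rich result regression: no "${feature}" prediction found`);
  }
  if (match.likelihood < minLikelihood) {
    throw new Error(
      `Rich result regression: "${feature}" likelihood ${match.likelihood} < ${minLikelihood}`,
    );
  }
}
```

In a Vitest suite, you'd wrap this per critical page with `expect(() => assertRichResultEligible(...)).not.toThrow()`.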

The Full Type System

One thing I appreciate about this package is that the TypeScript types are genuinely useful, not just `any` with a label. Here's the full import surface:

```typescript
import type {
  PromptTemplate, // { system: string; user: string; maxTokens?: number }
  MetaDescriptionInput, // { title, content, focusKeyphrase?, maxLength?, tone? }
  MetaDescriptionResult, // { description, charCount, pixelWidth, isValid, validationMessage? }
  ContentSuggestionInput, // { title, content, focusKeyphrase?, analysisResults? }
  ContentSuggestionType, // 'heading' | 'paragraph' | 'keyword' | 'link'
  ContentSuggestion, // { type: ContentSuggestionType; suggestion: string; priority: number }
  SerpFeature, // 'featured-snippet' | 'faq-rich-result' | 'how-to' | 'product' | 'review' | 'video' | 'image-pack' | 'local-pack' | 'sitelinks'
  SerpFeatureInput, // { title, content, schema?, contentType? }
  SerpFeaturePrediction, // { feature: SerpFeature; likelihood: number; requirements: string[]; met: string[] }
  TitleInput, // { content, focusKeyphrase?, tone? }
  TitleResult, // { title: string; charCount: number; pixelWidth: number }
} from '@power-seo/ai';
```

The `SerpFeature` union is particularly useful for building dashboards. You can iterate over all `SerpFeaturePrediction[]` results, group by `feature`, and surface exactly which pages are missing what schema requirements. That's a content quality dashboard in a weekend.
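As a sketch of that dashboard query (the local type mirrors the documented shape; the grouping and gap-finding helpers are my own illustration):

```typescript
// Shape documented by @power-seo/ai, mirrored locally for a standalone sketch.
interface SerpFeaturePrediction {
  feature: string;
  likelihood: number;
  requirements: string[];
  met: string[];
}

// Requirements a page has not yet satisfied for a given feature.
function missingRequirements(p: SerpFeaturePrediction): string[] {
  const met = new Set(p.met);
  return p.requirements.filter((r) => !met.has(r));
}

// Group predictions collected across many pages by feature, for a dashboard view.
function groupByFeature(
  predictions: SerpFeaturePrediction[],
): Map<string, SerpFeaturePrediction[]> {
  const groups = new Map<string, SerpFeaturePrediction[]>();
  for (const p of predictions) {
    const bucket = groups.get(p.feature) ?? [];
    bucket.push(p);
    groups.set(p.feature, bucket);
  }
  return groups;
}
```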

Real-World Use Cases in Next.js

Let me translate the API surface into concrete Next.js scenarios that I'd actually build. 

Headless CMS publish-time suggestions: When an editor hits "Publish" in your CMS, fire a server action that calls `buildMetaDescriptionPrompt` and `buildTitlePrompt` with the article content. Return 3 meta description candidates and 5 title variants to a modal before the page goes live. Authors pick the one they like. The generation cost is one LLM call; the SEO value is consistent, keyphrase-targeted meta copy across every page.

Programmatic SEO pipelines: If you're generating thousands of pages from a database   location pages, product pages, category pages   writing unique meta descriptions manually is impossible. `@power-seo/ai` lets you automate that generation at build time or on-demand in `getStaticProps` / server components, with the same prompt quality whether you're generating 10 pages or 10,000.

Content quality dashboards: Combine `@power-seo/content-analysis` for scoring with `buildContentSuggestionsPrompt` for remediation. Surface low-scoring pages from your CMS, run the suggestion builder against each one, and present prioritized improvement actions to the content team.

CI rich result regression testing: Add `analyzeSerpEligibility` to your test suite. After every deploy, assert that your HowTo pages still have the right step structure, your FAQ pages still carry `FAQPage` schema, and your Product pages include all required fields. Schema regressions are invisible until Google stops showing your rich results; this makes them visible immediately.

Multi-LLM benchmarking: Run the same prompt builders against OpenAI, Claude, and Gemini. Log the outputs with their `charCount` and `pixelWidth` metadata. Measure click-through rate in Google Search Console against pages that received each model's copy. Find out empirically which model writes better meta descriptions for your specific content vertical.
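A small selection step for that benchmark might look like this (the candidate shape mirrors the `MetaDescriptionResult` fields documented above; the "closest to the middle of 120–158 chars" heuristic is my own assumption, not package behavior):

```typescript
// Mirrors the documented MetaDescriptionResult fields used here.
interface MetaCandidate {
  description: string;
  charCount: number;
  pixelWidth: number;
  isValid: boolean;
}

interface BenchmarkRow {
  model: string; // e.g. 'gpt-4o', 'claude-opus-4-6', 'gemini-1.5-pro'
  candidate: MetaCandidate;
}

// Prefer valid candidates; among those, pick the one closest to the middle
// of the 120-158 character target range (139, an arbitrary heuristic).
function pickBestCandidate(rows: BenchmarkRow[]): BenchmarkRow | undefined {
  const valid = rows.filter((r) => r.candidate.isValid);
  const pool = valid.length > 0 ? valid : rows;
  return [...pool].sort(
    (a, b) =>
      Math.abs(a.candidate.charCount - 139) - Math.abs(b.candidate.charCount - 139),
  )[0];
}
```

Log every row with its `charCount` and `pixelWidth` before picking, so you can later correlate the shipped copy with Search Console CTR per model.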

My Honest Take

The best AI-powered SEO techniques aren't the ones that give you the fanciest dashboard; they're the ones that fit inside your existing engineering workflow. `@power-seo/ai` fits there because it respects what you already have. You chose your LLM provider for a reason. You built your content pipeline for a reason. This package slots into that architecture rather than demanding you rebuild around it.

The deterministic `analyzeSerpEligibility` function alone is worth the install to me. The fact that you can run it in CI, write assertions against it, and catch schema regressions before Google does fills a genuine gap in the tooling landscape.

Among the AI-powered SEO tools of 2026, `@power-seo/ai` stands out not because it does the most, but because it makes the right architectural bets: provider-agnostic, zero dependencies, TypeScript-first, friendly to React and Next.js developers, composable with the rest of the ecosystem, and, critically, deterministic where it should be and AI-powered where it needs to be.

Frequently Asked Questions

How do I install @power-seo/ai?

Run `npm install @power-seo/ai`, `yarn add @power-seo/ai`, or `pnpm add @power-seo/ai`. The package has zero runtime dependencies, so nothing else gets pulled in.

Is @power-seo/ai safe to use in Next.js App Router, Edge runtimes, and Cloudflare Workers?

Yes. The package is pure TypeScript with no browser-specific or Node.js-specific APIs. It is safe for SSR, Edge runtimes including Cloudflare Workers and Vercel Edge Functions, Deno, and standard Node.js environments.

Is the package tree-shakeable and TypeScript-first?

Yes on both counts. The package ships with "sideEffects": false and named exports per function, making it fully tree-shakeable. It is written in pure TypeScript and ships comprehensive type definitions for all inputs, outputs, and intermediate structures including union types for ContentSuggestionType and SerpFeature.

Does the package make any network calls or manage API keys?

No. @power-seo/ai makes zero network calls and manages no API keys. All LLM communication happens through your own client code. The package only builds prompt strings and parses raw text responses.


 
