TL;DR: Full-text search lets users search through actual text content — articles, products, posts — with intelligence. It understands word variations ("running" matches "run"), ignores filler words, ranks results by relevance, and, with some tools, handles typos. It replaces the LIKE '%keyword%' approach that AI usually generates, which breaks down the moment your table has more than a few thousand rows. PostgreSQL has full-text search built in. For heavy search needs, tools like Meilisearch and Elasticsearch exist.
Why AI Coders Need to Know This
Here is a scenario that plays out constantly: you are building an app with AI. It has a list of products, blog posts, user profiles — whatever. You tell Claude or Cursor: "Add a search bar so users can find things."
Your AI writes this SQL query:
SELECT * FROM products WHERE name LIKE '%running shoes%';
It works. You type "running shoes" in the search bar, and products with that exact phrase show up. Ship it, right?
Then a user searches "run shoe" and gets zero results. Another user searches "runing shoes" (a typo) — nothing. Someone searches "best shoes for running" — nothing again. Your search bar technically works, but it is functionally useless for anyone who does not type the exact phrase stored in your database.
This is the problem full-text search solves. And every vibe coder building anything with user-facing search will hit this wall eventually.
What Full-Text Search Actually Is
Full-text search is a way for your database to search through text content with intelligence — not just look for exact character matches.
Think of it like the difference between a filing cabinet and a librarian:
- Filing cabinet (LIKE): You say "find me the file labeled 'running shoes.'" The cabinet only finds an exact match. If the label says "shoes for running," you get nothing.
- Librarian (full-text search): You say "I need something about running shoes." The librarian knows that "running" and "run" and "runner" are all related. They know to skip filler words like "the" and "for." They bring you the most relevant results first, with the best matches on top.
That is full-text search. Your database becomes a librarian instead of a filing cabinet.
Here is what full-text search handles that basic matching cannot:
- Stemming: Searching "running" also finds "run" and "runs." The search engine reduces these to the same root word before matching. (Whether a related form like "runner" also matches depends on the stemmer.)
- Stop words: Words like "the," "is," "and," "for" are automatically ignored. Searching "shoes for running" focuses on "shoes" and "running" — the words that actually matter.
- Relevance ranking: If ten articles mention "running shoes," the ones where that phrase appears in the title rank higher than ones where it appears once buried in paragraph twelve.
- Phrase matching: You can search for "running shoes" as a phrase (the words together) or as separate words (anything with "running" OR "shoes").
- Fuzzy matching: Some search tools can even handle typos — "runing" still finds "running."
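If you want to see the phrase-versus-words distinction concretely, PostgreSQL (covered in detail below) exposes it through two query-building functions. A quick sketch of what each produces:

```sql
-- Phrase search: words must appear next to each other, in order.
-- The <-> operator in the output means "immediately followed by."
SELECT phraseto_tsquery('english', 'running shoes');
-- 'run' <-> 'shoe'

-- Word search: both words must appear, but anywhere in the text.
SELECT plainto_tsquery('english', 'running shoes');
-- 'run' & 'shoe'
```

Notice that both forms have already stemmed the input: "running" became "run" before any matching happens.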
LIKE vs. Full-Text Search: The Real Difference
This is the comparison that matters for vibe coders, because LIKE is what your AI will generate by default.
| Feature | LIKE '%keyword%' | Full-Text Search |
|---|---|---|
| How it works | Scans every row, character by character | Uses a pre-built index to find matches |
| Speed on 1,000 rows | Fast enough | Fast |
| Speed on 100,000 rows | Noticeably slow | Still fast |
| Speed on 1,000,000 rows | Painfully slow or times out | Still fast |
| "running" finds "run" | No | Yes (stemming) |
| Ignores "the," "is," "and" | No | Yes (stop words) |
| Results ranked by relevance | No — just a flat list | Yes — best matches first |
| Can use an index | Not with leading wildcard (%) | Yes — that is the whole point |
| Handles typos | No | Some implementations do |
The critical row in that table: "Can use an index." If you have read our guide on database indexes, you know that indexes are what make databases fast. When you write LIKE '%keyword%' with a wildcard at the front, your database cannot use any index. It has to read every single row in the table, every single time someone searches. This is called a full table scan, and it is the reason LIKE queries get slower as your data grows.
Full-text search uses its own special kind of index that is designed specifically for searching text. That index is what keeps search fast even when your table grows from 1,000 rows to 1,000,000.
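You can check which path your database takes with EXPLAIN. The sketch below assumes a posts table set up the way this guide describes later (a search_vector column with a GIN index named idx_posts_search); exact plans vary with table size and statistics:

```sql
-- The leading % forces a full table scan: every row is read.
EXPLAIN SELECT * FROM posts WHERE content LIKE '%running%';
-- typically: Seq Scan on posts

-- The @@ full-text match can use the GIN index instead.
EXPLAIN SELECT * FROM posts
WHERE search_vector @@ plainto_tsquery('english', 'running');
-- typically: Bitmap Index Scan on idx_posts_search
```

If you see "Seq Scan" on a search query against a large table, that is the slow path.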
Real Scenario: Your AI Search Bar That Stops Working
Let's make this concrete. You asked Claude to add search to your blog app. Here is what it probably generated:
What AI Usually Generates
-- The search query your AI wrote
SELECT id, title, content
FROM posts
WHERE title LIKE '%running%'
OR content LIKE '%running%'
ORDER BY created_at DESC;
This works great when you have 50 blog posts. Your search bar feels instant. You move on to building other features.
Six months later, you have 15,000 posts. Users start complaining that search takes 3–4 seconds. Some searches time out completely. Here is why:
- Every search scans all 15,000 rows — twice (once for title, once for content).
- The content column might have thousands of words per row. The database is reading all of it, every time.
- No index can help because of the leading % wildcard.
- Searching "run" does not find posts about "running" — users are frustrated.
- Results are sorted by date, not relevance — the best match might be buried on page 5.
This is the exact moment where you need full-text search. And the good news: if you are using PostgreSQL, you already have it.
PostgreSQL Full-Text Search: What You Already Have
PostgreSQL has full-text search built in. No extra software to install, no external service to pay for. It has been there for years — your AI just did not reach for it by default.
Here is how it works in plain language. There are four key pieces, and you only need to know what each one does:
tsvector — The Searchable Version of Your Text
A tsvector is what your text looks like after the database prepares it for searching. It takes your raw text, breaks it into individual words, removes stop words ("the," "is," "and"), reduces words to their root form ("running" becomes "run"), and stores the positions where each word appears.
Think of it as: the database creates a back-of-the-book index for your text. Instead of reading the whole book every time someone asks a question, it just looks up the answer in the index.
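You can watch this preparation happen by calling to_tsvector directly. With PostgreSQL's default 'english' configuration, the output should look roughly like this:

```sql
SELECT to_tsvector('english', 'The runners were running in their new shoes');
-- 'new':7 'run':4 'runner':2 'shoe':8
```

Stop words ("the," "were," "in," "their") are gone, each remaining word is reduced to its root, and the number after each word is its position in the original text.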
tsquery — What the User Is Searching For
A tsquery is the search terms, processed the same way. When a user types "running shoes" in your search bar, the database converts that into a tsquery — "run" & "shoe" — so it can match against the tsvector index.
The important thing: both the stored text and the search terms go through the same processing. That is how "running" matches "run" — both get reduced to the same root word before comparing.
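A small sketch of both sides meeting in the middle. The @@ operator asks "does this tsvector match this tsquery?":

```sql
-- The user's input, normalized:
SELECT plainto_tsquery('english', 'running shoes');
-- 'run' & 'shoe'

-- Stored text that never contains the literal phrase still matches,
-- because both sides reduce to the same root words:
SELECT to_tsvector('english', 'best shoes for a morning run')
       @@ plainto_tsquery('english', 'running shoes');
-- t (true)
```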
to_tsvector — The Tool That Converts Text
to_tsvector is the function that converts your raw text into a searchable tsvector. You use it when storing text or when building a search index. It is the librarian cataloging a new book.
plainto_tsquery — The Tool That Converts Search Terms
plainto_tsquery is the function that converts a user's plain-text search into a tsquery. It takes whatever the user typed in the search bar and turns it into something the database can match. It is the librarian understanding your question before looking for the answer.
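It also cleans up messy input, punctuation and filler words included. A quick sketch:

```sql
-- The stop word ("for") and punctuation are dropped; the rest is stemmed:
SELECT plainto_tsquery('english', 'best shoes for running!');
-- 'best' & 'shoe' & 'run'
```

This is why the "best shoes for running" search that returned nothing with LIKE works fine here: the query boils down to the three words that matter.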
What to Ask Your AI Instead
Prompt: "Replace my LIKE-based search with PostgreSQL
full-text search. Add a tsvector column to the posts
table, create a GIN index on it, and write a search
query that ranks results by relevance."
-- What your AI should generate (simplified):
-- 1. Add a search column to your table
ALTER TABLE posts ADD COLUMN search_vector tsvector;
-- 2. Fill it with searchable text from title + content
UPDATE posts SET search_vector =
  to_tsvector('english', coalesce(title, '') || ' ' || coalesce(content, ''));
  -- coalesce: a NULL title or content would otherwise make the whole vector NULL
-- 3. Create an index so searches are fast
CREATE INDEX idx_posts_search ON posts USING GIN (search_vector);
-- 4. The actual search query
SELECT id, title,
ts_rank(search_vector, plainto_tsquery('english', 'running shoes')) AS relevance
FROM posts
WHERE search_vector @@ plainto_tsquery('english', 'running shoes')
ORDER BY relevance DESC;
The result: searching "running shoes" now finds posts containing "run," "runs," and "shoe." Results are ranked by relevance — the post with "running shoes" in the title ranks higher than one that mentions "running" once in a footnote. And the GIN index means this search stays fast even at hundreds of thousands of rows.
Keep the Search Vector Updated
When you add or edit posts, the search_vector column needs to be updated too. Ask your AI to create a database trigger that automatically updates the search vector whenever a row changes. Otherwise your search results get stale — new posts will not appear in search until the vector is rebuilt.
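PostgreSQL ships a helper function for exactly this. A sketch, assuming the posts schema built above (on PostgreSQL versions before 11, write EXECUTE PROCEDURE instead of EXECUTE FUNCTION):

```sql
-- Rebuild search_vector from title and content on every insert/update.
CREATE TRIGGER posts_search_vector_update
BEFORE INSERT OR UPDATE ON posts
FOR EACH ROW
EXECUTE FUNCTION tsvector_update_trigger(
  search_vector, 'pg_catalog.english', title, content
);
```

On PostgreSQL 12+, a generated column (GENERATED ALWAYS AS (to_tsvector(...)) STORED) is an even simpler alternative: no trigger to maintain at all.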
When PostgreSQL Is Not Enough: Dedicated Search Engines
PostgreSQL full-text search is genuinely good. For most apps — blogs, internal tools, small-to-medium e-commerce — it is all you need. But some use cases push beyond what a general-purpose database was designed to handle.
Here is when to consider a dedicated search engine:
- Search-as-you-type autocomplete: Users see suggestions as they type each letter. This requires sub-50ms response times on every keystroke.
- Typo tolerance: "databse" should find "database." PostgreSQL's full-text search does not handle this out of the box (though its pg_trgm extension can approximate it).
- Faceted search: Think Amazon's sidebar — filter by category, price range, rating, brand, all while searching. This is complex to build in raw SQL.
- Millions of documents: PostgreSQL handles hundreds of thousands well. At millions of searchable documents, dedicated engines are purpose-built for that scale.
- Search is your core feature: If your app IS a search product (job board, marketplace, content aggregator), invest in a dedicated tool.
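Before adding a whole new service for typo tolerance alone, PostgreSQL's bundled pg_trgm extension is worth a look: it approximates typo tolerance using trigram similarity. A sketch (the products table here is hypothetical):

```sql
CREATE EXTENSION IF NOT EXISTS pg_trgm;

-- % is pg_trgm's similarity operator: true when two strings share
-- enough three-letter chunks, so "runing" can still find "running."
SELECT name
FROM products
WHERE name % 'runing shoes'
ORDER BY similarity(name, 'runing shoes') DESC
LIMIT 10;
```

It is not as polished as Meilisearch's out-of-the-box typo handling, but for one misspelled-search feature it may be all you need.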
Your Options
| Tool | What It Is | Best For |
|---|---|---|
| PostgreSQL FTS | Built into your database — nothing extra to install | Most apps. Start here. |
| Meilisearch | Open-source, fast, developer-friendly search engine | Search-as-you-type, typo tolerance, small-to-medium datasets |
| Typesense | Open-source search engine focused on simplicity | Similar to Meilisearch — easy setup, great typo tolerance |
| Elasticsearch | Industry-standard search engine (powerful but complex) | Large-scale search, log analysis, complex query needs |
| Algolia | Hosted search service (you pay, they run it) | When you want great search without managing infrastructure |
Start With PostgreSQL FTS When...
- You already use PostgreSQL
- Under ~500,000 searchable rows
- Search is a feature, not THE feature
- You want simplicity — one database, no extra services
- Budget is a concern
Use a Dedicated Engine When...
- You need search-as-you-type instant results
- Typo tolerance is important
- You need faceted filtering
- Millions of documents
- Search is your core product feature
For most vibe coders reading this: start with PostgreSQL full-text search. It is free, already available in your database, and handles far more than LIKE ever could. You can always add Meilisearch or another tool later if you outgrow it.
What AI Gets Wrong About Search
When you ask AI to build search features, watch for these common problems:
Defaulting to LIKE on every search request
This is the big one. Almost every AI — Claude, ChatGPT, Copilot — will generate LIKE '%term%' as the default search approach. It is the simplest solution and technically works. But it does not scale, does not rank results, and does not handle word variations. Always follow up: "Can you use full-text search instead of LIKE?"
Not creating an index on the search column
AI might add the tsvector column and write the search query, but forget to create the GIN index. Without that index, full-text search still works — but it is almost as slow as LIKE because the database has to scan every row. The index is what makes it fast. Always check: "Did you add a GIN index on the search vector column?"
Forgetting to keep the search vector updated
The tsvector column is a snapshot of your text at the time it was created. If users edit posts or you add new content, the search vector gets stale. AI often sets up the initial column but does not create a trigger to keep it updated. Ask: "Add a trigger that updates the search vector when rows are inserted or updated."
Overengineering with Elasticsearch for small apps
Sometimes AI suggests setting up Elasticsearch for a blog with 200 posts. That is like hiring a full construction crew to hang a picture frame. Elasticsearch is powerful, but it is also complex — it runs as a separate service, needs its own server resources, and requires data syncing between your database and the search engine. For small-to-medium apps, PostgreSQL full-text search does the job without any of that complexity.
Ignoring the search experience beyond the query
Good search is not just about the database query. It includes highlighting the matched terms in results, showing relevant snippets instead of full content, handling empty results gracefully, and debouncing search input so you are not running a query on every keystroke. If your AI only builds the backend query, ask it to build the complete search experience.
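For the snippet-and-highlighting part, PostgreSQL can do that in the database with ts_headline. A sketch against the posts schema from earlier:

```sql
-- ts_headline returns a fragment of the matched text with the
-- matching terms wrapped in <b>...</b> by default.
SELECT id, title,
       ts_headline('english', content,
                   plainto_tsquery('english', 'running shoes'),
                   'MaxWords=30, MinWords=10') AS snippet
FROM posts
WHERE search_vector @@ plainto_tsquery('english', 'running shoes');
```

Debouncing and empty-state handling still belong in your frontend, but snippets and highlighting can come straight from the query.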
How to Talk to Your AI About Search
The right prompt makes all the difference. Here are prompts that get you real search instead of LIKE:
Prompt for PostgreSQL Full-Text Search
"Add search to my posts table using PostgreSQL
full-text search. I want to search across the title
and body columns. Add a tsvector column, create a GIN
index, set up a trigger to keep it updated on inserts
and updates, and write a search function that returns
results ranked by relevance. Use the 'english'
text search configuration."
Prompt for Adding Search to a REST API
"Add a GET /api/search?q=term endpoint to my Express
app. Use PostgreSQL full-text search on the products
table (search across name and description). Return the
top 20 results ranked by relevance. Include a snippet
of the matching text in the response. Handle empty
queries and no-results cases gracefully."
If you are building a REST API with search, that second prompt gives your AI everything it needs to generate production-quality search — not a LIKE hack.
Prompt When You Outgrow PostgreSQL
"My PostgreSQL full-text search is getting slow. I
have 800,000 products and need search-as-you-type with
typo tolerance. Set up Meilisearch alongside my
existing PostgreSQL database. Sync the product data to
Meilisearch and replace my search endpoint to query
Meilisearch instead of PostgreSQL."
What to Learn Next
Full-text search connects to several other concepts in your stack. If anything in this guide felt fuzzy:
- What Is a Database? — the foundation. Understand where your data lives before you search through it.
- What Is SQL? — the language you use to talk to your database, including search queries.
- What Is PostgreSQL? — the database that has full-text search built in, and why most AI tools default to it.
- What Are Database Indexes? — the concept behind why full-text search is fast and LIKE is slow.
- What Is a REST API? — how your search feature connects to your frontend through an API endpoint.
Next Step
If you have a project with a search bar that uses LIKE, try this now: ask your AI to "Replace the LIKE search query with PostgreSQL full-text search, add a tsvector column and GIN index." Compare the results. You will see the difference immediately — especially in how search handles word variations and relevance ranking.
FAQ
What is full-text search?
Full-text search is a database feature that lets users search through actual text content — articles, product descriptions, user posts — with intelligence. Unlike exact matching or basic LIKE queries, full-text search understands relevance ranking, word variations (stemming), common words to ignore (stop words), phrase matching, and fuzzy matching for typos.
What is the difference between LIKE and full-text search?
LIKE '%keyword%' scans every single row in the table character by character, looking for an exact substring match. It has no concept of relevance, word variations, or ranking. Full-text search uses a pre-built index to find matches almost instantly, ranks results by relevance, understands that "running" and "run" are related, and ignores common filler words like "the" and "is." LIKE is fine for tiny tables. Full-text search is what you need for real search functionality.
Does PostgreSQL have full-text search built in?
Yes. PostgreSQL has full-text search built in — no extra software required. It uses tsvector (a searchable version of your text) and tsquery (the search terms) to perform intelligent searches. You can add a GIN index on the tsvector column for fast performance. For many applications with up to hundreds of thousands of rows, PostgreSQL's built-in search is more than sufficient.
When do I need a dedicated search engine?
Consider a dedicated search engine when you need search-as-you-type autocomplete, typo tolerance out of the box, faceted filtering (like filtering products by category and price range simultaneously), searching across millions of documents with sub-50ms response times, or when search is a core feature of your product rather than a nice-to-have. PostgreSQL full-text search is great for most apps. Dedicated engines are for when search IS your app.
Why does AI generate LIKE queries for search?
When you ask AI to add search to your app, it usually generates a LIKE '%term%' query. This is the simplest approach and technically works — but it only finds exact substring matches. Searching "run" will not find "running." Searching "databse" will not find "database." And it gets painfully slow on tables with more than a few thousand rows. Ask your AI to implement PostgreSQL full-text search instead of LIKE for real search functionality.