We analyzed 500 articles ranking on Google's first page for competitive AI and business keywords to answer the question every content team is asking: does AI content rank?
The answer is nuanced. Pure AI content (published without human editing) ranks in only 12% of the cases we studied, and overwhelmingly for low-competition, long-tail queries. Pure human content ranks in 35% of cases. But the clear winner is human-edited AI content (articles drafted by AI and refined by human editors), which accounts for 53% of the page-1 results in our sample.
The pattern is consistent. AI handles the structural heavy lifting (research synthesis, outline creation, initial drafting), and humans add what AI cannot: original data, expert opinions, personal experience, brand voice, and editorial judgment on what to include and exclude.
The content characteristics that correlate most strongly with rankings: original data or research (present in 78% of top-3 results), specific examples with numbers (72%), structured data tables (68%), expert quotes or citations (63%), and clear section hierarchy with semantic HTML (91%).
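Most of these on-page traits can be audited programmatically. Below is a minimal Python sketch using BeautifulSoup (`pip install beautifulsoup4`); the `audit_article` function, its checks, and its proxies are our own illustrative assumptions, not part of the study's methodology.

```python
from bs4 import BeautifulSoup

def audit_article(html: str) -> dict:
    """Heuristic checks for the on-page traits above (illustrative only)."""
    soup = BeautifulSoup(html, "html.parser")

    # Clear section hierarchy: headings exist and never skip a level
    # downward (e.g., an h2 is never followed directly by an h4).
    levels = [int(tag.name[1]) for tag in soup.find_all(["h1", "h2", "h3", "h4"])]
    no_skips = all(b - a <= 1 for a, b in zip(levels, levels[1:]))

    text = soup.get_text(" ", strip=True)
    return {
        "heading_hierarchy_ok": bool(levels) and no_skips,
        "has_data_table": soup.find("table") is not None,
        # Crude proxy for "specific examples with numbers".
        "mentions_numbers": any(ch.isdigit() for ch in text),
        # Crude proxy for expert quotes or citations.
        "has_quotes_or_links": bool(soup.find("blockquote") or soup.find("a", href=True)),
    }

print(audit_article("<h1>Guide</h1><h2>Data</h2><table><tr><td>42%</td></tr></table>"))
# {'heading_hierarchy_ok': True, 'has_data_table': True,
#  'mentions_numbers': True, 'has_quotes_or_links': False}
```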
What doesn't matter for rankings: whether the content was AI-generated (Google cannot reliably detect AI content and has stated that it doesn't penalize AI content that provides value), article length beyond 1,500 words (returns diminish past that threshold), and keyword density (semantic relevance matters more than exact-match repetition).
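To make that last distinction concrete, here is a hedged sketch contrasting the old exact-match density metric with embedding-based semantic similarity. It uses the sentence-transformers library; the passages, the model choice, and the `exact_match_density` helper are illustrative assumptions.

```python
from sentence_transformers import SentenceTransformer, util

def exact_match_density(text: str, phrase: str) -> float:
    """The old 'keyword density' metric: exact phrase hits per 100 words."""
    words = text.lower().split()
    return 100 * text.lower().count(phrase.lower()) / max(len(words), 1)

query = "does AI content rank"
stuffed = "AI content ranks. AI content ranks well. Rank your AI content today."
natural = ("In our study, machine-drafted articles reached page one when human "
           "editors added original data and expert sourcing.")

print(exact_match_density(stuffed, "AI content"))   # high density
print(exact_match_density(natural, "AI content"))   # zero density

# Embedding similarity to the query: a naturally written passage can score
# competitively despite zero exact-match repetition.
model = SentenceTransformer("all-MiniLM-L6-v2")
q, s, n = model.encode([query, stuffed, natural])
print(util.cos_sim(q, s).item(), util.cos_sim(q, n).item())
```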
The practical takeaway for content teams: use AI to draft and structure, but invest human time in three specific areas. First, add original insights or data that AI can't generate. Second, fact-check every claim (AI hallucination is the #1 ranking risk). Third, optimize the first 200 words with a clear, direct answer to the query (this is where AI Overviews pull from).
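On that third point, here is one way to sanity-check an intro: take the first 200 words and verify that the query's content words all appear there. The `answers_query_upfront` helper, the 200-word window, and the length-based stopword filter are hypothetical heuristics, not a documented AI Overviews rule.

```python
import re

def answers_query_upfront(article_text: str, query: str, window: int = 200) -> bool:
    """Do the query's content words all appear in the first `window` words?"""
    words = re.findall(r"[\w'-]+", article_text.lower())
    intro = set(words[:window])
    # Crude stopword filter: drop very short tokens (note this also drops "AI").
    terms = [t for t in re.findall(r"[\w-]+", query.lower()) if len(t) > 2]
    return bool(terms) and all(t in intro for t in terms)

print(answers_query_upfront(
    "Does AI content rank? In our sample, human-edited AI drafts ranked best.",
    "does AI content rank",
))  # True: "does", "content", and "rank" all appear up front
```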