What AI actually does well
AI tools are genuinely good at generating standard patterns fast. A landing page layout, a contact form, a responsive menu, CSS from a brief description — things that used to take hours can be done in minutes. GitHub's research on Copilot showed developers completed certain coding tasks 55% faster with AI assistance. That's real.
What this means for you: the cost of the standard structure every site needs has dropped and will keep dropping. If you need a simple brochure site with no custom logic, AI tools have genuinely narrowed the gap between "build it yourself" and "hire a developer."
The question is: what percentage of a business website's value is the standard structure — and what's everything else?
Where AI-built sites consistently fail
The Baymard Institute, which has studied e-commerce UX since 2010, has a consistent finding: the average online checkout contains 39 form fields and steps, and a median of 24 of them are unnecessary — friction that causes people to leave. Fixing this isn't about generating the right components — it's about understanding why users abandon at specific steps, which fields create hesitation, and how your payment gateway's load time interacts with your mobile layout.
AI generates code that matches patterns it was trained on. It doesn't know that your WooCommerce store has a spike in checkout abandonment because your payment gateway adds a 3-second redirect that's painful on 4G. It doesn't know your mobile users are mostly on older Android phones where certain animations cause visible stuttering. It doesn't know your product catalogue runs a database query on every product page with no index — adding 800ms per load.
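The missing-index problem above is found by inspecting a query plan, not by generating code. Here's a minimal sketch of what that diagnosis looks like, using an in-memory SQLite database for illustration (a real WooCommerce store runs MySQL, where `EXPLAIN` serves the same purpose); the table and column names are placeholders, not taken from any real site:

```python
import sqlite3

# Illustrative only: show how a missing index reveals itself in a query plan.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, sku TEXT, price REAL)")

def plan_for(query, params):
    """Return the query plan as one string."""
    rows = conn.execute("EXPLAIN QUERY PLAN " + query, params).fetchall()
    return " ".join(str(row) for row in rows)

query = "SELECT price FROM products WHERE sku = ?"

# Without an index: the plan is a full table scan — every product page
# pays for it, which is where a per-load penalty like 800ms comes from.
before = plan_for(query, ("ABC-123",))

# After adding the index, the plan switches to an index search.
conn.execute("CREATE INDEX idx_products_sku ON products(sku)")
after = plan_for(query, ("ABC-123",))

print("before:", before)
print("after: ", after)
```

The point isn't the fix (one line) but the diagnosis: you only know the index is missing by looking at how the database actually executes the query.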
These aren't hypothetical problems. They're the things I actually fix on real sites. Every single one requires diagnosis — looking at real user data, testing on real devices — before writing a single line of code.
AI can't diagnose what it hasn't seen. Your site's performance failures are specific to your configuration, your hosting, your plugins, and your users' devices. There's no pattern for that. There's only investigation.
The honest concession: what AI has changed
AI has genuinely changed several things:
Writing boilerplate is significantly faster. Code that took an hour from scratch can be generated in five minutes and then adapted. This reduces the cost of standard component work — and that saving should reach clients.
Documentation and explanations are better. AI can read an unfamiliar codebase and explain what a complex function does, which speeds up debugging and handovers.
First drafts of copy, error messages, and form labels are faster to produce.
What hasn't changed: the architectural decisions, the performance diagnosis, the security review, the judgment call about which plugin is maintained well enough to trust with your payments. Whether to build custom or use an existing tool. Whether your conversion problem is a speed issue, a copy issue, or something else entirely. These require knowledge of your specific situation. They're what determines whether the website succeeds as a business tool.
Why the bar has actually risen
Here's the counterintuitive part: AI has raised the bar for what a developer needs to deliver to be worth hiring.
If generating standard structures is cheap and fast, a developer who only generates standard structures is competing on price with AI tools — and losing. The developers who remain valuable are the ones who bring what AI can't: diagnosis, architectural judgment, performance expertise, and the ability to link technical decisions to business outcomes.
For you, this is good news. A developer who can't explain why your site is slow, diagnose a checkout problem, or build a store that holds up under real load — that developer is being replaced. The baseline is rising.
The useful question to ask any developer isn't "do you use AI?" — most do. It's: "What judgment calls do you make that AI can't?" If the answer is vague, their value is mostly in the standard structure. If the answer is specific — "I look at your real user device breakdown before choosing a framework", "I profile your database queries before recommending a caching strategy", "I test your checkout on actual mobile hardware, not just emulators" — you've found the judgment layer.
The practical answer for your business
Use AI tools for genuinely simple, low-stakes sites. A one-page portfolio, a static landing page for a single product, a basic information site you'll maintain yourself — AI-assisted builders have improved a lot and are a reasonable option. The cost of failure is low.
Don't use AI as a substitute for a specialist when the site is a critical revenue channel. A WooCommerce store processing real transactions, an LMS with paid course enrolments, a lead-generation landing page for a B2B service — these are systems where the difference between good and poor technical execution is measured in revenue. The cost of an underperforming site far exceeds the cost of getting a specialist involved.
The test: if your website being offline for 24 hours, or your checkout conversion dropping 15%, would materially affect your business, then the site is infrastructure, not a brochure. Infrastructure needs engineering judgment, not just pattern generation.
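That test can be put in numbers. A back-of-envelope sketch, with placeholder figures (the daily revenue here is an assumption, not from this article):

```python
# Placeholder inputs — substitute your own numbers.
daily_revenue = 2_000.0    # assumed average daily online revenue
conversion_drop = 0.15     # the 15% checkout conversion drop from the test

# Cost of the site being offline for 24 hours: one day of revenue.
outage_cost = daily_revenue * 1

# Cost of a 15% conversion drop left unfixed for a year.
annual_drop_cost = daily_revenue * conversion_drop * 365

print(f"One-day outage: {outage_cost:,.0f}")
print(f"15% conversion drop over a year: {annual_drop_cost:,.0f}")
```

If either figure is a number you'd notice, the site clears the infrastructure bar.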
Want to know what judgment calls your site actually needs?
Tell me what you're working with — what the site does, what's not working, what you've tried. I'll tell you what I'd look at first.
Start the brief

Sources
- GitHub — The economic impact of the AI coding assistant (Copilot research, 2022) — Controlled study: developers completed specific coding tasks 55% faster with AI assistance. Gains concentrated in boilerplate and standard patterns.
- Baymard Institute — 2024 E-Commerce Checkout Usability Study — The average checkout has 39 form fields/steps; Baymard identifies a median of 24 unnecessary ones. Fixing requires user research and device testing — not code generation.
- Stack Overflow Developer Survey (2024) — 76% of developers are using or planning to use AI coding tools. Most useful for boilerplate; least useful for architecture and debugging complex systems.
- web.dev — Core Web Vitals and business impact case studies — Vodafone, Rakuten, and other brands achieved significant conversion gains through targeted performance fixes — all requiring diagnosis of site-specific bottlenecks.
- McKinsey — The economic potential of generative AI (2023) — Software tasks most amenable to AI automation: code generation from specification. Least amenable: architecture decisions, security review, and performance diagnosis.