If you’re running a WordPress multisite with 20+ client sites and wondering why the admin dashboard takes 8 seconds to load – yeah, I’ve been there. Multisite is one of those WordPress features that sounds incredible in theory and slowly crushes your server in practice.
The problem isn’t multisite itself. It’s that nobody tells you how the database architecture actually works under the hood, and by the time you figure it out, you’ve got 40 client sites on a single MySQL instance and your agency’s reputation is taking hits.
This guide covers what actually matters when scaling WordPress multisite for agency use – database architecture, plugin activation strategy, object caching, domain mapping overhead, and the specific bottlenecks that show up around the 30-50 site mark.
How Multisite Database Architecture Actually Works
Before optimizing anything, you need to understand what WordPress does to your database when you add a site to the network.
WordPress multisite uses a shared database with per-site table prefixes. Every time you create a new subsite, WordPress generates a full set of tables for that site with a numeric prefix. Site 2 gets wp_2_posts, wp_2_postmeta, wp_2_options, wp_2_comments, and so on. Site 47 gets wp_47_posts, wp_47_postmeta, etc.
Each site generates roughly 9-12 tables depending on active plugins. So at 50 sites, you’re looking at 500-600 tables in a single MySQL database. At 100 sites, over 1,000 tables.
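You can verify the sprawl directly in MySQL. A quick check – this assumes your database is named wordpress and your base prefix is wp_ (adjust both to match your install):

```sql
-- Count all per-site tables across the network (prefix + numeric site ID)
SELECT COUNT(*) AS multisite_tables
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'wordpress'
  AND TABLE_NAME REGEXP '^wp_[0-9]+_';

-- List the tables belonging to one specific site (site 2 here)
SHOW TABLES LIKE 'wp\_2\_%';
```

If the first number surprises you, remember that deleted plugins often leave their per-site tables behind.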
But here’s the part that catches people off guard – some tables are shared across the entire network:
- wp_users and wp_usermeta – all user accounts across all sites
- wp_blogs – registry of every site in the network
- wp_site and wp_sitemeta – network-level configuration
- wp_registration_log and wp_signups – signup data
This is logical isolation, not physical isolation. Every site’s data lives in the same MySQL instance, sharing the same connection pool, the same query optimizer, and the same InnoDB buffer pool.
The Shared Tables Problem
The wp_users and wp_usermeta tables are the first bottleneck you’ll hit at scale.
Every site in the network shares these tables. If you’re an agency managing 50 client sites with 20 users each, that’s 1,000 users in wp_users and potentially 50,000+ rows in wp_usermeta. Each user gets capability metadata per site they belong to – wp_2_capabilities, wp_3_capabilities, etc., all stored as rows in the same wp_usermeta table.
Run this query on your multisite to see the damage:
SELECT COUNT(*) FROM wp_usermeta;
SELECT COUNT(DISTINCT user_id) FROM wp_usermeta;
If wp_usermeta is over 100K rows, you’re already in territory where user-related queries on the network admin screen start to lag. WordPress knows this too – the wp_is_large_network() function kicks in at 10,000 users and defers real-time user counts to background cron jobs instead of calculating them on every admin page load.
For agencies, the fix is straightforward: don’t add users to sites they don’t need access to. Sounds obvious, but I’ve seen multisite networks where every client admin was added to every site “just in case.”
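To find over-provisioned users, you can count capability rows per user (site 1 stores its capabilities as wp_capabilities with no numeric prefix, hence the optional group in the pattern):

```sql
-- Users with capability rows on the most sites in the network
SELECT user_id, COUNT(*) AS site_memberships
FROM wp_usermeta
WHERE meta_key REGEXP '^wp_([0-9]+_)?capabilities$'
GROUP BY user_id
ORDER BY site_memberships DESC
LIMIT 20;
```

Anyone near the top of that list who isn't on your team is worth auditing.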
The wp_options Autoload Trap (Multiplied by 50)
If you’ve read about WordPress autoload being a hidden performance killer, you already know the problem. On multisite, it’s that problem multiplied by every site in your network.
Each site has its own wp_{n}_options table, and each one loads all autoload='yes' rows on every request to that site. A plugin that dumps 500KB of serialized data into autoloaded options is bad on a single site. On multisite with 50 sites, that same plugin is creating that overhead independently on each site.
The tricky part – network-activated plugins write their settings to every site’s options table when they initialize. If you network-activate a plugin that stores large option values, you’ve just added that bloat to every site in one click.
Check your per-site autoload size. First, enumerate the options table for each active site:
SELECT CONCAT('wp_', blog_id, '_options') AS table_name,
blog_id
FROM wp_blogs
WHERE archived = 0 AND deleted = 0;
Then for any specific site:
SELECT SUM(LENGTH(option_value)) AS autoload_bytes
FROM wp_2_options
WHERE autoload = 'yes';
If any site is over 1MB of autoloaded data, that’s a problem. Over 2MB and you’ll feel it on every page load. I’ve written more about cleaning up database bloat – the same techniques apply per-site on multisite, you just need to do it across all sites.
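Checking 50 sites by hand gets old fast. WP-CLI can loop the network for you – a sketch, assuming WP-CLI is installed on the server:

```shell
#!/bin/sh
# Report autoloaded option bytes for every site in the network.
# `wp option list --format=total_bytes` sums the matching option values.
wp site list --field=url | while read -r url; do
  bytes=$(wp option list --autoload=on --format=total_bytes --url="$url")
  echo "$url  $bytes bytes autoloaded"
done
```

Pipe the output through `sort -k2 -rn` to put the worst offenders at the top.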
Network Activate vs. Per-Site Activate – It Actually Matters
This is where most agencies get lazy, and it costs them.
Network-activating a plugin means it runs on every single site in the network. The Super Admin enables it globally, and individual Site Admins can’t deactivate it. Convenient? Sure. But it means every page load on every site processes that plugin’s code, fires its hooks, runs its init routines – even on sites that don’t need it.
The better approach for agencies:
- Network activate: Security plugins, object cache drop-in, essential mu-plugins, your agency’s maintenance plugin
- Per-site activate: WooCommerce (only on commerce sites), page builders, contact form plugins, anything site-specific
If only 5 out of 50 sites need WooCommerce, network-activating it means 45 sites are loading WooCommerce’s entire bootstrap on every request for no reason. WooCommerce alone adds 10-20 extra database queries per page load and consumes additional PHP memory.
For must-use plugins that should be network-wide but lightweight, put them in wp-content/mu-plugins/. These load before regular plugins and can’t be deactivated through the admin – perfect for agency infrastructure code.
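A mu-plugin is just a PHP file dropped straight into wp-content/mu-plugins/ – no subdirectory, no activation step. A minimal sketch of an agency infrastructure stub (the file contents and hook body are illustrative, not a real plugin):

```php
<?php
/**
 * Plugin Name: Agency Network Tweaks
 * Description: Lightweight network-wide tweaks; loads before all regular plugins.
 */

// Example: strip a noisy dashboard widget on every site in the network.
add_action( 'wp_dashboard_setup', function () {
	remove_meta_box( 'dashboard_primary', 'dashboard', 'side' );
} );
```

Because mu-plugins load on every request across the network, keep them this small – no autoloaded options, no heavy init work.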
switch_to_blog() – The Silent Performance Killer
If you’re writing custom code for your multisite network, this is the function that’ll trip you up.
switch_to_blog() lets you temporarily switch context to another site to access its data. Sounds handy. The problem is what happens under the hood – WordPress swaps the database table prefix, switches the object cache over to the target site's keys, and fires the switch_blog action, after which the target site's options and caches get repopulated on demand. Then restore_current_blog() does it all again in reverse. And it doesn't even switch everything – registered post types and taxonomies stay as they were, a documented limitation.
One switch is fine. A loop switching through 50 sites? That’s been benchmarked at over 2 seconds without caching. At 100 sites, you’re looking at delays that make the operation impractical for any real-time request.
This matters for agencies because common tasks trigger these loops – generating cross-site reports, checking plugin versions across the network, or displaying “recent posts from all sites” widgets. If your network admin dashboard is painfully slow, switch_to_blog loops are likely the cause.
The fix: use direct database queries instead of switching context. If you need data from wp_15_posts, just query it directly:
global $wpdb;

// Query site 15's posts table directly – no context switch, no cache churn.
// If the blog ID ever comes from user input, cast it to (int) first.
$blog_id = 15;
$posts = $wpdb->get_results(
    "SELECT ID, post_title FROM {$wpdb->base_prefix}{$blog_id}_posts
     WHERE post_status = 'publish'
     ORDER BY post_date DESC
     LIMIT 10"
);
Not as elegant as using the WordPress API, but at 50+ sites, elegance takes a back seat to page load times.
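And when you genuinely need the WordPress API per site – say, wp_count_posts() – cache the aggregated result so the loop runs rarely instead of on every dashboard load. A sketch; the function name and transient key are mine:

```php
// Cache a cross-site report so the switch_to_blog() loop runs at most hourly.
function agency_network_post_counts() {
	$counts = get_site_transient( 'agency_network_post_counts' );
	if ( false === $counts ) {
		$counts = array();
		// 'number' => 0 removes the default 100-site limit on get_sites().
		foreach ( get_sites( array( 'number' => 0 ) ) as $site ) {
			switch_to_blog( $site->blog_id );
			$counts[ $site->blog_id ] = wp_count_posts()->publish;
			restore_current_blog();
		}
		set_site_transient( 'agency_network_post_counts', $counts, HOUR_IN_SECONDS );
	}
	return $counts;
}
```

Site transients are network-wide, so every site's admin reads the same cached report.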
Domain Mapping and sunrise.php Overhead
If you’re running client sites on multisite, each client probably wants their own domain. That means domain mapping.
Good news – since WordPress 4.5, domain mapping is native. You don’t need a plugin for it. Just go to Network Admin > Sites, edit the site, and change the Site Address to the custom domain. Make sure DNS points to your server and SSL is configured.
The sunrise.php drop-in is where things get interesting from a performance perspective. This file executes extremely early in the WordPress loading sequence – before mu-plugins, before regular plugins, before the theme. It runs on every single request to your network.
The key constraint: sunrise.php can’t access the database or most WordPress functions. It’s limited to pure PHP and setting constants. This is by design – executing expensive operations at this stage would slow down every request across every site.
If you’re using a legacy domain mapping plugin that still relies on sunrise.php (like the old WordPress MU Domain Mapping plugin), check what it’s actually doing in there. Some older implementations made database queries in sunrise.php, which is a performance hit on every request. With native domain mapping in modern WordPress, you can often remove sunrise.php entirely.
Add to your wp-config.php if you need sunrise.php:
define( 'SUNRISE', true );
And if you’re getting cookie issues after domain mapping, this helps:
define( 'COOKIE_DOMAIN', $_SERVER['HTTP_HOST'] );
I’ve covered more wp-config.php settings in the complete wp-config.php performance tuning guide – some of those settings are especially important on multisite.
Object Caching – Non-Negotiable at Scale
Running multisite with 20+ sites without Redis or Memcached is asking for trouble. The math is simple – each site generates its own set of database queries for options, posts, transients, and user data. Multiply that by the number of active sites and concurrent visitors, and your MySQL server is doing way more work than it needs to.
Redis object caching on multisite can reduce database queries by 50-80%. One IEEE study on database caching measured a 25% drop in MySQL CPU usage and up to a 94% reduction in database network traffic once proper object caching was in place.
There’s a catch with multisite though – cache isolation. By default, all sites in a network share the same Redis database. When you flush the cache (which plugins love to do), it clears the cache for every site, not just the one that triggered it.
You can isolate per-site caches using unique key prefixes or separate Redis database numbers. One caveat – wp-config.php runs before WordPress loads, so you can't call functions like get_current_blog_id() there to build a per-site prefix; that's a fatal error. If you're using the Redis Object Cache plugin, set a static network-wide prefix instead (the drop-in already adds the blog ID to non-global cache groups):

define( 'WP_REDIS_PREFIX', 'mynetwork:' );
Or better – just use WP_CACHE_KEY_SALT which most object cache implementations respect:
define( 'WP_CACHE_KEY_SALT', 'mynetwork_' );
The salt gets combined with the blog ID automatically, so each site already gets isolated cache keys. What you want to prevent is one plugin’s aggressive cache flush wiping out cached data for all 50 sites.
Database Scaling Beyond 50 Sites
At 50+ sites, your single MySQL instance is handling 500+ tables, shared user queries, and concurrent requests from multiple sites. Here’s when you need to think about scaling the database layer.
Read replicas: Route read queries (which are 90%+ of WordPress queries) to replica servers. The HyperDB or LudicrousDB drop-ins handle this – they’re database abstraction layers that route queries to different servers based on the table being accessed. This is a real option for agencies running 50-100 sites.
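A minimal db-config.php for LudicrousDB (same format HyperDB uses) registers each server with $wpdb->add_database() – a sketch, with placeholder hostnames:

```php
<?php
// db-config.php drop-in: route writes to the primary, reads to both.
$wpdb->add_database( array(
	'host'     => 'db-primary.internal', // placeholder hostname
	'user'     => DB_USER,
	'password' => DB_PASSWORD,
	'name'     => DB_NAME,
	'write'    => 1,
	'read'     => 1,
) );

$wpdb->add_database( array(
	'host'     => 'db-replica.internal', // placeholder hostname
	'user'     => DB_USER,
	'password' => DB_PASSWORD,
	'name'     => DB_NAME,
	'write'    => 0, // read-only replica
	'read'     => 1,
) );
```

Watch out for replication lag – a just-saved post read from a stale replica looks like data loss to the client.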
Query monitoring: Enable slow query logging on your MySQL server. On multisite, you’ll often find that the slowest queries come from wp_usermeta JOINs and wp_options autoload queries – the shared tables and per-site options.
Table optimization: Run OPTIMIZE TABLE on your largest per-site tables periodically. With hundreds of tables, this needs to be scripted:
#!/bin/bash
# Optimize all WordPress multisite tables with measurable fragmentation.
# Note: each mysql invocation with -p prompts for the password separately.
mysql -u root -p your_database -e "
SELECT CONCAT('OPTIMIZE TABLE \`', TABLE_NAME, '\`;')
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'your_database'
AND TABLE_NAME LIKE 'wp\_%'
AND DATA_FREE > 1048576
" --skip-column-names | mysql -u root -p your_database
This only optimizes tables with more than 1MB of fragmented space, which keeps the operation fast.
The Practical Checklist for Agency Multisite
If you’re managing 20-100 client sites on multisite, here’s what actually moves the needle:
- Install Redis object caching – this alone is probably the single biggest performance improvement. Not optional past 10 sites.
- Audit plugin activation scope – network activate only what every site needs. Per-site activate everything else. This reduces memory usage and database queries on sites that don’t need specific plugins.
- Clean autoloaded options per site – check each site’s options table for bloated autoload data. Transients, orphaned plugin settings, serialized blobs – they all add up.
- Remove unused sites – archived and deactivated sites still have their tables in the database. If a client left 6 months ago, export and delete that site. Fewer tables = faster schema operations.
- Avoid switch_to_blog loops – if you’ve got custom code or plugins that iterate over all sites, replace them with direct queries or batch operations via WP-CLI.
- Use WP-CLI for maintenance – wp site list combined with xargs lets you run operations across all sites without the overhead of switch_to_blog: wp site list --field=url | xargs -I {} wp --url={} transient delete --expired
- Monitor per-site performance independently – a slow site in your network affects the shared database resources. Use query monitoring to identify which site is being the noisy neighbor.
- Consider the wp_is_large_network threshold – at 10,000 users, WordPress defers user counts to cron. If your network admin is slow before that threshold, lower it:
add_filter( 'wp_is_large_network', function( $is_large, $component, $count ) {
    if ( 'users' === $component && $count > 2000 ) {
        return true;
    }
    return $is_large;
}, 10, 3 );
When to Abandon Multisite
I know this isn’t what you want to hear in an article about scaling multisite, but it’s worth saying – sometimes the right move is to not use multisite at all.
If your agency’s client sites share almost nothing – different themes, different plugins, different update schedules – you’re not really getting the benefits of multisite. You’re just getting the constraints. Separate WordPress installs with a management tool like MainWP or ManageWP give you centralized control without the shared database bottleneck.
Multisite makes sense when sites share significant infrastructure – same theme, similar plugin sets, shared user base. Think universities with department sites, franchises with location pages, or media companies with multiple publications. If that’s not your use case, the database optimization effort spent on multisite might be better spent on a simpler architecture.
The honest take – multisite at 50+ sites is manageable with proper caching, selective plugin activation, and database monitoring. But it requires deliberate architecture decisions upfront. Bolting on performance fixes after you’ve already got 50 client sites sharing a single MySQL instance is… not fun. Ask me how I know.
