And computers are big, too. You can buy a 500MHz machine with 1 gigabyte of RAM and six 100Mbit/sec Ethernet cards for $3000 or so. Let's see - at 10000 clients, that's 50KHz, 100Kbytes, and 60Kbits/sec per client. It shouldn't take any more horsepower than that to take four kilobytes from the disk and send them to the network once a second for each of ten thousand clients. (That works out to $0.30 per client, by the way. Those $100/client licensing fees some operating systems charge are starting to look a little heavy!) So hardware is no longer the bottleneck.
(http://web.archive.org/web/19990508164301/http://www.kegel.c...)
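Kegel's per-client figures divide out exactly as quoted; a quick sanity check of the arithmetic:

```python
# Checking Kegel's 1999 back-of-the-envelope numbers:
# 500 MHz CPU, 1 GB RAM, six 100 Mbit/s NICs, $3000, 10,000 clients.
clients = 10_000
cpu_hz = 500_000_000           # 500 MHz
ram_bytes = 1_000_000_000      # 1 GB (decimal, as in the original)
net_bits_per_sec = 6 * 100_000_000  # six 100 Mbit/s Ethernet cards
price_dollars = 3000

print(cpu_hz // clients)            # 50_000  -> 50 KHz per client
print(ram_bytes // clients)         # 100_000 -> 100 Kbytes per client
print(net_bits_per_sec // clients)  # 60_000  -> 60 Kbits/sec per client
print(price_dollars / clients)      # 0.3     -> $0.30 per client
```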
If we haven't fixed the C10k problem after 17 years of hardware advancements, that's a pretty poor reflection on software engineering.
Limiting any variance between logged-in and logged-out visitors to JS-enhanced interactions would help for most systems too: the base HTML stays identical for everyone, so it can be served from cache.
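One way to see why this helps: if login state never changes the HTML, a cache needs only one copy per URL, shared across all visitors. A toy sketch (the `cache_key` function and its parameters are made up for illustration):

```python
# Hypothetical cache-key logic: when personalization is baked into the
# HTML, every page splits into per-login-state variants; when it's done
# client-side in JS, one cached copy serves everybody.
def cache_key(url: str, logged_in: bool, personalize_in_html: bool) -> str:
    if personalize_in_html:
        # Cache must store (and miss on) separate variants.
        return f"{url}|logged_in={logged_in}"
    # Login state handled by JS after load: one variant per URL.
    return url

print(cache_key("/post/1", True, False))   # same key for everyone
print(cache_key("/post/1", False, False))  # -> cache hit either way
```

With HTML personalization, a logged-in visitor can never hit a logged-out visitor's cached copy; with JS-only personalization they share one entry.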
A current server can pretty easily hit C100K per CPU core... there have been test runs with over a million concurrent connections on a single large server instance. It's all in the application design... using an application that's a couple of decades old as a common baseline is part of the problem... honestly, WordPress should just die already.
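The application design that makes those numbers plausible is event-driven I/O: one loop per core multiplexing all connections, each connection a cheap coroutine rather than a thread or process. A minimal illustrative sketch (an echo server plus concurrent clients, not a benchmark):

```python
# Event-driven echo server: all connections share one event loop (one
# core), so each extra connection costs a coroutine, not a thread.
import asyncio

async def handle(reader, writer):
    data = await reader.readline()   # awaits without blocking the loop
    writer.write(data)               # echo the line back
    await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main(n_clients: int = 100) -> int:
    # Port 0 lets the OS pick a free ephemeral port.
    server = await asyncio.start_server(handle, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]

    async def client(i: int) -> bool:
        reader, writer = await asyncio.open_connection("127.0.0.1", port)
        msg = f"hello {i}\n".encode()
        writer.write(msg)
        await writer.drain()
        echoed = await reader.readline()
        writer.close()
        await writer.wait_closed()
        return echoed == msg

    # All clients are open concurrently on the single loop.
    results = await asyncio.gather(*(client(i) for i in range(n_clients)))
    server.close()
    await server.wait_closed()
    return sum(results)

print(asyncio.run(main()))  # count of clients successfully served
```

The same structure (epoll/kqueue under the hood) is what lets real servers hold six-figure connection counts per core; the per-connection cost is a small amount of state, not a stack.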
This is not a configuration problem though, but a shared hosting problem. Most modern shared hosting is no longer really shared: each client gets a tiny slice of server resources and never more, not even for a day. The rationale is that it lets them host more clients on the same server, and they seem to be OK with clients leaving when their websites go down because of it. Some companies even make money selling such solutions, like CloudLinux.