Neoseeker Server Project 2002


Friday, May 10th, 2002







Comments

Jeevan May 10, 02
Wowza, looks like you guys went through a lot (still reading it)! The images won't load up; instead just a markup tag shows. Anyway, it seems like an interesting read.
Mindbreaker May 10, 02
quote Neoseeker Server Project 2002 Report
Server Name: DS9
Function: Database server
DS9 as in Deep Space 9, the show? Then there's Enterprise, Defiant, and Warpcore. You guys and your Trek fetish.

On a more serious note, it was a pleasure to read that article about the process you guys went through. I can't wait until Phase B. Are you guys just going to tweak the current servers for more juice or are you going to add that second Pentium III to Warpcore?
Rabster May 10, 02
Excellent Article, appreciate the photos. Nicely done.

One thing though: RAID 0 and RAID 1 are being confused in this article. The config appears to be a RAID 0, not RAID 1.

Cheers
DEViATE May 10, 02
quote Rabster
Excellent Article, appreciate the photos. Nicely done.

One thing though: RAID 0 and RAID 1 are being confused in this article. The config appears to be a RAID 0, not RAID 1.

Cheers
Nice catch, Rabster, you are absolutely correct: we are running a RAID 0 and not a RAID 1. We were probably still thinking of running a RAID 1 when that part of the article was written. We'll change that ASAP.
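For anyone following along who is unsure what the correction means: RAID 0 stripes data across the drives for speed and combined capacity but offers no redundancy, while RAID 1 mirrors the same data onto both drives. A rough sketch of the tradeoff, using made-up drive sizes rather than the actual hardware described in the article:

```python
# Rough comparison of RAID 0 (striping) vs RAID 1 (mirroring).
# The drive sizes are hypothetical, purely for illustration.

drives_gb = [36, 36]  # e.g. two 36 GB disks

# RAID 0: blocks are striped across both drives.
# Usable capacity is the sum of the drives; losing either drive loses the array.
raid0_capacity = sum(drives_gb)        # 72 GB usable
raid0_survives_one_failure = False

# RAID 1: every block is written to both drives.
# Usable capacity is one drive's worth; either drive alone still holds all the data.
raid1_capacity = min(drives_gb)        # 36 GB usable
raid1_survives_one_failure = True

print(f"RAID 0: {raid0_capacity} GB usable, survives a drive failure: {raid0_survives_one_failure}")
print(f"RAID 1: {raid1_capacity} GB usable, survives a drive failure: {raid1_survives_one_failure}")
```

The practical upshot is that RAID 0 buys throughput and capacity at the cost of reliability, which is why the distinction matters on a database server.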

To address the question about making warpcore a dual PIII-800: I think most likely we'll leave it as a single CPU, perhaps add more RAM, and then take warpcore out of the web cluster to reduce its load. I think eventually we'd like warpcore to just handle our e-mail and other duties, like acting as an image server. So when we do take warpcore out of the cluster, we'll add a couple more web servers in there. We're going to aim for an additional 2-3, which should keep things nice and speedy.
fanatic May 10, 02
Two to three more servers, wow, that should definitely be able to handle the traffic around here. Very interesting read, I might add, although, for the most part, I now realize how much I don't know about computers.
Redemption May 11, 02
Added Note:

If you take a close look at the SMC-EZ1016FDT switch you will note that it does NOT have a dedicated crossover port. While building the cluster, we discovered, to our delight, that port 1 was an auto-sensing crossover port - it basically detects whether to operate as a regular or crossover port, depending on the cabling. How sweet is that!!!
The Jedi May 15, 02
Much thanks for the article. I'm working on piecing together a web server and insight like this isn't quite abundant. It really concerns me though, how the author says that the server crashed so much before. It seems to me that with server class hardware, the system may bottleneck, but it shouldn't be crashing. Would you attribute the crashing to software, like the web server app, or to hardware, like running out of disk space? For all the $$ I spend on premium hardware, should I still expect it to eventually crash at some point? Do you have the computer auto-reboot or something?
Redemption May 15, 02
I think the case was more of the site crashing than the server hardware locking up. The server itself is not likely to actually crash just because of load, but imagine the following:

1) The hard drive is constantly thrashing, and the hard disk would actually FILL up with data every few days. Once a partition is full, there's a chance the databases can lock up and be corrupted.
2) A server can be so loaded that the system stops responding altogether for a while, or even until it is rebooted. Sometimes the webserver daemon or database daemon may hang due to extremely high loads. Think server loads of 80 and above, as reported by Linux's uptime command.

We never actually had to reboot the server much, but we sure found ourselves killing off errant threads and repairing the database. A lot.
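As a concrete illustration of the two failure modes described above, here is a minimal watchdog sketch that checks the 1-minute load average (the same figure uptime reports) and how full the database partition is. The thresholds and the partition path are assumptions made for the example, not the actual monitoring that was run:

```python
#!/usr/bin/env python
"""Minimal watchdog sketch: warn on high load average or a nearly full disk.

The thresholds and partition path below are illustrative assumptions,
not Neoseeker's actual configuration.
"""
import os
import shutil
import time

LOAD_WARN = 20.0                 # 1-minute load average worth worrying about
DISK_WARN = 0.90                 # warn when the partition is 90% full
DB_PARTITION = "/var/lib/mysql"  # hypothetical database partition

def load_average_1min():
    # os.getloadavg() returns the same 1/5/15-minute averages that uptime shows
    return os.getloadavg()[0]

def disk_usage_fraction(path):
    usage = shutil.disk_usage(path)
    return usage.used / usage.total

while True:
    load = load_average_1min()
    if load >= LOAD_WARN:
        print(f"WARNING: load average is {load:.1f} - daemons may start hanging")

    used = disk_usage_fraction(DB_PARTITION)
    if used >= DISK_WARN:
        print(f"WARNING: {DB_PARTITION} is {used:.0%} full - risk of database corruption")

    time.sleep(60)
```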

quote The Jedi
Much thanks for the article. I'm working on piecing together a web server and insight like this isn't quite abundant
Over the years I've noticed that very few people discuss the actual servers much while designing their high-end websites. If there's interest, I think we'd love to write a few articles about our experiences building servers specifically for hosting traffic-intensive websites (well... of a medium intensity anyway).
IRoBoI May 15, 02
The server cluster is impressive.

Can I ask for a traffic statistics report for the site before and after the upgrade? (Concurrent connections, daily hits, average build time for a large post.)

From your stress testing, how high did the server load reach?

Mindbreaker May 15, 02
I'd be interested in those statistics as well, IRoBoI.
Daviesw May 17, 02
quote IRoBoI
The server cluster is impressive.

Can I ask for a traffic statistics report for the site before and after the upgrade? (Concurrent connections, daily hits, average build time for a large post.)

From your stress testing, how high did the server load reach?

DITTO
Redemption May 21, 02
quote IRoBoI
Can I ask for a traffic statistics report for the site before and after the upgrade? (Concurrent connections, daily hits, average build time for a large post.)

From your stress testing, how high did the server load reach?
With simulated load testing, we were able to load the existing servers with 400-600 concurrent users, but that brought all the webservers pretty much to their knees, with forum load times of 13-40 seconds, depending. The webserver load never went past 6 or 7 as measured by Linux's uptime or by running a constant check with the top or vmstat commands.

For those curious, concurrent users is different from how the forums measure the number of people online. 100 concurrent users is the same as having 100 users trying to access the site at the exact same time. If you had 400 concurrent users on the site, the number of people browsing it at that moment is likely to hover in the thousands.

Our simulated tests involved using 3-4 systems running load simulation software - those machines were hitting something like 50+ different pages on the site.
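The thread doesn't say which load simulation software was used, but the general idea can be sketched in a few lines: a pool of worker threads repeatedly fetching a list of test pages and timing the responses. The URLs and concurrency figures below are placeholders, not the pages or numbers from the actual tests:

```python
"""Bare-bones concurrent load test sketch (not the tool actually used here).

URLS and CONCURRENCY are placeholders - point them at your own test server.
"""
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URLS = [
    "http://test-server.example/forums/index.php",          # hypothetical pages
    "http://test-server.example/forums/viewtopic.php?t=1",
]
CONCURRENCY = 100          # simulated concurrent users
REQUESTS_PER_WORKER = 20

def worker(worker_id):
    """Fetch pages in a loop and record how long each request took."""
    timings = []
    for i in range(REQUESTS_PER_WORKER):
        url = URLS[i % len(URLS)]
        start = time.time()
        try:
            with urllib.request.urlopen(url, timeout=60) as resp:
                resp.read()
            timings.append(time.time() - start)
        except OSError:
            timings.append(None)   # record failures as None
    return timings

with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    results = list(pool.map(worker, range(CONCURRENCY)))

ok = [t for ts in results for t in ts if t is not None]
failed = sum(1 for ts in results for t in ts if t is None)
print(f"{len(ok)} requests completed, {failed} failed")
if ok:
    print(f"average {sum(ok) / len(ok):.2f}s, worst {max(ok):.2f}s")
```

Running several copies of a script like this from a few client machines would approximate the 3-4 load-generation boxes mentioned above.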

Right now as I write this, the Webmux load balancer says we are using around 1000 connections spread across the farms. During peak load I've seen as many as 1500-2000 on this measure, but I don't know how the balancer actually "counts" these connections.

Right now, I almost always see forum load times under 0.4 seconds, with many of the larger pages loading in about 0.5-0.74 seconds. On the other hand, those same pages have been seen loading in less than 0.15 seconds - it really depends on the time of day and how lucky you are.

So far, the current cluster has never been stressed for any lengthy period of time, except when one of the servers is taken offline. With the help of the load balancer we are experiencing a level of availability that is quite astonishing compared to our previous setup.
Nozler May 23, 02
How did you simulate load on the test beds?
IRoBoI May 24, 02
How many application servers will be needed to tax the db server?

What is the ratio of the write/query time for the db server?