Hardware Recommendation for 100 Seat Setup. Part-1

Hardware Recommendation for 100 Seat Setup. Part-1

Postby mzulqarnain » Wed Dec 24, 2008 12:14 pm

Hi All!

I have done some small VICI installations of up to 20 agents on a single server without any problem. Now I have been assigned the task of building a 100-seat VICI setup. I have read Matt Florell's Astricon presentation and the forum threads on large-scale installations, which I found to be good references.

I am planning to use the following hardware, with full 100% recording of all calls (outbound and inbound). It will mainly be used for outbound calling, but we are also thinking of using a few of the 100 seats for inbound. I need expert opinion before we put it into production.

3 x VICI servers for load balancing, INITIALLY 35 AGENTS ON EACH server, with a plan to expand to 50 agents per server in a future upgrade, with the following Dell-brand specifications:

1) The dialer specifications are a quad-core Xeon 2.5+GHz CPU, 4 GB of RAM, and a simple software RAID 1 with at least enterprise-grade SATA or 15K SAS drives.

2) The database specifications are a quad-core 2.5+GHz CPU, 8 GB of RAM, and at least a 4-drive hardware RAID 10 with enterprise-grade 15K SAS drives.

3) The web server specifications are a quad-core Xeon 2.5+GHz CPU, 4 GB of RAM, and a simple software RAID 1 with at least enterprise-grade SATA or 15K SAS drives.

4) The file archive/recording server specifications are a dual-core 2.0+GHz CPU, 2 GB of RAM, and hardware RAID 0 with 15K SAS drives, or a dedicated NAS device like the NF500.

5) 10/100 switches for agent connectivity and 100/1000 switches for server connectivity.

6) NO TDM, ALL VOIP; we will be using SIP for call termination.

Please see the second post, as I am having problems posting it all in one go....
mzulqarnain
 
Posts: 8
Joined: Fri Dec 19, 2008 1:47 am

Hardware Recommendation for 100 Seat Setup Part-2

Postby mzulqarnain » Wed Dec 24, 2008 12:17 pm

Continued from the first post....

I want clarification on the following:
- Will one web server be enough to handle requests from 100 agents even if the server is powerful, or should I add another server for load balancing? I have read that another server is recommended for more than 70 agents.

- From user experience on this forum it seems that four quad-core 2.5GHz servers are more than enough for 100 agents (probably a waste of resources and money). Can we go with 2 servers and a 50-agent load on EACH instead of 3 servers with 35 agents each? Please suggest...

- Regarding the file archive server, can we use any low-end entry-level server with 1 GB RAM and a larger-capacity standard SATA hard drive? We will be recording calls in native GSM and then storing them as MP3s on the recording archive server.

- Recently I came to learn that Dell servers have shown strange behavior with VICI for no apparent reason. If that is true, which HP or IBM models have a proven record with VICI?

I would appreciate any opinions, especially regarding which RAID level to use, software or hardware RAID, or no RAID at all given the performance trade-offs of each RAID level.

All Valuable Suggestions are welcome!

Thanks
mzulqarnain
 
Posts: 8
Joined: Fri Dec 19, 2008 1:47 am

Postby pylinuxian » Wed Dec 24, 2008 12:52 pm

if you have no problem with your 20-agent install, just put in 4 more servers of the same size & go on with your life ... plus you win on redundancy: if one goes down you still have 80% of your agents working.

for RAID stuff I would say "no RAID" is better; a SCSI disk works better when configured as a single disk (writing redundancy data takes up to 40% of its performance) & you can look up "RAID means reduced performance of discs" on the internet.

RAID is for when you can't have a backup server, or when you have expensive proprietary software.
pylinuxian
 
Posts: 147
Joined: Tue Feb 26, 2008 2:21 pm

Postby mzulqarnain » Wed Dec 24, 2008 2:17 pm

Hi
Thanks for your valuable suggestion. As per single server with 20 agents setup that is ordinary quad core p4 server in tower caring.
For 100 seat setup it would be difficult to manage 6 or 7 tower system with 24/7 setup including seperate web and database server.
The major concern is that after on site installation all server will be managed remotely thus wanted to have few but reliable server with raid setup.
I need suggestions on server brand why to choose HP, super micro or Ibm and why not to go for DELL as suggested by experts.
Also does it matter which os to used ? I am comfortable with cent os even vicidial now is based on it. But i have read that it does work well with debian, ubunto, and opensuse etc. And i should avoid centos.
For 20 seat setup i have used cent os 5.2 and found no problem. May be i haven't notice any at this level.
Thanks
mzulqarnain
 
Posts: 8
Joined: Fri Dec 19, 2008 1:47 am

Postby williamconley » Thu Dec 25, 2008 4:48 pm

pylinuxian wrote:if you have no problem with your 20-agent install, just put in 4 more servers of the same size & go on with your life ... plus you win on redundancy: if one goes down you still have 80% of your agents working.

for RAID stuff I would say "no RAID" is better; a SCSI disk works better when configured as a single disk (writing redundancy data takes up to 40% of its performance) & you can look up "RAID means reduced performance of discs" on the internet.

RAID is for when you can't have a backup server, or when you have expensive proprietary software.


Most of the time I agree with that statement, but in an enterprise-level "downtime = thousands of dollars lost per minute" scenario, I do not. Redundant power supplies and hardware RAID 5 can keep a server running through what would otherwise be a sudden "down" situation. Power supplies and hard drives ... die. Properly set up, hardware RAID 5 increases drive performance and eliminates downtime entirely. Redundant power supplies are just plain cool.

Being able to keep running with a failed hard drive and/or power supply can be cost-effective in the extreme when you are pretty much guaranteed to lose $5,000 if you are down for more than 30 minutes ... it is just a waiting game if you didn't set up RAID and redundant power supplies (as I said, hard drives die, and so do power supplies). After all, it won't cost $5,000 to set that up, and if the system has at least SOME scheduled downtime, you can modify it later to include these items, but do it before your first drive failure (if you want to avoid a reinstall and data loss).
Vicidial Installation and Repair, plus Hosting and Colocation
Newest Product: Vicidial Agent Only Beep - Beta
http://www.PoundTeam.com # 352-269-0000 # +44(203) 769-2294
williamconley
 
Posts: 20258
Joined: Wed Oct 31, 2007 4:17 pm
Location: Davenport, FL (By Disney!)

Postby mflorell » Fri Dec 26, 2008 8:58 am

You should NOT be using RAID 5 on a database or any other VICIDIAL-related server (with the only possible exception being a recording archive server).

Using RAID 1 does NOT degrade your drive performance. This is mirroring, and it should be fairly neutral as far as performance goes. Using RAID 10 will actually SPEED UP your drive throughput; this is striping plus mirroring.

As for hardware, who is recommending Dell? I would suggest SuperMicro or HP.

For the OS, I would recommend anything BUT CentOS/RedHat/Fedora. Slackware, OpenSuse or Debian/Ubuntu would all be better.

A quad-core CPU with 4GB of RAM should be enough for a 100-agent web server if it is properly configured.
mflorell
Site Admin
 
Posts: 18387
Joined: Wed Jun 07, 2006 2:45 pm
Location: Florida

Postby mzulqarnain » Fri Dec 26, 2008 9:20 am

Thanks, everyone, for the suggestions putting me on the right track.

What I understood is that for the VICIDIAL and web servers I should go for RAID 1, but I still need a recommendation: will software RAID 1 work fine (to reduce hardware cost), or is it necessary to have a hardware RAID card like a PERC?

For MySQL I am definitely going to use RAID 10 with a hardware RAID card.

I would appreciate a SuperMicro or HP model number, in rack or blade form, with a proven record with VICIDIAL. I was thinking of using a Dell PE 2950 because I am not very experienced with HP or Supermicro; I use Dell for my Windows network, so I was thinking of using it for VICIDIAL as well. Please suggest an HP model with a quad-core CPU.

For the OS, may I know whether there is any specific reason not to use CentOS/Fedora/RedHat, given that VICIDIALNOW is based on CentOS?

Thanks
Regards,
Zulqarnain
mzulqarnain
 
Posts: 8
Joined: Fri Dec 19, 2008 1:47 am

Postby mflorell » Fri Dec 26, 2008 10:41 am

Software RAID should be fine for the web server.
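For anyone taking the software-RAID route on the web server, a Linux mirror is typically built with mdadm; a minimal sketch, assuming two spare disks at /dev/sdb and /dev/sdc (the device names are hypothetical, and the create command destroys any data on them):

```shell
# Create a two-disk software RAID 1 mirror for the web server.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# Watch the initial sync and the ongoing health of the mirror:
cat /proc/mdstat
```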

As for model numbers for Supermicro/HP, just about anything will work. We have used many different models of each, and they have all performed well with a lower defect rate than Dell.

As for the RedHat family (CentOS/Fedora/RedHat): yes, VicidialNow is based on CentOS, but the developers of VicidialNow have made many changes to the stock release, including recompiling the kernel to make it function better, so it is not just a stock CentOS install. I recommend not using RedHat-family OSs because of the kernel issues, the custom utilities, the issues with default process prioritization, and the poor history of maintaining things like the Perl package (one serious incident as recently as 3 months ago). The other distros do not have these issues, and since VICIDIAL usually pushes servers and the OS harder than most other applications, you will see faults that would normally not affect your system under other applications.

We have a lot of experience troubleshooting high-volume VICIDIAL systems all over the world, and many problems on these systems are solved by wiping the servers and loading something other than RedHat-family Linux on them.
mflorell
Site Admin
 
Posts: 18387
Joined: Wed Jun 07, 2006 2:45 pm
Location: Florida

Postby mzulqarnain » Fri Dec 26, 2008 11:11 am

Thanks Matt!

Can recompiling the kernel on a minimal CentOS install (without any GUI), per the scratch-install instructions ("Linux kernel 2.6.17 *RECOMMENDED*"), overcome the problems you discussed in your last reply? Otherwise I will definitely go for Slackware or Debian, etc.

- Is there any performance benefit to a 64-bit OS install with asterisk/vicidial compared with the default 32-bit installation?

- Regarding dialing load, can I put 50 agents each on 2 servers with a quad-core Xeon CPU, 4GB RAM, ztdummy, and full recording (web and DB on separate servers) at a dial ratio of 1:3?

- Please suggest whether an X100P or another card is required as a hardware timing interface for asterisk/vicidial, or whether ztdummy can do the job. I have read some user posts about IRQ-sharing issues with the X100P card, which they later removed from their systems.

- I have planned to put a 100-agent load on 3 servers, but it would be good to know whether 2 servers could carry it if the 3rd server went offline for any reason.

With every post and reply I am gaining more knowledge and confidence to complete the 100-seat setup.

I am thankful to all of you for your time and suggestions, especially Matt.

Thanks
Regards,
Muhammad Zulqarnain
mzulqarnain
 
Posts: 8
Joined: Fri Dec 19, 2008 1:47 am

Postby mflorell » Fri Dec 26, 2008 1:42 pm

A kernel.org kernel does help on a CentOS install, but we recommend just avoiding the RedHat family altogether.

There are some performance gains and losses from using 64-bit, depending on the application and the hardware. If you have a lot of RAM (8GB or more) on the DB server, then you would benefit from 64-bit.

Will you be doing any audio recording with this setup?

What kind of trunks will you be using?
mflorell
Site Admin
 
Posts: 18387
Joined: Wed Jun 07, 2006 2:45 pm
Location: Florida

Postby williamconley » Fri Dec 26, 2008 5:53 pm

mflorell wrote:You should NOT be using RAID 5 on a database or any other VICIDIAL-related server (with the only possible exception being a recording archive server).

Using RAID 1 does NOT degrade your drive performance. This is mirroring, and it should be fairly neutral as far as performance goes.


RAID 1 (mirroring) requires that everything written to one drive also be written to the other. It is not entirely neutral: requiring both drives to write every piece of information slows the system down.

HARDWARE RAID 5 allows the RAID controller to decide which device is available for writing (if a previous write has not yet completed, it can shunt the next write to another drive). Properly installed, it will give performance superior to RAID 10 (also properly installed; this is just pure math), especially if you use more than the 3-drive minimum for RAID 5, and especially for a system requiring zero downtime for financial reasons (like losing thousands of dollars per hour in lost sales).

That improvement is offset in a RAID 10, which requires mirroring and WOULD actually be approximately neutral if you do not have enough drives in the striping portion of the array to allow a significant speed increase (because you are using them for mirroring).

RAID 1 is excellent for keeping a live duplicate of the system, but with an imperfect SQL database engine (the free version of MySQL, for instance, vs. MSSQL, which is fairly solid and almost never corrupts its data) a problem can arise when the corrupted database is copied to the mirror, completely negating any benefit of the live backup drive and requiring an actual data restore.
Vicidial Installation and Repair, plus Hosting and Colocation
Newest Product: Vicidial Agent Only Beep - Beta
http://www.PoundTeam.com # 352-269-0000 # +44(203) 769-2294
williamconley
 
Posts: 20258
Joined: Wed Oct 31, 2007 4:17 pm
Location: Davenport, FL (By Disney!)

Postby mflorell » Fri Dec 26, 2008 6:51 pm

RAID 1 using a hardware RAID controller will see NO CHANGE in performance at all compared with a single drive. Software RAID 1 will see a very slight decrease in drive write performance, but the decrease is much larger if you are using IDE drives on the same channel.

RAID 5 is very much geared toward very heavy read access:
http://searchstorage.techtarget.com/new ... 68,00.html
"In software implementations of RAID-5, which are fairly common, performance will often become unacceptably slow if writes make up any more than about 15% of disk activity."

And if you still think you want to use a RAID5, take a look at this:
http://www.miracleas.com/BAARF/

Further reading on RAID:
http://en.wikipedia.org/wiki/Standard_RAID_levels
http://en.wikipedia.org/wiki/Nested_RAID_levels
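As a back-of-envelope companion to those links, the usable-capacity trade-offs of the levels being debated can be sketched in a few lines of shell (the drive counts and sizes are arbitrary examples, not a recommendation):

```shell
#!/bin/sh
# Usable capacity in GB for common RAID levels, given n identical drives.
raid_usable() {
  level=$1; n=$2; size_gb=$3
  case $level in
    0)  echo $(( n * size_gb )) ;;        # striping: no redundancy
    1)  echo "$size_gb" ;;                # mirroring: one drive's worth
    5)  echo $(( (n - 1) * size_gb )) ;;  # one drive's worth lost to parity
    10) echo $(( n / 2 * size_gb )) ;;    # striped mirrors: half the total
  esac
}

raid_usable 10 4 300   # 4x300GB RAID 10 -> 600
raid_usable 5  4 300   # 4x300GB RAID 5  -> 900
raid_usable 1  2 300   # 2x300GB RAID 1  -> 300
```

RAID 5 yields more usable space from the same drives, which is part of its appeal; the threads above argue that its write penalty, not its capacity, is the problem.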

As for your imperfect-SQL-engine issue ("a problem can arise when the corrupted database is then copied to the mirror"), I'm not really sure what you're getting at in relation to RAID 1. I've had clients with MSSQL servers that had corrupted data, just like on Oracle, PostgreSQL, and MySQL. No DB engine is perfect, even if you spend a lot of money on it.
mflorell
Site Admin
 
Posts: 18387
Joined: Wed Jun 07, 2006 2:45 pm
Location: Florida

Postby williamconley » Fri Dec 26, 2008 9:17 pm

mflorell wrote:"In software implementations of RAID-5, which are fairly common, performance will often become unacceptably slow if writes make up any more than about 15% of disk activity."


This would be why I said "HARDWARE" RAID in all caps. Because YES, software RAID is commonplace, and kinda sucks. But in my experience HARDWARE RAID is essential for enterprise-level installations that have zero tolerance for downtime.

Explaining to the CEO that it will only be down for an hour is NOT what I consider to be a fun day (when he quotes what it will cost the corporation in real dollars).

Explaining that it will operate slowly for another 20 minutes while the RAID rebuilds the recently replaced drive, that I can live with. My experience has been in the realm of 80% capacity during the rebuild. Add a redundant power supply and you have removed TWO of the usual "down system" generators from your system. Next we have an extra NIC (twin gigabit is actually fairly inexpensive these days); this reduces your "down system" causes to things like memory, motherboard, router, etc., none of which fails very often (well, depending on how cheap your routers are).

The articles you pointed to complained about software RAID and said that MOSTLY read access is improved (more so than write access), but consider what happens when MixMonitor goes to work after a large quantity of recordings have been made. Speeding up the read portion of that won't hurt a bit. And I will take any increase in drive speed I can get that has no impact on the CPU.

It also said that RAID 1 "does not require significant levels of CPU for normal operation or recovery". I'm betting that ANY CPU usage is a bad thing, and HARDWARE RAID is essential to remove any CPU hit. If the RAID is on the motherboard, similar to shared video memory, I'm thinking there's a CPU hit, so ... HARDWARE RAID means "buy a card".

Also, as the article says, everything on disk 1 is duplicated onto its mirror; therefore, as I said before, a corrupt SQL table will also be corrupt on the mirrored disk, so you'll still need to get a tech to do a "restore". That's 20-30 minutes even if he's in the building. So you cut your hard drive usage in half, slowed down your system, and still didn't keep the system "up". Which is why I said RAID 1 is marvelous, but not for a MySQL system.

How many MySQL problems have I seen posted that were resolved by rebuilding tables? ... but a tech has to do this, and there's still the possibility that rebuilding the tables will fail, in which case a FULL backup from last night would be beautiful.

So for a system with MySQL I recommend a nightly full SQL backup rather than the mirror (unless you want to spend the money, and then I still prefer the speed and reliability of HARDWARE RAID 5). If this is a "scratch" MySQL install, it's all in one box ... so MySQL is there.

In the end, the advantage of RAID 1 is ... you may not need a tech (right then), but you will not see a speed advantage, and my personal experience (on several systems) has been that when a drive dies (and yes, I've responded to a lot of dead drives), the system goes DOWN or slows down so far (as the drive fails) that the entire mirrored system seems like it's at 20% speed (because the mirror is trying to write to both drives, but one is dragging it down).

As with those "commonplace software RAID 5s" ... most "mirror" systems are cheap and on the motherboard (not true RAID controllers) and cannot recover from a bad situation easily. Generally a tech must get involved and yank the dead drive. This sort of situation has always translated into a "down system" phone call, and an immediate discussion of RAID 5 to avoid the same thing at the next dead-drive occurrence.

But yes, RAID 5 costs even more money than RAID 1. However, my personal experience with RAID 5 is that you must purchase an actual RAID controller with the ability to detect and remove a dying/dead drive, which immediately puts the system back into operation with whatever drives are left (down from 5 to 4 = not as good as 5, but still faster than a single drive). If the dead drive drops you to 2 ... you now have a mirror. And with all of them (mirror or RAID 5) you have automatic healing when a new drive is introduced, if the system is any good.

Please tell me you didn't read all that.

Someone needs to drive to Florida so we can build parallel systems and really compare this on two machines sitting side by side. I'd love to see a comparison beyond my personal collection of "moments" with senior management. No one wants to spend the money up front, but after they lose the money in a two-hour outage ... suddenly the Operations Department wasn't loud enough about the importance of spending more money to "do it right".

Hard drives die ... if they die when the system is offline, or if you can just "shunt" to another machine, that's fine, but if you can't ... a redundant power supply, spare network interfaces, and RAID 5 can keep your system online until the shift ends.

Please excuse my rant. Should I delete it? :oops:
Vicidial Installation and Repair, plus Hosting and Colocation
Newest Product: Vicidial Agent Only Beep - Beta
http://www.PoundTeam.com # 352-269-0000 # +44(203) 769-2294
williamconley
 
Posts: 20258
Joined: Wed Oct 31, 2007 4:17 pm
Location: Davenport, FL (By Disney!)

Postby mflorell » Sat Dec 27, 2008 12:11 am

I guess I never really consider that there is a "copy" of the data on two separate drives in a RAID 1, because they are mirrored and treated as a single drive and the writes are effectively instantaneous, which is why I didn't really understand what you were getting at with that one. As for DB backups, that's why I wrote the ADMIN_backup.pl script: it does a mysqldump, then gzips the result and FTPs it to another server.
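A minimal shell sketch of that same dump-gzip-ship idea (the database name, output path, and destination host below are placeholders; ADMIN_backup.pl itself does more than this):

```shell
#!/bin/sh
# Nightly MySQL backup sketch: dump, compress, then copy off-box.
STAMP=$(date +%Y%m%d)
OUT="/tmp/asterisk-$STAMP.sql.gz"

# Only attempt the dump where mysqldump is actually installed.
if command -v mysqldump >/dev/null 2>&1; then
  mysqldump --single-transaction asterisk | gzip > "$OUT"
  # Ship it to the archive box, e.g.:
  # scp "$OUT" backup@archive.example.com:/backups/
fi

echo "$OUT"
```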

As for my RAID moments, I have had far more trouble with IDE drives and less with SATA drives (almost no trouble at all with SCSI systems).

One RAID 5 problem I was brought into after the fact was a Windows software RAID 5 hosting an MSSQL-based accounting system. Needless to say it failed and would not rebuild the 2 drives it reported as corrupted, but it kept trying to, and the system was pretty much unusable. The client sent the machine off to one of those very expensive data-recovery companies, who could not recover anything from it. In the end they restored a week-old backup onto a different machine (which was set up with RAID 1), re-created all changes since then, and hand-entered all transactions from the paper records their accountant had always insisted they keep.

My other bad RAID 5 experience was with a 3ware hardware RAID card and IDE drives. When one of the drives failed it took 2.5 days to rebuild, and the server was so horribly unusable that it stopped the operations of an entire department inside the company for 3 days.

My experiences with RAID 1 and RAID 10 have been much better, with faster and more tolerable rebuild times and fewer problems on both hardware and software RAIDs. For example, I always try to use LSI Logic RAID cards, which allow you to define the rebuild priority so that the rebuild uses only 10% (for instance) of the RAID card's resources, which can be extremely useful on a production system.

As for corruption on MySQL, I have only seen two root causes. One is quite rare, but I have seen it twice: an extremely high load on the server during a temp-table build for an ill-timed large SELECT statement, followed by a system freeze, causes corruption. These were inadequate systems that had been recommended for replacement for months, but of course management thought that since things ran "just fine" they shouldn't "waste" the money on a properly sized database server.

The other, much more common corruption problem is caused by power issues. I can't even count how many companies I have seen spend thousands on a nice DB server only to plug it directly into the wall, or use a $30 desktop UPS, and think everything will be fine when they get a spike or surge on their lines. I cannot stress enough that a properly sized UPS with surge and spike leveling is a requirement, not a suggestion, for your high-transaction database server.

As for table or index rebuilds: for one client in the Caribbean I set up an automatic myisamchk script to run on reboot, since the only time the system rebooted was for power failures, which were almost always unplanned emergency outages as far as the DB was concerned. Now, after power is restored, the system rebuilds selected indexes and no tech is needed for the recovery.

Another idea for successful MySQL redundancy is master/slave servers. We have installed master/slave configurations at several large call centers, and of course none of the master servers has failed up to this point, but settings are in place to hot-swap the database pointers to the slave server if the master becomes unreachable. In that case you would have to manually sync the master back to the newer slave content, but you would not experience much downtime, if any, from the failure of one of the database servers.
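For reference, a bare-bones sketch of what the master/slave wiring looks like in my.cnf on MySQL of that era (the server-id values are arbitrary, and the database name assumes the VICIDIAL default `asterisk`):

```ini
# --- master's my.cnf ---
[mysqld]
server-id    = 1
log-bin      = mysql-bin
binlog-do-db = asterisk

# --- slave's my.cnf ---
[mysqld]
server-id    = 2
```

The slave is then pointed at the master with a CHANGE MASTER TO ... statement and started with START SLAVE; the hot-swap of database pointers on failure is handled by settings on the VICIDIAL side, not by MySQL itself.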

Please don't delete your rant :)
mflorell
Site Admin
 
Posts: 18387
Joined: Wed Jun 07, 2006 2:45 pm
Location: Florida

Postby mzulqarnain » Sat Dec 27, 2008 2:24 am

Beautiful discussion on RAID, with advantages and disadvantages. Thanks for sharing your experiences.

Well, we will be doing full recording of all calls. No TDM; it is all VoIP. We will be using SIP for call termination.

- OK, so should we use an X100P as the timing device, or is ztdummy just fine?

- Do we still need to compile an optimized kernel from kernel.org for Debian or another OS release, per the scratch-install instructions, or are the default Debian kernels fine?

Thanks
Muhammad Zulqarnain
mzulqarnain
 
Posts: 8
Joined: Fri Dec 19, 2008 1:47 am

Postby mflorell » Sat Dec 27, 2008 8:39 am

We have seen incompatibility issues with the X100P cards in some newer systems. What happens is that as soon as you load the zaptel module and the X100P module (wcfxo, I think), the system freezes.

Usually we recommend one of the newer cards (yes, they are significantly more expensive) because they are more reliable in newer servers. We are also working on another option for a separate hardware timer, and if all of the testing goes well we will of course announce it on the forums.
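For the all-VoIP case being discussed, the zaptel dummy timer needs no card at all; loading and sanity-checking it looks roughly like this (assuming the zaptel package and its zttest utility are installed):

```shell
# Load the dummy timing module (no TDM hardware present):
modprobe zaptel
modprobe ztdummy

# zttest reports how accurately the timer fires; results close to 100%
# are what you want for MeetMe conferencing.
zttest
```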

As for the optimized kernel, it is always a good idea if you have the time and ability. For Debian the stock kernel should be fine, although you may want to check whether process preemption is set to SERVER (which it should be). Of course, you always want to install your distro with the kernel source and development tools so it is easier to get a new kernel compiled.
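On a stock Debian kernel that preemption setting can be checked from the packaged kernel config (the path varies by kernel version; CONFIG_PREEMPT_NONE=y corresponds to the "no forced preemption (server)" model):

```shell
# Show how the running kernel was built for preemption:
grep 'CONFIG_PREEMPT' "/boot/config-$(uname -r)"
```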
mflorell
Site Admin
 
Posts: 18387
Joined: Wed Jun 07, 2006 2:45 pm
Location: Florida

Postby mzulqarnain » Sat Dec 27, 2008 12:29 pm

How about the Sangoma Wanpipe VoiceTime USB voice sync tool?
http://wiki.sangoma.com/sangoma-wanpipe-voicetime

Has anybody tried it in production as a hardware timing device for meetme/vicidial?

Regarding compatibility of server brands with VICIDIAL, I have found a good alternative to the Dell 2950: the HP ProLiant DL380 G5 series. Should it work fine with VICIDIAL and Asterisk 1.2.27 on Debian?

Anyone is welcome to share their experience of using the HP ProLiant DL380 G5 with VICIDIAL in production, or please point out any known problems identified with this hardware.

Thanks
Regards,
Zulqarnain
mzulqarnain
 
Posts: 8
Joined: Fri Dec 19, 2008 1:47 am

Postby okli » Sat Dec 27, 2008 2:59 pm

mzulqarnain wrote:...Anyone is welcome to share their experience of using the HP ProLiant DL380 G5 with VICIDIAL in production, or please point out any known problems identified with this hardware...

Beware of these issues if you are planning on using the embedded E200/P400 disk controllers:

http://forums13.itrc.hp.com/service/for ... Id=1178606
http://forums13.itrc.hp.com/service/for ... Id=1240003

For both, you will have to enable the "physical drive write cache", or write performance can be horrible. Read the threads for details.
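On those controllers that setting is toggled with HP's hpacucli tool; the exact syntax varies by tool version, and the slot number here is a placeholder, so treat this as an illustration rather than a recipe:

```shell
# Enable the physical drive write cache on the controller in slot 0:
hpacucli controller slot=0 modify drivewritecache=enable

# Verify the cache settings afterwards:
hpacucli controller slot=0 show detail
```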
okli
 
Posts: 671
Joined: Mon Oct 01, 2007 5:09 pm

Postby mflorell » Sat Dec 27, 2008 4:32 pm

We are actually testing the Sangoma VoiceTime modules right now. We have had them in production on a few servers for several weeks and they do a good job. When we finish our tests and Sangoma is ready to start actually selling them, we will announce our results.
mflorell
Site Admin
 
Posts: 18387
Joined: Wed Jun 07, 2006 2:45 pm
Location: Florida

Postby mzulqarnain » Sun Dec 28, 2008 8:59 am

okli wrote:
mzulqarnain wrote:...Anyone is welcome to share their experience of using the HP ProLiant DL380 G5 with VICIDIAL in production, or please point out any known problems identified with this hardware...

Beware of these issues if you are planning on using the embedded E200/P400 disk controllers:

http://forums13.itrc.hp.com/service/for ... Id=1178606
http://forums13.itrc.hp.com/service/for ... Id=1240003

For both, you will have to enable the "physical drive write cache", or write performance can be horrible. Read the threads for details.


Well, this seems to be a very common problem with these controllers, as so many users report almost the same issues. What I understand is that I should either enable the write cache on the physical drives or, if that doesn't work, simply go with software RAID, which should outperform these controllers by default.
What alternate controllers can we use with HP servers other than the E200/P400?

Thanks
Regards,
Zulqarnain
mzulqarnain
 
Posts: 8
Joined: Fri Dec 19, 2008 1:47 am

Postby mflorell » Sun Dec 28, 2008 6:08 pm

I always recommend LSI Logic controllers; we have installed over 100 of them over the years and have never had one fail in production. They offer great performance and a full range of RAID cards.
mflorell
Site Admin
 
Posts: 18387
Joined: Wed Jun 07, 2006 2:45 pm
Location: Florida

