Worth picking these up for BOINC?

CaptainUnlikely

[H]ard|DCer of the Month - May 2014
I have a line on a decent number of rackmount servers, which will have dual L5320 CPUs and 2GB of FBDIMM RAM each. I can pick them up for £34 each. Are they worth it, and worth running? I don't know how much power they will consume at full load, which is, I suppose, the main concern for sustained running. Also, how much RAM would be optimal for 8 cores? I'd be running a mixture of projects so would like them to be suitable for pretty much anything, with the exception of GPU projects.
 
Most projects are less than 1GB per work unit. However, there are some projects that have gone up to 6 or 8GB per work unit. That's a rare sight though, and they even mention it in their forums; I think Neurona was one of them. So, for the most part, 1GB/thread should be more than enough on the minimum side.

As far as power, look here: http://www.cpubenchmark.net/cpu.php?cpu=Intel+Xeon+L5320+@+1.86GHz
 
Repost. The forums are evidently having issues this AM.
 
To bring them up to 8GB each adds another £20 or so to the price of each machine, unless I find a better price for memory (£3.20/GB is the best so far).
I know the CPUs have a TDP of 50W each, so 100W for the CPUs, but I know FBDIMMs are fairly power hungry, plus power for the board and drives. Do you think 200W would be a reasonable estimate?
 
I'm not sure of the power draw on the memory. So, perhaps someone else with more experience with them can chime in.

I think that price on the memory would probably be worth the investment. Do you know what type of hard drive you will be using? A solid state drive obviously would speed things up, but I don't know what your budget looks like. You could also shave some RAM off the budget if you had one, because the caching would be much faster. Some people are worried about SSDs wearing out too quickly, but they actually tend to outlast the normal life cycle of the PC even when crunching.
 
Well, they would come with 2x SAS drives each. I do have a couple of older SSDs I could throw in a few machines and upgrade the others as and when budget allows. I'm entirely with you that SSDs won't wear out from normal workloads until well after the PC they're in is obsolete, and I include crunching in normal workloads because it's not that disk-intensive.
I prefer to throw in SSDs anywhere I can; I sure wish I could source a few more of the ones I have! They're old now, 32GB SLC Samsungs, but they do 100MB/s read/100MB/s write with decent random performance, plus they scale amazingly in RAID.

The memory is my concern now as I'm reading anywhere from 7 to 15W per DIMM! So 8 DIMMs could potentially, depending on which source I believe, draw an extra 120W, which is madness.
 
But then again, that would only be when using the full amount, right? Most projects use less than that, so that should bring it down a bit. I have also noticed that my systems don't hit their maximum rated power draw when crunching at full speed, so I think you would probably draw less as well. But then again, I'm not using server grade hardware.
 
I think as far as memory goes, it's a constant draw, at least with FB-DIMMs. From some reading earlier, this is mainly due to the AMB (Advanced Memory Buffer), which doesn't really power down even when the DIMM isn't in use. FB-DIMMs and socket 771 in general are new to me; I've had a dual socket F system before, but that used registered ECC, not fully buffered RAM. I think if I could find 2GB DIMMs at a reasonable price I'd be happy, as that would halve the number of them in use and thus drawing power, but unfortunately they're quite a lot more expensive than 1GB DIMMs.
 
Then I would start out with 4GB and slap an SSD in it. That way, if your system starts using a lot of virtual memory, it is much faster than a standard hard drive. You could also use an old thumb drive for ReadyBoost, but some people say they don't notice a difference. I think it couldn't hurt, so if you have one laying around you might as well plug it in. By going with 4x 1GB sticks, you would still get dual-channel speed out of them.
 
Sounds like a decent plan. Being as I will end up with a few of these servers, I can run one at first and try it with different combinations of RAM, to see how much it needs to stay happy, before I go ahead and buy more memory. Like you say, the SSD should stop it completely tanking if it hits virtual memory, and maybe a fast thumbdrive for caching is worth a try, to see if it does make any difference. I can also measure power draw and see how extra DIMMs affect it, then I can decide whether it's worth the extra £ for 2GB or even 4GB modules to reduce the power draw.
Just need to wait for payday now...oh, and convince the girlfriend that buying a truck load of old junk, I mean servers, is actually a great idea! :D
 
Keep in mind that servers like that can be extremely noisy too. Some sound like jets taking off. So, plan ahead for proper placement and cooling. Some will even replace the fans inside with larger ones to increase air flow, reduce power draw, and reduce noise.
 
I'm planning on racking these up elsewhere, I have no room at my place even if they weren't going to be noisy as hell. It'll be a bit of an ongoing project to get them all online and situated, but I anticipate having them all up and crunching by September at the latest.
 
I'm currently scrapping out a ton of old desktops and recycling them. Some of them are going into a yard sale in about a month. They all have BOINC on them right now, but will definitely be shut down by summer. They have helped heat my home over the winter. Since they are all single core (some hyper-threaded), it won't be any real loss to my production. And maybe the buyers will leave the software on them and they'll become borged systems. (I always get permission first)

Currently have 5 desktops and 2 laptops that will go into the yard sale. The two laptops are actually P3s that were just given to me. (One is 1GHz and the other is 1.2GHz)
 
Makes sense to scale back for the summer, I always seem to time my upgrades badly...e.g. had a 2P hex low voltage Opteron in the winter, upgraded to a 4.2GHz i7 with GTX470 for the summer. Now, I'm going from a dual core Celeron and 5770 to an SR-2, just in time for summer.
I do need to have a think about where to put these servers though; I just realised the two places that immediately sprang to mind are lacking something rather important - internet connectivity :rolleyes: yay for thinking ahead. I do have other options though, so we will see. If I can't pull it off, I'll have something in place for when the next deal comes along - it's an IT recycling company, so they'll have another load of servers before too long if I can't grab this set.
 
Yeah the internet thing is kinda crucial. Is it lack of wireless availability or what? If you could get wireless going, you could always hook a cheap router up and use internet connection sharing to the other blades. Not ideal, but technically doable.
 
Nah, it's a complete lack of connectivity. I was going to rack them up in a family member's garage - lots of room, always nice and cool, and easy to ventilate if it does get warm - but they don't have internet at all, and nor does the other family member who was my backup. I was focusing more on having physical space for them, and forgot that not everyone has internet :)
My aunt just moved into a new house though and I think she will have space, so I'll make some inquiries. I'm hoping to be able to put them on a wired connection for reliability and hook them up to a switch, but wifi and connection sharing would be doable in a pinch.
 
Yeah...I do ICS to reach my systems upstairs. I don't want to run the wire right now. Maybe in a year or two, when I convert a bedroom into a bathroom, I will run the wire, since I will have to run some electrical anyway. So, for now I have 3 wireless networks on my property, two of which are daisy-chained using ICS. Those systems just crunch, so it's not like I have a lot of bandwidth to worry about doing it that way. I have even considered converting those P3 laptops I was just given into wifi extenders. Tell BOINC to run at like 60% and leave the rest of the processing for routing internet.
 
How much data per month would you expect to consume for 11 systems running BOINC?
I know it will depend on the projects and how much work is done. I'm not at home so I don't have access to any of my systems to see how much data they're using at the minute to roughly extrapolate it, but if anyone has a rough guess that would be great.
The reason I ask, is maybe I could rack these up in one of the places I first thought of, and use mobile tethering for data, but obviously it will depend how much data I'll need.
 
A dual L5320 box probably does not draw in excess of 150-200W, depending on disks etc...so roughly 1650-2200W for all 11.
 
Good, I was hoping for a max of 200W or so per system, and I can swap out the (presumably power-hungry) SAS disks for SSDs or cheaper SATA drives to keep power usage down. Thanks for the input :)

Any idea on data usage?
 
OK, so, I did some numbers and the cost to run these won't be prohibitive, and I have a potential place to put them. It means either convincing my uncle to get internet though, or buying a 3G dongle and router, and then it's £15/month for 10GB of data, which again isn't too bad. 10GB should be enough, right?
Hoping to get moving on these within the next week or two.
 
10GB should be plenty unless you are going to remote in and leave the connection up the whole time...lol
 
Cool.
I was thinking I may go with Linux on these rather than Windows now, and just go with shell access (I think that's the right term? Remote command line?) so there's no need for graphical remote access. 3GB is a little cheaper, but I thought that might be cutting it close; not really sure.
 
I would start with 3GB and then upgrade as needed. If all they are is set-and-forget systems, then the only time you need to log in is when changes are needed. Rarely should you need to upgrade anything for WCG. I would go with one of the newer clients though, since WCG is looking to support one from the later 7.x.x series. Supposedly they will block app_info.xml in favor of app_config.xml, so a later client would be the better choice. Use an account manager like BAM! and maybe even use BoincTasks to do all of your project changes and monitoring. That may reduce some of the overhead.
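For reference, an app_config.xml is just a small file dropped into the project's folder under the BOINC data directory. A minimal one looks roughly like this - the app name below is only a placeholder, you would use whatever short name WCG actually gives the science app you want to limit:

   <app_config>
      <app>
         <name>gfam</name>                     <!-- placeholder: the app's short name as the project defines it -->
         <max_concurrent>4</max_concurrent>    <!-- run at most 4 tasks of this app at once -->
      </app>
   </app_config>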
 
I was planning on using BAM or something of the sort to avoid having to tinker too much. The idea is that I set them all up, individually, at home - so all updates, drivers, and BOINC software are installed and ready to rock - then rack them up elsewhere. I'll take a monitor, keyboard and mouse with me, which can stay there in case I need physical access at some point, but in theory all I'll then need to do is set up the modem/router, and all management can be done via remote command line or BAM.
I should in theory be able to update the client remotely if needed, though I'll try and start with a newer client, thanks for the tip. With BAM I'll also be able to detach from or attach to projects easily, so I can move resources from project to project as needed.
If you think 3GB will be enough I'll go with that to begin with, I suppose to be fair there won't be all that much data being transferred. I'll have a look around and see what my options are, if it's barely any more to go with say 5 or 10GB I may as well do that just in case I expand or want to do something else with it.
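For the command-line side, I'm assuming boinccmd will do most of what I need - something along these lines is what I have in mind (the WCG URL is just an example of a project I'd be attached to):

   boinccmd --get_state                                               # overview of attached projects and tasks
   boinccmd --get_tasks                                               # see what's crunching right now
   boinccmd --project http://www.worldcommunitygrid.org/ update       # force a scheduler contact
   boinccmd --project http://www.worldcommunitygrid.org/ nomorework   # finish cached work, fetch no more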
 
Set up the projects from home before moving too...that will be a lot less data transfer for the initial start. Just in case the new location has occasional temperature control issues: if you decide to run Windows later, eFmer's TThrottle program is genius for throttling the CPU when it gets to certain temps.
 
That's what I was thinking, too - would it be a good idea to set them up with a few projects that I may want them to crunch in the future, and then just suspend them? That way they should have the binaries and whatnot, obviously if they download any tasks I'll let them crunch through those before setting no new tasks and suspending the projects.
Temperature hopefully shouldn't be a problem; it'll be in a location that stays quite cool even through summer, and due to the cost of electricity I'll only be running them overnight when the cheaper electric rates cut in. I had to make this change to my original plan of running them 24/7 because, at the average cost per kWh and assuming around 2200W for all of them, I'd be paying around £8 a day, which is too much for me. Running them overnight (well, for 7 hours) brings that down to £1 per day. Cutting the cost by over 7/8 while only cutting the runtime by roughly 70% is a good trade to me and makes it much more cost effective. Anyway, the temperature there should be fine; if not, I can add some extract fans or ducting behind the machines. I'll see how it goes.
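Rough numbers, for anyone who wants to check my maths - the per-kWh rates here are just what falls out of the £ figures, not exact tariffs:

   24 h/day:  2.2 kW x 24 h ≈ 53 kWh/day   -> ≈ £8/day (implies roughly 15p/kWh)
   7 h/night: 2.2 kW x 7 h  ≈ 15.4 kWh/day -> ≈ £1/day (implies roughly 6.5p/kWh off-peak)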
 
It would be a really good idea to add a few backup projects. You don't have to suspend them either; just change the resource share to 0 so that they only pick up work when the other project doesn't have any. It is rare for WCG, but it does occur from time to time, mostly when their servers go down for maintenance. However, if you don't run the backup projects for a long time, apps may change, requiring a few new downloads, but that shouldn't be too bad. You may even be better served finding a balance between long-running work units and report deadlines. Since you are only running ~7 hours a day, that is only ~49 total run-time hours a week per core/thread. With your systems, that should be no trouble at all for meeting deadlines, but if you add other projects, that may change. If the deadlines are too short, you will be doing a lot more data transfer over your internet. The other thing is that you can tell BOINC to only connect at certain intervals; this should cut down on unnecessary communications, because even if BOINC is not active for processing, I don't know if that stops network activity too. So, look in your network usage tab and make sure to change those times accordingly.
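If you do end up on Linux without the Manager, those same settings can live in a global_prefs_override.xml in the BOINC data directory. Something roughly like this - the hours are just an example for a 7-hour overnight window, so plug in whatever your cheap-rate times actually are:

   <global_preferences>
      <start_hour>23</start_hour>           <!-- compute from 23:00... -->
      <end_hour>6</end_hour>                <!-- ...to 06:00 -->
      <net_start_hour>23</net_start_hour>   <!-- only use the network in the same window -->
      <net_end_hour>6</net_end_hour>
      <work_buf_min_days>0.1</work_buf_min_days>                 <!-- keep the work cache small... -->
      <work_buf_additional_days>0.25</work_buf_additional_days>  <!-- ...so transfers stay light -->
   </global_preferences>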
 
Ah, I've learnt something new then. Though, does setting the resource share to 0 stop it picking up any work, too? I wouldn't want the deadlines to expire on backup projects, but if setting it to 0 means it doesn't pick up work then that's perfect. I know there may be a few changes by the time they come to crunch any backup projects, but it seems like a valid plan then.
I won't throw on anything with super short deadlines to avoid missing them. I know 7 hours a day may not work for some projects, but it's just a case of striking a balance between work done and the cost of running the machines. The reason for running them 7 hours is that's the period of cheap electricity during the night, and as said above it's a fairly hefty difference and much more cost effective. During the rest of the time they'll be in sleep or hibernation to save power, so they won't be doing any data transfer outside those 7 hours. I may get one of those smart plugs on a timer: have the machines shut down at a set time, then the plug cuts the power, and have the BIOS set to power on when power is detected. That way, after they shut down the power goes off, then at another set time the power comes back on, which kicks them all back into life.
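Assuming I do go with Linux, the shutdown half of that is just a cron entry - the time below is a placeholder until I know exactly when the cheap rate ends:

   # /etc/crontab entry - halt the box just before the timer plug cuts the power
   0 6 * * *   root   /sbin/shutdown -h now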
I sure wish power wasn't so expensive, it'd make things a lot simpler!
 
Changing the resource share to 0 just means it won't ask for work unless the other projects aren't getting any. Then it will download enough to keep the processors busy until there is work at the primary projects. So, for example, if you only wanted to focus on malaria cures, you could select GO Fight Against Malaria at WCG and set your preferences there to not send work from any other project. Then you could put a 0 resource share at Malaria Control and a 0 resource share at Fight Malaria. The only time you would get work at the other two is if WCG didn't have any GFAM work available or you couldn't connect to their servers.

Now, not to create a misunderstanding... the work from the 0 resource share projects will still finish like any other WUs. It just won't ask for more unless the primary project(s) have no work.
 