CV's questions for SETI WOW 2017

ChristianVirtual

Q1: Seems the deadline is much longer; I now have (for WUs assigned on July 8th) a deadline of Aug 30 ...
do we need to start bunkering now for the WOW event ?
 
Q2: the "group" assignment is not too relevant for team success ? Just choose the one matching my birthday ?
 
Well, you can download a maximum of 100 tasks per device. So if you have 1 GPU, 1 Intel iGPU and 1 CPU, you can run a max of 3 x 100 tasks = 300. The CPU will take much longer, though. If anyone knows how to easily trick the system, I'm all ears. I think it is an advantage to run now if you are concerned about wingman validation, but you need to disable the network or do IP blocking. Just my one cent.
 
found a nice script in a thread over at http://www.overclock.net/t/1628924/guide-setting-up-multiple-boinc-instances/70#post_26211964

While it seems to run fine for CPU projects (running SETI-CPU and WCG on the same box; makes no sense, just for testing), it fails to detect the GPUs.

Basically the script runs multiple BOINC instances on different ports in different data folders. The idea would be to start each instance up, get the queue filled, and block the network until the time is right.
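For what it's worth, the core of such a script can be sketched as a small dry-run loop; the base path, port range and instance count below are my assumptions, not the script's actual values:

```shell
#!/bin/sh
# Dry-run sketch: print the launch command for N isolated BOINC instances,
# each with its own data directory and GUI RPC port.
# BASE, PORT0 and N are hypothetical; drop the 'echo' to really start clients.
BASE="${TMPDIR:-/tmp}/boinc_instances_demo"
PORT0=31416
N=3
i=1
while [ "$i" -le "$N" ]; do
    dir="$BASE/data$i"
    port=$((PORT0 + i))
    mkdir -p "$dir"
    # One independent client per data dir, reachable on its own RPC port:
    echo boinc --allow_multiple_clients --gui_rpc_port "$port" --dir "$dir" --daemon
    i=$((i + 1))
done
```

Each instance then gets its queue filled via boinccmd against its own port, and the network is cut before it can report.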

But without detecting the hardware it doesn't make much sense.

Any idea how to get around that ?
 
Yeah... we had a similar how-to but it doesn't go into as great detail, nor does it cover Linux. I honestly don't know how to switch GPU selection for each client. I certainly want to load up multiple clients at the same time and have each one trying to load a GPU work unit. Have you tried closing the main client instance and then manually launching the new one? Also, you had mentioned in that OCN thread that you were running BOINC as a service... is this letting you run GPU now?

https://hardforum.com/threads/goofyxgrid-multi-client-setup.1924298/
 
Oh, I forgot about the goofy method...

I just set up one new instance for test purposes and I'm able to download enough tasks for both CPU and GPU using the boinccmd line method as outlined in Gilthanis' thread on goofy. I'll report later if everything works fine.

Computer ID 8298997 is a new client set up using the above method on the original computer 8050233. The only issue is that there are some tasks whose deadline is before the WOW event. See the 190-task list that I've downloaded for that new client. So I think there is no point to run this early.

After the first client is done with its tasks, you can create a new client and download another 200 tasks (100 for CPU and 100 for GPU). You need to disable the network using the option "--set_network_mode never".

I'm testing this to see if there is any limitation.
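For the record, the cut-over to "never" (and back when the event starts) can be sketched like this; the port and password are made-up placeholders, and the commands are only echoed rather than executed:

```shell
# Sketch: build the boinccmd calls that isolate a freshly filled client.
# PORT and PASS are hypothetical examples for one extra instance.
PORT=30001
PASS=example-gui-password
cmd_off="boinccmd --host localhost:$PORT --passwd $PASS --set_network_mode never"
cmd_on="boinccmd --host localhost:$PORT --passwd $PASS --set_network_mode always"
echo "$cmd_off"   # run before the event, to hold finished work back
echo "$cmd_on"    # run once the WOW event has started, to report everything
```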


[attached screenshots: task list and deadlines for the new client]
 
As per the service question: I start BOINC via /etc/init.d/boincclient start (or similar; not sure of the exact spelling right now)

The goofy method is exactly the same as the Linux script from our dear friends over at OCN.
What I will try once more is to make sure the "main instance" started via the service is stopped and not blocking the GPU cards (also not sure if I allowed multiple clients on the main instance); need to check tonight when coming home.
 
Yeah.. it is the same, because mmonnin doesn't give credit to the sources he uses. He just likes people to think he's the go-to source. But heaven help you if you correct him when he is wrong. I will say that he put some time into covering several portions and getting his guide well made for the Pentathlon, though. I personally don't like all the effort necessary to do all of it. Bunkering is an art form. I'm finding the challenges are more about how disciplined you are in tricking the projects than about running the projects.
 
Not sure if this is stupid, but I'm piling up >600 validation-pending WUs for SETI; few sink through.

Is my assumption right that many people are already bunkering like hell and hence indirectly bunkering me too ?

Are you guys already bunkering yourselves, or just not running SETI right now ?
 
I'm not really running it right now. Since more and more people are learning how to use multiple clients and VMs, it wouldn't surprise me if they are already bunkering, as long as the deadline isn't too soon. What deadlines are you seeing?


This event is about the only real time I do run SETI as I don't really believe in their "science".
 
Looks like Brent got you cleared up or at least in the right direction. Sorry, I've not had a lot of time on the net lately with the second kiddo keeping us hopping. :)
 
I have seen deadlines far into September, but with the newer cuda80 tasks the deadlines are shorter; I think right now more like early August, before the event.

I have similar "concerns" about the science behind SETI; not sure if ETs would communicate the way we Neanderthals think they would, because it's just the way we understand. But on the other side it's a pioneer project of DC and as such deserves some attention, at least during that time. And if nothing "healthier" is around.
 
The pioneer portion is why I run this event... lol. However, if you can micromanage to get your coffers full of longer deadlines, by all means begin bunkering. CPU is easiest because you can always do the same thing using VMs. GPU is the trick, but the multi-client strategy should work for it. pututu, are you sure that it is 100 per device as you explained it? Or is that 100 per host? I would check out the thread you linked, but the project is down right now.
 
I assume if you create two different computer IDs (see post #6) on the same physical PC, they will be treated as two different hosts. I would run each host one at a time. Certainly I need to do more testing, but I've been busy with something else. I'll keep you guys updated later when the deadline is close; the last time I checked, it downloaded some tasks which had deadlines well before the event starts.
 
I now have 30 clients on my first Ryzen 1700, filling up each one and hiding them on the SETI page.

Once the SOP is approved I will roll out the script to the second box. The issue with GPU bunkering is still not solved; the second instance doesn't identify the GPUs at all, even if the first instance is down. Still looking ...


Potentially stupid question, and I guess I know the answer:
Q: is it better to let each client run individually, or all clients at the same time?
A: as task switching is "expensive", it's more efficient to run the clients one after the other (or in smaller groups, if there is no time during the day to switch clients)

Right ?
 
... or better this way ?

Fill the queue using the maximum number of cores, then switch each client down to 1/n-th of the cores/threads for CPU load and let them all run?
 
2nd Ryzen box prepared with 16 parallel clients, with due dates far into September ...

and some WUs due by Aug 9; those I will crunch and let through, in the hope they get validated later. Else it helps FormularBoinc.

So, now I still need to find a way for GPU bunkering. The CPU workflow is established. Still quite work-intensive, especially with the need to reduce the CPU usage to 6.25% for each instance. All for the team :D
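One way to get that 6.25% (1 of 16 threads on the Ryzen 1700) without clicking through each manager is a per-instance global_prefs_override.xml; the data-dir path below is a made-up example, only the XML tag is standard BOINC:

```shell
# Sketch: drop a global_prefs_override.xml into one instance's data dir,
# limiting that client to 6.25% of the processors (1 thread of 16).
# DIR is a hypothetical example path; use the real instance data dir.
DIR="${TMPDIR:-/tmp}/boincdata_demo"
mkdir -p "$DIR"
cat > "$DIR/global_prefs_override.xml" <<'EOF'
<global_preferences>
   <max_ncpus_pct>6.25</max_ncpus_pct>
</global_preferences>
EOF
# A running client picks it up via:  boinccmd --read_global_prefs_override
```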
 
Run one client at a time, full bore. When the work is complete, switch to another. Make sure you block BOINC transfers via the hosts file for best results. You should be able to use the GPU with other clients; tell the BOINC Manager to connect to the other client rather than using the command line. See if the GPUs work then.
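The hosts-file blocking can be sketched like this; I'm writing to a scratch copy here rather than /etc/hosts, and the hostname is an assumption (check client_state.xml for the actual scheduler and upload hosts the client contacts):

```shell
# Sketch: null-route the project hosts so finished work cannot be reported.
# For real use, edit /etc/hosts as root; this writes a scratch copy instead.
HOSTS="${TMPDIR:-/tmp}/hosts_demo"
: > "$HOSTS"
echo '127.0.0.1 setiathome.berkeley.edu' >> "$HOSTS"
# Add the scheduler/upload hosts found in client_state.xml the same way,
# and delete these lines again when it is time to report.
```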
 
I still get quite a number of WUs with deadlines before the event; can I dump them without consequences? I thought I remembered that the server scheduler might bench you for a bit when the return rate is not within a certain range?
 
I'm not sure if SETI does it but most likely they do. However, returning a couple quickly fixes that. I don't have an easy solution for you on that.
 
No problem; I just won't overdo the "abort" and will return some WUs ahead of time ... trying to play a bit fair in the overall "unfair" bunker game ... man, I like QRB ;-)
 
Finally I also have a way to bunker for GPU ... but it's very crude ...

While for CPU bunkering it's enough to multiply the number of data folders within one BOINC installation folder (e.g. in /home/boinc/ ...) as outlined in the script referred to earlier, for GPU the way is really to duplicate the installation itself.

I use the installation from source code, and then do it like this:


adduser boinc3
su - boinc3
cp -rP /opt/boinc_source/boinc_source .
cd ~/boinc_source
./_autosetup -f
./configure --disable-manager --disable-server --enable-client --prefix=$HOME
make
make install
cd ..

echo '<cc_config>' > cc_config.xml
echo '<options>' >> cc_config.xml
echo '<allow_remote_gui_rpc>1</allow_remote_gui_rpc>' >> cc_config.xml
echo '<ncpus>0</ncpus>' >> cc_config.xml
echo '</options>' >> cc_config.xml
echo '</cc_config>' >> cc_config.xml


echo '<optional WAN IP>' >> remote_hosts.cfg
echo '<gui-password>' > gui_rpc_auth.cfg

~/bin/boinc --allow_multiple_clients --allow_remote_gui_rpc --daemon --suppress_net_info --gui_rpc_port 30003


boinccmd --host <ip-address>:30003 --passwd <gui-password> --project_attach http://setiathome.berkeley.edu/ <weak-account-key>


The "trick" was eventually to use --prefix and have all binaries also installed in the home folder of that boinc user ... especially if you do it while other instances are already running on the same box.

Then, once one instance is full, the normal isolation via network suppression takes place. And once all WUs are crunched, start the next instance.

As I said: very crude, but a good start; sorry, Linux only (though in principle it should work under Windows too if you have the compiler experience and tools; Visual Studio Community edition is free)


Oh, and on the Mac I use the following command to connect remotely to my instances from the shell

/Applications/BOINCManager.app/Contents/MacOS/BOINCManager -n <ip-address> -m -p <gui-password> -g 30003
 
Update: the creation of dedicated users, each with its own installation, can eventually be avoided when the full path to boinc is given; then GPU detection also works for bunkering !


sudo -u boinc /usr/local/bin/boinc --allow_multiple_clients --allow_remote_gui_rpc --gui_rpc_port 30002 --daemon --dir /home/boinc/zdatadirs/boincdata2 --suppress_net_info >>/home/boinc/zdatadirs/boincdata2/boinc_client.log 2>>/home/boinc/zdatadirs/boincdata2/boinc_client_err.log
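Scaled up, the same launch line can be generated for a whole row of data dirs; the directory range and base path below are my assumptions, and the commands are only echoed rather than executed:

```shell
# Sketch: print one launch command per bunker instance (boincdata2..4),
# reusing the single /usr/local/bin/boinc install from the post above.
# BASE is a hypothetical scratch path; drop the 'echo' to really start them.
BIN=/usr/local/bin/boinc
BASE="${TMPDIR:-/tmp}/zdatadirs_demo"
for n in 2 3 4; do
    d="$BASE/boincdata$n"
    mkdir -p "$d"
    echo sudo -u boinc "$BIN" --allow_multiple_clients --allow_remote_gui_rpc \
         --gui_rpc_port "$((30000 + n))" --daemon --dir "$d" --suppress_net_info
done
```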


With the help of some helpful SETIs: https://setiathome.berkeley.edu/forum_thread.php?id=81735
 