Formula Boinc 2018 Strategy

I've started running my GPUs at a 75% power target and dialed my CPU back to the highest overclock it will hold at just over 1.1 V, and that seems to work really well for me. That's with my 7820X; I'm going to start experimenting with the 2700X next.

The 7820X will do 4.5 GHz at 1.14 V.
 
Some projects by their very nature don't push power targets very high. Milkyway@home wasn't going much over 60% in the logs, so I found tweaking the power target didn't do much for that project. On others it certainly makes a difference, and the PPD loss shouldn't be too bad, but it all depends on the project for sure.
 
Second attempt, lol.

A bit of info on the sprint target TNGrid.

  • It's a bio project, yay.
  • Tasks take about 3 to 3.5 hours on a TR.
  • Max number of tasks per thread is 8.
  • IP for blocking is 193.205.194.62 (I think)
  • Task availability is extremely limited for a sprint. They mostly trickle in, with a bit of a wave roughly every half hour.
  • There's a timeout on updates, so the command I use on Linux is: watch -n 375 boinccmd --project http://gene.disi.unitn.it/test/ update. The client won't make repeat requests when no tasks are received, so a more appropriate interval for a work drought is 270 seconds: watch -n 270 boinccmd --project http://gene.disi.unitn.it/test/ update.
  • Wingman required
  • Memory demand is fine. Requires approx. 50MB per task.
  • Deadline of 4 days
  • Invite code for project sign up is: science@tn
Edit: Amended best guess at work shortage sweet spot for update frequency.
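The two watch one-liners above can also be wrapped in a small script that switches interval depending on whether the last update actually brought work. This is only a sketch under assumptions: boinccmd must be on PATH and talking to a local client, and counting newly received tasks is left as a stub (the GOT_TASKS variable below is hypothetical); the 375 s / 270 s intervals are the ones from the list above.

```shell
#!/bin/sh
# Sketch of the TN-Grid polling loop described above.
# Assumes boinccmd is on PATH and talks to the local BOINC client.
PROJECT_URL="http://gene.disi.unitn.it/test/"

# Pick the sleep interval: 375 s while work is flowing (respects the
# server-side update timeout), 270 s during a work drought.
pick_interval() {
    # $1 = number of tasks received on the last update (caller-supplied)
    if [ "$1" -gt 0 ]; then
        echo 375
    else
        echo 270
    fi
}

# Main loop -- run by hand, Ctrl-C to stop. Detecting received tasks is
# left as a stub; set GOT_TASKS however suits your setup:
# while true; do
#     boinccmd --project "$PROJECT_URL" update
#     sleep "$(pick_interval "${GOT_TASKS:-0}")"
# done
```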
 
How can I make it keep downloading tasks, and also not upload the completed ones?

I'm willing to fire up a handful of my Linux VMs. Just need to figure out the above if I want to bunker today.
 
You can change the upload bandwidth limit to 0.01 and it'll keep downloading tasks until you have too many uploads pending. After that, your only option is to run multiple client instances and block network access on the instance itself.
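For the "block network on the instance itself" part, here's a hedged sketch using boinccmd's network-mode switch. The GUI RPC port (31418) and password are assumptions; match them to however your second instance is configured.

```shell
#!/bin/sh
# Sketch: toggle network access for a second BOINC instance so finished
# tasks sit in the bunker instead of uploading. The port (31418) and
# password below are assumptions -- use your second instance's values.
BOINCCMD="${BOINCCMD:-boinccmd}"   # set BOINCCMD=echo for a dry run

set_net() {
    # $1 = never | always
    $BOINCCMD --host localhost:31418 --passwd mysecret --set_network_mode "$1"
}

# set_net never    # start bunkering: no uploads, no scheduler contact
# set_net always   # drop the bunker: everything uploads and reports
```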
 
That would be helluva easy then. lol

So who's a programmer around here?
 
I'm not sure this project likes multiple instances, or maybe I just have bad luck. I had a full client of WUs and was crunching away for a few hours; then I decided to add the project to my other client and boom, it aborted and dumped all the tasks. After that, neither client would get new work until I removed the project from one of them, at which point it did this:
2018-09-12 5:39:41 PM | TN-Grid Platform | Generated new computer cross-project ID: cebb0a6exxddfffeee:pb0ea85e
 
And make sure to [H]ard code it so that the button only works while on team [H]...that way other teams couldn't download it and use against us...lol

Actually...to release the bunker, you have to be on team H....
 

That IP appears to be correct. Looks like they don't have an IPv6 address, so no need to worry about blocking IPv6 even if you happen to have a working IPv6 connection.



Here are some sample work unit run times on my machines:

All HT-capable CPUs have HT enabled.

(hh:mm)

TR [email protected]: ~3:30
[email protected]: ~4:05
[email protected]: ~3:40
[email protected]: ~2:46
[email protected]: ~2:48

Will have TR 2950X times to add tomorrow. :D
 
My times so far are as follow:

6174@Stock: ~6:45
i5-2400@Stock: ~4:40
i5-4690k@Stock: ~3:40
E5-4620v3@Stock: ~6:50
 
I wonder if XS is deviating from their normal drop-hard-right-away strategy. A lot of their team doesn't seem to be showing up... yet. Like, where is stoneageman?
I think we need a much wider lead than we have if he and Viet OZ have been bunkering.


 
I would bet they are bunkering now in order to drop their loads at the last minute. I am giving it everything I have got - 173 CPU threads are now hammering away.
 
pututu, they are, but it's not entirely OK. I downloaded more tasks than I can crunch with the script running while I slept on the first night. Now that I'm home, I've done the decent thing and started aborting the excess, which has impacted the last 3 instances you listed. The tasks were all immediately picked up by somebody else and will hopefully be completed by them before the sprint finishes.

I'm going to try and keep an eye on run times in case I still have too many.
 
XS is hitting WCG really hard for some reason right now. If you look at their free-dc stats for WCG you'll see all their big hitters are putting up big numbers on WCG even stoneageman.
 
They didn't visit the forums for the usual pre-game taunts this time. Either this is a superb bluff or we've upset them and the toys are out of the pram.
 
Well pay attention. If we can bump EVGA above them let's do it.
 
The start of the EVGA boost plan could've gone better. Out of almost 4K tasks dropped, only about a third validated, for about 200K credits.

A large proportion of the tasks awaiting validation have someone I'll refer to as 'Coco the Clown' as their wingman. He's set his ncpus count to 250 on a slow rig and then downloaded 1,500 tasks with no chance of completing them all by their deadline.

To cap it all, he's using the old 'anonymous' app, which causes any task he does happen to complete to land in 'validation inconclusive'.
 
Universe@Home task deadlines are October 2nd. This means you can start building a huge bunker now and be ready on the 27th if it's chosen as the sprint target. There's a limit of 8 tasks per thread, so use the ncpus cc_config tweak to trick the client into thinking you have more CPUs/cores.

There also seems to be a limit of 512 tasks per host.
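The ncpus tweak mentioned above lives in cc_config.xml in the BOINC data directory. A minimal sketch, with 64 as an arbitrary example value; the client then requests and caches work as if it had that many logical CPUs:

```xml
<cc_config>
  <options>
    <!-- Report 64 logical CPUs to the scheduler (example value).
         Remove the line, or set -1, to return to auto-detect. -->
    <ncpus>64</ncpus>
  </options>
</cc_config>
```

Reread the config (or restart the client) after editing for it to take effect.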
 
Currently trying to get more CSG work, but it's not working out very well; not sure what the issue is. I currently have over 2,000 Universe tasks that I'm crunching. They should be done before the Sprint starts, and then I'll grab some Acoustics tasks (Windows only), but I'll be very limited there since my big boxes are all Linux.

Will also bunker some PrimeGrid and Einstein tasks as I get closer to the start date. I know PrimeGrid is like 7 days and Einstein is 14 days. I could technically bunker Einstein now, but I'm gonna wait 3 days so if it's not picked this Sprint I could use it next Sprint.
 
PrimeGrid uses dynamic deadlines, based partly on how long your tasks have taken in the past. The PrimeGrid admins have said on their forums and on the Discord channel that they will still reward tasks returned after the deadline, as long as the tasks haven't been purged from the system and, obviously, your results validate.

Edit: I don't know if these dynamic deadlines are project-wide or only on some subprojects. Based on the comments I've read, the crediting of completed tasks after the deadline appears to apply across the board.
 
That's true on almost any project. If you return your past-deadline result before the person holding the resent copy returns theirs, you'll get the credit and they'll get a "cancelled by server" message in most cases.
 
My understanding is that the grace is even more generous than that. You'll get credit no matter what as long as your result is valid compared to the other returned results. There could be two valid results already returned and the WU is done, and you return a late result 3 weeks later as the 3rd person and you'll still get credit since the WU is still in the system.
 
Acoustics also runs under Linux. It does run faster under Windows though.

Universe used to run much faster under Linux but recent stats seem to suggest that this may have reversed itself.
 
Finished my CSG bunker and can't get any more at the moment, so I'm bunkering Universe now.
 
Guys, where should I point my two 280Xs and two 280s once I hit my 100 million goal in MW in the next few days? Collatz? Moo?

My NVIDIA GPUs are staying on PG for another 250 million points, which is about another 31 days at current production rates. I can then switch those to whatever is best for the team as well.
 
I think the plan is for the team to stay on Collatz until a good 2nd place lead is in place. Then we're all going to switch to Moo! Wrapper since it's the closest to gaining positions and atp1916 is going to take the wheel on Enigma to break up a little cat fight going on there.
 
Anyone have a good .config file for the 7970 / 7950 / 280X / 280 GPUs? I haven't run Collatz since 2015, when I made it my first 1 billion point project. The work there has changed since then, as has the .config file.

Update: I found some info on the new .config file settings in the Collatz forums, so I am good.
 