FB 2017 strategy to overtake Gridcoin

pututu

CV created a nice overview of the project threats and opportunities. Now it's time to figure out the battle-plan strategy through the end of December.

Here is the list of projects [H] members are focusing on now.

Project | current points | max points gain (@9/4/2017)

Yoyo | 25 | 0
Distributed DataMining | 25 | 0
Primaboinca | 25 | 0

Numberfield (CPU) | 18 | +7
Collatz (GPU) | 18 | +7
PrimeGrid (GPU is the best) | 18 | +7
MindModeling Beta (CPU) | 18 | +7
Leiden Classical (CPU) | 12 | +13 (if you need an account, use HardOCPtest)??

Assuming we don't lose further positions in the marathon and we take gold in the remaining sprint events, the first four outstanding projects will net us +28 points. Currently (9/4/2017) Gridcoin is at 1059 and [H] at 1055, so we are likely to overtake GC soon.
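A quick back-of-the-envelope check of that claim (numbers taken from the table above; this assumes we actually reach 1st in all four of the +7 projects):

```python
# Projected Formula BOINC standings if we take 1st place in the
# four outstanding +7-point projects listed above.
current_h = 1055          # [H] marathon points as of 9/4/2017
current_gridcoin = 1059   # Gridcoin points as of the same date
gains = [7, 7, 7, 7]      # Numberfield, Collatz, PrimeGrid, MindModeling

projected_h = current_h + sum(gains)
print(projected_h)                       # 1083
print(projected_h > current_gridcoin)    # True
```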
 
I should have Primaboinca gold in less than a week. Then I'll move back to Numberfield to try to reclaim that position for us.

Most all of my rigs are offline for the Summer though. It's just too hot. I did fire up two 4P rigs yesterday to help us with the SETI WOW event. I think with my added boxen we can get to 16th place.

Once the cooler weather hits and I can leave my AC off I'll be back 100% as I am sure a bunch of others will as well. We're sitting in a good spot right now all things considered.
 
I think we will lose on Rosetta soon; the US Army will run us over ...
Plus Enigma is at risk (-7 points).
Wanless Mersenne is also a -7-point risk.

We are OK at sprinting, but not good at sustaining positions ... and hell, I don't have enough CPU; I'm more of a GPU guy these days.
 
I did fire up two 4P rigs yesterday to help us with the SETI WOW event.
Don't you have some GPUs under Linux? That's where the points are. I've more or less stopped running CPU for SETI (only the CPUs in the GPU boxes, since I can't prevent them; at least my knowledge of the config files isn't enough to stop it).

Most of my pre-WOW bunker was on CPU -> but in the end the points were quite low for the work :-(
 
I've got reasonable CPU power, so I can continue to focus on CPU projects, but not on Milkyway, Moo, SETI, PrimeGrid, Einstein, etc.

As long as we don't fall too far behind Enigma and Wanless Mersenne, I think we can catch up during the cooler period.
 
No, at the moment I don't have any GPUs running Linux. They're all on Windows since I converted my Linux GPU box to Windows for Primaboinca.

I have Wanless Mersenne on lock. It might look like those pesky Christians are going to catch us, but I won't allow it. I can outproduce them by a LONG shot.

I have all my GPU rigs running only GPU work, and their CPUs are running Prima... I went to SETI's account page and unchecked CPU for the project. Then I set up a HOME venue to allow CPU and set both my 4P CPU rigs to HOME so they'll grab CPU work.
 
Temps are getting much hotter here this week, so my output will be reduced again. I've got no AC going to my office these days, so things get quite warm. Sorry guys. My electric bill was $600+ again this month, and let's just say that was pretty much all cooling the house. Yes, I have an old house and have done all the insulation updates I possibly can without ripping out the walls and doing it all new. We are also considering buying a new house, so things will hopefully get better.
 
So based on this table, where would you guys assign a computer that has a Nvidia 1080 GPU? It is running Windows. The CPU is Intel Westmere Xeon X5670 with 6 cores at 4GHz.

If it was running Linux, I'd put it on Seti to crunch some serious PPD with the Linux special sauce build, but it isn't on Linux right now.

I think it can do 700K points per day in Prime Grid, which sadly isn't enough to get us to 1st on that project.

I guess I'm looking for suggestions, for what you all think it should be assigned to.
 
I'm ramping up Collatz to see how far I can get us. GPUs make serious points on that project.
 
How many GPUs do you have control of? To me the spread between 1st and 2nd in PrimeGrid looks a lot more doable than the gap in Collatz.

I'm going to run PPS Sieve tasks in Prime Grid for the next two days to see if I can really hit my theoretical mark of 763,000 points a day. Then I could move to Collatz, if we want to make a push on that.
 
I can do around 35M PPD with my GPUs. Your single 1080 Ti will probably do 5 or 6M PPD.
 
... but then putting them on PrimeGrid is not a bad thing; I can join there after SETI with my three BOINC GPUs (980Ti, 1080 and 1080Ti, Linux). With my 2000 still-pending validations, I could even move the 1080 over earlier.

Just tell me PrimeGrid is good for bio/health ... you can also lie to me....
 
CUDA or OpenCL? Which is better? First estimation: my 1080 could make ~1.1M with CUDA? Sounds right?
 
CUDA would theoretically be better, as it is better supported. However, that comes down to the coder's ability. I don't know if any of us have done extensive testing on it.

As for PrimeGrid, I'm sure the primes that are found, which could theoretically be used for encryption, could somehow be used in bio/medical science. Perhaps through protecting their networks or encrypting their data...lol.
 
I think I may have figured out how to get 1.5 million PPD on Prime Grid out of one 1080TI. Run PPS (Sieve) tasks for sure.

Just upping the power target from 70% to 100% results in 47% more points. Estimated 1.12 million per day.

Then running 2x tasks gains a further 48%. Estimated 1.66 million and change.

I'm going to test this out today. Any thoughts on if I should post this out to the main forum if it is successful?
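For anyone checking the math, the two gains compound (a rough sketch using the 763K theoretical baseline from this post):

```python
# Compounding the two tweaks: power target 70% -> 100% is +47%,
# then running 2x tasks adds a further +48% on top of that.
baseline_ppd = 763_000                   # theoretical PPS (Sieve) mark

after_power_bump = baseline_ppd * 1.47   # ~1.12 million
after_double_up = after_power_bump * 1.48

print(round(after_power_bump))   # 1121610
print(round(after_double_up))    # 1659983
```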
 
I'll migrate my three BOINC GPUs over to MilkyWay, but no bunkering; I have no time to manage a bunker right now.

And WOW can validate my pending WUs in the meantime.
 
Here's the performance of my nV cards under Linux:
980TI (MSI SeaHawk) 153.41 43.34 227.23 MilkyWay@Home v1.46 (opencl_nvidia_101)
1080Ti (FE) 109.48 32.04 227.23 MilkyWay@Home v1.46 (opencl_nvidia_101)
1080 (??) 137.69 41.66 227.23 MilkyWay@Home v1.46 (opencl_nvidia_101)

I don't bother with CPU (I keep that on dDM and others).
 
What do the three numbers mean? I have never looked at the Linux client.

There may be no point in bunkering on MilkyWay. Their servers would only give me 78 tasks at once and I was completing them on average in 90 seconds (when running 2 tasks at once). I'm new to MilkyWay, so maybe once I process more they will up my quota.
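That matches a quick drain-time estimate (rough numbers from this post: a 78-task quota and a completion roughly every 90 seconds):

```python
# How long a full MilkyWay quota lasts at the rates above:
# 78 tasks on hand, one task finishing about every 90 seconds.
quota_tasks = 78
seconds_per_completion = 90

drain_hours = quota_tasks * seconds_per_completion / 3600
print(drain_hours)   # 1.95 -> under two hours of buffered work
```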

Edit: If my PrimeGrid plan works, my computer will be in their list of top 10.:nailbiting:

 
970
Run time - 277.97 CPU time - 95.39 points - 227.23 application MilkyWay@Home v1.46 (opencl_nvidia_101)
Running on Windows
 
Thanks, I was spacing on the CPU time and points (even though I was just looking at those per-WU numbers yesterday).
 
Yes. Knowledge is good no matter who uses it.

I'll write something up. I would kind of like a screen capture of my computer being in the top 10 before all the others scream up there. I've seen posts in this subforum linked all over other DC sites. People are reading our stuff. :eek::cool:
 
Take your time. lol

Good, perhaps we'll get new members. Plus, Kyle can benefit from the publicity of his site being linked all over the webs.
 
<app_config>
  <app>
    <name>milkyway</name>
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>
      <cpu_usage>0.1</cpu_usage>
    </gpu_versions>
  </app>
</app_config>


For anyone wanting an app_config.xml to run more than one work unit per card: this example runs 2 work units per GPU (gpu_usage 0.5) and feeds each work unit with 0.1 of a CPU thread/core.

I've seen some people report running up to 20 at a time on Titans.
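If anyone wants to try three at a time to pick up those last few percent, the same file works with a smaller gpu_usage; the client runs as many tasks as fit in one GPU's worth of usage. A sketch, assuming the same milkyway app name as above (not tested on my end):

```xml
<app_config>
  <app>
    <name>milkyway</name>
    <gpu_versions>
      <!-- 0.33 GPU per task: three tasks fit on one card -->
      <gpu_usage>0.33</gpu_usage>
      <cpu_usage>0.1</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```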
 
Gilthanis Why 20 tasks? My 1080TI was nearly maxed out at 2, with the very occasional dip. I watched this through the EVGA Precision X hardware monitor's GPU usage graph. I could see running 3 tasks at once to pick up those last few percent, but I don't understand running 20.

I forecast 218,000 PPD with 2 tasks on my card for MilkyWay. Is this wrong in some way?
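That forecast lines up with the timings mentioned a few posts up (one completed WU roughly every 90 seconds while running 2 at once, at 227.23 credit per WU):

```python
# MilkyWay PPD forecast: a finished WU about every 90 seconds,
# each worth 227.23 credit.
seconds_per_day = 86_400
seconds_per_completed_wu = 90
credit_per_wu = 227.23

ppd = seconds_per_day / seconds_per_completed_wu * credit_per_wu
print(round(ppd))   # 218141
```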
 
MN Scout, what's your username on MilkyWay? I'm not seeing you on the list.

Full transparency: I flipped MilkyWay to another team for this weekend to get them higher on the FB sprint list. I'll put in at least equivalent points for [H] before I stop crunching, to help our Marathon position out.
 
Ah okay. I found you earlier and figured you were helping them. I remember you asking about them on another project not too long ago.
 
I wasn't even going to crunch MilkyWay for this sprint, since the lure of PrimeGrid and HUUGE numbers had me in a trance. :woot:

Was a late decision early this morning to crunch for an underdog. My GPU will be back to [H] by Sunday afternoon. CPU still crunching WCG for [H]. (y)
 
You don't know huge numbers until you put your GPUs on Collatz lol

Let's not start using the plural form of GPU for me. I have looked at the current cost of EVGA 1080 Ti cards....:ROFLMAO: Then I'd have to get a more powerful PSU and a larger case, since the bottom PCIe x16 slot is too close to the PSU. My case is a tight fit; maybe I should do the case upgrade.:whistle:
 
I don't see a problem with the above usage of plural............... :wacky:
 
Collatz for GPU, Citizen Science Grid for CPU, and Grid Computing Center for non-CPU-intensive work using multiple clients. Point-whoring trifecta!:cigar:
 
phoenicis has 6 1080 Ti GPUs and his production was around 44M points per day. You do the math. Haha.

I had 6 980 Ti, 2 1080 non-Ti, and 2 970s and was producing around 42M PPD. I got rid of the two 970s though, so I'm guessing once they're rocking the project 100% I'll be around 35M.
 
I have a 970, 780, and a 750Ti. Well... that is all that is worth mentioning. I still need to swap the 780 for a 970. At some point I will buy a better PSU, move the 780 to the 750Ti box, and then move that 750Ti and possibly another 750Ti to other boxes depending on form factor and location.
 
(Attached photos: IMG_0273.JPG, IMG_0277.JPG)

I had that setup with two 780Ti and a 970 ... and another Xeon 1240 running as an ESXi server off the same wall socket.

Crappy Japanese housing with poor wiring.

Just saying: watch the amperes, watch the amperes.
 