SIDock taking over!

Message boards : Cafe : SIDock taking over!
BellyNitpicker

Send message
Joined: 7 Mar 21
Posts: 3
Credit: 299,972
RAC: 0
Message 628 - Posted: 7 Mar 2021, 11:54:27 UTC

Good morning all. I'm from the UK, and running BOINC on a couple of virtual machines just for European / UK projects.

I've just set up SIDock on one of those - Ubuntu Focal Fossa running five CPU threads under VirtualBox on an Intel Mac Mini. I set the resource share for SIDock to 100 initially - about 9% of the total - and it immediately took all the available processing and put everything else into a wait state. I suspended all projects, set the share to 10 - about 0.9% - and unsuspended the projects one at a time, leaving SIDock until last. As soon as I unsuspended it, it put everyone else back on hold and grabbed all the CPUs.

So I currently have the project not allowing new tasks and am releasing them one at a time to give everyone else a look in.

This is the first time I've experienced a project doing this - I'm not a master of BOINC, but have been running for about nine months with this setup. If I can't find a way around it, I'll make another VM with one thread and run it on its own, though that's not very efficient!

Any thoughts?
Nick
ID: 628
Falconet

Send message
Joined: 24 Oct 20
Posts: 23
Credit: 9,020
RAC: 0
Message 629 - Posted: 7 Mar 2021, 13:15:33 UTC - in response to Message 628.  
Last modified: 7 Mar 2021, 13:16:41 UTC

A few things:

SiDock has a really short deadline (24 hours, I believe), so BOINC runs its tasks immediately, in high priority, to finish them before the deadline.
Regarding the resource share, a value of 10 doesn't mean it gets 10% of resources; it's a weight of 10 relative to the sum of the shares of all your projects.

Once SiDock's Recent Average Credit has settled at around the equivalent of its resource share of 10, BOINC will schedule it more evenly.
I'll look for a better explanation of resource share, but long story short: give it time.
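To make the weighting concrete, here is a quick sketch; the project names and share values are made up for the example, but the arithmetic is how BOINC divides long-term CPU time:

```python
# Each project's long-term CPU fraction is its resource share divided by
# the sum of all attached projects' shares - not a percentage on its own.
shares = {"WCG": 100, "Rosetta": 100, "SiDock": 10}

total = sum(shares.values())  # 210
for project, share in shares.items():
    print(f"{project}: {share / total:.1%}")
# SiDock ends up with 10 / 210, roughly 4.8% of CPU time in the long run.
```

So adding or detaching a project changes every other project's effective fraction, even if you never touch their share numbers.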
ID: 629
Jim1348

Send message
Joined: 30 Oct 20
Posts: 57
Credit: 9,112,528
RAC: 0
Message 630 - Posted: 7 Mar 2021, 15:23:40 UTC - in response to Message 629.  
Last modified: 7 Mar 2021, 15:25:00 UTC

In addition to that, you can shorten the time it takes for BOINC to converge to the desired resource share by using a cc_config.xml file in the BOINC Data directory.
There should be one there already, and you can just add this in the Options section:
<rec_half_life_days>1.000000</rec_half_life_days>


My entire file looks like this, and probably won't do you any harm:
<cc_config>
  <options>
    <rec_half_life_days>1.000000</rec_half_life_days>
    <use_all_gpus>1</use_all_gpus>
    <allow_multiple_clients>1</allow_multiple_clients>
    <allow_remote_gui_rpc>1</allow_remote_gui_rpc>
    <max_file_xfers>8</max_file_xfers>
    <max_file_xfers_per_project>4</max_file_xfers_per_project>
  </options>
</cc_config>

That doubles the number of simultaneous file transfers, and you can increase it further, though I think it can cause problems if set too high.
It also allows the use of more than one GPU and remote monitoring; that may be irrelevant to you, but it won't hurt.

Note that you can create the file in a text editor (Notepad) and save it with an ".xml" extension rather than ".txt", then place it in the BOINC data directory.
You then have to activate it, e.g. by restarting BOINC or rebooting.
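If you would rather script the file than type it into Notepad, a minimal Python sketch (the options mirror the file above; the output path is just an example, on a real machine the file belongs in the BOINC data directory):

```python
import xml.etree.ElementTree as ET

# Build the same cc_config.xml shown above.
cc = ET.Element("cc_config")
opts = ET.SubElement(cc, "options")
for tag, value in [
    ("rec_half_life_days", "1.000000"),
    ("use_all_gpus", "1"),
    ("allow_multiple_clients", "1"),
    ("allow_remote_gui_rpc", "1"),
    ("max_file_xfers", "8"),
    ("max_file_xfers_per_project", "4"),
]:
    ET.SubElement(opts, tag).text = value

# Write it out; copy the result into the BOINC data directory.
ET.ElementTree(cc).write("cc_config.xml")
```

Generating it this way avoids the classic mistake of an editor silently saving "cc_config.xml.txt".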
ID: 630
PMH_UK

Send message
Joined: 23 Dec 20
Posts: 20
Credit: 1,360,768
RAC: 0
Message 631 - Posted: 7 Mar 2021, 15:53:18 UTC - in response to Message 628.  

You could use an app_config.xml to limit running tasks; SiDock will send at most 8 tasks per PC.
I use this for several projects to balance load.
Put the file in the project's folder inside the BOINC data directory.

Example for WCG:
<app_config>
  <app>
    <name>mip1</name>
    <max_concurrent>1</max_concurrent>
  </app>
  <app>
    <name>arp1</name>
    <max_concurrent>1</max_concurrent>
  </app>
</app_config>
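Adapted for SiDock it would look like the sketch below; the application name "cmdock" is an assumption, so check client_state.xml or the project's Applications page for the exact short name before using it:

```xml
<app_config>
  <app>
    <!-- "cmdock" is a guess at SiDock's app short name; verify it first. -->
    <name>cmdock</name>
    <!-- Run at most one SiDock task at a time. -->
    <max_concurrent>1</max_concurrent>
  </app>
</app_config>
```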

Paul.
ID: 631
BellyNitpicker

Send message
Joined: 7 Mar 21
Posts: 3
Credit: 299,972
RAC: 0
Message 632 - Posted: 8 Mar 2021, 6:14:16 UTC

Thank you for your thoughts. I'll do some experimenting.
Nick
ID: 632


©2024 SiDock@home Team