Deadline is too short

Message boards : Number crunching : Deadline is too short
1 · 2 · Next

Aurum
Joined: 13 Jan 21
Posts: 76
Credit: 38,789,060
RAC: 0
Message 654 - Posted: 19 Mar 2021, 21:07:40 UTC

It appears your WUs come with a 24-hour deadline, and that's too short. It triggers a quirk in BOINC that changes the status of the WUs to High Priority, and that turns off the GPUs, since their CPUs get stolen to run High Priority CPU WUs. I'm going to file an issue on GitHub.
For now you could change it to 72 hours, or maybe 48 hours is long enough to prevent this from happening.

It's odd: you only send 2 WUs per CPU thread, and those can be completed in less than half a day. Ergo, 24 hours is more than enough. But it's not you, it's BOINC.
For now I'm running in Resource Zero mode to see if that'll keep all my GPUs turned on.
mmonnin

Joined: 23 Oct 20
Posts: 9
Credit: 11,177,985
RAC: 3,685
Message 658 - Posted: 20 Mar 2021, 0:04:01 UTC

So many BOINC manager issues are solved by running multiple clients.

A smaller queue won't make them run high priority. None of my tasks are in high priority.
xii5ku

Joined: 3 Jan 21
Posts: 24
Credit: 30,794,790
RAC: 6,830
Message 662 - Posted: 20 Mar 2021, 6:30:19 UTC
Last modified: 20 Mar 2021, 6:56:31 UTC

To run a GPU project and a CPU project concurrently, a single client can be used.
Just tell the client via app_config that the GPU application requires next to no CPU time.

(Separate clients for concurrent projects are still better. This way you can set different workqueue parameters per project.)
Aurum
Joined: 13 Jan 21
Posts: 76
Credit: 38,789,060
RAC: 0
Message 665 - Posted: 20 Mar 2021, 13:24:55 UTC - in response to Message 662.  

To run a GPU project and a CPU project concurrently, a single client can be used.
Just tell the client via app_config that the GPU application requires next to no CPU time.

(Separate clients for concurrent projects are still better. This way you can set different workqueue parameters per project.)

I will never run multiple clients because it's a waste of time and effort. It's also the main tool of bunkering to cheat and get more WUs than one deserves.
Very few projects can run at optimum performance with less than a dedicated CPU to go with each GPU.
Dr Who Fan
Joined: 24 Oct 20
Posts: 19
Credit: 458,046
RAC: 2
Message 673 - Posted: 20 Mar 2021, 17:33:22 UTC - in response to Message 654.  

It appears your WUs come with a 24-hour deadline, and that's too short. It triggers a quirk in BOINC that changes the status of the WUs to High Priority, and that turns off the GPUs, since their CPUs get stolen to run High Priority CPU WUs. I'm going to file an issue on GitHub.
For now you could change it to 72 hours, or maybe 48 hours is long enough to prevent this from happening.

It's odd: you only send 2 WUs per CPU thread, and those can be completed in less than half a day. Ergo, 24 hours is more than enough. But it's not you, it's BOINC.
For now I'm running in Resource Zero mode to see if that'll keep all my GPUs turned on.

I don't run GPU tasks and have zero cache settings, and I still see this project wanting to take over my PCs and shove other projects to the back burner by making its tasks run High Priority if they have not started within 12 hours of downloading.

I agree the 24-hour deadline is WAY TOO SHORT. Three days, like Rosetta's tasks, is a more reasonable MINIMUM turnaround, but ideally a 7-to-14-day deadline, like most projects use, would be better suited.

Aurum
Joined: 13 Jan 21
Posts: 76
Credit: 38,789,060
RAC: 0
Message 677 - Posted: 20 Mar 2021, 18:11:05 UTC

Yeah, I see that too. SiDock does not play well with other CPU projects due to the 24-hour deadline. But one can always find mismatched pairings where the shorter deadline elbows out the others.
That's why I submitted a request on the BOINC GitHub for a new option called Project_Priority or some such thing (I can't find the actual text I posted). The idea was that I should be able to specify the order I want work done. E.g.:
In my SiDock app_config:
<Project_Priority>1</Project_Priority>
In my TN_Grid app_config:
<Project_Priority>2</Project_Priority>
In my WCG app_config:
<Project_Priority>3</Project_Priority>
In my Universe app_config:
<Project_Priority>4</Project_Priority>

Another approach was <Run_Until_Done>0|1</Run_Until_Done>
But it fell on deaf ears. I imagine it would take a lot of work to provide an alternative or a replacement for the current BOINC estimation routines that determine which of multiple projects to get work for next and to run now.
In general BOINC works best when one runs a single CPU project per computer.
hoarfrost
Volunteer moderator
Project administrator
Project developer

Joined: 11 Oct 20
Posts: 324
Credit: 23,439,368
RAC: 11,239
Message 678 - Posted: 20 Mar 2021, 18:17:30 UTC
Last modified: 20 Mar 2021, 18:22:28 UTC

Hello Aurum!

Right now such a deadline is vital for the project: if we increase the deadline, the project will exhaust the HDD space on the current server. Short deadline -> short time that results are held. :) We plan to migrate to another server.
Buro87 [Lombardia]

Joined: 23 Nov 20
Posts: 28
Credit: 771,948
RAC: 0
Message 685 - Posted: 20 Mar 2021, 20:54:45 UTC - in response to Message 678.  

Hello Aurum!

Right now such a deadline is vital for the project: if we increase the deadline, the project will exhaust the HDD space on the current server. Short deadline -> short time that results are held. :) We plan to migrate to another server.


Do you have some estimate of the space occupied by the WUs of each target?
shift
Joined: 6 Feb 21
Posts: 8
Credit: 330,759
RAC: 0
Message 691 - Posted: 21 Mar 2021, 0:09:41 UTC - in response to Message 654.  

Any idea if "project_max_concurrent (a limit on the number of running jobs for this project)" reduces the number of jobs downloaded for the project too? I was planning on using that to stop SiDock hogging all the compute time. I'm hoping that it downloads only 2 jobs per thread with respect to the project_max_concurrent setting, and not just based on the number of threads in the PC; otherwise the deadline issue will be even worse.
hoarfrost
Volunteer moderator
Project administrator
Project developer

Joined: 11 Oct 20
Posts: 324
Credit: 23,439,368
RAC: 11,239
Message 699 - Posted: 21 Mar 2021, 6:41:21 UTC - in response to Message 685.  

Do you have some estimate of the space occupied by the WUs of each target?

Each uncompleted workunit consumes ~2 MB of HDD space.
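hoarfrost's figure suggests a simple back-of-the-envelope model: the server must hold every uncompleted result, so the disk tied up grows roughly with issue rate × deadline. A quick sketch (the issue rate below is a made-up illustrative number, not a SiDock statistic):

```python
# Back-of-the-envelope: server disk held by uncompleted results.
# Assumes ~2 MB per uncompleted workunit (per hoarfrost above) and a
# steady, hypothetical issue rate.
MB_PER_WU = 2

def server_storage_mb(wus_per_hour: float, deadline_hours: float) -> float:
    """Approximate MB of server disk tied up by in-flight workunits."""
    in_flight = wus_per_hour * deadline_hours  # results outstanding at once
    return in_flight * MB_PER_WU

# With a hypothetical issue rate of 10,000 WUs/hour:
print(server_storage_mb(10_000, 24))  # 24 h deadline -> 480000 MB (~0.5 TB)
print(server_storage_mb(10_000, 72))  # 72 h deadline -> 1440000 MB (~1.4 TB)
```

Tripling the deadline roughly triples the disk the server must reserve for results in flight, which is why the admins keep it short.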
PDW

Joined: 24 Oct 20
Posts: 10
Credit: 20,268,477
RAC: 5,324
Message 700 - Posted: 21 Mar 2021, 7:06:28 UTC - in response to Message 691.  

Any idea if "project_max_concurrent (A limit on the number of running jobs for this project.)" reduces the number of jobs downloaded for the project too?
No, it doesn't; this setting only controls tasks already downloaded to your computer.

2 jobs per thread
This is a server side setting.
xii5ku

Joined: 3 Jan 21
Posts: 24
Credit: 30,794,790
RAC: 6,830
Message 701 - Posted: 21 Mar 2021, 7:36:06 UTC - in response to Message 665.  

Aurum wrote:
xii5ku wrote:
To run a GPU project and a CPU project concurrently, a single client can be used.
Just tell the client via app_config that the GPU application requires next to no CPU time.
Very few projects can run at optimum performance with less than a dedicated CPU to go with each GPU.
Indeed. Therefore, the number of CPUs to be used by BOINC needs to be configured accordingly.

When the GPU project is defined by the user to appear to use almost no CPU time, contrary to its actual CPU time demand, then the total number of CPUs to be used by BOINC obviously needs to be equal to the number of CPUs to be used just by the CPU-only project alone.

An example:
There is an 8-core 2-GPU computer. Each GPU task wants 1 core. Additionally, a CPU-only project shall run in the same client, with 1 core used by each task of that project. That is, it is desired to run 2 GPU tasks and 6 other tasks the whole time on this computer. A possible implementation:
– Tell the client that GPU tasks use only 0.1 logical CPUs, or even less to be sure.
– Tell the client to use only 6 of the 8 logical CPUs.
Result: The client will always keep 6 tasks of the CPU-only project running, and 2 tasks of the GPU project.
If the client-side scheduling priority of the GPU project gets very high, it will still launch no more than 2 GPU tasks at once since there are just 2 GPUs in this example host after all. The other way around, if the scheduling priority of the CPU-only project gets very high, the client will still launch no more than 6 of these tasks because it is allowed to use only 6 logical CPUs, plus, the client will still launch additional 2 GPU tasks because the client was told that these GPU tasks won't cut into CPU time to the detriment of the other project.
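In a single-client setup, the "0.1 logical CPUs" declaration from the example above is made with an app_config.xml in the GPU project's directory under the BOINC data directory. A minimal sketch; the app name is a placeholder and must be replaced with the project's actual application name (as listed in client_state.xml):

```xml
<app_config>
    <app>
        <!-- placeholder name; use the real app name from client_state.xml -->
        <name>example_gpu_app</name>
        <gpu_versions>
            <!-- one task per GPU -->
            <gpu_usage>1.0</gpu_usage>
            <!-- claim almost no CPU so 6 CPU-project tasks keep running -->
            <cpu_usage>0.1</cpu_usage>
        </gpu_versions>
    </app>
</app_config>
```

Combined with "Use at most 75% of the CPUs" (6 of 8) in the computing preferences, this reproduces the 6+2 split described above.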

Another example:
Same, but just the CPU-only project runs in BOINC; the GPU project is Folding@home which has its own client. The implementation:
– Set the BOINC client to use only 6 of the 8 cores.

Third example:
Let's go back to the case of the CPU project and the GPU project both being BOINC projects. Alternative implementation to the initial example:
– Run two BOINC client instances.
– Attach one client to the GPU project and let it freely use all resources that it needs.
– Attach the other client to the CPU-only project but let it use only 6 out of 8 cores.
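For the two-client variant, the client's standard options for multiple instances can be used. A sketch of the idea (the data directory path and second RPC port are illustrative):

```shell
# First instance: default data directory, default GUI RPC port 31416 (GPU project).
boinc --daemon

# Second instance: its own data directory and its own GUI RPC port (CPU-only project).
mkdir -p /var/lib/boinc-cpu
boinc --allow_multiple_clients --dir /var/lib/boinc-cpu \
      --gui_rpc_port 31417 --daemon

# Each instance is then managed separately, e.g.:
boinccmd --host localhost:31416 --get_tasks
boinccmd --host localhost:31417 --get_tasks
```

The 6-of-8-cores limit would then be set only in the second instance's preferences.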


Aurum wrote:
xii5ku wrote:
(Separate clients for concurrent projects are still better. This way you can set different workqueue parameters per project.)
I will never run multiple clients because it's a waste of time and effort.
Let's say you want a 2 hours deep buffer for the CPU project and an 18 hours deep buffer for the GPU project. This is straightforward to implement if you use separate clients.

Another example: You have just one computer at your disposal. (OK, bad example, you have several – but most users have just one or a couple.) You would like it to run SiDock on half of its cores, and TN-Grid on the other half, for as long as you desire. Again this is straightforward to implement by means of separate clients.

Speaking of partitioning a computer by means of concurrently running BOINC clients (going off-topic to the subject of the thread, but on-topic to the use of multiple BOINC clients for finer grained workqueue control):
Somebody wants to donate the entire computer time of a 256-threaded host to SiDock. Yet SiDock currently has got a limit of tasks in progress of 2 * active_logical_CPUs combined with counting only up to 64 active logical CPUs. Possible solutions:
a) The donor could ask the project admins to maintain a separate limit of tasks in progress for this host. Whether or not this is feasible at SiDock specifically, I don't know.
b) The donor could partition the physical 256-threaded host into four virtual 64-threaded hosts and be on his way. This can be implemented with four operating system instances on the host, each one running one BOINC client, or it could as well be implemented with four BOINC clients within one operating system instance.

Whether or not the gains are worth the time and effort is of course in the eye of the individual user.


Aurum wrote:
It's also the main tool of bunkering to cheat and get more WUs than one deserves.
On the topic of how many tasks in progress a host "deserves":
Let's say there is a host which is at risk to lose internet connection for 10 hours. For example, the host is attached to an unreliable home internet connection, and the owner can correct connection losses only by resetting the cable modem, and only before leaving home for the day job and after returning from the day job. Does, or doesn't, this host "deserve" a 10...12 hours deep work buffer?

I'd add that my own opinion is that the tool of multiple client instances per physical host needs to be used responsibly. But this is the same as with the default of a single client instance per host. Public BOINC projects obviously rely on the majority of their contributors to set up and maintain the clients and hosts reasonably.
Buro87 [Lombardia]

Joined: 23 Nov 20
Posts: 28
Credit: 771,948
RAC: 0
Message 702 - Posted: 21 Mar 2021, 11:17:29 UTC - in response to Message 701.  
Last modified: 21 Mar 2021, 11:52:51 UTC

I saw that the deadline of the new (and longer) WUs is 2 days. Thanks.
Rasputin42

Joined: 12 Jan 21
Posts: 13
Credit: 2,513,888
RAC: 0
Message 709 - Posted: 21 Mar 2021, 15:44:56 UTC

Yes, two days is better. But I still think 3 days would be even better, to allow for slower computers.
crashtech

Joined: 5 Jan 21
Posts: 7
Credit: 15,763,895
RAC: 10,882
Message 902 - Posted: 10 May 2021, 14:08:55 UTC

Is there any grace period at all for work units? Under the current circumstances (the BOINC Pentathlon), 48 hours is pretty short.
hoarfrost
Volunteer moderator
Project administrator
Project developer

Joined: 11 Oct 20
Posts: 324
Credit: 23,439,368
RAC: 11,239
Message 903 - Posted: 10 May 2021, 20:02:13 UTC
Last modified: 10 May 2021, 20:04:44 UTC

Hello! We will return the deadline to 4 days after the Pentathlon. During challenges, many workunits are left unprocessed, and if the deadline is not short, their completion takes a long time.
Aurum
Joined: 13 Jan 21
Posts: 76
Credit: 38,789,060
RAC: 0
Message 904 - Posted: 10 May 2021, 22:25:31 UTC

This 2-day deadline causes all SiDock WUs to run High Priority. This is very annoying, since it interferes with any other BOINC project one wants to run. Sometimes it even prevents a GPU from having a CPU. It requires a lot of babysitting to constantly suspend some of the SiDock WUs so BOINC will behave a little better.
xii5ku

Joined: 3 Jan 21
Posts: 24
Credit: 30,794,790
RAC: 6,830
Message 905 - Posted: 11 May 2021, 7:21:05 UTC - in response to Message 904.  
Last modified: 11 May 2021, 7:31:23 UTC

Aurum wrote:
This 2-day deadline causes all SiDock WUs to run High Priority. This is very annoying, since it interferes with any other BOINC project one wants to run. Sometimes it even prevents a GPU from having a CPU. It requires a lot of babysitting to constantly suspend some of the SiDock WUs so BOINC will behave a little better.
Maybe so. But why should this concern the SiDock admins? The deadlines should be those which suit the scientific requirements and the server limitations.

If you want to donate to two projects at the same time with one and the same computer, and uphold a strictly fixed resource allocation between these two projects on this computer, then it naturally is on you to configure the computer accordingly.

Now, boinc-client, as it is, is not designed to support this use case of a constant ratio of resource usage between simultaneously enabled projects.¹ Therefore your configuration needs to address this client design limitation. And there are two ways to do it:

1) The proper way
Simply run each project in a separate client instance. Done. This method scales even to more than 2 simultaneously active projects easily.

2) A workaround
This method needs only a single client instance but scales only to two projects, and it is mainly applicable if one project is CPU-only and the other uses GPU+CPU. Pick one of the projects (preferably the GPU+CPU project) and define via app_config.xml that its applications use only, say, 0.01 CPUs. Then lower the global CPU usage in the client so that you don't end up with overcommitment.

[Or 3) request this use case to be supported in boinc-client and wait for it to be implemented by somebody.]

––––––––
¹) boinc-client is designed to support the use case of time-averaged resource allocation between projects. This design allows it to react to events like server outages in a largely unattended mode of operation.
Crystal Pellet

Joined: 26 Oct 20
Posts: 53
Credit: 2,520,836
RAC: 0
Message 906 - Posted: 11 May 2021, 9:46:45 UTC - in response to Message 904.  

This 2-day deadline causes all SiDock WUs to run High Priority. This is very annoying, since it interferes with any other BOINC project one wants to run. Sometimes it even prevents a GPU from having a CPU. It requires a lot of babysitting to constantly suspend some of the SiDock WUs so BOINC will behave a little better.
You have simply set your cache buffer too high.
Set "Store at least ... days of work" low, e.g. ~1 hour = 0.04, and
you may set "Store up to an additional ... days of work" much higher.
Normally that gets rid of 'High priority'.
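For reference, 0.04 is simply one hour expressed in the "days of work" units these settings use. A trivial conversion sketch:

```python
def hours_to_days_setting(hours: float) -> float:
    """Convert a cache depth in hours to BOINC's 'days of work' units."""
    return round(hours / 24, 2)

print(hours_to_days_setting(1))   # -> 0.04
print(hours_to_days_setting(18))  # -> 0.75
```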

©2024 SiDock@home Team