1)
Message boards :
News :
SiDock@home September Sailing
(Message 1228)
Posted 23 Sep 2021 by xii5ku Post: Thanks to the team for organizing this event. :-) And special thanks to hoarfrost for all the work put into this. |
2)
Message boards :
News :
SiDock@home September Sailing
(Message 1226)
Posted 22 Sep 2021 by xii5ku Post: xii5ku wrote: Nevertheless, the user over-bunkered but aborted+reported excess tasks late and incompletely.
Dear friends, if you bunker at a project with variable task run times, and especially at a project with a quorum of 2, please monitor the progress of your computer and abort + report tasks which the computer won't finish, as early as you feasibly can. If you know how to bunker many tasks, you certainly also know how to report aborted tasks early while leaving completed tasks for later reporting. Or you know somebody who can tell you how to do it; it's trivial. Thank you. Don't be like the owner of host 21573, who aborted 732 tasks 4 days after download but just 4 hours before the conclusion of the contest. |
3)
Message boards :
News :
SiDock@home September Sailing
(Message 1219)
Posted 21 Sep 2021 by xii5ku Post: Thanks, indeed. The host must have started downloading the buffer it reported yesterday much earlier than I had realized, so the tasks were old enough that result deletion removed a lot of them even in the short time between the results being reported and my looking at them. (Nevertheless, the user over-bunkered but aborted and reported the excess tasks late and incompletely.) The host currently retains only 3CLpro work, so it could work out if it runs mostly uninterrupted. Edit: The good news is that between my post yesterday and now, almost all of the workunits whose tasks the host cancelled, or had cancelled by the server, were completed. (Replica tasks were promptly sent out and completed by other hosts, thanks to those hosts' very shallow buffers.) Just 3 of these workunits are left in progress now; their replicas were soaked up into other deep bunkers. |
4)
Message boards :
News :
SiDock@home September Sailing
(Message 1217)
Posted 20 Sep 2021 by xii5ku Post: Some fellow DC'ers have an awkward approach to this contest. The owner of computer 21557, for example: I have no solid idea of what he plans to do with 270 tasks during the next six days, given that he managed to complete just 168 tasks in the past six days. |
5)
Message boards :
News :
SiDock@home September Sailing
(Message 1215)
Posted 20 Sep 2021 by xii5ku Post: @Greg_BE, previously, the "estimated computation size" of both 3CLpro and Eprot was configured as 50,000 GFLOPS.¹ (This caused the client to assume the same 'estimated time remaining' for new tasks of either kind.) Now the estimated computation size of 3CLpro is 40,000 GFLOPS.¹ I don't know about Eprot. If you had very good time estimates in your client before, that was only because it had completed a good number of tasks of just one of the two types, and had therefore adjusted its time estimate for that type of workunit.
________
¹) Both figures were observed from a very small sample, hence may not be generally applicable.
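For reference, a minimal sketch of where this knob lives on the server side. The element names are standard BOINC (the estimated computation size is the per-workunit rsc_fpops_est value, settable e.g. in the workunit's input template or via create_work); the numbers and the surrounding template are illustrative assumptions, not SiDock's actual files:

<workunit>
    <!-- illustrative: 5e13 FLOPs = 50,000 GFLOPS estimated computation size.
         A client's initial 'estimated time remaining' is roughly
         rsc_fpops_est divided by effective host speed; e.g. at 5 GFLOPS
         per core, 5e13 / 5e9 = 10,000 s, i.e. about 2.8 hours per task. -->
    <rsc_fpops_est>5e13</rsc_fpops_est>
    <!-- illustrative upper bound: tasks exceeding this many FLOPs error out -->
    <rsc_fpops_bound>5e14</rsc_fpops_bound>
</workunit> |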
6)
Message boards :
News :
SiDock@home September Sailing
(Message 1197)
Posted 17 Sep 2021 by xii5ku Post: Some users know how to edit cc_config, but don't yet know how to edit it responsibly (example host). Hopefully those who taught step one find the time to teach step two, too. |
7)
Message boards :
News :
SiDock@home September Sailing
(Message 1178)
Posted 16 Sep 2021 by xii5ku Post: hoarfrost wrote: In any case, next bunches of will be mixed with bunches of Eprot_v1_run_2 tasks.
Thanks! As far as I can tell, everything works smoothly now. The client's estimation of task durations is thrown off now, of course, so it's good that you have the 2-tasks-in-progress limit, preventing the clients from putting more on their plate than they can chew. :-) A bit off topic: yoyo_rkn wrote: I run yoyo@home, which is in the meantime mostly stable and fast also in big races. The server has only 2 cores and 8 GB ram and hard disks.
It's nice that you can get by with a severely (and these days, unnecessarily) under-powered server. But the price is drastically reduced functionality. Furthermore, the BOINC server version at yoyo@home seems curiously outdated, though I have no idea whether this too is in place for performance reasons. Given that e.g. results tables in the web interface cannot be filtered, which makes them practically useless, I guess performance considerations are in play there too. |
8)
Message boards :
News :
SiDock@home September Sailing
(Message 1163)
Posted 15 Sep 2021 by xii5ku Post: @walli, one thing which is good to remember when bunkering for a contest is to check whether or not the stats site has initialized its tables, and only then report the bunkered results. (I know, hindsight is 20/20.) @hoarfrost, the low limit of 2 tasks in progress per client CPU, combined with the current small workunit size, obviously makes it difficult for many users to keep their computers busy. Have you considered raising the limit a bit for as long as the workunits are this small?
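For reference, a sketch of the server-side limit being discussed, using standard BOINC scheduler options in a project's config.xml; whether SiDock sets it exactly this way is my assumption:

<config>
    <!-- at most this many CPU tasks in progress per counted CPU -->
    <max_wus_in_progress>2</max_wus_in_progress>
    <!-- hosts are counted with at most this many CPUs (BOINC's default) -->
    <max_ncpus>64</max_ncpus>
</config>

Raising max_wus_in_progress while the workunits stay small would let clients hold a deeper queue without changing anything else. |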
9)
Message boards :
News :
SiDock@home September Sailing
(Message 1159)
Posted 15 Sep 2021 by xii5ku Post: Michael H.W. Weber wrote: Please take a look at these guidelines which my team colleague Yoyo has written down.
This guide is about keeping the server responsive, not so much about keeping the hosts utilized. (One central point of the guide is to reduce the number of tasks in progress. But high utilization of contributor hosts ultimately requires a high number of tasks in progress.) |
10)
Message boards :
Number crunching :
Deadline is too short
(Message 905)
Posted 11 May 2021 by xii5ku Post: Aurum wrote: This 2 day deadline causes all SD WUs to Run High Priority. This is very annoying since it interferes with any other BOINC project one wants to run. Sometimes it even prevents a GPU from having a CPU. Requires a lot of babysitting to constantly suspend some of the SD WUs so BOINC will behave a little better.
Maybe so. But why should this concern the SiDock admins? The deadlines should be those which suit the scientific requirements and the server limitations. If you want to donate to two projects at the same time with one and the same computer, and uphold a strictly fixed resource allocation between these two projects on this computer, then it naturally is on you to configure the computer accordingly. Now, boinc-client, as it is, is not designed to support this use case of a constant ratio of resource usage between simultaneously enabled projects.¹ Therefore your configuration needs to address this client design limitation. And there are two ways to do it:
1) The proper way: Simply run each project in a separate client instance. Done. This method easily scales even to more than 2 simultaneously active projects.
2) A workaround: This method needs only a single client instance but scales only to two projects, and is foremost applicable if one project is CPU-only and the other uses GPU+CPU. Pick one of the projects (preferably a GPU+CPU project) and define via app_config.xml that its applications use only, say, 0.01 CPUs; a sketch follows at the end of this post. Then lower the global CPU usage in the client such that you don't end up with an overcommitment.
[Or 3) request this use case to be supported in boinc-client and wait for it to be implemented by somebody.]
––––––––
¹) boinc-client is designed to support the use case of time-averaged resource allocation between projects. This design allows it to react to events like server outages in a largely unattended mode of operation.
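Appendix: a minimal sketch of the app_config.xml from way 2. The file goes into the GPU project's directory; the app name below is hypothetical, substitute the project's real application name:

<app_config>
    <app>
        <!-- hypothetical name; look up the project's real app name -->
        <name>some_gpu_app</name>
        <gpu_versions>
            <!-- run one task per GPU -->
            <gpu_usage>1.0</gpu_usage>
            <!-- claim next to no CPU, so the CPU-only project keeps all cores -->
            <cpu_usage>0.01</cpu_usage>
        </gpu_versions>
    </app>
</app_config>

The client picks the file up after a 'read config files' command or a restart. |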
11)
Message boards :
Number crunching :
Resource Zero
(Message 885)
Posted 4 May 2021 by xii5ku Post: The resource share does not influence the project server's scheduler directly. It really just influences the client's behaviour — whether it requests work from the respective project, and how much. The project server does not care about resource share; it only responds to the client's work requests. (That's how I understand it.) |
12)
Message boards :
Number crunching :
Formula Boinc 2021: Sprint calendar
(Message 726)
Posted 22 Mar 2021 by xii5ku Post: Thumbs up to the SiDock@home team for continuous availability of workunits, scheduler, validator, server file space... This sprint on short(?) notice went really well, IMO. |
13)
Message boards :
Number crunching :
Deadline is too short
(Message 701)
Posted 21 Mar 2021 by xii5ku Post: Aurum wrote: xii5ku wrote: To run a GPU project and a CPU project concurrently, a single client can be used.
Very few projects can run at optimum performance with less than a dedicated CPU to go with each GPU.
Indeed. Therefore, the number of CPUs to be used by BOINC needs to be configured accordingly. When the GPU project is defined by the user to appear to use almost no CPU time, contrary to its actual CPU time demand, then the total number of CPUs to be used by BOINC obviously needs to be equal to the number of CPUs to be used by the CPU-only project alone.
An example: There is an 8-core 2-GPU computer. Each GPU task wants 1 core. Additionally, a CPU-only project shall run in the same client, with 1 core used by each task of that project. That is, it is desired to run 2 GPU tasks and 6 other tasks the whole time on this computer. A possible implementation (a sketch of this configuration follows at the end of this post):
– Tell the client that GPU tasks use only 0.1 logical CPUs, or even less to be sure.
– Tell the client to use only 6 of the 8 logical CPUs.
Result: The client will always keep 6 tasks of the CPU-only project running, and 2 tasks of the GPU project. If the client-side scheduling priority of the GPU project gets very high, it will still launch no more than 2 GPU tasks at once, since there are just 2 GPUs in this example host after all. The other way around, if the scheduling priority of the CPU-only project gets very high, the client will still launch no more than 6 of these tasks because it is allowed to use only 6 logical CPUs; plus, the client will still launch an additional 2 GPU tasks because it was told that these GPU tasks won't cut into CPU time to the detriment of the other project.
Another example: Same, but just the CPU-only project runs in BOINC; the GPU project is Folding@home, which has its own client. The implementation:
– Set the BOINC client to use only 6 of the 8 cores.
Third example: Let's go back to the case of the CPU project and the GPU project both being BOINC projects. An alternative implementation to the initial example:
– Run two BOINC client instances.
– Attach one client to the GPU project and let it freely use all resources that it needs.
– Attach the other client to the CPU-only project but let it use only 6 out of 8 cores.
Aurum wrote: xii5ku wrote: (Separate clients for concurrent projects are still better. This way you can set different workqueue parameters per project.)
I will never run multiple clients because it's a waste of time and effort.
Let's say you want a 2 hours deep buffer for the CPU project and an 18 hours deep buffer for the GPU project. This is straightforward to implement if you use separate clients. Another example: You have just one computer at your disposal. (OK, bad example, you have several – but most users have just one or a couple.) You would like it to run SiDock on half of its cores, and TN-Grid on the other half, for as long as you desire. Again, this is straightforward to implement by means of separate clients.
Speaking of partitioning a computer by means of concurrently running BOINC clients (going off-topic to the subject of the thread, but on-topic to the use of multiple BOINC clients for finer grained workqueue control): Somebody wants to donate the entire computer time of a 256-threaded host to SiDock. Yet SiDock currently has a limit of tasks in progress of 2 * active_logical_CPUs, combined with counting only up to 64 active logical CPUs.
Possible solutions:
a) The donor could ask the project admins to maintain a separate limit of tasks in progress for this host. Whether or not this is feasible at SiDock specifically, I don't know.
b) The donor could partition the physical 256-threaded host into four virtual 64-threaded hosts and be on his way. This can be implemented with four operating system instances on the host, each running one BOINC client, or just as well with four BOINC clients within one operating system instance. Whether or not the gains are worth the time and effort is of course in the eye of the individual user.
Aurum wrote: It's also the main tool of bunkering to cheat and get more WUs than one deserves.
On the topic of how many tasks in progress a host "deserves": Let's say there is a host which is at risk of losing its internet connection for 10 hours. For example, the host is attached to an unreliable home internet connection, and the owner can correct connection losses only by resetting the cable modem, and only before leaving home for the day job and after returning from it. Does, or doesn't, this host "deserve" a 10...12 hours deep work buffer?
I add that my own opinion is that the tool of multiple client instances per physical host needs to be used responsibly. But the same goes for the default of a single client instance per host. Public BOINC projects obviously rely on the majority of their contributors to set up and maintain their clients and hosts reasonably.
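Appendix: a sketch of the first example's configuration (8 cores, 2 GPUs). The app name is hypothetical; the element names are standard BOINC client configuration. In the GPU project's directory, app_config.xml:

<app_config>
    <app>
        <!-- hypothetical name; use the GPU project's real app name -->
        <name>gpu_app</name>
        <gpu_versions>
            <!-- run one task per GPU -->
            <gpu_usage>1.0</gpu_usage>
            <!-- "GPU tasks use only 0.1 logical CPUs" -->
            <cpu_usage>0.1</cpu_usage>
        </gpu_versions>
    </app>
</app_config>

And in the client's data directory, global_prefs_override.xml, telling the client to use only 6 of the 8 logical CPUs:

<global_preferences>
    <!-- 6 of 8 logical CPUs = 75 % -->
    <max_ncpus_pct>75.0</max_ncpus_pct>
</global_preferences> |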
14)
Message boards :
Number crunching :
Temporarily failed upload // HTTP error: SSL connect error
(Message 698)
Posted 21 Mar 2021 by xii5ku Post: This is most certainly due to the ongoing contest (announced on Thursday, started Friday, finishes Monday). As far as I can tell, such SSL errors no longer occur or are rare now. I suppose now that task run times are quite a bit longer, there is less traffic between clients and server. |
15)
Message boards :
Number crunching :
Formula Boinc 2021: Sprint calendar
(Message 696)
Posted 21 Mar 2021 by xii5ku Post: Aurum wrote: With a 24-hour deadline on SiDock WUs this is the best project to spot bunkering.
No, it is harder to spot bunkering here, because bunkers can only hold one day's worth of results, i.e. can only be small in comparison to baseline output. Aurum wrote: Bunkerers start out with a big flash in the pan then trail off as the race progresses.
There are more use cases of bunkering. Tactically much more relevant is bunkering towards the end of a contest, in contrast to bunkering towards the start. |
16)
Message boards :
News :
Fourth target (corona_RdRp_v1)
(Message 695)
Posted 21 Mar 2021 by xii5ku Post: On server_status.php, completion status of corona_RdRp_v1 dropped from almost 100 % to 48.7 %. Just a mistake, or did plans for this target change? |
17)
Message boards :
Number crunching :
Suspicious Host
(Message 663)
Posted 20 Mar 2021 by xii5ku Post: xii5ku wrote: If the wrapper claims that build\cmdock.exe was started but exited immediately, could this be due to a malware protection software [...]
Furthermore, is the wrapper checking the exit status of cmdock.exe?
--------
Computer 3396 produces 100 % invalid results (but fetches work and reports results at a low rate). |
18)
Message boards :
Number crunching :
Deadline is too short
(Message 662)
Posted 20 Mar 2021 by xii5ku Post: To run a GPU project and a CPU project concurrently, a single client can be used. Just tell the client via app_config that the GPU application requires next to no CPU time. (Separate clients for concurrent projects are still better. This way you can set different workqueue parameters per project.) |
19)
Message boards :
Number crunching :
Suspicious Host
(Message 661)
Posted 20 Mar 2021 by xii5ku Post: Example stderr.txt of this host:
<core_client_version>7.16.11</core_client_version>
<![CDATA[
<stderr_txt>
09:57:29 (1856): wrapper (7.17.26016): starting
09:57:29 (1856): wrapper: running build\cmdock.exe (-r target.prm -p "C:\ProgramData\BOINC\slots\3\data\scripts\dock.prm" -f htvs.ptc -i ligands.sdf -o docking_out)
09:57:30 (1856): build\cmdock.exe exited; CPU time 0.000000
09:57:30 (1856): called boinc_finish(0)
</stderr_txt>
]]>
If the wrapper claims that build\cmdock.exe was started but exited immediately, could this be due to a malware protection software preventing execution or preventing creation of required files, such as DLLs? |
20)
Message boards :
Number crunching :
Something wrong with the validator?
(Message 510)
Posted 2 Feb 2021 by xii5ku Post: cpalmer wrote: cpalmer wrote: Quite a few of my tasks are being validated within an hour, but others are still pending validation after 4 days.
And most of my tasks that are still pending validation haven't yet reached their deadline and were also given out to a host that hasn't been online for quite a while.
The owner of these hosts is waiting for the upcoming contest to start. (BTW, there were and are other projects in which workunits can easily take several months until 2+ valid results come together, e.g. SETI@home or QuChemPedIA. It happens comparably quickly at SiDock, actually.) |