Bible Pay


  • Rob Andrews
  • Administrator

    • 4097


    • 97
    • June 05, 2017, 08:09:04 PM
    • Patmos, Island Of
Quote:
    Updated the node.

    Don't worry about my questions from above, I'll just google it. Thanks.

    UPDATE:
    The node stopped again.

Yeah, I ran it under valgrind and now it died a couple of lines before the same place.  I checked in 1093c; please grab that one now, LOL!

So far I think 1093c got past it; I've been running for 30 minutes now.

Anyway, I do see a sanc mnsync issue on my 2nd sanc.  But let me research that a little further: our chainparams might not be calling for all parts of sancs to start until block 5000, or it could be that we only have 5 sancs and 3 of them are only pre-enabled (i.e., not a supermajority).  I'm thinking it is one of these two things; I'll take a look.



  • Rob Andrews
  • Administrator

    • 4097


    • 97
    • June 05, 2017, 08:09:04 PM
    • Patmos, Island Of
Quote from: Rob Andrews
    Yeah, I ran it under valgrind and now it died a couple of lines before the same place.  I checked in 1093c; please grab that one now, LOL!

    So far I think 1093c got past it; I've been running for 30 minutes now.

    Anyway, I do see a sanc mnsync issue on my 2nd sanc.  But let me research that a little further: our chainparams might not be calling for all parts of sancs to start until block 5000, or it could be that we only have 5 sancs and 3 of them are only pre-enabled (i.e., not a supermajority).  I'm thinking it is one of these two things; I'll take a look.

It's not our chainparams; our masternodes are fully enabled at block 202 in testnet.
I think it's the fact that none of the 5 pre-enabled masternodes has been up long enough; too many people have crashed and restarted overnight.
Let's give it a good run and see if our masternodes start communicating over the next 8 hours or so; then we'll regroup and check mnsync status.
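
If anyone wants to keep an eye on it in the meantime, here is a minimal watch-loop sketch (assuming a biblepay-cli binary on the PATH and the Dash-style 'mnsync status' RPC; the binary name and -testnet flag are assumptions, so adjust for your own setup):

Code:
# Poll 'mnsync status' until AssetID reaches 999 (fully synced).
import json
import subprocess
import time

def mnsync_status():
    out = subprocess.run(
        ["biblepay-cli", "-testnet", "mnsync", "status"],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)

while True:
    status = mnsync_status()
    print(status.get("AssetName"), status.get("AssetID"))
    if status.get("AssetID") == 999:   # fully synced
        break
    time.sleep(60)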



  • Rob Andrews
  • Administrator

    • 4097


    • 97
    • June 05, 2017, 08:09:04 PM
    • Patmos, Island Of
Quote from: Rob Andrews
    It's not our chainparams; our masternodes are fully enabled at block 202 in testnet.
    I think it's the fact that none of the 5 pre-enabled masternodes has been up long enough; too many people have crashed and restarted overnight.
    Let's give it a good run and see if our masternodes start communicating over the next 8 hours or so; then we'll regroup and check mnsync status.

Just to accelerate our Sanc sync in testnet: anyone who has 0 PS-compatible masternodes showing in Tools | Information ("Total Number of Masternodes (PS Compatible 0/0)"), please rm blocks, rm chainstate and resync.  I didn't have to delete the *.dat files; I just resynced, and then all 5 of the masternodes showed up.  Once that happens, I think our other masternodes will start syncing mnpayments again, and then 'mnsync status' will move to 999 for everyone more quickly.

In my case, my dev machine had the correct MN list and my Vultr machine didn't; I resynced it, and now both agree.
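
For anyone who wants to script the resync, here is a rough sketch of the steps above (the data directory path and binary names are assumptions for a default Linux install, so point them at your own setup):

Code:
# Stop the daemon, remove only blocks/ and chainstate/ (leave wallet.dat and
# the other *.dat files alone), then restart and let it resync.
import pathlib
import shutil
import subprocess
import time

DATADIR = pathlib.Path.home() / ".biblepaycore" / "testnet3"   # assumed layout

subprocess.run(["biblepay-cli", "-testnet", "stop"], check=False)
time.sleep(15)   # give the daemon time to flush and exit

for sub in ("blocks", "chainstate"):
    target = DATADIR / sub
    if target.exists():
        shutil.rmtree(target)

subprocess.run(["biblepayd", "-testnet", "-daemon"], check=True)
# Once resynced, Tools | Information should show the PS-compatible
# masternodes again and 'mnsync status' should head toward 999.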



  • T-Mike
  • Sr. Member

    • 375


    • 2
    • February 06, 2018, 06:12:58 PM
Quote from: Rob Andrews
    Just to accelerate our Sanc sync in testnet: anyone who has 0 PS-compatible masternodes showing in Tools | Information ("Total Number of Masternodes (PS Compatible 0/0)"), please rm blocks, rm chainstate and resync.  I didn't have to delete the *.dat files; I just resynced, and then all 5 of the masternodes showed up.  Once that happens, I think our other masternodes will start syncing mnpayments again, and then 'mnsync status' will move to 999 for everyone more quickly.

    In my case, my dev machine had the correct MN list and my Vultr machine didn't; I resynced it, and now both agree.

My node is synced now! Hope it won't stop anymore.


  • jaapgvk
  • Hero Member

    • 558


    • 31
    • September 01, 2017, 08:02:57 PM
    • Netherlands
Could someone send me 500,000 tBBP so I can set up a new masternode?

yaxDh4ioa8bg3QWzDYeycFyuvt9yxi8xWJ


  • T-Mike
  • Sr. Member

    • 375


    • 2
    • February 06, 2018, 06:12:58 PM
Quote from: jaapgvk
    Could someone send me 500,000 tBBP so I can set up a new masternode?

    yaxDh4ioa8bg3QWzDYeycFyuvt9yxi8xWJ

Sent.


  • Rob Andrews
  • Administrator

    • 4097


    • 97
    • June 05, 2017, 08:09:04 PM
    • Patmos, Island Of
Alright, Windows 1093 is out there; now let's try to increase our Sanc count to ensure all sancs sync to 999.


  • jaapgvk
  • Hero Member

    • 558


    • 31
    • September 01, 2017, 08:02:57 PM
    • Netherlands
Quote from: Rob Andrews
    Alright, Windows 1093 is out there; now let's try to increase our Sanc count to ensure all sancs sync to 999.

Just fired up my sanctuary again. It synced nicely up to 999 and is PRE-ENABLED now. I also installed the latest Windows wallet.

One (minor) thing: the 'setgenerate 1' option makes the wallet hash at the old speed again (not 1%) on both the Linux and Windows wallets. I think this was also the case with the previous version.
So, to be clear: with the current wallet, when I do 'setgenerate 1 true' on my quad-core, 4-thread system, I'm using about 25% of my CPU.
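
A quick back-of-the-envelope check of that figure (plain arithmetic, nothing wallet-specific): one unthrottled mining thread on a 4-thread CPU works out to roughly 25% total usage, which is exactly what I'm seeing, so it really does look like full per-thread speed rather than the ~1% behaviour.

Code:
# One flat-out mining thread on a 4-thread CPU shows up as roughly 25%
# total usage, matching the observation above.
mining_threads = 1      # 'setgenerate 1'
logical_cores = 4       # quad-core, 4-thread system
print(f"~{100 * mining_threads / logical_cores:.0f}% expected CPU usage")   # ~25%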

(And T-Mike: thanks for the tBBP!)


  • togoshigekata
  • Hero Member

    • 527


    • 31
    • September 01, 2017, 10:21:10 AM
    • USA
Rob, whatever you fixed in the latest commit fixed the issue I had on one of my Linux sanctuaries.
With the folders and .dat files deleted, I couldn't get it to reindex; it would be stuck at block 0 for a little bit and then crash. It works now!


  • T-Mike
  • Sr. Member

    • 375


    • 2
    • February 06, 2018, 06:12:58 PM
OK, 3 nodes up and running. I'll set up some more tomorrow.


  • Rob Andrews
  • Administrator

    • 4097


    • 97
    • June 05, 2017, 08:09:04 PM
    • Patmos, Island Of
Quote from: togoshigekata
    Rob, whatever you fixed in the latest commit fixed the issue I had on one of my Linux sanctuaries.
    With the folders and .dat files deleted, I couldn't get it to reindex; it would be stuck at block 0 for a little bit and then crash. It works now!

Great!

Yes, we had at least 2 bugs, and I'm a little happier when I can see the reason for them instead of programming around the issue until we drop.

The first bug, the reason we couldn't sync from 0, was a 2-digit rounding error in the old contract.  I found out that the grand total magnitude with 25 researchers, rounded to 2 decimal places, was 1000.03, which created an attempted overpayment; due to the grandfather rule we synced over that block, but that "covered up" the problem.  Now the system shoots for a 998 magnitude and uses a 3-digit scale, so that appears to have solved the syncing issue permanently.
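
To make the rounding point concrete, here is a small illustration with made-up magnitudes (the 1000.03 figure came from the real testnet contract; these numbers are only for show):

Code:
# 25 made-up researcher magnitudes that sum to exactly 1000, but whose
# 2-decimal roundings sum to more than 1000 -- the overpayment that the
# grandfather rule papered over and that broke syncing from block 0.
shares = [40.006] * 24 + [1000 - 24 * 40.006]       # exact total: 1000
rounded_2 = [round(s, 2) for s in shares]
print(sum(shares), sum(rounded_2))                  # ~1000.0 vs ~1000.10

# The fix: target a 998 grand total and keep 3 decimal digits, so even the
# worst-case rounding error (25 * 0.0005 = 0.0125) can never reach 1000.
shares_998 = [s * 998 / 1000 for s in shares]
rounded_3 = [round(s, 3) for s in shares_998]
print(sum(rounded_3))                               # ~998.0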

I have a lot of code coming today, so we won't be in this lull long.  I've been working on the integrated integrity feature.  I truly think that if we can reconcile the Rosetta work with UTXOs, we will be the first coin to have trustable PODC.  PODC that exceeds the righteousness of the Pharisees and scribes (just kidding).  No, PODC that exceeds the trust of POS, because POS is based on UTXOs (unspent outputs).

So with greater integrity comes a greater workload for the controller wallet and the sancs, and that means more propensity for clerical (or should I say technical) errors in the sancs.  For example, what if a sanctuary fails to verify one task out of 100 due to a Vultr network TCP error?  I would hate for that to throw the entire contract off (meaning our SanctuaryQuorum would fail and no one would be paid).  And that would happen if one unverified task lowered a researcher's task integrity level from 100 to 91.123 and other sancs felt differently about various researchers.  The other component is UTXOWeight; I'm not as worried about that one, because it will always verify correctly, but it needs to be established that the UTXOWeight window must be exact.  For example, if one researcher sent 2 UTXOs during 8 hours of PODC updates within a 24-hour period, then the Sanc creating the contract must have an exact start/end time snapped into the grid as well.
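
On the start/end-time point, here is a sketch of what snapping the window to a fixed grid could look like (the 24-hour grid size and UTC anchoring are assumptions, just to illustrate that every sanc derives identical boundaries from any timestamp inside the window):

Code:
# Snap any timestamp to fixed 24-hour UTC boundaries so every sanc that
# evaluates the same PODC day agrees on the exact start and end times.
import datetime

GRID = datetime.timedelta(hours=24)   # assumed window size
EPOCH = datetime.datetime(1970, 1, 1, tzinfo=datetime.timezone.utc)

def snap_window(ts):
    start = EPOCH + GRID * ((ts - EPOCH) // GRID)
    return start, start + GRID

ts = datetime.datetime(2018, 2, 9, 13, 37, tzinfo=datetime.timezone.utc)
print(snap_window(ts))   # 2018-02-09 00:00 UTC .. 2018-02-10 00:00 UTC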

To handle this, I'm creating a UTXO Weight breaks chart.  Researchers will receive a percentage reward based on the UTXO stake amount inside the PODC update, taken from a chart with relatively large breaks.  Something like 1-50,000 BBP is in the chart (it ends at a 50,000 BBP max so as not to give much advantage to whales), with a break every 10,000.  This means only 6 allowable reward levels: 0, .20, .40, .60, .80, 1.00.  This way the contracts will tend to "jive" across all sancs.  I'm doing the same thing with the task integrity system: the validated tasks as compared to the researcher's total tasks will snap to a grid, so that the user receives a percentage result based on .20 breaks.  I think what happens in the end is that everyone is assessed with 3 values per day that comprise one's magnitude: UTXOWeight (0-100%), TaskWeight (0-100%) and RosettaRAC (current RAC from BOINC).  We multiply UTXOWeight * TaskWeight * RosettaRAC to arrive at your magnitude while preprocessing the file.  These levels will allow us to write an exportable Excel report with trustable integrity per day, with actual provable details per researcher.  (If, for example, Rosetta's SQL DBA is being held up at gunpoint after one of our timestamps, it will disrupt the integrity of the affected researchers due to timestamp manipulation.)
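
A minimal sketch of the snap-to-grid idea (the break sizes, the 50,000 BBP cap and the UTXOWeight * TaskWeight * RosettaRAC product are from the paragraph above; the function names and the exact floor/round choices are placeholders, not the real code):

Code:
def utxo_weight(stake_bbp):
    """Snap the UTXO stake to 10,000 BBP breaks, capped at 50,000 BBP:
    0, .20, .40, .60, .80, 1.00."""
    capped = min(max(stake_bbp, 0.0), 50_000.0)
    return (capped // 10_000) * 0.20

def task_weight(verified_tasks, total_tasks):
    """Snap the verified-task ratio to .20 breaks, so one dropped
    verification (say, a Vultr TCP hiccup) cannot desync the quorum."""
    if total_tasks == 0:
        return 0.0
    return round((verified_tasks / total_tasks) / 0.20) * 0.20

def magnitude(stake_bbp, verified, total, rosetta_rac):
    return utxo_weight(stake_bbp) * task_weight(verified, total) * rosetta_rac

# Example: 30,000 BBP staked, 99 of 100 tasks verified, RAC of 280
print(magnitude(30_000, 99, 100, 280))   # 0.60 * 1.00 * 280 -> roughly 168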

Finally, I am adding sporks to enable UTXOWeight and TaskWeight as distinct features.  This allows us to shut down those two features and revert to plain-vanilla PODC in case something goes haywire in prod.  The most likely failure I can think of is not that UTXO "blows up", but that the task validation system malfunctions, possibly because the BOINC servers are down while Rosetta is up.  I'm thinking that if something in the public interface changes, for example if BOINC ships emergency updates that break compatibility with BiblePay, we can shut off TaskWeight with a spork and still survive in our old plain-vanilla PODC mode (daily RAC rewards) until that other piece is fixed.  So we would have more granular survival levels: plain PODC; PODC + UTXO + TaskAudits; or POBH mining only; etc.
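
And a sketch of the spork-gated fallback (the spork names here are placeholders I made up; the point is just that either feature can be switched off independently without touching plain-vanilla PODC):

Code:
# If a feature's spork is off, treat its weight as 1.0 so payments fall
# back to plain RAC-based PODC for that component.
SPORKS = {
    "SPORK_PODC_UTXO_WEIGHT_ENABLED": True,    # placeholder names
    "SPORK_PODC_TASK_WEIGHT_ENABLED": True,
}

def effective_weights(utxo_weight, task_weight):
    u = utxo_weight if SPORKS["SPORK_PODC_UTXO_WEIGHT_ENABLED"] else 1.0
    t = task_weight if SPORKS["SPORK_PODC_TASK_WEIGHT_ENABLED"] else 1.0
    return u, t

# Example: BOINC task auditing breaks, so TaskWeight is sporked off and
# researchers are paid on UTXOWeight * RAC alone.
SPORKS["SPORK_PODC_TASK_WEIGHT_ENABLED"] = False
u, t = effective_weights(0.60, 0.0)
print(u, t, u * t * 280)    # 0.6 1.0 -> ~168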




  • orbis
  • Full Member

    • 215


    • 7
    • February 08, 2018, 04:37:14 PM
Quote from: Rob Andrews
    ...
    Otherwise, require proof of every task, every task start time, and a UTXO.  So for the small fish in the block (usually < 100 RAC or so) we could let them in.  I wouldn't mind programming that in, especially for usability in third-world countries.
    ...
Rob, speaking of the small fish :)
My Nexus 4 testing is going really well. Just to give you an idea, my RAC on that device is 280 ;) Of course, it runs almost 24/7.
https://boincstats.com/en/stats/-1/host/detail/333426255/lastDays
Oh, and about the data consumption: in those 6 days it used approx. 1 GB.


  • Rob Andrews
  • Administrator

    • 4097


    • 97
    • June 05, 2017, 08:09:04 PM
    • Patmos, Island Of
Quote from: orbis
    Rob, speaking of the small fish :)
    My Nexus 4 testing is going really well. Just to give you an idea, my RAC on that device is 280 ;) Of course, it runs almost 24/7.
    https://boincstats.com/en/stats/-1/host/detail/333426255/lastDays
    Oh, and about the data consumption: in those 6 days it used approx. 1 GB.

Ok, thanks, let me take a look at what the sanctuary can find out about the host.  We should definitely consider allowing tablets and phones without any extra work from the controller. 


  • orbis
  • Full Member

    • 215


    • 7
    • February 08, 2018, 04:37:14 PM
Thanks. And a small detail for the future: I think it would be fair for the BiblePay team's country on Rosetta to be "International", not "Canada". And why is it Canada? :)


  • T-Mike
  • Sr. Member

    • 375


    • 2
    • February 06, 2018, 06:12:58 PM
I didn't start the other nodes today because it looks like there will be an update soon.