Good job so far. I have a question:
1 of the 3 sancs that I have reports a different health output:
"votes": 1,
"required_votes": 3,
"last_superblock": 14985,
"next_superblock": 15190,
"next_superblock_triggered": true,
"Healthy": true,
"GSC_Voted_In": [b]false[/b]
masternode outputs:
{
"626a61b0dfa151374bb42f1c432853efacb1742292318f9389c121f51b3310e4": "1"
}
-------------------------
The other 2 of 3 report this:
"votes": 5,
"required_votes": 3,
"last_superblock": 14985,
"next_superblock": 15190,
"next_superblock_triggered": true,
"Healthy": true,
"GSC_Voted_In": true
Is that normal behaviour?
Thanks for the testing. Let me explain a little more about health; this is a relatively long explanation, but I'll try to make it succinct.
So, about the fact that we see "1" positive vote on your first node: from a high level, the answer is that it's "probably OK", but we should do a little more investigation to try to find out what's going on.
On a side note, the underlying governance-obj hash is the native Dash hash for the object, while the PAM hash is the hash of the payments and addresses in the GSC contract.
The node will be able to recover by the time the superblock hits using various methods: it will attempt to sync first (mnsync status, govobjs), then it will attempt to unpack the GSC contract manually (it can do this since it actually runs the same code as the server side), and finally, before it fails, it will still follow the normal superblock rules (i.e., mark the block as good if it's a node that's out of sync). But it's still worth going a little further with this one, to see if we have any gov-obj data sync errors.
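If you want to nudge that first recovery path along manually, here is a sketch (I'm assuming a Dash-style 'mnsync reset' RPC is available since the sync code is Dash-derived, and 'biblepay-cli' stands in for whatever CLI binary your setup uses):
[code]
# Sketch: force the node to restart its masternode/governance sync
# (mnsync reset is the Dash-style RPC; assumed to be available here)
biblepay-cli mnsync reset
# Then watch the sync progress until it finishes
biblepay-cli mnsync status
[/code]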
So let's try to isolate the missing govobj hash.
If you run 'exec health', you will see the height of the next superblock (15805 here).
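For example (a sketch; 'biblepay-cli' is my assumption for the CLI binary name, adjust to your setup):
[code]
# Sketch: read the next superblock height on the 1-vote node
biblepay-cli exec health
# Note the "next_superblock" value in the JSON output (15805 here)
[/code]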
Then go to a healthy node and type 'gobject listwild all triggers 15805' and you should see one or more contract triggers (preferably one).
Then copy the governance-object hash (this is the very first hash on the page) to notepad.
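Put together, that step looks something like this (a sketch, with the same assumed CLI binary name):
[code]
# Sketch: on a HEALTHY node, list the contract triggers for that height
biblepay-cli gobject listwild all triggers 15805
# The very first hash shown for each trigger is the governance-object
# hash; copy it for the next step
[/code]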
Then run 'cat debug.log | grep <hash>' on the node that only has one vote, and see if an error occurred in syncing that hash (it might say something like "Exception: failed to sync governance-object nnnnnn", but not exactly this wording).
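Concretely, something like this (a sketch; the datadir path is an assumption for a default Linux install, and the placeholder stands for the hash you copied above):
[code]
# Sketch: search the 1-vote node's debug log for the copied hash
# (datadir path ~/.biblepay is an assumption; substitute your own)
grep -i "<governance-object-hash>" ~/.biblepay/debug.log
# Look for any exception / failed-to-sync lines mentioning that object
[/code]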
If we can find that exception, then I can track down why that node did not fully sync. Also let us know if it recovered.
Also check whether 'mnsync status' shows that the node is out of sync (i.e., the row that reads 'IsSynced'). If that shows false, the node didn't get all of its gobjects.
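For example (a sketch; again assuming a 'biblepay-cli'-style binary, while the 'IsSynced' field itself comes from the Dash-derived mnsync RPC):
[code]
# Sketch: check the sync state on the 1-vote node
biblepay-cli mnsync status
# "IsSynced": false in the JSON output would mean the node has not
# finished syncing, and likely did not receive all of its gobjects
[/code]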
Another possibility: since we're on testnet with 4 enabled sancs and 4 down, that node could still be trying to iterate through to the enabled nodes.
I think that if we are still having gobject sync problems during testing, they will most likely reveal themselves, as a lot of data will be passing back and forth.
Right now all 4 of mine are in sync, with 'exec health' reporting true and 4 votes, but I will check my logs to see if any sync errors have been flying around through the night.
(I know we need to shorten the deletion duration of gobjects in the next release; this will also cut down some of this spam and chattiness.)