Bible Pay

Recent Posts

81
Quote from: oncoapop

Again I notice that all the sancs appear to be "valid", although I know that only one of my sancs (45.62.239.200) is online and the other (64.180.194.238) is not (see below). Would this affect the quorum if we have zombie sancs?

Code: [Select]
telnet 45.62.239.200 40001
Trying 45.62.239.200...
Connected to 45.62.239.200.

telnet 64.180.194.238 40001
Trying 64.180.194.238...
telnet: Unable to connect to remote host: Connection refused

So I took a look at the "bottleneck" in TestNet (with PoSe not working), i.e., in layman's terms, sancs are not banning each other.
Also we have an issue in TestNet where LLMQs are failing.

So, the problem is that our LLMQ in TestNet is pointing to params that are for the wrong quorum.  (I did update them at one point, and triple-checked them, but Dash's testnet is bigger than ours, and that led me to believe we were using the RegTest quorum in TestNet while they were using 5_60 in TestNet; in reality, Dash requires a minimum of 7 masternodes in TestNet, and our RegTest params were only being used in RegTest, LOL.)  So it's good this was discovered; it makes sense and explains why our LLMQ quorums have all failed so far.
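
As a rough sanity check (this assumes the masternode/quorum RPCs we inherit from Dash 0.14 are wired up the same way in our build), you can compare the number of enabled sancs on testnet against the quorum's minimum size and see whether any quorum has actually been committed yet:

Code: [Select]
# how many sancs are registered/enabled on testnet (needs to meet the quorum's minimum size)
biblepay-cli -testnet masternode count

# most recently committed quorums, if any; an empty list means none have formed yet
biblepay-cli -testnet quorum list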

So, we will need a mandatory upgrade in TestNet (I need to force a protocol version increase) in order to release this next test.

So MIP, let's build a new testnet release.

We will announce asap.

EDIT:  Please remember to erase your debug.log before starting the new version.
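
A minimal example of that (paths assumed here; the default testnet datadir may differ on your setup):

Code: [Select]
# stop the old wallet, clear the testnet debug log, then start the new binary
biblepay-cli -testnet stop
rm ~/.biblepaycore/testnet3/debug.log
./biblepayd -testnet -daemon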

82
Quote from: oncoapop

Dear Rob,

Given that you are stressing the importance, in testnet, of the sancs testing LLMQ (kindly remind me what this is again?), I have (surprisingly) resurrected one of my initial DIP3 sancs (just using the old biblepay.conf and masternode.conf files left on the server, WITHOUT the controlling wallet, which I may have destroyed...), so I am hoping that it will be okay.

Let's test this to ensure that, as far as we are able with a fraction of the network in testnet, we roll out a stable product in mainnet. Could you kindly help us assist you by providing a checklist (in simple layman's terms, please, thanks :) ) so that we can test specific aspects and report on them.

Blessings,
oncoapop


Code: [Select]
{
  "outpoint": "51a7d0cdb93fb3377d274f1b448af3c260618eb23c145a52c6c2fc081192b4dc-1",
  "service": "45.62.239.200:40001",
  "proTxHash": "3fdfa35533185427856e07d7a966714d118c7434994b5d6a349c11558e79e290",
  "collateralHash": "51a7d0cdb93fb3377d274f1b448af3c260618eb23c145a52c6c2fc081192b4dc",
  "collateralIndex": 1,
  "dmnState": {
    "service": "45.62.239.200:40001",
    "registeredHeight": 65387,
    "lastPaidHeight": 171297,
    "PoSePenalty": 0,
    "PoSeRevivedHeight": -1,
    "PoSeBanHeight": -1,
    "revocationReason": 0,
    "ownerAddress": "yhDJCUZq19CVLiuDa4wEoY4PxWg17bZuTx",
    "votingAddress": "yhDJCUZq19CVLiuDa4wEoY4PxWg17bZuTx",
    "payoutAddress": "yRKdex8fFcDyjqx618Cz4hsQcQFjp55jRg",
    "pubKeyOperator": "05f58c1b79e898cf31a0375ded7a8e5bb6b0e5aacb48dc79f7c87e38eb0533bdf85c610cee5cf94b79c425c20f423cc4"
  },
  "state": "READY",
  "status": "Ready"
}

Great on adding another sanc!

Yes, I will make a list; but know that I have no intention of moving towards prod until we fully believe testnet and chainlocks have been tested inside and out :).

There are so many things we need to test against prod after it transitions to deterministic sancs in 0.14.  We definitely need a list, and it will make me feel better when we not only test chainlocks in testnet, but also LLMQs against the deterministic sancs in prod.

We will see the GSC gov data and votes in testnet after prod upgrades to deterministic.  LLMQ will also be enabled in .13 in prod, but it won't be enforced like it is in testnet.
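
For reference, once that happens the governance side should be visible with the usual Dash-style gobject RPC (assuming it is unchanged in our build):

Code: [Select]
# list current governance objects (proposals/triggers) and their vote tallies
biblepay-cli gobject list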

83
Quote from: oncoapop

Again I notice that all the sancs appear to be "valid", although I know that only one of my sancs (45.62.239.200) is online and the other (64.180.194.238) is not (see below). Would this affect the quorum if we have zombie sancs?

Code: [Select]
telnet 45.62.239.200 40001
Trying 45.62.239.200...
Connected to 45.62.239.200.

telnet 64.180.194.238 40001
Trying 64.180.194.238...
telnet: Unable to connect to remote host: Connection refused

Well actually, PoSe gets enabled in tandem with LLMQ (long-living masternode quorums).  So technically, when the 4 good sancs start forming quorums, they will start PoSe-penalizing the other sancs.  This will cause the other sancs' PoSe scores to increase.  Once they reach 100%, they will be banned and won't get paid.

The reason you see 0 for every sanc's PoSe score is that LLMQs haven't been forming yet.
I did ensure both the DIP and the LLMQ heights were activated in testnet, so next I need to do some deeper analysis; I'll look today.
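
If anyone wants to check the same things on their own node, this is roughly what I look at (a sketch; it assumes the DIP activation flags and the protx RPC from Dash 0.14 are exposed the same way here):

Code: [Select]
# the dip* deployments should report "active" once the activation heights have passed
biblepay-cli -testnet getblockchaininfo

# detailed list of registered sancs, including each one's PoSePenalty / PoSeBanHeight
biblepay-cli -testnet protx list registered true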

Let me know if you need more info on LLMQ beyond this; these two links should give you most of the info:
https://blog.dash.org/mitigating-51-attacks-with-llmq-based-chainlocks-7266aa648ec9

This too:
https://github.com/dashpay/dips/blob/master/dip-0006.md
84
Production Proposals / Compassion - July 2019
« Last post by Rob Andrews on July 26, 2019, 09:37:07 pm »
We sponsor 55 Compassion children at $2,090.00 per month.
Seeking 1 mil.

85
Production Proposals / Cameroon One Installment 4 2019
« Last post by Rob Andrews on July 26, 2019, 09:16:17 pm »
We have 11 children with Cameroon One; we have donated approximately $2,393 in 2019, and the annual amount due is $4,026.00.

Requesting 2 million.

86
Production Proposals / July payroll (March) 2019
« Last post by Rob Andrews on July 26, 2019, 09:05:51 pm »
Commits on Mar 31, 2019
  1.4.0.7d-TestNet mandatory upgrade (biblepay)
  1.4.0.7c-Testnet Mandatory Upgrade (biblepay)
  1.4.0.7b-Testnet mandatory upgrade (biblepay)
  1.4.0.7-TestNet Mandatory Upgrade (biblepay)

Commits on Mar 30, 2019
  Merge branch 'master' into develop
  1.4.0.6b-TestNet Mandatory Upgrade (biblepay)
  1.4.0.6-TestNet Mandatory Upgrade (biblepay)

Commits on Mar 29, 2019
  1.4.0.5-TestNet Mandatory Upgrade (biblepay)

Commits on Mar 28, 2019
  1.4.0.4-TestNet Mandatory Upgrade (biblepay)

Commits on Mar 27, 2019
  1.4.0.3-TestNet Mandatory Upgrade (biblepay)
  Merge branch 'master' into develop

Commits on Mar 26, 2019
  1.4.0.2-TestNet Mandatory Upgrade (biblepay)

Commits on Mar 25, 2019
  1.4.0.1d-Testnet RC (biblepay)

Commits on Mar 24, 2019
  1.4.0.1c-TestNet RC (biblepay)
  1.4.0.1b-Testnet RC (biblepay)
  1.4.0.1-Testnet Release Candidate (biblepay)

Commits on Mar 22, 2019
  1.2.0.1o-Genesis (biblepay)

Commits on Mar 20, 2019
  1.2.0.1n-Genesis (biblepay)
  1.2.0.1m-Genesis (biblepay)

Commits on Mar 18, 2019
  1.2.0.1l-Genesis (biblepay)

Commits on Mar 16, 2019
  1.2.0.1k-Genesis (biblepay)

Commits on Mar 12, 2019
  1.2.0.1j-Genesis (biblepay)

Commits on Mar 10, 2019
  1.2.0.1i-Genesis (biblepay)

Commits on Mar 6, 2019
  1.2.0.1-Genesis (biblepay)

Commits on Mar 5, 2019
  1.2.0.1g-Genesis (biblepay)
  1.2.0.1f-Genesis (biblepay)
  1.2.0.1e-Genesis (biblepay)

Commits on Mar 4, 2019
  1.2.0.1d-Genesis (biblepay)
  1.2.0.1c-Genesis (biblepay)
  1.2.0.1b-Genesis (biblepay)
  1.2.0.1-Genesis (biblepay)


120 hours = $4,800, capping at 3 mil.

87
Dear Rob,

Given that you are stressing the importance, in testnet, of the sancs testing LLMQ (kindly remind me what this is again?), I have (surprisingly) resurrected one of my initial DIP3 sancs (just using the old biblepay.conf and masternode.conf files left on the server, WITHOUT the controlling wallet, which I may have destroyed...), so I am hoping that it will be okay.

Let's test this to ensure that, as far as we are able with a fraction of the network in testnet, we roll out a stable product in mainnet. Could you kindly help us assist you by providing a checklist (in simple layman's terms, please, thanks :) ) so that we can test specific aspects and report on them.

Blessings,
oncoapop


Code: [Select]
{
  "outpoint": "51a7d0cdb93fb3377d274f1b448af3c260618eb23c145a52c6c2fc081192b4dc-1",
  "service": "45.62.239.200:40001",
  "proTxHash": "3fdfa35533185427856e07d7a966714d118c7434994b5d6a349c11558e79e290",
  "collateralHash": "51a7d0cdb93fb3377d274f1b448af3c260618eb23c145a52c6c2fc081192b4dc",
  "collateralIndex": 1,
  "dmnState": {
    "service": "45.62.239.200:40001",
    "registeredHeight": 65387,
    "lastPaidHeight": 171297,
    "PoSePenalty": 0,
    "PoSeRevivedHeight": -1,
    "PoSeBanHeight": -1,
    "revocationReason": 0,
    "ownerAddress": "yhDJCUZq19CVLiuDa4wEoY4PxWg17bZuTx",
    "votingAddress": "yhDJCUZq19CVLiuDa4wEoY4PxWg17bZuTx",
    "payoutAddress": "yRKdex8fFcDyjqx618Cz4hsQcQFjp55jRg",
    "pubKeyOperator": "05f58c1b79e898cf31a0375ded7a8e5bb6b0e5aacb48dc79f7c87e38eb0533bdf85c610cee5cf94b79c425c20f423cc4"
  },
  "state": "READY",
  "status": "Ready"
}

Again I notice that all the sancs appear to be "valid", although I know that only one of my sancs (45.62.239.200) is online and the other (64.180.194.238) is not (see below). Would this affect the quorum if we have zombie sancs?

Code: [Select]
telnet 45.62.239.200 40001
Trying 45.62.239.200...
Connected to 45.62.239.200.

telnet 64.180.194.238 40001
Trying 64.180.194.238...
telnet: Unable to connect to remote host: Connection refused
88
As a side tip for anyone who is upgrading: remember you can run 'exec reassesschains' if you end up at a lower height than us due to LLMQ errors (it worked for me).

In the latest version I increased the LLMQ start height to 170,000.

Starting up LLMQ is much trickier (and more dangerous) than I expected.

Since LLMQ drives chainlocks, the wallet is going to throw a bad-block error, mark the block as dirty, and put the wallet in a non-recoverable state if it finds any block greater than 170,000 that is not covered by a quorum.

What this means is that either the network has a lot of sancs and a healthy quorum environment, or it fails absolutely in a nightmare scenario.

This is obviously a decision Dash made to ensure there are no exceptions to the quorums once they are up and running. 

So in the current state of testnet, we would need to try to bring one more sanc on before block 170,000 and see if a quorum forms; otherwise we need to keep increasing the LLMQ height.
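
If you want to watch for that yourself, something like this should show whether quorums are forming as we cross the start height (assuming the Dash 0.14 quorum RPCs came across unchanged in our build):

Code: [Select]
# any quorums committed in recent blocks?
biblepay-cli -testnet quorum list 10

# live view of the current DKG session (run this on a sanc)
biblepay-cli -testnet quorum dkgstatus

# once quorums and chainlocks are working, this should return the latest chainlocked block
biblepay-cli -testnet getbestchainlock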

What frightens me is what happens if we start the quorums and then take down 70% of the nodes.  I think that means we will need to regroup and bring the sancs back online.

But from what I see, if, let's say, we lose those sanc VMs and recreate them all, then the sigs will be new and the old quorums will be invalid (and I think that means a chain rollback).  We will cross that bridge when we come to it; our prod environment should be OK, as it will almost always have more than 100 reliable sancs, so theoretically the quorums will never fail.

Quote from: oncoapop

Dear Rob,

Given that you are stressing the importance, in testnet, of the sancs testing LLMQ (kindly remind me what this is again?), I have (surprisingly) resurrected one of my initial DIP3 sancs (just using the old biblepay.conf and masternode.conf files left on the server, WITHOUT the controlling wallet, which I may have destroyed...), so I am hoping that it will be okay.

Let's test this to ensure that, as far as we are able with a fraction of the network in testnet, we roll out a stable product in mainnet. Could you kindly help us assist you by providing a checklist (in simple layman's terms, please, thanks :) ) so that we can test specific aspects and report on them.

Blessings,
oncoapop


Code: [Select]
{
  "outpoint": "51a7d0cdb93fb3377d274f1b448af3c260618eb23c145a52c6c2fc081192b4dc-1",
  "service": "45.62.239.200:40001",
  "proTxHash": "3fdfa35533185427856e07d7a966714d118c7434994b5d6a349c11558e79e290",
  "collateralHash": "51a7d0cdb93fb3377d274f1b448af3c260618eb23c145a52c6c2fc081192b4dc",
  "collateralIndex": 1,
  "dmnState": {
    "service": "45.62.239.200:40001",
    "registeredHeight": 65387,
    "lastPaidHeight": 171297,
    "PoSePenalty": 0,
    "PoSeRevivedHeight": -1,
    "PoSeBanHeight": -1,
    "revocationReason": 0,
    "ownerAddress": "yhDJCUZq19CVLiuDa4wEoY4PxWg17bZuTx",
    "votingAddress": "yhDJCUZq19CVLiuDa4wEoY4PxWg17bZuTx",
    "payoutAddress": "yRKdex8fFcDyjqx618Cz4hsQcQFjp55jRg",
    "pubKeyOperator": "05f58c1b79e898cf31a0375ded7a8e5bb6b0e5aacb48dc79f7c87e38eb0533bdf85c610cee5cf94b79c425c20f423cc4"
  },
  "state": "READY",
  "status": "Ready"
}

89
As a side tip for anyone who is upgrading: remember you can run 'exec reassesschains' if you end up at a lower height than us due to LLMQ errors (it worked for me).

In the latest version I increased the LLMQ start height to 170,000.

Starting up LLMQ is much trickier (and more dangerous) than I expected.

Since LLMQ drives chainlocks, the wallet is going to throw a bad-block error, mark the block as dirty, and put the wallet in a non-recoverable state if it finds any block greater than 170,000 that is not covered by a quorum.

What this means is that either the network has a lot of sancs and a healthy quorum environment, or it fails absolutely in a nightmare scenario.

This is obviously a decision Dash made to ensure there are no exceptions to the quorums once they are up and running. 

So in the current state of testnet, we would need to try to bring one more sanc on before block 170,000 and see if a quorum forms; otherwise we need to keep increasing the LLMQ height.

What frightens me is what happens if we start the quorums and then take down 70% of the nodes.  I think that means we will need to regroup and bring the sancs back online.

But from what I see, if, let's say, we lose those sanc VMs and recreate them all, then the sigs will be new and the old quorums will be invalid (and I think that means a chain rollback).  We will cross that bridge when we come to it; our prod environment should be OK, as it will almost always have more than 100 reliable sancs, so theoretically the quorums will never fail.


90
cli -version
BiblePay Core RPC client version 1.4.6.0

cli getblockhash 168736
c317c3de030df7c215f39b8568e9065b71a9e8e16af8d1fff5a09809e1ad18cd

leaderboard
Code: [Select]
{
 "Prominence": "Details",
 "CAMEROON-ONE: yUNSEjjtC9pdeHp4spswdFWh1npfV5Jvqe [N/A], Pts: 0.00": "0.00%",
 "CAMEROON-ONE: yfqGyVvuyidYytq5o2QvN1VdVeXtH9Lrkt [oncoapop1], Pts: 2667.00": "0.67%",
 "HEALING: yUNSEjjtC9pdeHp4spswdFWh1npfV5Jvqe [N/A], Pts: 0.00": "0.00%",
 "HEALING: yfqGyVvuyidYytq5o2QvN1VdVeXtH9Lrkt [oncoapop1], Pts: 111670149.00": "0.00%",
 "POG: yUNSEjjtC9pdeHp4spswdFWh1npfV5Jvqe [N/A], Pts: 596289318.00": "65.00%",
 "POG: yfqGyVvuyidYytq5o2QvN1VdVeXtH9Lrkt [oncoapop1], Pts: 0.00": "0.00%",
 "Healing": "Diary Entries",
 "oncoapop1": "Prayed for the salvation of the saints.",
 "Prominence": "Totals",
 "ALL: yUNSEjjtC9pdeHp4spswdFWh1npfV5Jvqe [N/A], Pts: 596289318.00, Reward: 537521.670": "65.000%",
 "ALL: yfqGyVvuyidYytq5o2QvN1VdVeXtH9Lrkt [oncoapop1], Pts: 111672816.00, Reward: 5553.030": "0.670%"
}

(My VPS is too small for the GUI, so I have to rely on the CLI, sorry.)
cli exec analyze 168736 oncoapop1
Code: [Select]
{
 "Command": "analyze",
 "Campaign": "Totals",
 "0": "CAMEROON-ONE|yfqGyVvuyidYytq5o2QvN1VdVeXtH9Lrkt|2667|0.00671502|oncoapop1|2668",
 "1": "HEALING|yfqGyVvuyidYytq5o2QvN1VdVeXtH9Lrkt|111670149|0.00000000|oncoapop1|111670150",
 "2": "POG|yfqGyVvuyidYytq5o2QvN1VdVeXtH9Lrkt|0|0.00000000|oncoapop1|596289319",
 "3": "",
 "Campaign": "Points",
 "0": "User: yfqGyVvuyidYytq5o2QvN1VdVeXtH9Lrkt, Diary: , Height: 168733.00, TXID: fba1f5a89c130c8a1b20b822261c3bca93c732ced1b96c1e2d692c9f99bdec09, NickName: oncoapop1, Points: 2666.67, Campaign: CAMEROON-ONE, CoinAge: 56159643.2778, Donation: 0.0000, UserTotal: 2666.67",
 "1": "User: yfqGyVvuyidYytq5o2QvN1VdVeXtH9Lrkt, Diary: Prayed for the salvation of the saints., Height: 168733.00, TXID: 82758735ba9c7f2b63a477f15f1bdb98b245ae1e61e20ca1fef8b982e572af91, NickName: oncoapop1, Points: 111670149.47, Campaign: HEALING, CoinAge: 111670149.4654, Donation: 0.0000, UserTotal: 111672816.13",
 "2": ""
}


Ok, I see you in the leaderboard now.  (See pic).  Both of us look OK.

Yes, I see your exec analyze 168736.  The issue earlier was that my machine was stopped on a bad LLMQ block before your block, so it was actually me that was out.  (I just checked in a new patch that allows us to sync from zero with LLMQ activated.)

Looks like we still need another sanc.  I think MIP had one; MIP, is yours down?
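
If anyone else ends up on a different height after upgrading, a quick way to check is to compare a block hash at a common height against another node (as oncoapop did above with getblockhash), and run the reassess command if they differ:

Code: [Select]
# on both nodes: note the height, then compare the hash at a height both have
biblepay-cli -testnet getblockcount
biblepay-cli -testnet getblockhash 168736

# if the hashes differ, let the wallet re-evaluate its chains
biblepay-cli -testnet exec reassesschains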
