Testnet Push Announcement for 2/26

Hello,

We will be upgrading testnet with 086ed3b (https://github.com/libra/libra/commit/086ed3b917a61604ea9df235139cbb6f194d742b) this afternoon. testnet will be down briefly.

testnet push is complete.

Thank you for the heads up. https://libexplorer.com/ is back online and reindexing the 21st testnet revision.

Thanks for the update. We are still getting a lot of 429 errors on the accountState requests: about 20% of them, at random. It was the same with the previous release.

Thanks for flagging it. Can you share details of the error you were getting?

Thank you for your reply. This is the error we are getting on testnet when doing a setGetAccountStateRequest and using it in an updateToLatestLedger call. It happens at random and quite often:

{
  code: 1,
  details: "Received http2 header with status: 429",
  message: "1 CANCELLED: Received http2 header with status: 429",
  metadata: Metadata { _internal_repr: Object, flags: 0 },
  stack: "Error: 1 CANCELLED: Received http2 header with status: 429\n … at Object.onReceiveStatus"
}

It seems network related, but it only happens while doing an accountStateRequest.
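
For what it's worth, one way to soften the intermittent 429s is a small retry wrapper around the unary call. This is only a sketch in TypeScript: callWithRetry and the client method name in the usage comment are placeholder names for whatever your own generated gRPC stub exposes, not part of any Libra client API; the backoff-on-429 logic is the only point.

type UnaryCall<Req, Res> = (
  req: Req,
  cb: (err: (Error & { code?: number }) | null, res?: Res) => void,
) => void;

function callWithRetry<Req, Res>(
  call: UnaryCall<Req, Res>,
  req: Req,
  maxAttempts = 5,
  baseDelayMs = 500,
): Promise<Res> {
  return new Promise((resolve, reject) => {
    const attempt = (n: number) => {
      call(req, (err, res) => {
        if (!err) return resolve(res as Res);
        // The proxy answers with HTTP 429; the Node gRPC client surfaces that
        // as code 1 (CANCELLED) with "Received http2 header with status: 429".
        const rateLimited = /429/.test(err.message);
        if (rateLimited && n < maxAttempts) {
          // Exponential backoff: 500 ms, 1 s, 2 s, ...
          setTimeout(() => attempt(n + 1), baseDelayMs * 2 ** (n - 1));
        } else {
          reject(err);
        }
      });
    };
    attempt(1);
  });
}

// Usage with whatever stub you generated from the admission control proto, e.g.:
// callWithRetry((req, cb) => client.updateToLatestLedger(req, cb), request)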

It seems that the rate limiter is quite strict on the accountState requests. It appears to be applied per source IP, which is very limiting for us right now.

For instance, to learn that a transaction has been confirmed, the account state has to be polled, since there is no way to know in advance when a transaction will be confirmed. This is the same approach the transfer functionality in the Rust client uses.
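
To make the polling concrete, here is a minimal sketch of what we mean. It assumes a helper getAccountSequenceNumber that wraps the same updateToLatestLedger / GetAccountStateRequest call shown above (the helper name is ours, not a Libra API), and it assumes the sender's on-chain sequence number advances once the transaction commits, which is what the Rust client's transfer waits for.

// Placeholder: assume this wraps updateToLatestLedger with a
// GetAccountStateRequest and returns the account's current sequence number.
declare function getAccountSequenceNumber(address: string): Promise<number>;

async function waitForConfirmation(
  sender: string,
  submittedSequenceNumber: number,
  pollIntervalMs = 3000, // keep the poll well under the per-IP rate limit
  timeoutMs = 60000,
): Promise<boolean> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const current = await getAccountSequenceNumber(sender);
    // Confirmed once the on-chain sequence number has moved past the
    // sequence number the transaction was submitted with.
    if (current > submittedSequenceNumber) return true;
    await new Promise((r) => setTimeout(r, pollIntervalMs));
  }
  return false; // timed out; the transaction may still be committed later
}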

The same is true for the Rust client when running against the testnet. If an accountState request is made a couple of times in a row, the following error is displayed: [ERROR] Error getting latest account state: grpc-status: Internal, grpc-message: "Unexpected compression flag: 60". This is probably also the rate limiter: 60 is the ASCII code for "<", so the client is most likely being handed an HTML error page from the proxy instead of a gRPC frame.

It seems that the getTransactionsRequest poll which is running in the background is causing the rate limiter to kill other calls. We do not know if this behaviour is intended. Of course, running a full node would solve our rate limiter problems.

Checked with the team: this is a known rate limit (30 requests per minute) set in the haproxy config.
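
Given that limit, a client-side throttle that shares one budget across every call (including the background getTransactionsRequest poll) keeps a single IP under 30 requests per minute. A minimal sketch in TypeScript; none of this is part of a Libra client library, and the call in the usage comment reuses the placeholder names from the earlier retry sketch:

class RateLimiter {
  private nextSlot = 0;

  constructor(private minGapMs: number) {}

  // Serializes work so that at most one call starts per minGapMs.
  async schedule<T>(fn: () => Promise<T>): Promise<T> {
    const now = Date.now();
    const start = Math.max(now, this.nextSlot);
    this.nextSlot = start + this.minGapMs;
    await new Promise((r) => setTimeout(r, start - now));
    return fn();
  }
}

// 30 requests per minute => at most one request every 2 seconds.
const limiter = new RateLimiter(2000);

// Route both foreground and background calls through the same limiter, e.g.:
// limiter.schedule(() => callWithRetry((req, cb) => client.updateToLatestLedger(req, cb), req));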

Hello, I am confident in this coin. I hope everyone benefits from it, including me, as I am testing it. God is in command and everything will work out. The sky is the limit!!