Connecting to Local Libra Network

I successfully got a local node up on Ubuntu 18.04 on an EC2 instance after some help on the Libra GitHub; a sample of the setup script is below:

#!/bin/bash
# Install build prerequisites
sudo apt update
sudo apt upgrade -y
sudo apt install -y unzip
sudo apt install -y cmake
sudo apt install -y zlib1g-dev
# Clone Libra, run its dev setup script, then start the local network via libra_swarm
git clone https://github.com/libra/libra.git
cd libra
./scripts/dev_setup.sh
cargo run -p libra_swarm -- -s

Now, the CLI documentation mentions that you can do something similar to this: cargo run -p client --bin client -- [OPTIONS] --host <host> --validator_set_file <validator_set_file>

My goal is to have a localized version of the testnet running somewhere, and to be able to connect my own clients to it and execute my own smart contracts, hence some questions for clarification on the CLI documentation below:

Where can I find all of the ports that the local network is listening on? I read elsewhere in the documentation that it is open on TCP/30307, and running the local network gives me this log: Connected to validator at: localhost:34497. Are there any others? I want to make sure I do not have to mess around with Security Groups too much (and eventually I'd like to move this over to Fargate or EKS).

Would it be simple enough to run the example from the documentation like "cargo run -p client --bin client -- --host 10.99.0.1 --port 30307"? Would that work from inside the same VPC, or from the outside via the public IPv4 address? And where can I find other values like --validator_set_file?

More along the same vein, how do you reconnect to a local network you created? I noticed the libra_swarm command creates different validator files each time (I forget the exact term; I want to say Root CA, but that's the wrong blockchain).

Thanks in advance, all!


Hi @jonrau1, if you simply want to run a network locally, it's probably easiest to just run libra_swarm with multiple nodes (check the help of libra_swarm to specify the number of nodes). If you want to run on multiple machines, you'll need to generate the config files and share them with all machines (passing them in via the parameters). To see the AC port for a client to connect to, look in the config file for config.admission_control.admission_control_service_port


Hi @jonrau1!
When you run libra_swarm it prints out local paths to generated configs.
Something like:

Faucet account created in file: "/var/folders/xd//temp_faucet_keys"
Base directory containing logs and configs: "<local_path>"

You can check the config file in the latter directory:

cat <generated_path>/*node.config.toml | grep admission_control_service_port | awk '{print $3}'

It will give you the port of the local validator. Now you can use it to connect via a separate client process:

cargo run -p client --bin client -- --host 10.99.0.1 --port <port> --validator_set_file <generated_path>/trusted_peers.config.toml

Note: in order to mint you'll also need to pass the faucet key file path via the -m option.
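For example, a minimal sketch of the full invocation (assuming -m is the short form of --faucet_key_file_path, and <faucet_key_path> is wherever libra_swarm wrote the temp_faucet_keys file):

cargo run -p client --bin client -- --host 10.99.0.1 --port <port> --validator_set_file <generated_path>/trusted_peers.config.toml -m <faucet_key_path>/temp_faucet_keys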
Hope this helps!

Hey there @phoenix - this helps a ton!

Just to confirm my own thoughts: the Config File (Validator Set File) and Faucet Key File just need to be generated once, and if shared with remote clients they would all need the same ones to transact on the localized private testnet?

And one last thing: just want to confirm that I do not need to pass --faucet_server in the cargo run -p client command, as it will interpolate that value from --host?

Thanks again!

@kph thanks for the response! I think between you and @phoenix I should be well sorted out for this


Just to confirm my own thoughts: the Config File (Validator Set File) and Faucet Key File just need to be generated once, and if shared with remote clients they would all need the same ones to transact on the localized private testnet

Yes, validator_set_file is required to run a remote client. The faucet key file is only needed if you want to mint money (transfers should work without it).
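As a rough illustration once the client is connected (the exact command set is listed in the client's interactive help, so treat this as an assumed sketch):

account create
account mint 0 100
transfer 0 1 10
query balance 1

account mint is the step that needs the faucet key; the transfer and balance query should work without it.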

just want to confirm that I do not need to pass --faucet_server

Yes, in the setup described above you don't need to pass this option.

@phoenix @kph

Making some progress now. I stood up Swarm on another EC2 instance, copied out the *.node.config.toml and temp_faucet_keys (is that the correct Faucet Key File?) to S3, pulled those down into my Cloud9 IDE environment, amended the Security Groups, and ran this command:
cargo run -p client --bin client -- --host 10.99.2.202 --port 41363 --validator_set_file ./shortened.node.config.toml --faucet_key_file_path ./temp_faucet_keys

And now I have received this as an error message

 Finished dev [unoptimized + debuginfo] target(s) in 0.32s
     Running `target/debug/client --host 10.99.2.202 --port 41363 --validator_set_file ./8deeeaed65f0cd7484a9e4e5ac51fbac548f2f71299a05e000156031ca78fb9f.node.config.toml --faucet_key_file_path ./temp_faucet_keys`
C0624 21:52:46.926169 139935107980224 common/crash_handler/src/lib.rs:68] reason = 'Unable to parse Config: Error { inner: ErrorInner { kind: Custom, line: None, col: 0, message: "missing field `peers`", key: [] } }'
details = '''[no extra details]. Thread panicked at file 'src/libcore/result.rs' at line 999'''
backtrace = '''
stack backtrace:
   0: backtrace::backtrace::libunwind::trace::h615fdae148e786a4 (0x56014567401d)
             at /home/ubuntu/.cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.31/src/backtrace/libunwind.rs:88
      backtrace::backtrace::trace_unsynchronized::he6ba61b93eed5d16
             at /home/ubuntu/.cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.31/src/backtrace/mod.rs:66
   1: backtrace::backtrace::trace::h803a14062477e393 (0x560145673f9e)
             at /home/ubuntu/.cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.31/src/backtrace/mod.rs:53
   2: backtrace::capture::Backtrace::create::h75239dd6daa71b72 (0x56014566739d)
             at /home/ubuntu/.cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.31/src/capture.rs:163
   3: backtrace::capture::Backtrace::new::hb6c80e0e8091570f (0x560145667294)
             at /home/ubuntu/.cargo/registry/src/github.com-1ecc6299db9ec823/backtrace-0.3.31/src/capture.rs:126
   4: crash_handler::handle_panic::h22346bde87ac60e7 (0x560143f6b7c9)
             at common/crash_handler/src/lib.rs:61
   5: crash_handler::setup_panic_handler::{{closure}}::h496b18a7b97a8816 (0x560143f6c118)
             at common/crash_handler/src/lib.rs:31
   6: rust_panic_with_hook (0x5601456c8769)
             at src/libstd/panicking.rs:478
   7: continue_panic_fmt (0x5601456c8202)
             at src/libstd/panicking.rs:381
   8: rust_begin_unwind (0x5601456c80e6)
   9: panic_fmt (0x5601456e05ed)
             at src/libcore/panicking.rs:85
  10: core::result::unwrap_failed::h6a87b7a57017422b (0x560144a1f0bd)
             at /rustc/50a0defd5a93523067ef239936cc2e0755220904/src/libcore/macros.rs:18
  11: core::result::Result<T,E>::expect::hc80bde1d3245a7cf (0x560144a20464)
             at /rustc/50a0defd5a93523067ef239936cc2e0755220904/src/libcore/result.rs:827
  12: config::trusted_peers::TrustedPeersConfig::parse::h998e0b2475004943 (0x560144a2b092)
             at config/src/trusted_peers.rs:160
  13: config::trusted_peers::TrustedPeersConfig::load_config::h5dcab4140673966d (0x5601441f5961)
             at /home/ubuntu/environment/libra/config/src/trusted_peers.rs:99
  14: client::client_proxy::ClientProxy::new::hd183a0a5b0a886d3 (0x5601441c5ceb)
             at client/src/client_proxy.rs:104
  15: client::main::ha5a5bbaa482bc190 (0x560143f46469)
             at client/src/main.rs:58
  16: std::rt::lang_start::{{closure}}::h2f019a63211e175c (0x560143f6afc2)
             at /rustc/50a0defd5a93523067ef239936cc2e0755220904/src/libstd/rt.rs:64
  17: {{closure}} (0x5601456c8083)
             at src/libstd/rt.rs:49
      do_call<closure,i32>
             at src/libstd/panicking.rs:293
  18: __rust_maybe_catch_panic (0x5601456cc26a)
             at src/libpanic_unwind/lib.rs:85
  19: try<i32,closure> (0x5601456c8b8d)
             at src/libstd/panicking.rs:272
      catch_unwind<closure,i32>
             at src/libstd/panic.rs:388
      lang_start_internal
             at src/libstd/rt.rs:48
  20: std::rt::lang_start::h1a6165737a7b6474 (0x560143f6af89)
             at /rustc/50a0defd5a93523067ef239936cc2e0755220904/src/libstd/rt.rs:64
  21: main (0x560143f48d4a)
  22: __libc_start_main (0x7f452d84bb97)
  23: _start (0x560143f3855a)
  24: <unknown> (0x0)'''

Any idea what to troubleshoot next for this?

Make sure you use the right file for validator_set_file. trusted_peers.config.toml should be generated in the target dir.
From the paste it looks like you used node.config.toml instead.
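A quick way to locate it on the node host (a sketch, assuming swarm wrote its generated configs under /tmp, as in the temp paths shown further down this thread):

find /tmp -name trusted_peers.config.toml 2>/dev/null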

@phoenix

Gotcha, I guess I misunderstood at first – I thought the validator_set_file was the longer node.config.toml

I replaced it and got a timeout error; I suppose it's something at the network level I should troubleshoot next?

AD-Fed-Admin:~/environment/libra (master) $ cargo run -p client --bin client -- --host 10.99.2.202 --port 41363 --validator_set_file ./trusted_peers.config.toml --faucet_key_file_path ./temp_faucet_keys
    Finished dev [unoptimized + debuginfo] target(s) in 0.58s
     Running `target/debug/client --host 10.99.2.202 --port 41363 --validator_set_file ./trusted_peers.config.toml --faucet_key_file_path ./temp_faucet_keys`
E0624 22:32:03.277087 139893671369664 client/src/client_proxy.rs:661] Failed to get account state from validator, error: RpcFailure(RpcStatus { status: DeadlineExceeded, details: Some("Deadline Exceeded") })
Not able to connect to validator at 10.99.2.202:41363, error RpcFailure(RpcStatus { status: DeadlineExceeded, details: Some("Deadline Exceeded") })

Yes, it means you can't access the target. Make sure that port is open.
As a sanity check, try to run the client locally.
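One quick way to test reachability from the client machine before digging into Security Groups (standard tooling, not part of the Libra CLI):

nc -zv 10.99.2.202 41363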

Confirmed I can run the client locally on the EC2 instance, generate accounts, and send Libra between them:

ubuntu@ip-10-99-2-202:~/libra$ cargo run -p client --bin client -- --host 10.99.2.202 --port 41363 --validator_set_file /tmp/.tmpZPoBWq/trusted_peers.config.toml --faucet_key_file_path /tmp/keypair.QdyGky2gxWpH/temp_faucet_keys
    Finished dev [unoptimized + debuginfo] target(s) in 0.97s
     Running `target/debug/client --host 10.99.2.202 --port 41363 --validator_set_file /tmp/.tmpZPoBWq/trusted_peers.config.toml --faucet_key_file_path /tmp/keypair.QdyGky2gxWpH/temp_faucet_keys`
Connected to validator at: 10.99.2.202:41363
usage: <command> <args>

Use the following commands:

Also found out I boneheaded my Security Group setup - I was referencing the wrong one :grimacing:

For final confirmation: all I need to copy across to my remote clients is trusted_peers.config.toml and (only if I am minting on the clients) temp_faucet_keys?

yep! Hope this helps

You have definitely helped a ton!

I did have one last set of questions (For now)

  1. How do you keep the local Libra network up without having to keep an SSH session open? I noticed that after I launch Swarm, if I close the SSH terminal I lose connectivity to the network. Is there another CLI command to keep it up and running, much like the main testnet?

  2. After exiting, how do I configure Swarm so I can re-instantiate the local Libra network I was previously on? I ran into an issue where I kept re-running Swarm and it ate up all of my local disk space on Cloud9.

Thanks again in advance

@jonrau1

  1. The easiest way would be to use something like screen: https://linuxize.com/post/how-to-use-linux-screen/ (a minimal sketch follows below), but you could also run libra_node directly (see the internals of libra_swarm to see how it spins up the nodes). If you build the binary, you could run it in the background easily enough.

  2. I’ll let phoenix answer this one. I can’t recall if we have an easy way to do this right now
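For the screen route in #1, a minimal sketch (plain screen usage, nothing Libra-specific):

screen -S libra
cargo run -p libra_swarm -- -s

Detach with Ctrl-A then D and the network keeps running; reattach later with screen -r libra.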

  1. To save some space, run libra_swarm with the -d option; it will disable detailed logging.
    libra_swarm is just a shortcut for bootstrapping a local cluster and generates a new db location every time. You can find the path to the storage files via:

cat <generated_path>/*node.config.toml | grep dir | tail -n 1 | awk '{print $3}'
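If leftover runs are what's eating the disk, a hedged cleanup sketch (assuming the temp directories follow the /tmp/.tmp* and /tmp/keypair.* patterns seen elsewhere in this thread, and that no swarm is currently running):

du -sh /tmp/.tmp* /tmp/keypair.*
rm -rf /tmp/.tmp* /tmp/keypair.*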

@phoenix & @kph

Thanks again to both of you for assisting. I am using screen for #1 but am working on building my own binary to have it run permanently in the background. As an aside, have you folks benchmarked performance for TPS and overall load? I believe the public testnet has 10 validators? That was one of the things I was interested in doing, but it doesn't look like you can run scripts from within the Libra CLI to initiate multiple account creations, transfers, and minting operations in parallel.

And @phoenix, for #2 I was able to pass the -d flag to greatly lessen the size; I also go into /tmp/ and remove the previous artifacts, which is not a huge administrative burden. When I cat out the DB location, is there a flag I can pass to Swarm to reconnect to it?

Thanks again folks!

Unfortunately libra_swarm doesn't have such an option for now. If you really want to, you can locally change this line (https://github.com/libra/libra/blob/master/config/config_builder/src/swarm_config.rs#L35) and specify a predefined path to the data dir.

@phoenix

Thanks for that info, I will add that as an option to my walkthrough on Github after I run through it a few times.

Before I actually start on another sub-project: I can already proxy connections out to the node via an AWS Network Load Balancer. Any idea if it would actually load balance if I specified a shared template.base.data_dir_path location for multiple nodes spun up on different hosts and changed the node.config.toml files to all have the same access port?

My idea would be to connect an EFS share to multiple EC2 instances in the same VPC, kick off Swarm on one of them manually, copy that DB out into EFS, kill Swarm, and then restart it everywhere with the changed template.base.data_dir_path in the swarm_config. I'd also modify the admission control service port for all of them, have the NLB handle all inbound connections, and put the nodes behind a NAT gateway in a private subnet to give them a little network isolation.

Of course if it’s not possible, then I won’t do that lol