Weekly Dev Update #58


Hey Y’all, 

The Loki 4.0.0 hardfork is fast approaching! It’s happening in approximately 24 hours, so if you haven’t upgraded your Service Node, Miner or Mining Pool, now is your chance. If you don’t update in time, you will be left on an alternate chain and won’t be able to talk to the majority of the network.

A full guide on how to update can be found here: https://loki.network/2019/07/12/hefty-heimdall-mandatory-upgrade-period/

Loki Core


Lokinet

If you’re on our Discord you might catch Jeff, the lead developer of LLARP, live streaming as he codes: https://www.twitch.tv/uguu25519.  He typically streams on Tuesday mornings, 9am-12pm Eastern (US) time.

What’s going on this week with Lokinet:

Lokinet is entering a feature freeze for an upcoming public release, and is undergoing heavy internal testing to see how the network performs under various types of load. We don’t have a fixed release date yet — it will depend on how testing goes this week, but look for one soon. The last several weeks of development have fixed a myriad of issues, big and small, and we think Lokinet will be ready for public testing soon. Hence, we have internally frozen the codebase* so that nothing new is added (just important fixes!) between now and the 0.5.0 release.

* There is one exception; see below.

Changelog:

New Pull Requests:


Loki Messenger Desktop 

Storage Server


Loki Blocks Onion Explorer 

The Loki block explorer has been expanded to show a number of new things, including checkpoints and their votes, and decommissioned or inactive nodes. 


Messenger Mobile (iOS and Android)


Thanks,  

Kee 

Hefty Heimdall Changes for Service Node Operators

There are a number of new and changing rules being implemented in the Loki Hefty Heimdall hardfork, and Service Node operators should make sure they’re aware of the new requirements.

The following changes will be implemented on July 24 at block height 321,467.

Changing Rules:

  • We have relaxed the Service Node deregistration rules in order to be more lenient on Service Nodes, particularly those that have previously demonstrated quality performance. 

New Rules

  • All Service Nodes must now run the Loki Storage Server to receive rewards.
  • All Service Nodes must now have an accessible Public IP address and must open ports for the Loki Storage Server.
  • There is now a penalty for Service Nodes that send their uptime proofs from different IP addresses.

More Detail:

Relaxed Deregistration: 

After reviewing feedback from Service Node operators on Discord and Github over the past 6 months, the Loki team proposed a number of relaxations (included in Hefty Heimdall) with regard to how Service Nodes are deregistered when they fail to meet the expected requirements. 

A few basic changes were made to implement the new system, including the introduction of a credit scheme and decommission/recommission transactions. The basic scheme is detailed below:

  • Each Service Node starts with 2 hours of credit.
  • Each Service Node earns 0.8 hours of credit per day of uninterrupted operation up to a maximum of 24 hours.
  • If a Service Node has been assessed to be no longer meeting the standards of the network (for example, not submitting uptime proofs) the quorum looks to see if they have any credit.
  • If the Service Node has 2 hours or more in credit, it will be decommissioned (instead of deregistered) for the amount of time it has in credit.
  • When a Service Node is decommissioned, its funds are not locked for 30 days but it is removed from the rewards pool and cannot participate in normal network functions.
  • If the Service Node starts sending uptime proofs again during the decommission period, the current quorum will detect this and submit a recommission transaction, which will reset the Service Node’s credit balance to zero and insert the decommissioned Service Node back into the rewards list at the bottom.
  • If, during the decommission period, the Service Node’s credit runs out before it can successfully submit an uptime proof, it will be deregistered.

TL;DR Service Nodes now have a longer grace period, which grows the longer your Service Node is up and performing well. A Service Node that has been running without interruption for 30 days can now be offline for up to 24 hours before it is deregistered and its funds are locked for 30 days.
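
To make the numbers concrete, here’s a small Python sketch of the scheme described above. This is purely illustrative – the names and structure are ours and don’t reflect the actual lokid implementation, which enforces these rules through the testing quorums.

```python
# Illustrative sketch of the Hefty Heimdall credit scheme described above.
# The numbers come straight from the rules: 2 hours of starting credit,
# 0.8 hours earned per day of uninterrupted uptime, capped at 24 hours.

INITIAL_CREDIT_HOURS = 2.0
CREDIT_PER_DAY_HOURS = 0.8
MAX_CREDIT_HOURS = 24.0
DECOMMISSION_THRESHOLD_HOURS = 2.0

def earned_credit(days_of_uninterrupted_uptime: float) -> float:
    """Credit a node has accumulated after running without interruption."""
    credit = INITIAL_CREDIT_HOURS + CREDIT_PER_DAY_HOURS * days_of_uninterrupted_uptime
    return min(credit, MAX_CREDIT_HOURS)

def quorum_decision(credit_hours: float) -> str:
    """What a testing quorum does with a node that stops submitting uptime proofs."""
    if credit_hours >= DECOMMISSION_THRESHOLD_HOURS:
        # Decommissioned: removed from the rewards pool for up to `credit_hours`,
        # but the stake is not locked unless the credit runs out first.
        return f"decommission for {credit_hours:.1f} hours"
    return "deregister (stake locked for 30 days)"

print(quorum_decision(earned_credit(30)))   # 30 days of uptime -> "decommission for 24.0 hours"
```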

Read the full details of the relaxed deregistration rules here: https://lokidocs.com/ServiceNodes/DeregistrationRules/


Loki Storage Server:

The Loki Storage Server is an application that exposes a public endpoint allowing Loki Messenger users to store and retrieve messages on your Service Node. It is required for Loki Messenger to function.

Hefty Heimdall versions of lokid running in Service Node mode will look for a running Loki Storage Server on your machine. If lokid does not find a running Storage Server, it will refuse to start. Lokid will also periodically check to see if the Loki Storage Server is still running – if it isn’t, lokid will stop broadcasting uptime proofs.  
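
To picture what this means in practice, here’s a rough sketch of that kind of liveness check. It’s a simplified illustration in Python – lokid itself is C++, and the check interval and exact mechanism below are our assumptions, not taken from the source.

```python
# Simplified illustration (not lokid source): withhold uptime proofs while the
# local Storage Server is unreachable. The interval and port probe are assumptions.
import socket
import time

STORAGE_SERVER_ADDR = ("127.0.0.1", 23023)   # 23023 is the default Storage Server port
CHECK_INTERVAL_SECONDS = 60

def storage_server_alive() -> bool:
    try:
        with socket.create_connection(STORAGE_SERVER_ADDR, timeout=5):
            return True
    except OSError:
        return False

def proof_loop(broadcast_uptime_proof):
    while True:
        if storage_server_alive():
            broadcast_uptime_proof()
        else:
            # Mirrors the behaviour described above: no Storage Server, no proofs.
            print("Storage Server unreachable; withholding uptime proof")
        time.sleep(CHECK_INTERVAL_SECONDS)
```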

You can easily manage lokid and the Loki Storage Server with the Loki Launcher, which sets up the required utilities. If you are an experienced system administrator, you can manage these utilities yourself – though we recommend you test your setup on testnet.

TL;DR You need to run the Loki Storage Server alongside lokid. Loki Launcher will do this automatically, otherwise binaries can be found here: https://github.com/loki-project/loki-storage-server


Public IP address and Open Ports:

All Service Nodes must now have a public IP address and open ports for lokid P2P and the Loki Storage Server. The default port for mainnet lokid P2P is 22022, and for the Loki Storage Server the default port is 23023.

If you are currently running your Service Node without any open ports or behind NAT, you will need to look into creating port forwarding rules in your router, and/or opening these ports in your firewall. If you are running a custom setup, you can change your default ports for Loki Storage Server and lokid P2P communications.
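
If you want a quick way to verify that your ports are actually reachable before the fork, a check along these lines works; run it from a machine outside your own network, and note that the IP address below is just a placeholder.

```python
# Quick reachability check for the default Service Node ports. Run this from a
# machine OUTSIDE your own network (the IP below is a placeholder).
import socket

NODE_PUBLIC_IP = "203.0.113.7"     # replace with your node's public IP
PORTS = {
    "lokid P2P": 22022,            # mainnet default
    "Loki Storage Server": 23023,  # mainnet default
}

for name, port in PORTS.items():
    try:
        with socket.create_connection((NODE_PUBLIC_IP, port), timeout=5):
            print(f"{name} ({port}): reachable")
    except OSError as err:
        print(f"{name} ({port}): NOT reachable ({err})")
```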

TL;DR You need to open ports in your firewall or router and have a public IP address.


IP Address Change Penalty: 

We understand that many Service Node operators in the community are running backup servers which submit uptime proofs for their Service Node. Most operators have set up these backup servers by running another lokid with the --service-node flag on a separate IP, but with the same Service Node key as their primary Service Node.

This setup creates a race between the two Service Nodes, which compete to send out the first uptime proof. Depending on the precise timing of when the separate lokids submit their uptime proofs, the relationship between the two Service Nodes can change. You may find one is master for a while, and then it switches. You may find the two Service Nodes swap on every announcement. 

This race condition will be a problem in Hefty Heimdall because of the inclusion of the Service Node IP address in each uptime proof. Every time the server sending the uptime proof “wins”, it changes the IP on which the network tries to reach the Service Node’s Storage Server, and of course, the backup won’t have the same messages stored as the primary server.

Without modifying the Storage Server code to create a syncing channel between a master and backup Service Node, this problem is difficult to solve. Instead, the Loki team has implemented a punishment for Service Nodes that submit uptime proofs from different IP addresses. Each time your IP address changes, you will be dropped to the bottom of the rewards list, losing your previous position. 
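
To make the penalty concrete, here’s a toy illustration of the bookkeeping involved: remember the last IP seen in a node’s uptime proof and, if the next proof arrives from a different address, drop the node to the back of the rewards list. This is our own sketch of the rule, not the actual consensus code.

```python
# Toy illustration of the IP change penalty (not the actual consensus code).
last_proof_ip = {}   # Service Node pubkey -> IP seen in its most recent uptime proof

def handle_uptime_proof(rewards_list, pubkey, proof_ip):
    previous_ip = last_proof_ip.get(pubkey)
    last_proof_ip[pubkey] = proof_ip
    if previous_ip is not None and previous_ip != proof_ip:
        # Penalty: the node loses its position and goes to the back of the queue.
        rewards_list.remove(pubkey)
        rewards_list.append(pubkey)

# Two lokids racing with the same key from different IPs will trigger this on
# roughly every alternating proof, so the node keeps losing its place in line.
rewards = ["SN_A", "SN_B", "SN_C"]
handle_uptime_proof(rewards, "SN_A", "1.2.3.4")
handle_uptime_proof(rewards, "SN_A", "5.6.7.8")
print(rewards)   # ['SN_B', 'SN_C', 'SN_A']
```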

From talking to Service Node operators, we found that one of the most common reasons they ran Service Node backups was because they didn’t have enough time to respond to outages on their Service Nodes. By addressing the core issue – which was the lack of forgiveness in the deregistration rules – we think there will be far fewer people wanting to run backup Service Nodes. 

Although this change does not explicitly prevent the running of backup Service Nodes, Service Node operators who choose to run backup servers should ensure that they only send uptime proofs from the backup Service Node when it’s clear that the master server is actually down, rather than just setting up two Service Nodes and having both submit proofs at competing times.

TL;DR Running two Loki Service Nodes with the same SN key on two machines with different IP addresses will likely lead to your Service Node being dropped to the bottom of the rewards list regularly.


We are really excited to see these changes go live on mainnet on July 24. The Loki team and Loki community are always looking for ways to improve the stability and usefulness of the Service Node network, while maintaining a simple user experience so the Service Node network can continue to grow.  

Weekly Dev Update #57

Hey Y’all, 

This week was particularly busy as we worked towards the final binaries release for the mandatory upgrade period. Thank you to everyone who jumped on testnet and set up nodes – with your help we were able to identify a host of bugs which were fixed in our final 4.0.3 release. 

If you are a Service Node operator you should upgrade your node to version 4.0.3. Instructions on how to do this can be found here: https://lokidocs.com/ServiceNodes/SNFullGuide/#updating-your-binaries 

Loki Core


Loki Launcher

The Loki Launcher is a Node.js package that will allow for the independent management of all the components required to run a full Service Node. This includes managing Lokinet, lokid, the Loki Storage Server and any other future applications we require. When Loki Service Nodes begin to route data and store messages for Lokinet and Loki Messenger, we’ll recommend that every single Service Node run the Loki Launcher.

What’s going on this week with Loki Launcher:

We released two versions of the Launcher this week for the 4.0.3 release. There were a lot of fixes for various crashes, as well as added support for running lokid versions 3 and up on mainnet. We also added extra startup checks to improve our status accuracy and made some minor user experience improvements.

Changelog:

  • Prequal additional configuration system for Storage Server
  • Update prequal ports to test
  • Rename port names, adjust prequal output
  • Abort launcher start up if Storage Server port is not open to the public
  • Handle “port is in use” errors better
  • Store stdout/stderr for launcher when backgrounding
  • Add *-server-port-check args
  • Wait for blockchain-rpc port to be open before considering start up a success (see the sketch after this list)
  • Temporarily collect Storage Server startup info for 10s in case of problems with output
  • Report if the launcher was already stopped when stopping internally
  • Make sure certain modes are only run with root
  • Activate Storage Server based on blockchain binary version
  • Only pass version 4 parameters when the blockchain binary is version 4 or above
  • Reminders to restart loki-launcher if mode has stopped it
  • Strip out launcher arguments from passing all the way to blockchain startup
  • Remove duplicate Storage Server port check when backgrounding
  • Put try/catch guard around process check to prevent intermittent launcher crashes
  • When running port test submit a disconnect message when done
  • Add unhandled exception logger
  • Improve HUP information format
  • Process blockchain stderr
  • Make sure blockchain rpc server is up before starting network/storage
  • Change Storage Server default port from 8080 to 23023
  • Up retries for getPublicIPv4
  • Untangle retry counter with various repos in download-binaries
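
To give a flavour of one of those changes, waiting for the blockchain RPC port before declaring startup a success, here’s a rough sketch of the idea. The real Launcher is written in Node.js; this Python version and its timeout values are purely for illustration.

```python
# Sketch of "wait for blockchain-rpc port to be open before considering start up
# a success". The real Launcher does this in Node.js; timeouts here are assumptions.
import socket
import time

def wait_for_port(host: str, port: int, timeout_seconds: int = 120) -> bool:
    deadline = time.time() + timeout_seconds
    while time.time() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2):
                return True            # RPC port answered: report startup as a success
        except OSError:
            time.sleep(1)              # not up yet, keep polling
    return False

if not wait_for_port("127.0.0.1", 22023):   # 22023 is the usual mainnet lokid RPC port
    raise SystemExit("lokid RPC port never opened; aborting launcher start up")
```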

Github Pulse: Excluding merges, 2 authors have pushed 41 commits to master and 41 commits to all branches. On master, 14 files have changed and there have been 590 additions and 196 deletions


Lokinet

If you’re on our Discord you might catch Jeff or Ryan, the developers of LLARP, live streaming as they code: https://www.twitch.tv/uguu25519, https://www.twitch.tv/neuroscr

What’s going on this week with Lokinet:

The configuration refactor finally reached a state close to stable. There were lots of macOS and FreeBSD fixes, and a huge internal change to using IPv6 on tunnel adapters to allow more connections to a hidden service or client.

Changelog:

Pull Requests:


Loki Messenger Desktop 

Storage Server


Loki Blocks Onion Explorer 

The Loki block explorer has been expanded to show a number of new things, including checkpoints and their votes and decommissioned or inactive nodes. 

  • Updated RPC changes around quorum state calls in 4.0.2 (replaced batch quorum calls with new quorum states call). 
  • Add checkpoint quorum display to quorum states page and to main index. 
  • Add display of a single obligations or checkpoint quorum, linked where appropriate.
  • Cut off quorum display after 20 nodes with a “+ 7 more ↪” that links to the full obligations quorum details.
  • Add a list of last three checkpoints to the main page (and a link to a page with up to 50 fully displayed). https://github.com/loki-project/loki-onion-blockchain-explorer/pull/1

Messenger Mobile (iOS and Android)


Thanks,  

Kee 

Hefty Heimdall Mandatory Upgrade Period

Hey Everyone!

It’s hardfork time again! The Hefty Heimdall upgrade period has begun, and that means everyone has approximately 12 days to upgrade to the latest versions of the Loki software before the hardfork on July 24. 

Below is a guide on how to prepare for the hardfork for each type of user in the Loki ecosystem: 

Wallet users

You guys don’t need to do anything yet. Over the course of the next two weeks we will be pushing updates to the Loki Desktop, iOS and Android wallets to upgrade them to the latest versions of the Loki software. 

Once these updates are released we encourage everyone to upgrade their wallet to the latest version, otherwise you will no longer be able to send transactions on the correct chain. 

Service Node Operators 

If you are using the Loki Launcher you can update to the latest binaries by using these commands: https://lokidocs.com/ServiceNodes/SNFullGuide/#updating-your-binaries.

For those more experienced with system administration and who aren’t using the Loki Launcher, you can upgrade your lokid binaries manually. Please note that you will also need to update to the latest Loki Storage Server. We have compiled a guide here: https://lokidocs.com/ServiceNodes/SNFullGuideLegacy/.

Pools

Pools do not need to enable RandomXL support until the hardfork on July 24 at block height 321,467; however, they should make sure they are ready to switch when the fork happens, and can look at these two reference implementations for changes they may need to introduce to their software.

A reference implementation of the cryptonote-nodejs-pool which supports the new RandomXL Loki hashing algorithm can be found here: https://github.com/jagerman/cryptonote-nodejs-pool/tree/randomx.

A reference implementation for node-cryptonight-hashing can be found here: https://github.com/jagerman/node-cryptonight-hashing/commits/master.

A list of the full Loki changes to RandomX can be found here: https://github.com/loki-project/loki-randomXL.

Miners 

Miners can continue to mine Loki using CN-Turtle; however, on July 24 at block height 321,467 they will need to switch to RandomXL, which is supported in the Xmrig mining software on its “evo” branch: https://github.com/xmrig/xmrig/tree/evo. Xmrig has not yet published release binaries with RandomXL support, and if they haven’t published them by the hardfork, we will build and distribute mining binaries from a fork of their repository. 

Exchanges 

Exchanges will need to update their lokid and wallet with the latest CLI binaries found here: https://github.com/loki-project/loki/releases/latest. We will reach out to exchanges individually with any additional instructions if required.

Happy Forking!

Your Service Node and Hefty Heimdall

Hefty Heimdall will be a big release for Service Nodes and will see them start to perform meaningful work in storing messages. We want to be clear about the level of service the Loki Service Node network will be enforcing, and we hope that by outlining these changes, we can prevent Service Nodes from being deregistered when we hardfork to Hefty Heimdall on July 24.

Here are some guidelines for Service Node operators on what to expect in the coming months.

Recommended Changes

We will be releasing the next version of the Loki Launcher before the hardfork – and we STRONGLY RECOMMEND that all Service Nodes update.

The Loki Launcher will have a number of components which will improve the user experience for Service Node operators and reduce the chances of being deregistered, including:

  • Managing the Loki Storage Server, lokid, and in the future Lokinet – starting up and restarting these applications if they crash.
  • Easy access to the lokid console for preparing Service Node registrations and running other commands.
  • One unified config file to manage all parts of the Loki Service Node software suite, which includes validation on startup to make sure everything makes sense.
  • Adding an installer which will grab the latest versions of Loki binaries for new Service Nodes.
  • Adding the SNbench utility which will test your node and give you a recommendation on whether your setup meets the requirements for each release.

New Requirements

The Hefty Heimdall release corresponds with the first version of the Loki Messenger. This means that all Service Nodes will be required to have the following software/hardware:

  • Running lokid with downloaded blockchain
  • Loki storage server
  • 15 GB of available space for blockchain storage (including any blockchain you have synced)
  • 3.5 GB of available space for Loki Messenger message storage (Loki Storage Server)
  • A public IP address and specified open ports

Two client-side checks have been enabled: a test which prevents lokid from starting if the specified public IP address and open port are uncontactable, and an ongoing check that stops the broadcast of uptime proofs if, at any point during operation, the local Loki Storage Server fails or shuts down.

Decentralised Testing

Hefty Heimdall will also enable a number of decentralised tests which will be run on Service Nodes by other Service Nodes. We will be enabling both blockchain storage and message storage tests. This means your node will be tested at random intervals by other Service Nodes to ensure it’s holding both a full copy of the blockchain, and storing all of the messages required by its swarm.

Initially, these tests will not be enforced through deregistration. But after collecting data on the effectiveness of the system, we will enable deregistration so that malicious nodes can be removed from the network.

Hefty Heimdall 4.0.0

Loki Messenger alpha and Checkpointing!

Today we are announcing the release of our next Loki hardfork, Hefty Heimdall. This hardfork will include a number of updates for both Loki Messenger and Loki Core, including:

  • Service Node Checkpointing
  • Loki Storage Server (Stores Loki Messenger messages on Service Nodes)
  • Internode testing (Blockchain and message storage)
  • Loki Messenger alpha release

The testnet binaries will be released on June 26, so you can start testing these changes in just a few weeks.

There will be a mandatory upgrade period starting July 10.

The Hefty Heimdall hardfork will happen on July 24, with Checkpointing being enabled but not enforced.

We will start enforcing Checkpointing on September 12, completely preventing double spends after 12 blocks of confirmation.
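
For the curious, enforced checkpointing boils down to a rule like the one sketched below: a node refuses to reorganise its chain past recently checkpointed history. This is a heavily simplified illustration of ours, not Loki Core code; the real implementation relies on Service Node quorums signing checkpoints, but the sketch shows the basic idea behind the 12-block finality claim.

```python
# Heavily simplified illustration of enforced checkpointing (not Loki Core code).
FINALITY_DEPTH = 12   # per the schedule above: double spends prevented after 12 confirmations

def reorg_allowed(chain_height: int, fork_height: int, latest_checkpoint_height: int) -> bool:
    """Accept an alternative chain only if it does not rewrite finalised history."""
    if chain_height - fork_height > FINALITY_DEPTH:
        return False                                 # too deep: past the finality window
    return fork_height > latest_checkpoint_height    # never rewind a checkpointed block

print(reorg_allowed(chain_height=321500, fork_height=321490, latest_checkpoint_height=321480))  # True
print(reorg_allowed(chain_height=321500, fork_height=321470, latest_checkpoint_height=321480))  # False
```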

As you may have already heard, we’re thrilled to announce the launch of the Loki Messenger alpha on the mainnet. This is a huge step forward for us and for the community, with Loki Messenger being the first Loki Service to make it out of the labs. We can’t wait to get it into your hands so that with your feedback, we can start to rapidly iterate on the design and feel of it in anticipation of the full launch later this year. We should make it very clear that the Loki Messenger alpha will not have the privacy properties that will be present in the final version. This alpha will primarily be for testing and feedback purposes.

The Basics

The Hefty Heimdall release will include an alpha version of Loki Messenger, which operates entirely on the Service Node network. Loki Messenger will be the first ever system which enables users to achieve both online and offline messaging in a fully decentralised, redundant and scalable way. The encryption used in Loki Messenger, which is also used in Signal, means your messages are only readable by you and the person you send them to. You can read more about the excellent security properties of this kind of end-to-end encryption in this article: http://www.alexkyte.me/2016/10/how-textsecure-protocol-signal-whatsapp.html

The Loki Messenger doesn’t connect to a central server like other messengers. Instead, groups of cooperative Service Nodes – called “Swarms” – store your messages while achieving a high rate of redundancy, meaning that if a Service Node goes offline, your message isn’t lost. Because Loki Messenger doesn’t use any central server, it’s extremely hard for malicious actors to shut the network down since the storage network is distributed across the world over hundreds of nodes.

However, we’ll stress again that the Loki Messenger alpha will not have many of the privacy qualities that the final version will have. Lokinet still has a while to go before it can be deployed on the Service Node network. The Loki Messenger alpha will allow you to communicate securely with your friends and family over a decentralised network while still having a comparable user experience to the chat apps you already know. And when used in conjunction with other network anonymisation tools, the Loki Messenger alpha will also have some reasonable privacy properties.

How it Works

Offline Messages (Asynchronous Mode)

The process below assumes your messenger client has never connected to the Loki Service Node network before, and you want to send a message to a user who is offline.

Sending

  1. Your messenger client gets a partial list of Service Nodes and IP Addresses from a set of hardcoded Loki seed nodes. This is done via a clearnet connection, meaning whoever runs the seed nodes can see that your IP address is requesting a list of currently operating nodes.
  2. Your messenger client contacts a single node chosen randomly from the list, and asks it for the Service Nodes in the Swarm that corresponds to your recipient’s public key. This means that a single Service Node knows that your IP address is likely messaging a recipient with X public key.  
  3. Your messenger client contacts three of the nodes inside your recipient’s Swarm and gives them the encrypted message for your recipient. These three nodes will know that your IP address sent a message to X public key.
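
Here’s a rough sketch of that sending flow in Python. The helper functions are stand-ins we made up for illustration (the real client is JavaScript and talks to the Storage Server’s API), but the shape of the flow matches the steps above.

```python
# Illustrative sketch of the offline-send flow above. The helper functions are
# stand-ins we made up, not the real seed node or Storage Server API.
import random

def get_service_nodes(seed_node):
    """Stand-in: fetch a partial Service Node list from a hardcoded seed node."""
    return [f"node{i}.example.invalid" for i in range(10)]

def get_swarm_for(node, pubkey):
    """Stand-in: ask one Service Node which swarm serves this public key."""
    return [f"swarm-member-{i}.example.invalid" for i in range(5)]

def store_message(node, pubkey, ciphertext):
    """Stand-in: hand the already-encrypted message to one swarm member."""
    print(f"stored {len(ciphertext)} bytes for {pubkey} on {node}")

def send_offline_message(recipient_pubkey, ciphertext):
    # 1. Learn about the network from a seed node (a clearnet request, so the
    #    seed operator sees your IP asking for the node list).
    nodes = get_service_nodes("seed1.example.invalid")
    # 2. Ask one randomly chosen node which swarm serves the recipient.
    swarm = get_swarm_for(random.choice(nodes), recipient_pubkey)
    # 3. Give the end-to-end encrypted message to three members of that swarm,
    #    so it survives individual nodes going offline.
    for node in random.sample(swarm, 3):
        store_message(node, recipient_pubkey, ciphertext)

send_offline_message("05deadbeef...", b"ciphertext")
```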

Receiving

  1. To find your Swarm, your messenger client contacts a random Service Node and asks for the Swarm that corresponds to your public key. Without Lokinet or a VPN, that random node can assume your IP address is linked to your public key.
  2. Your client then maintains a connection to three random Service Nodes in your Swarm and polls them regularly to find out if there are new messages destined for you. This means that three Service Nodes in your Swarm know that your IP address is requesting messages for a particular public key.
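
And the receiving side, again with made-up stand-ins rather than the real API:

```python
# Illustrative sketch of the receiving side; helpers are stand-ins, not the real API.
import random
import time

def get_swarm_for(node, pubkey):
    """Stand-in: ask a random Service Node which swarm serves `pubkey`."""
    return [f"swarm-member-{i}.example.invalid" for i in range(5)]

def retrieve_messages(node, pubkey, last_hash):
    """Stand-in: poll one swarm member for messages newer than `last_hash`."""
    return []   # list of (message_hash, ciphertext) pairs

def poll_for_messages(my_pubkey):
    # This lookup lets one random node associate your IP with your public key
    # (which is why pairing the client with Lokinet or a VPN matters).
    swarm = get_swarm_for("node3.example.invalid", my_pubkey)
    watched = random.sample(swarm, 3)   # keep polling three members of your swarm
    last_hash = None
    while True:
        for node in watched:
            for message_hash, ciphertext in retrieve_messages(node, my_pubkey, last_hash):
                last_hash = message_hash
                print("decrypt and display:", ciphertext)
        time.sleep(10)   # polling interval is an assumption for the sketch
```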

Online Messages (Synchronous Mode)

Sending and Receiving

  1. Sending an online message requires that both parties are online simultaneously and know the addresses at which each can be contacted. To do this, Loki Messenger periodically sends your IP address inside encrypted offline messages to your contacts.
  2. When a client comes online they can use this contact information to attempt to establish a direct P2P link with another client. Messages can then flow between the two clients without needing to use the Storage Server. However, this means when you use Loki Messenger, all of your friends can see your IP address.
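
A rough sketch of the synchronous path, with invented helper names: try a direct connection to the contact’s last advertised IP first, and fall back to their swarm if that fails.

```python
# Sketch of the online (synchronous) path; everything here is a stand-in.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Contact:
    pubkey: str
    last_known_ip: Optional[str] = None   # learned from their encrypted heartbeats

def try_direct_p2p(ip, ciphertext):
    """Stand-in: attempt a direct connection to the peer's advertised IP."""
    return False                           # pretend the peer is unreachable

def send_offline_message(pubkey, ciphertext):
    """Stand-in for the asynchronous swarm path sketched earlier."""
    print(f"queued {len(ciphertext)} bytes for {pubkey} in their swarm")

def send_message(contact, ciphertext):
    # Prefer a direct P2P link when we know the contact's IP; otherwise fall
    # back to storing the message in their swarm.
    if contact.last_known_ip and try_direct_p2p(contact.last_known_ip, ciphertext):
        return
    send_offline_message(contact.pubkey, ciphertext)

send_message(Contact(pubkey="05cafe...", last_known_ip="198.51.100.4"), b"ciphertext")
```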

As you can see from the above descriptions, the Loki Messenger alpha provides reliability, censorship resistance and encryption, however it does not provide protection against metadata collection. It’s important to understand this when participating in the Loki Messenger alpha, since – depending on your threat model – you could be leaking metadata that could be used to draw inferences about your use.

The Hefty Heimdall hardfork is a particularly large one – we hope you’re all as excited as we are!  Please help us improve Loki Messenger for everyone by downloading and testing the alpha – your reports make all the difference. Keep an eye out for further updates as we approach the hardfork date.

Weekly Dev Update #50


Hey Y’all,

This was a week of planning and debriefing after Consensus. We moved lots of the team around and identified some new goals – the biggest being a plan to release a version of Loki Messenger on mainnet with the next hard fork.

We also worked on a number of improvements for Service Node operators, including a Debian package for Loki, which means a Service Node can now be installed with a simple “sudo apt install lokid”. Keep in mind that the Debian package is still experimental right now.

Loki Core


Lokinet

If you’re lucky and join our Discord you might catch Jeff or Ryan, the developers of LLARP, live streaming as they code: https://www.twitch.tv/uguu25519, https://www.twitch.tv/neuroscr


Loki Messenger

The Loki Messenger client is in a mostly complete state. Right now the focus is being put on the message server and integration with Lokinet and lokid.

Loki Messenger Desktop

Storage Server

Messenger Mobile (iOS and Android)


Thanks,

Kee

Weekly Dev Update #41


Hey Y’all,

The past week has been intense, as a critical bug was discovered which required getting Service Node operators to immediately update. You can find a full report here: https://loki.network/2019/03/22/critical-bug-report-21st-march-2019/

Lots of work went into making Loki Core stable for the Service Node network in anticipation of the network update which is happening at block 234,767 (in approximately 12 hours).

There are still about 90 nodes that have not yet updated to 3.0.2. This is your final opportunity to update; please do so in the next 12 hours or you will face deregistration.

The current Service Node breakdown by version at time of writing (25/03/2019) is: 3.0.2 [341], 2.0.4 [9], 2.0.3 [44], 2.0.2 [13], 2.0.1 [26], 2.0.0 [1]


Loki Core


LLARP / Lokinet

If you’re lucky and join our Discord you might catch Jeff or Ryan, the developers of LLARP, live streaming as they code: https://www.twitch.tv/uguu25519, https://www.twitch.tv/neuroscr


Loki Messenger

The Loki Messenger client is in a mostly complete state. Right now the focus is being put on the message server and integration with Lokinet and Lokid.


Loki GUI

The new Loki GUI wallet is out for pre-release (Beta). We want everyone to have a go at testing it, and be sure to submit any bugs or improvement suggestions to the issues section on Github. You can download the Beta for all operating systems here:

https://github.com/loki-project/loki-electron-gui-wallet/releases


Thanks,

Kee

Critical Bug Report – 21st March 2019

On the 21st of March 2019, approximately 5 days before the scheduled Summer Sigyn hardfork to enable Infinite Staking, the Loki team identified a potential situation that would cause a consensus divergence in the Service Node lists between version 3.0.0 and all version 2.0.x nodes. At the time, most of the network was still running v2.0.x nodes, but a significant portion had already updated to 3.0.0.

At that time, the versions were:

SN versions: 3.0.0 [209], 2.0.4 [36], 2.0.3 [109], 2.0.2 [43], 2.0.1 [57], 2.0.0 [11], unknown [1]

The Problem

As a part of the Infinite Staking release, a change to the staking requirement curve was implemented to accommodate nodes being staked indefinitely. Under the old curve, the staking requirement would rise from its minimum of 10,000 Loki back up towards 15,000 Loki, which would have allowed nodes to remain staked indefinitely at the lower requirement even as it rose. The curve was modified so that the minimum is set at 15,000 Loki and never increases from there; the rate of decline was also decreased to offset the new, higher minimum.

In order to do this, a date had to be hardcoded into the software for nodes to uphold the new curve. This was originally set for the 20th of March – in line with the original Summer Sigyn hardfork date. However, due to concerns about constrained public testing times, the hardfork was pushed back 6 days to the 26th of March, but the height at which the staking requirement curve change was to take place was not adjusted in line with the new hardfork date. This was a mistake made by the development team; it was caught by prominent community contributor Jagerman and confirmed with the rest of the team on the (Australian) morning of the 21st of March.

The problem with this was that as the curves diverged, 3.0.0 nodes would start to require a higher staking requirement than the 2.0.x nodes. If a 2.0.x node staked at the minimum staking requirement, the 3.0.0 nodes would not recognise it as being ‘full’ and thus would not add it to their Service Node lists. This would then cause a divergence in the consensus about the state of the Service Node lists.

The Fix

A new release was quickly created which brought the staking requirement curve back into line with the 2.0.x nodes until the actual hardfork date. This release, 3.0.1, was immediately distributed to prevent Service Nodes from being deregistered due to the divergence. However, we quickly realised that this alone would be insufficient: if a divergence had already occurred, the 3.0.1 nodes would still be using a database which contained the old staking requirement. To fix this, users were asked to use the loki-blockchain-import utility to force a recalculation of the Service Node list. In parallel, a second release, 3.0.2, was quickly created to do this automatically for users. An additional dummy field was added to the Service Node list code, so that when users deployed 3.0.2 for the first time and rescanned their Service Node lists on boot, the missing field would force the daemon to recalculate the Service Node list.

Once the Service Node list was recalculated using 3.0.1 or 3.0.2, the node would be in line with 2.0.x nodes and would continue onto the hardfork height as planned.

The Outcome

The window between releasing the first fix and the time at which we expected a 2.0.4 node to first diverge from the 3.0.0 node’s staking requirement was a mere 6 hours. In that time, the vast majority of operators running 3.0.0 nodes upgraded. At the time of writing (10:30am AEDT 22 Mar 2019), the current version status of the Service Node network is:

3.0.2 [215], 3.0.1 [11], 2.0.4 [34], 2.0.3 [84], 2.0.2 [31], 2.0.1 [55], 2.0.0 [8], unknown [1]

3.0.0 deregistrations have been occurring slowly overnight, but all things considered, the number of operators who upgraded their nodes in time was truly impressive. Of the ~462 Service Nodes that were active before this event, only around 30 deregistrations occurred, some of which would have been routine. This accounts for approximately 6% of the Service Nodes. This was obviously not a good outcome for those operators, but overall the result is nothing short of incredible.

I am extremely happy that the community was so active in performing this upgrade, and would like to thank Jagerman and several other community members for participating in rolling out this fix. I’m also extremely proud to work with a team that can pull out all the stops when things go wrong and quickly and effectively deliver solutions, and communicate with the Loki user base to ensure everyone has the best possible experience.

Conclusion

Yesterday was a rather stressful day for most of the Loki team, and I’m sure it was for many Service Node operators, too. However, as it stands, the situation has now been resolved, with the last of the 3.0.0 nodes having been removed from the network. Considering the upgrade window was so short, I’m amazed by the speed at which operators were able to upgrade their nodes and keep their stakes alive.

We will be closely analysing what we can change on our end to prevent further incidents like this from occurring, and examining strategies we can implement to deal with situations like this in the future.

As per usual, you can find us on Discord, Github and Telegram if you have any thoughts, concerns, or ideas on this matter. Thanks for your patience and quick responses.

Simon Harman

Loki Project Lead