First of all, Happy Easter to everyone! Hope that you all have a great weekend. After a week of heavy development, performance optimization and lots of other work on FABRIC, I decided that today would be a good day for some load testing to see what it can really do.

The reason behind today's test is as follows: FABRIC scales linearly as more nodes are added to the network, which is great. However, I wanted to understand what the limit of a small baseline network would be with just a few nodes, as I can then extrapolate that over a larger network (and verify that it does indeed scale linearly).

Before we get to that, here is some terminology that I will be using, which is also present in the vids.

Atom = an element within the network which undergoes consensus (transactions, messages, etc.)
Processing = the number of atoms (transactions) being processed by the network at that moment in time
Syncing = the number of atoms the local node is syncing
Path = the average network settlement time of atoms, in milliseconds
Persist = the average local time to persist an atom, in milliseconds
TPS = transactions per second

All the atoms within the tests were transactions, as they are the most computationally expensive.

Sustained Load

A number of tests were performed to discover the sustainable limits of such a network. Starting at 100 TPS, we increased the load in intervals until we hit the soft-limit at around 1700 TPS. The soft-limit is the local node's saturation of throughput, indicated by a sudden and increasing Persist time. The network is able to operate without disruption even if a large number of nodes are at their soft-limit.

We then increased the load until we hit the hard-limit at around 2500 TPS. The hard-limit is reached when ALL nodes in the network are at saturation of throughput, and is indicated by a sudden and increasing Path and Persist time. The network is still able to operate at the hard-limit, but there may be delays in transaction processing and settlement.
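For anyone curious how the soft- and hard-limits can be detected in practice, here is a rough Python sketch of the ramp procedure: increase the offered TPS in steps, flag the soft-limit when Persist spikes, and the hard-limit when Path spikes as well. The metrics model (`fake_measure`) and the thresholds are invented stand-ins for illustration only, not FABRIC's actual instrumentation; they are simply tuned to reproduce today's observed numbers.

```python
def find_limits(measure, start_tps=100, step=100, max_tps=5000,
                persist_threshold_ms=50.0, path_threshold_ms=200.0):
    """Ramp offered load and report (soft_limit, hard_limit) in TPS.

    soft-limit: the local node saturates (sudden, sustained Persist increase)
    hard-limit: the whole network saturates (Path increases as well)
    """
    soft = hard = None
    tps = start_tps
    while tps <= max_tps and hard is None:
        path_ms, persist_ms = measure(tps)
        if soft is None and persist_ms > persist_threshold_ms:
            soft = tps  # local node saturated: Persist time spikes
        if soft is not None and path_ms > path_threshold_ms:
            hard = tps  # network-wide saturation: Path time spikes too
        tps += step
    return soft, hard


def fake_measure(tps):
    """Toy stand-in for real node metrics, tuned to today's observations
    (~1700 TPS soft-limit, ~2500 TPS hard-limit)."""
    persist_ms = 5.0 + max(0, tps - 1650) * 1.0
    path_ms = 80.0 + max(0, tps - 2450) * 3.0
    return path_ms, persist_ms


soft, hard = find_limits(fake_measure)
print(f"soft-limit ~{soft} TPS, hard-limit ~{hard} TPS")
```

Against the toy model this reports a soft-limit of 1700 TPS and a hard-limit of 2500 TPS, matching the sustained-load results above.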
Spiked Load

Next we wanted to discover the maximum peak load a small network could process over a short duration. To achieve this we placed a sustained load of 1000 TPS on the network, then periodically presented an additional large batch of transactions over a short period of 5-10 seconds. We started at 10,000 additional transactions, and worked our way up to 100,000, which resulted in peak processing of around 10,000 TPS. We continued to add increasing load until we hit a hard-limit of 15,000 TPS, at which point additional transactions began to suffer delays and timeouts.

Overview

Today's tests showed that even a small network was able to handle, with relative ease, load volumes orders of magnitude greater than any other technology currently available. Furthermore, it was able to do so whilst retaining rapid settlement times of < 100 milliseconds in all but the most extreme cases. Even then, settlement time rarely exceeded a few seconds, which is considerably better than most current technologies can muster at idle, let alone under excessive load. Node resource usage was also impressive, with both CPU and memory utilization remaining low even in high-load situations.

Future testing will be performed on larger networks, and will also utilize partitioning, which will afford us much greater throughput in the hundreds of thousands of TPS.
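As a closing note for anyone wanting to try something similar, the spiked-load pattern described earlier can be sketched as a simple per-second schedule: a constant baseline with a burst of extra transactions spread over a few seconds at regular intervals. The function and numbers below are illustrative only, not the actual load generator used in the tests.

```python
def spiked_schedule(duration_s, baseline_tps=1000,
                    burst_txs=10_000, burst_every_s=30, burst_len_s=5):
    """Return the offered load (TPS) for each second of the run."""
    schedule = []
    for t in range(duration_s):
        tps = baseline_tps
        # Spread each burst evenly across burst_len_s seconds.
        if t % burst_every_s < burst_len_s:
            tps += burst_txs // burst_len_s
        schedule.append(tps)
    return schedule


load = spiked_schedule(60)
print(max(load), min(load))  # peak 3000 TPS on a 1000 TPS floor
```

A load generator would then drive each second of traffic from this schedule, which makes it easy to ratchet `burst_txs` upward between runs, exactly as we did when working from 10,000 up to 100,000 extra transactions.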