Calling for thoughts on this "problem" and how to solve it. I'm sure you're all aware of the scalability possibilities we have, and if you weren't already, you've no doubt seen the claims I've made on Twitter over the past few days. On the back of those claims we now have to prove it, which is fine, I want to prove it, and do so on a public-facing testnet (not some closed-off lab).

But there is a problem. Due to the way that Radix scales, no single machine will be able to store all the transactions at high load. We'll have multiple machines serving different partitions, so while collectively they may be processing 100k TPS, each of them may only be reporting 2k TPS.

During recent development I've run load tests exceeding 70k TPS across 8 of my own machines, but none of them, not even the fastest, would have any chance of storing all of those transactions and keeping up. Even tricks like disabling signature validation (safe here, because we know all the nodes are honest) still wouldn't get a single machine anywhere near that rate.

So I'm calling for ideas on how to run a high-load test across a number of machines and report the result in a way that can't be questioned, when the network as a whole is processing, say, 100k TPS but each individual machine only handles a fraction of that. 100k TPS is also too much for a single machine to produce (and broadcast over a single connection), so transaction generation needs to be spread over multiple nodes as well. Ideas?
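To make the reporting half of this concrete, here's one possible shape for it: each node publishes a per-second count of the transactions it finalised for the shard range it serves, and an aggregator only sums those counts when the ranges are provably disjoint, so nothing can be double-counted. This is just a rough Python sketch of the idea; every name in it (ShardReport, aggregate_tps, the shard-range representation) is hypothetical and nothing here is existing Radix code:

```python
# Sketch: aggregate a network-wide TPS figure from per-node shard reports.
# All names here are hypothetical; this is not Radix code.
from dataclasses import dataclass
from collections import defaultdict

@dataclass(frozen=True)
class ShardReport:
    node_id: str
    shard_range: tuple[int, int]  # inclusive interval of shard space this node serves
    window_start: int             # UNIX seconds; start of the 1-second sample window
    tx_count: int                 # transactions finalised in that window for that range

def aggregate_tps(reports: list[ShardReport]) -> dict[int, int]:
    """Sum per-window counts, but only after checking that the reported
    shard ranges are pairwise disjoint, so no transaction counts twice."""
    by_window: dict[int, list[ShardReport]] = defaultdict(list)
    for r in reports:
        by_window[r.window_start].append(r)

    tps: dict[int, int] = {}
    for window, rs in sorted(by_window.items()):
        # Reject overlapping coverage: two nodes claiming the same shards
        # would let a dishonest test inflate the headline number.
        ranges = sorted(r.shard_range for r in rs)
        for (a_lo, a_hi), (b_lo, b_hi) in zip(ranges, ranges[1:]):
            if b_lo <= a_hi:
                raise ValueError(f"overlapping shard ranges in window {window}")
        tps[window] = sum(r.tx_count for r in rs)
    return tps

# Example: two nodes covering disjoint halves of a small shard space.
reports = [
    ShardReport("node-1", (0, 4095), 1_700_000_000, 2_000),
    ShardReport("node-2", (4096, 8191), 1_700_000_000, 1_950),
]
print(aggregate_tps(reports))  # {1700000000: 3950}
```

A sceptical observer could then spot-check any single node's reported count against its raw transaction log for that window, which each machine can easily store, so the summed figure doesn't rest on trusting the aggregator.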