[From about paragraph 5, parts of this post assume familiarity with the plasma spec and proof of stake (relying on sharding) – it is not intended for a general audience].
It should be impossible to deny tomorrow an assertion you made yesterday.
One of the emerging uses of blockchains is to ensure that there is an accurate record of what someone did, or didn’t do, at every point in the past. This may be one of the unique benefits of recording information on a chain rather than in a database.
The engineering behind the blockchain scaling solutions is getting to the point where there is a rough consensus derived from running code. [insert IETF joke here]
Those scaling solutions rest on a lot of mathematics, but they come down to this: you get paid to make assertions about the chain, and you lose far more money if you deny past assertions you made, or make assertions that conflict with them. For clarity: you do not get slashed for changing your mind, only for changing your mind and denying it. You can assert both that today is Wednesday and that yesterday was not Tuesday, but you cannot make both of those assertions and then deny having done so.
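A minimal sketch of that distinction, with HMACs standing in for the public-key signatures a real validator would use (the keys, messages, and `slashable` helper here are all hypothetical):

```python
import hashlib
import hmac

def sign(key: bytes, message: bytes) -> bytes:
    # Stand-in signature; real systems use public-key signatures.
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, message: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(sign(key, message), sig)

def slashable(key: bytes, assertion: bytes, assertion_sig: bytes,
              denial_sig: bytes) -> bool:
    """Slash only if the same key signed an assertion AND signed a
    denial of ever having made it -- changing your mind is fine."""
    denial = b"I never asserted: " + assertion
    return (verify(key, assertion, assertion_sig)
            and verify(key, denial, denial_sig))

key = b"validator-secret"
wednesday = b"today is Wednesday"
sig = sign(key, wednesday)
denial_sig = sign(key, b"I never asserted: " + wednesday)
assert slashable(key, wednesday, sig, denial_sig)  # contradiction: punishable
```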
In a closed system under ubiquitous surveillance – which is increasingly expected of AI systems – it is possible to prove a negative. It is a simple mathematical construct: X is either true or not true, but cannot be both. And if a published denial is not true, the new large-scale cryptographic trust systems built on scaled blockchains allow that to be shown, proven, and punished.
Given all the investment in AI safety, and in other systems requiring the highest levels of auditability and confidence, we will reach the point in the medium term where failing to bridge assertions from your private trust infrastructure to a public trust framework is itself a cause for doubt about your intentions.
Between plasma, proof of stake, and sharding, not all data has to be computable by the main chain – but it does have to be computable by something that reports to the main chain.
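As a sketch of that reporting pattern – illustrative only, not any particular plasma implementation – a child system can fold its private records into a single Merkle root and report only that:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Fold a list of private records into a single 32-byte commitment."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the odd leaf out
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

private_logs = [b"log entry 1", b"log entry 2", b"log entry 3"]
root = merkle_root(private_logs)
# Only `root` is reported to the main chain; the entries stay private,
# but every later assertion about them can be tested against it.
```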
Use cases in the real world
A number of large companies are trying to build systems that “prove” trust (why they think that will help, and how it won’t, is outside the scope of this note). These are sometimes called “private blockchains”, or “not a blockchain” – but they are all systems with some form of Merkle-tree-like data structure at the bottom of them.
These systems too are based on very strong cryptography with strong privacy needs, and have been built up over time (Palantir has been running theirs for years on the work they do for the spooks).
Our “friends” at DeepMind wish the public to trust that their logs are always accurate and complete – in a world anchored off a public blockchain, that claim becomes testable through the interfaces provided by Casper (challengeable hypotheses) and plasma (scale and availability). {this blog post was written before, but published shortly after, they publicised a one-way bridge going the other way}
Testing contentious assertions
While custom APIs into the detail of (private) logs will always remain, tying assertions about those logs to a public chain at the top of the data structure means those assertions can be tested against the slashing conditions of the public tree.
If, for example, DeepMind’s system gives contradictory answers to the same question, that pair of assertions can be shown to the public chain, demonstrating mathematically that both cannot be true. It does not say which is true, only that both cannot be.
The public system doesn’t know what DeepMind’s logs say happened; what it does know is that there is one cryptographically correct answer suggesting something was true, and another cryptographically correct answer suggesting it was not. The emerging proof of stake and slashing conditions allow for testable hypotheses, which fail publicly.
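A sketch of that conflict test, again with HMACs standing in for real signatures and a hypothetical question/answer encoding:

```python
import hashlib
import hmac

def verify(key: bytes, message: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(
        hmac.new(key, message, hashlib.sha256).digest(), sig)

def provably_conflicting(key: bytes, question: bytes,
                         answer_a: bytes, sig_a: bytes,
                         answer_b: bytes, sig_b: bytes) -> bool:
    """Both answers are validly signed over the same question, yet
    differ: at most one can be true, without revealing which."""
    return (verify(key, question + b"|" + answer_a, sig_a)
            and verify(key, question + b"|" + answer_b, sig_b)
            and answer_a != answer_b)

key = b"deepmind-log-key"  # hypothetical
q = b"was record 42 accessed on 2017-03-01?"
sig_yes = hmac.new(key, q + b"|yes", hashlib.sha256).digest()
sig_no = hmac.new(key, q + b"|no", hashlib.sha256).digest()
assert provably_conflicting(key, q, b"yes", sig_yes, b"no", sig_no)
```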
If there is cryptographic evidence, a conflict is impossible to deny. The “cost” in financial terms may be only $1 – a peppercorn sum – but the reputational cost of having to pay it is far higher.
Why would anyone care?
The choice to build and rely on such systems reflects the technology culture around AI (and blockchain). This approach has been taken by those who claim to be building highly trusted systems – utter reliance on cryptography and trusted systems. The question is: what is publicly required of those trusted systems?
When a blog post is published saying “we didn’t feed data to an AI”, if the claims about technical verification are true, that post should be accompanied by enough signed assertions to prove the statement (the AIs, inputs, and data involved would probably all need to be enumerated sets of terms).
Each of those definitions should be testable by audit (including a hostile audit); if any of those assertions turns out to be false, or can be shown to be false, then it is clear that the published assertion must have been untrue.
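As a sketch of what such a machine-readable assertion might look like – the field names, vocabularies, and key below are all hypothetical:

```python
import hashlib
import hmac
import json

AIS = {"model-a", "model-b"}            # enumerated set of AI systems
DATASETS = {"dataset-x", "dataset-y"}   # enumerated set of data sources

def signed_assertion(key: bytes, ai: str, dataset: str, fed: bool):
    """Canonically encode and sign one claim built from enumerated terms."""
    if ai not in AIS or dataset not in DATASETS:
        raise ValueError("claims must use the enumerated vocabulary")
    claim = json.dumps({"ai": ai, "data": dataset, "fed": fed},
                       sort_keys=True).encode()
    return claim, hmac.new(key, claim, hashlib.sha256).digest()

# Published alongside the blog post; auditors (hostile or otherwise)
# test it against the commitments anchored on the public chain.
claim, sig = signed_assertion(b"org-key", "model-a", "dataset-x", fed=False)
```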
It has long been expected that scrutineers or companies must prove a negative for one side or the other, in an environment of incomplete information. With large companies’ desire for ubiquitous surveillance of their own systems, it is now possible for them to prove that negative as a testable hypothesis, using public standards and APIs.
Unless a denial of actions in a fully logged system includes a machine-readable cryptographic assurance of what is being denied, in a manner that can be tested on a public blockchain, those systems cannot be what they claim to be. The inability of Casper-style systems to roll back beyond the last majority-staked point means that, after a defined period, history is fixed, no matter how much CPU is thrown at the problem.
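In code terms, the fixed-history claim reduces to a comparison against the last finalised checkpoint (names and heights hypothetical):

```python
def history_is_fixed(anchor_height: int, last_finalized_height: int) -> bool:
    """Once an assertion's anchor block is at or below the last
    finalised (majority-staked) checkpoint, no amount of CPU can
    rewrite it -- a later denial can only contradict it, not erase it."""
    return anchor_height <= last_finalized_height

assert history_is_fixed(anchor_height=1000, last_finalized_height=1024)
```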
Implementation annex: if, for some reason, you are a programmer looking to implement this, it would likely take the form of a plasma node handling the transaction types you wish to publicly assert (based on data held privately), satisfying the implementation requirements of a validation contract (including its slashing conditions). The interfaces and information required for assertions are left as an exercise for the reader; they are highly implementation dependent.
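A rough sketch of that shape – stdlib-only Python standing in for node and contract code, with every name hypothetical:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

class AssertionNode:
    """A plasma-style node: holds data privately, commits a root
    upward, and answers the challenges a validation contract (with
    slashing conditions) would put to it."""

    def __init__(self) -> None:
        self.private_store: list[bytes] = []
        self.committed_roots: list[bytes] = []

    def record(self, entry: bytes) -> None:
        self.private_store.append(entry)

    def commit(self) -> bytes:
        """Fold the private store into one root for the main chain
        (a rolling hash here for brevity; a real node would build a
        Merkle tree with inclusion proofs)."""
        root = b"\x00" * 32
        for entry in self.private_store:
            root = h(root + h(entry))
        self.committed_roots.append(root)
        return root

    def challenge(self, entry: bytes, root: bytes) -> bool:
        """Validation hook: does the committed root cover this entry?
        A false answer here is what the slashing conditions bite on."""
        replay, seen = b"\x00" * 32, False
        for e in self.private_store:
            seen = seen or e == entry
            replay = h(replay + h(e))
            if replay == root:
                return seen
        return False

node = AssertionNode()
node.record(b"we did not feed dataset-x to model-a")
root = node.commit()
assert node.challenge(b"we did not feed dataset-x to model-a", root)
```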