No AI tool that makes logging optional should be expected to operate safely.
If “safety” is something you have to turn on, any tool will be used unsafely. Today, powerful yet unsafe AI tools are made available to anyone, for free: TensorFlow, OpenAI, et al.
Even America has some regulations on handing out free guns, yet powerful AI features are added for commercial reasons and turned loose on the public. It is sheer folly to assume that free tools will be used with the same level of responsibility by individuals as by organisations that claim to have effective checks and balances. The organisations release the same tools they use internally, in a dash for market share and for the mindshare of developers.
The release of powerful AI tools without a correspondingly available audit or safety infrastructure suggests the claims to prioritise safety are hollow: as things stand, AI safety mechanisms are entirely optional, with the burden placed on every user rather than on the system.
External logging infrastructure can be as basic as a spreadsheet. Google Sheets offers the minimum viable capability to anyone with a Google account, for free. Google gives 15 GB of storage with every account, which should be good for years of run logs, and even then, Google Sheets files don’t count against that quota. The Sheets API lets you do this already, and you can implement a simple hash chain (a blockchain of sorts) in a spreadsheet to get some level of audit, as sketched below.
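As a rough illustration, here is a minimal sketch of appending hash-chained run-log rows to a Google Sheet via the Sheets v4 API (google-api-python-client). The spreadsheet ID, sheet name and service-account key file are placeholders, and the hash-chain scheme (each row carries the previous row’s digest plus its own) is just one simple way to make later tampering detectable, not a prescription.

```python
# Minimal sketch: append hash-chained log rows to a Google Sheet.
# Assumes a service-account JSON key and a spreadsheet shared with that account;
# SPREADSHEET_ID, CREDS_FILE and the sheet name "runs" are placeholders.
import hashlib
import json
import time

from google.oauth2.service_account import Credentials
from googleapiclient.discovery import build

SPREADSHEET_ID = "your-spreadsheet-id"   # placeholder
CREDS_FILE = "service-account.json"      # placeholder
SCOPES = ["https://www.googleapis.com/auth/spreadsheets"]

creds = Credentials.from_service_account_file(CREDS_FILE, scopes=SCOPES)
sheets = build("sheets", "v4", credentials=creds).spreadsheets()


def append_log_row(event: dict, prev_hash: str) -> str:
    """Append one run-log row, chained to the previous row's hash."""
    payload = json.dumps(event, sort_keys=True)
    row_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    row = [time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()), payload, prev_hash, row_hash]
    sheets.values().append(
        spreadsheetId=SPREADSHEET_ID,
        range="runs!A1",
        valueInputOption="RAW",
        body={"values": [row]},
    ).execute()
    return row_hash


# Example: log a training run, chaining from the last known hash ("GENESIS" for the first row).
last_hash = append_log_row({"model": "my-model", "dataset": "v3", "action": "train"}, "GENESIS")
```

Because each row commits to the one before it, silently editing or removing an earlier row breaks every digest that follows, which is the “some level of audit” the spreadsheet gives you.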
If you don’t want to use Google’s tools (TensorFlow, Cloud Platform, Drive), there is also AWS. Amazon will rent you high-powered GPUs for cents per hour and has APIs for everything, including write-only logging to S3, but offers no support for enforcing even minimal auditing of the AIs you run there.
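A minimal sketch of the same idea on AWS, pushing log records to S3 with boto3. The bucket name and key prefix are placeholders; in practice the writing credentials would be restricted to s3:PutObject only (and the bucket could use Object Lock), so the logger can add records but never alter or delete them.

```python
# Minimal sketch: write-once log records to S3.
# Assumes an existing bucket and configured AWS credentials; BUCKET and the
# key prefix are placeholders. The IAM policy for the writing role would grant
# s3:PutObject only, so logs can be appended but not modified or removed.
import json
import time
import uuid

import boto3

BUCKET = "my-ai-run-logs"   # placeholder
s3 = boto3.client("s3")


def put_log_record(event: dict) -> str:
    """Write one immutable log record; the key encodes the date plus a unique id."""
    key = f"runs/{time.strftime('%Y/%m/%d')}/{uuid.uuid4()}.json"
    s3.put_object(
        Bucket=BUCKET,
        Key=key,
        Body=json.dumps(event, sort_keys=True).encode(),
        ContentType="application/json",
    )
    return key


put_log_record({"model": "my-model", "action": "inference", "ts": time.time()})
```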
For the rugged individualists replicating AlphaGo in their back bedrooms, who don’t want to hand their logs to a megacorp, the same tools could allow logging to other infrastructures (Plasma, for example, would let you confirm you are logging without saying what; many other storage options exist today).
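The underlying idea, independent of Plasma itself, is a plain hash commitment: publish only the digest of each private log entry, so anyone can later check that a disclosed log matches what was committed at the time. A minimal sketch, with the publication step left as a stub:

```python
# Minimal sketch: commit to private log entries by publishing only their hashes.
# publish() is a stub; in practice it might post to a blockchain, a timestamping
# service, or any append-only store a third party can read.
import hashlib
import json


def publish(digest: str) -> None:
    """Placeholder for writing the digest to a publicly verifiable channel."""
    print(f"committed: {digest}")


def commit_log_entry(entry: dict) -> str:
    """Return the digest that gets published; the entry itself stays private."""
    digest = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    publish(digest)
    return digest


# Later, anyone holding the disclosed entry can recompute the hash and compare.
commit_log_entry({"run": 42, "model": "alphago-clone", "outcome": "self-play complete"})
```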
The current version of the test is: No AI tool that makes logging optional can be expected to operate safely.
The logs can be kept private, and they can be high level, but they have to be written. What should go in them is something the well-funded AI safety debates can continue to argue over, but as it stands today, the commitment to AI safety is impossible to deliver, because the tools do not require that anything be logged. Nothing stops a decision to delete the logs later, but deleting logs is a coverup.
This won’t do everything, but currently there’s nothing at all. And nothing at all makes the claims of AI safety sound distinctly hollow.