Evidencing the integrity of the supply chain

The pharmaceutical supply chain has developed a set of integrity protections against fraud, using transparency and accountability in aid of human health. The same can apply to food, if a choice is made to do so.

DEFRA collects large amounts of data on the food chain, especially around meat products, and the Government has now announced mandatory CCTV in abattoirs. Yet while DEFRA collects a great deal of data, it still publishes very little of it in a form that is accessible and usable to the citizen in the supermarket.

From data that is already mandatory to collect, but not made available, it is possible to see every regulated step of a batch of meat as it becomes the packet of mince in your hand.

We saw a failure a few weeks ago with eggs – how does a citizen know whether the eggs in their fridge may have been affected? Without factual information, they are left trusting that the system worked during a recall, when the recall exists solely because the system has already failed. It is not a trust-generating scenario.

People may believe that British products are more closely inspected and more trustworthy than those from elsewhere, but there is currently no competitive advantage to that. Only with Government showing what standards are met can the public see that they have been met. What does the British flag on food actually mean?

Where false assertions are made, and proven false, the perpetrators' other products can be clearly shown as affected, and those whose products are unaffected have peace of mind based on knowledge and fact. It is insufficient to know that it shouldn't be a problem – it is necessary to know that it wasn't.

As Avaaz push a petition against the farmer whose neglect led to 20,000 pigs being burnt alive in a fire, the farmer who treats their animals well gets the benefit of being seen to do the right thing. Those who choose otherwise are also seen to do so.

DEFRA’s data transformation project has done great work to make this possible – the next step is easier than the previous ones.

As the UK considers the basis for future trade arrangements, it would be a step forward for them to be based on evidence and knowledge, not merely political hope.

posted: 26 Aug 2017

AI in the school playground

Buried in Apple’s Developer Conference last month was the release of “a PDF format for AI”. Build a model on one system, and open it identically on another. As PDF did for document sharing, this will do for sharing AI models. The person who uses an AI no longer has to be the person who trained it.
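
That announcement was presumably Core ML and its .mlmodel file format. As a minimal sketch of the "train here, open there" idea – assuming scikit-learn for the training and Apple's coremltools package for the conversion; the data and file names below are purely illustrative:

```python
from sklearn.linear_model import LogisticRegression
import coremltools

# Train a model on one system, on past data where the outcome is known.
inputs = [[0.0, 1.0], [1.0, 0.0], [1.0, 1.0], [0.0, 0.0]]
outcomes = [1, 0, 1, 0]
clf = LogisticRegression().fit(inputs, outcomes)

# Convert it: the resulting .mlmodel file is the shareable artefact.
mlmodel = coremltools.converters.sklearn.convert(clf)
mlmodel.save("shared_model.mlmodel")

# Anyone given that file can open it on another system, without ever
# having seen the training data or done any of the training work.
reloaded = coremltools.models.MLModel("shared_model.mlmodel")
```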

Training is feeding it a load of past data where you know the outcomes, and it figures out how to reproduce them. Use is feeding it new data, and it tells you what it thinks the outcome should be.

There are also additional rounds of training when “new” data has become “past” data, with both the outcome the model expected and the outcome that actually arrived. (One reason the AI outfits use games to develop models is that the scoring mechanism gives an instant, simple, and clear metric of success and improvement.)
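
As a toy sketch of that loop – the data and names here are invented, and any real model would be far larger:

```python
from sklearn.linear_model import LogisticRegression

# Past data where the outcome is already known.
past_inputs = [[20, 0], [35, 0], [50, 1], [65, 1]]
past_outcomes = [0, 0, 1, 1]

model = LogisticRegression()
model.fit(past_inputs, past_outcomes)   # training: learn to reproduce known outcomes

# Use: new data goes in, and the model says what it thinks the outcome should be.
new_inputs = [[40, 1]]
expected = model.predict(new_inputs)

# Later, the "new" data is itself past data, with the real outcome attached,
# and another round of training folds it back in.
received = [1]
model.fit(past_inputs + new_inputs, past_outcomes + received)
```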

This approach works easily for numbers; text is harder, but also doable if you have enough context, or the target is important enough.

There is a twitter account @trumps_feed which is a copy of the twitter feed that Donald reads. There is another feed of what he then does. It is unimaginable that various entities around the world are not feeding both to an AI. It takes very few resources to extend that historically.

That creates you an AI to predict what DJT might say in response to something. Feed it all Donald’s tweets, and you can produce a model of him. It takes a load of processing to build, but once built, it can be freely shared. Given the commercial services that will monitor what particular targets say on twitter, pretty soon they’ll offer more analysis.
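
A deliberately crude sketch of “feed it the tweets, share the model”, assuming nothing more than a bigram table and an invented placeholder corpus – a serious effort would use a proper language model, but the economics are the same: expensive to build, trivial to share.

```python
import pickle
import random
from collections import defaultdict

# Placeholder corpus; in the scenario above this would be every tweet
# scraped from the target's timeline.
tweets = [
    "we are going to do something big",
    "we are going to do it again",
]

# Build a bigram table: for each word, which words have followed it.
next_words = defaultdict(list)
for tweet in tweets:
    words = tweet.lower().split()
    for current, following in zip(words, words[1:]):
        next_words[current].append(following)

def predict_next(word):
    """Return a plausible next word, or None if the word was never seen."""
    options = next_words.get(word.lower())
    return random.choice(options) if options else None

# The expensive part was building the table; the shareable artefact is just a file.
with open("tweet_model.pkl", "wb") as handle:
    pickle.dump(dict(next_words), handle)

print(predict_next("going"))   # e.g. "to"
```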

Doing that with the facebook / tumblr / instagram / twitter feeds of popular (or otherwise) people, including teenagers, starts to get very creepy very fast. Twitter may be easy, but facebook has emotional colour. This is also why the AIs looking for intent train a lot better when there’s an emoji signalling the meaning in the training dataset.
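
A small sketch of why the emoji helps, assuming scikit-learn and a handful of invented posts: the emoji is stripped from the text and kept as the label, giving a weakly labelled training set for free.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Invented posts; a real dataset would be whatever feed the builder can read.
posts = [
    ("had the best day ever 😊", "positive"),
    ("cannot believe this happened 😢", "negative"),
    ("so proud of everyone today 😊", "positive"),
    ("the whole thing is ruined 😢", "negative"),
]

# The emoji is removed from the text and kept as the label.
texts = [text.replace("😊", "").replace("😢", "").strip() for text, _ in posts]
labels = [label for _, label in posts]

vectoriser = CountVectorizer()
features = vectoriser.fit_transform(texts)
classifier = LogisticRegression().fit(features, labels)

# Plain text with no emoji at all can now be scored for the same feeling.
print(classifier.predict(vectoriser.transform(["so proud of this"])))
```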

Facebook imply that they already do something like this, as their salesmen brag that they can tell when users feel something – of course facebook outsource “doing something” with such identifications to the highest bidder. But today’s megacorp unique tool is tomorrow’s app project – everything becomes available to everyone.

Samaritans Radar shows what happens when institutions get this wrong – but the copying of models means it’s no longer just institutions. If someone or their app can read your facebook feed, new tools mean they will be able to make the same inferences facebook claim to make.

The best analogy is simple: Cambridge Analytica’s mindset and tools in the hands of every child in the playground.

Apple would likely prevent such an app on their App Store, but Google Play would welcome it with open arms. The AI people would claim that’s not their problem.

posted: 21 Aug 2017