In his book ‘The Tipping Point’, Malcolm Gladwell states: “Ideas and products and messages and behaviours spread like viruses do” – a truly insightful observation.
At the beginning of 2020, I observed a tipping point when an outbreak of the SARS-CoV-2 virus became a pandemic. However, I am not sure I noticed the tipping point when algorithms entered every aspect of our daily lives and became endemic. This is plain to see across mainstream and social media, where algorithms make sure we never run out of cute cat videos and other online entertainment they have worked out we want to see.
For businesses, algorithms have become a must for targeted advertising, and 95 per cent of Fortune 500 companies use algorithms to recruit new employees. However, when nearly half the world’s population is using platforms such as Facebook, are these platforms now beyond governance? Can Facebook’s 35,000 checkers even make a dent in the tsunami of hate speech and fake news posts? Why only 35,000 checkers and not 350,000 – if that is what is required? It seems we are quickly reaching another tipping point, where only algorithms will be able to control other algorithms.
The recent A-level results fiasco brought this issue to national attention. In the end, Ofqual, the Government and of course a wave of angry teenagers and parents, all agreed that using an algorithm to calculate A-level grades was neither fair nor accurate. With hindsight, was it wrong to expect an algorithm to be sophisticated enough to bring an acceptable resolution to this problem? Or, put more directly: was it wrong for us to accept that an algorithm produced by humans could be “some magical mathematical, statistical, technological solution to the extremely difficult problem of not being able to test students properly?” (Four things Government must learn from the A-level algorithm fiasco, Gavin Freeguard)
The A-level algorithm clearly demonstrated how bias can exacerbate inequalities: it penalised high-performing students from underperforming schools, particularly those schools showing recent improvement.
In my earlier blog, ‘The race against the selfish algorithm’, I cautioned against algorithmic bias as it applies to the health and care sectors and discussed a voluntary code of conduct for data-driven health and care technology introduced in 2018. This code was a start, designed to complement research ethics approvals, medical device regulations and the CE mark process, as well as other regulatory approvals. However, on its own it is insufficient.
Even before the pandemic, the pace of digital innovation far outstripped the capacity and capability of governments and regulators to keep regulations up to date, let alone enforce them. The pandemic has provided a turbo boost for digital innovation in all sectors, and the tech giants have already lapped governments in this particular race.
In the absence of binding legislation to ensure transparency and accountability for algorithmic decision-making, particularly when it affects life, liberty and equality, what are we left with? Perhaps the senior executives of these tech companies could be asked to abide by the seven Nolan Principles: selflessness, integrity, objectivity, accountability, openness, honesty and leadership. And when politicians leave public office, they should continue to hold these values dear when sitting in corporate boardrooms.
Dr Bina Rawal is a non-executive director of the Innovation Agency.
Her blog first appeared on LinkedIn.