
The Growing Pains of Artificial Intelligence

Artificial intelligence in fiction and film is often depicted in extremes: robots taking over the world, humans rendered obsolete. Real-life AI, though far from any Skynet, Terminator-type scheme of world domination, is no longer a laughing matter. As the technology becomes more capable, we are increasingly aware of its problems. Difficulties with regulation, deep fakes, and data privacy issues are only a handful of the growing pains associated with AI. 


Today, one of the most common issues with AI is deep fakes. Easily spread through social media platforms, the rise of deep fakes may be considered the first wave of AI-related consequences. 


Professor Lisa Wilson, member of our Advisory Council and Founder of Areté Business Performance, weighs in: 


“For poetic licence, fiction almost always wins over fact, particularly when it comes to the emergent technologies like blockchain, VR, and AI. However, sometimes reality can come close to what we think is fiction also.” 


According to Lisa, one of the scariest abilities of AI today is “deep fake and the deepest fake hacks through current AI technology and the ability to manipulate or create digital media using these tools.” 


Having said that, Lisa continues, “Scamming and the increase of AI generated fakes still target vulnerabilities in the consumer of digital content; we must continue to be vigilant in our education, so that we recognise behavioural inconsistencies and are able to separate the hype from the fact – and take moments longer to respond rather than react. The AI techniques used to elicit a response from someone who engages with digital assets (i.e., to hand over critical passwords and other entry keys to take assets) are no different to those who only use fiat. However, because of AI and deep fakes, this may take a little more concerted effort to identify. Take that time!” 


Simon Newman, member of our Advisory Council and CEO of the Cyber Resilience Centre for London, adds:  


“The rise of ‘deep fake’, where Artificial Intelligence is used to create fake videos, audio and photographs is becoming harder to spot as the technology improves. Poor quality deep-fakes are easier to spot – perhaps the colours differ or the sound isn’t as clear as you would expect, but as these types of scams improve, the use of artificial intelligence may be the only way to detect whether you are looking or listening to a deep fake or the real thing.” 


Read more on Simon’s thoughts in The Independent: 


Though deep fakes are becoming increasingly difficult to identify correctly and carry a high risk of scams, this is not to say that AI is all bad. In fact, the technology offers plenty of benefits alongside its disadvantages and risks. And there are consequences more dangerous than deep fakes, too.  


But how do we weigh these benefits and risks? On one hand, technological innovation often heralds a better quality of life. On the other, technology like AI can infringe on people's personal rights, most notably their privacy. 


A recent report by the Ada Lovelace Institute explores the two sides and offers recommendations on how we can regulate AI to mitigate the risks, while continuing to make the most of its benefits.


Flavia Kenyon, member of our Advisory Council and Barrister at The 36 Group, explains: 


“AI has unleashed an unprecedented technological power of collecting, processing, and analysing vast amounts of personal data scraped off the Internet, which can be used for various purposes, positive and negative as discussed in the new Ada Lovelace Institute report: from medical research that saves lives, to mass surveillance, oppressive and invasive state control, the analysis of facial expressions and other biometric data, the creation of fake, misleading, biased, or inflammatory content… that can become weaponised in the wrong hands.”


Read more about this in Emerging Risks:   


Professor Lisa Wilson adds:  


“Quite correctly as the Ada Lovelace Institute advises, the conversations about AI and machine learning are highly polarised. There are those who can see the incredible benefits and those who are, in essence, petrified for society moving forward. In reality, there are many more pieces of the puzzle that I also think includes two other dimensions – inclusion and design. 


Global ageing is one of the greatest issues around technology. We have significantly more non-digital natives being exposed to AI, and in many ways, it is devoid of the inclusion of their data as well as the billions still not connected. 


In addition, AI must have the same approach to its design as we do in cyber – secure by design. This cannot be the case while we have an epidemic of mis/dis/fake information as another layer over lack of inclusion. These are additional challenges to bias and, of course, losing human capacity to challenge AI decisions when they get it wrong. A framework approach and investment into education, strategy, regulation and design is certainly a good start.” 


Read more about the report and Lisa’s thoughts on it in The Evening Standard: 


Unfortunately, Lisa also thinks that this good start is “a little too late with AI given the extent and pace of its adoption.” 


This raises the question: will AI trample us before we can even manage to find that framework for controlling it? 


According to Flavia, “The real concern with AI is who controls it, as well as the legal basis upon which the AI tool processes people’s data, the mechanism allowing individuals to control how their data is used, and people’s right to have their data deleted or corrected. The report is an attempt at balancing the need for technological innovation with the protection of people’s rights. This is about our individuality. In truth, the battleground is over whether AI controls us, or whether we control it.” 


In other words, we need to be able to control AI, not the other way around. 
