07 Jul 2023

In the News: June 2023

The Clop ransomware gang certainly made some noise in June, and we suspect they will continue to do so into July 2023. After a vulnerability was discovered in Progress Software’s MOVEit file transfer app, the cybercriminal group exploited it to infiltrate the payroll services provider Zellis, affecting household names such as the BBC, British Airways and Boots. Since then, the tally of victims has risen to more than 120 organisations, representing at least 15 million individuals.

 

We reached out to our Advisory Council member, Simon Newman (CEO of the Cyber Resilience Centre for London) for his thoughts. Here’s what he had to say:

 

“Many larger organisations have invested heavily in boosting their own cyber security over the past few years, making it more difficult for cyber-criminals to find a way in. As a result, cyber criminals are increasingly shifting their focus onto supply chains which are often long and complex.

 

The ability to compromise the security of a supplier not only provides a potential back door into larger organisations, but as the third party is likely to provide products or services to other companies as well, it means that the scale and the scope of the attack is far greater.

 

Despite this, according to the Cyber Breaches Survey 2023, only 13% of businesses regularly review the risks posed by their immediate suppliers. The National Cyber Security Centre (NCSC) recently published new guidance on supply chain security to improve awareness and promote the adoption of good practice.”

 

He also adds: “Paying ransoms to cyber criminals does not guarantee that all the data will be returned. In fact, full recovery is extremely rare, and paying may simply expose you to further ransomware attacks in the future."

 

Read more about the attack and Simon’s views here: https://inews.co.uk/news/technology/zellis-cyber-attack-russian-hackers-leak-ba-boots-bbc-staff-data-ransom-2393574

 

 

A Deepfake Scandal

In other news, the FBI published a warning last month highlighting a rising trend of threat actors using deepfake and AI tools to create explicit images and videos for sextortion: schemes that blackmail individuals by threatening to release revealing, embarrassing or compromising material unless payment is made.

 

According to the alert, malicious actors are using “content manipulation technologies and services to exploit photos and videos—typically captured from an individual's social media account, open internet, or requested from the victim—into sexually-themed images that appear true-to-life in likeness to a victim, then circulate them on social media, public forums, or pornographic websites”.

 

This is what Professor Lisa Wilson, member of our Advisory Council and Global Technology Influencer, had to say about it:

 

"As a society we need to be taking a more holistic helicopter viewpoint on this topic and the merging cyber risks being posed. All digital content is a frontline target for advanced AI platforms like GPT3 to be used for ill-gotten gain by malicious actors to produce deep fakes and also, to perpetuate serious social engineering through misinformation, disinformation and false information. This is just one example of the serious implications of people not being aware of how digital content and AI can be used. 

 

I am 100% an advocate for emergent technology. However, to support this, I also always advocate security by design, not security as a prescriptive measure for already known risks and issues. We know, for example, that utilising simple tools like uploading blockchain-protected user-generated content and images via already available free applications like VDXit can and will prevent malicious actors using imagery. We also know that the social media platforms have the capacity to authenticate profiles like we do financial wallets using KYC and AML, but they choose not to. Legislation like the Online Safety Bill only impacts people and events when harm has already occurred. Focusing on headline-grabbing rhetoric by the media creating a new term, ‘sextortion’, simply masks the real gravity of a bigger problem. Education of people regarding the illicit use of digital material, using these sorts of topics as examples, is a far better approach."

 

CyberWire covered Lisa’s thoughts in their Privacy Pro Briefing this week too. Check it out here: https://thecyberwire.com/newsletters/privacy-briefing/5/109

 

There goes another Twitter Exec…

 

Lisa also shared the following comment in response to news of the resignation of Twitter’s head of trust and safety, Ella Irwin. 

 

"We are in an era of a misinformation, disinformation and false information epidemic. We know that during, and post Covid19 this has escalated as more people engage digitally. Unfortunately, we also know that some people, such as those who are conservative, intuitive, reactive and spontaneous are the most vulnerable to these types of information. They repost without checking facts, and perpetuate the flow of information at great speeds - faster than bots! However, we are also scraping these sites for AI, to be used by GPT3 and the like, and this furthers the social engineering and often more malicious intent of this information. 

 

This sort of socio-economic and public discord becomes a direct challenge for trust in decision makers and those in our societies doing good. Unrest and uneasiness are difficult to remove from memories. The case in which vaccination is falsely claimed to be the cause of autism is a great example. Despite the perpetrator going to jail, the false information still persists. These large platforms like Twitter and LinkedIn must have moderation processes to remove blatantly malicious and false information, but more importantly, they must start to take seriously the entry points for setting up profiles by using technology such as blockchain to authenticate profile applicants. 

 

That Twitter has lost its second head of trust and safety indicates a serious problem, and I too have veered away from the platform, tired of the constancy of false profiles, hijacked profiles and unmoderated, often vitriolic content. People are more cognisant of their personal and brand reputations, and no amount of money from Elon Musk will provide that until online security and the prevention of harm are considered foremost. It must always be balanced, and this creates the difficulty, because freedom of speech and different viewpoints are also the backbone of society. It’s why contention is high. But harm and security risks caused by deliberate and malicious, socially engineered vitriol are unacceptable in society also. The Twitters, Facebooks and LinkedIns of this world do have responsibilities to protect their subscribers from harm and are not held accountable as they ought to be. And of course, before all the naysayers decide to respond to this comment with vitriol - it is well known that I support the critical importance of social media platforms - they are indispensable - but so are cars and other technology. We don’t let people drive like maniacs on roads and harm others, and we shouldn’t let platforms drive social interaction without rules either."

 

Some of her thoughts were picked up in Verdict: https://www.verdict.co.uk/twitter-ceo-working-on-luring-back-lost-advertisers/ 

 

A New Tech Plan for the UK

 

With the next UK general election just a year away, the trade association techUK has warned that the government will have fewer resources to work with to secure the country’s place in the global tech race. That said, the UK has a solid foundation to build on, and the association has accordingly laid out 18 opportunities the country must seize.

 

Read Simon Newman’s thoughts on the new plan here: https://www.infosecurity-magazine.com/news/infosecurity-europe-techuk-new/ 

 

Smart Gadgets Introduce Risks

 

Last but not least, as we continue to accumulate a wealth of devices in our homes, from smart kitchen appliances to connected doorbells, it is crucial that we stay informed about the cyber risks they introduce.


Hear Chris White, another of our Advisory Council members and Head of Cyber & Innovation at The South East Cyber Resilience Centre, speak on the matter in The Sun: https://www.the-sun.com/tech/8495607/smart-home-gadgets-cyber-security-hacker-warning/