In the ever-evolving landscape of the IT industry, few advancements have gained as much attention and intrigue as generative artificial intelligence (AI). Since its emergence onto the scene in 2023, generative AI has swiftly captivated industries, transforming fields from content creation to healthcare diagnostics.

However, one area that stands to benefit profoundly from this groundbreaking technology is cybersecurity, through the combination of AI and smart automation. We interviewed Flexisource IT’s Compliance Officer, Earvin Camanian, to talk about AI, smart automation, and the threats we should know about.

What can AI and smart automation mean for cybersecurity in 2024?

AI and smart automation have had a huge impact on operational processes and cost-saving initiatives for organisations. These solutions enable organisations to work efficiently and swiftly, producing good-quality products and services while minimising defects and delays. Likewise, AI and smart automation make life more convenient for organisations and individuals alike.

However, “convenience leads to complacency and negligence,” because we tend to become dependent on AI and smart automation. The number of AI tools, especially generative AI, is growing rapidly, and cybercriminals are using that opportunity to victimise users.

Hence, cybersecurity professionals come into play to ensure a product or solution has the security controls required by industry standards, mitigating risk and reducing harm to users, particularly around AI. This is overwhelming work, given the limited number of cybersecurity professionals globally. The problem is compounded because not all AI providers invest in security; some focus more on profit, and some even pose as legitimate AI providers in order to steal data from users.

What are the critical threats from the growing use of AI tools?

The critical threats from the growing use of AI tools are:

Data Breach

As AI tools become more prevalent, the risk of data breaches increases. These tools often handle vast amounts of sensitive information, making them attractive targets for cybercriminals seeking to exploit vulnerabilities and gain unauthorised access to data.

Identity Theft

With the growing sophistication of AI algorithms, cybercriminals can use AI-powered techniques to steal personal information and perpetrate identity theft at scale. This can lead to financial losses, reputational damage, and emotional distress for affected individuals.

Deepfakes

AI-powered deepfake technology enables the creation of highly convincing fake videos and audio recordings. Misuse of this technology can lead to the spread of disinformation, manipulation of public opinion, and damage to the credibility of individuals and institutions.

Automated Weapon Systems

The development and deployment of AI-driven automated weapon systems raise ethical concerns and risks of unintended consequences. These systems have the potential to autonomously identify and engage targets, leading to civilian casualties, escalation of conflicts, and challenges in maintaining accountability and compliance with international laws.

Privacy Violations

AI tools often rely on large datasets containing personal information, raising concerns about privacy violations. Unauthorised access to or misuse of these datasets can result in the exposure of sensitive personal information, erosion of privacy rights, and loss of trust in institutions that handle such data.

Loss of Control

The increasing reliance on AI tools in critical systems and decision-making processes raises concerns about the loss of human control. Errors or biases in AI algorithms can have significant consequences, leading to unintended outcomes, lack of accountability, and challenges in understanding or rectifying algorithmic decisions.

Social Manipulation

AI algorithms can be used to manipulate social media platforms, online discourse, and public opinion. This manipulation can take various forms, including the spread of misinformation, polarisation of communities, and amplification of extremist ideologies, leading to social unrest and erosion of democratic principles.

Techno-dependent Humans

Over-reliance on AI tools can lead to techno-dependence, where humans become increasingly reliant on technology for decision-making, problem-solving, and daily tasks. This dependence can diminish critical thinking skills, creativity, and human autonomy, posing risks to individual well-being and societal resilience.

What other developments will continue to trend in cybersecurity in 2024?

Standards bodies such as NIST and ISO are working painstakingly to address the growing concerns and issues related to AI. They recently released new versions (NIST Cybersecurity Framework 2.0 and ISO/IEC 27001:2022) so that organisations can assess their security posture for any gaps within their networks and make sure their level of security reaches an acceptable level.

Security providers are also set to release new, robust solutions in response to AI, such as data loss prevention and other defensive measures that protect the identity of users.

There are also ongoing projects to apply AI to cybersecurity itself, and we are keen to see how these new solutions will work in our profession.

How should organisations address the cybersecurity challenges in 2024?

  • Step 1: Conduct a risk assessment of the organisation’s IT infrastructure, assets, physical security, and people. Identify the risks that could damage the organisation’s reputation and business operations. Once a risk is identified, create a treatment plan to mitigate, avoid, or transfer it; a toy scoring sketch appears after this list. Senior management must consider their risk tolerance to make sure each identified risk does not hamper business functions.
  • Step 2: Develop information security, data privacy, and cybersecurity policies and procedures to enforce and execute security measures based on management objectives, industry standards, and legal and regulatory requirements.
  • Step 3: Invest in information security training for employees to build awareness of how to act and respond to a security incident. A good example is “Phished Academy,” which trains employees to identify phishing emails; a toy indicator check appears after this list.
  • Step 4: Invest in IT security solutions such as Identity and Access Management, Data Loss Prevention, web filtering, SIEM, FIM, Vulnerability Assessment/Penetration Testing, and a Security Operations Centre (SOC), on top of antivirus, firewall, and other related security solutions, to protect the organisation’s internal network; a toy Data Loss Prevention sketch appears after this list.
  • Step 5: Lastly, hire a cybersecurity professional to provide guidance and support on protecting and securing the organisation’s network against inside and outside threats.
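
To make Step 1 concrete, here is a minimal sketch of the likelihood-by-impact scoring that often sits behind a risk assessment. The 1–5 scales, the tolerance threshold, and the sample risks are illustrative assumptions, not Flexisource IT’s actual methodology.

```python
# Toy risk register: score = likelihood x impact, compared against a
# risk tolerance set by senior management. All values below are
# illustrative assumptions, not a real organisation's figures.
RISK_TOLERANCE = 9  # scores above this need a treatment plan

risks = [
    # (description, likelihood 1-5, impact 1-5)
    ("Phishing compromise of staff email", 4, 4),
    ("Unpatched public-facing server", 3, 5),
    ("Tailgating into the server room", 2, 3),
]

for description, likelihood, impact in risks:
    score = likelihood * impact
    # Treatment options: mitigate, avoid, or transfer; otherwise accept.
    action = "treat (mitigate/avoid/transfer)" if score > RISK_TOLERANCE else "accept and monitor"
    print(f"{score:>2}  {action:<32}  {description}")
```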
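
For Step 3, the sketch below shows the kind of simple red-flag checks that awareness training teaches people to perform mentally on an incoming email. The keyword list, the helper name, and the sample message are illustrative assumptions; this is not part of Phished Academy or any real detection product.

```python
import re

# Toy red-flag checks, loosely modelled on what phishing-awareness
# training teaches people to spot. Keywords are assumptions only.
URGENCY_WORDS = {"urgent", "immediately", "suspended", "verify"}

def phishing_indicators(sender, subject, body, links):
    """Return human-readable red flags found in an email."""
    flags = []
    text = (subject + " " + body).lower()
    # Red flag 1: urgency or pressure language.
    if any(word in text for word in URGENCY_WORDS):
        flags.append("uses urgency/pressure language")
    # Red flag 2: links whose domain differs from the sender's domain.
    sender_match = re.search(r"@([\w.-]+)", sender)
    sender_domain = sender_match.group(1).lower() if sender_match else ""
    for link in links:
        link_match = re.search(r"https?://([\w.-]+)", link)
        if link_match and sender_domain and sender_domain not in link_match.group(1).lower():
            flags.append("link points to unrelated domain: " + link_match.group(1))
    return flags

print(phishing_indicators(
    sender="IT Support <helpdesk@company.example>",
    subject="URGENT: your account will be suspended",
    body="Verify your password immediately.",
    links=["http://login.not-company.example/reset"],
))
```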
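
And for the Data Loss Prevention solution mentioned in Step 4, the core idea can be sketched as pattern matching on outgoing text. Real DLP products do far more (data fingerprinting, context analysis, policy enforcement); the regexes and labels below are illustrative assumptions only.

```python
import re

# Patterns that *look like* sensitive data; illustrative assumptions only.
PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_outgoing(text):
    """Flag substrings that look like sensitive data before they leave the network."""
    findings = []
    for label, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((label, match.group()))
    return findings

draft = "Send the invoice to juan@client.example, card 4111 1111 1111 1111."
for label, value in scan_outgoing(draft):
    print("FLAGGED", label + ":", value)
```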

Earvin Camanian is a Compliance Officer for Flexisource IT. This article is based on his brown bag session, Data Privacy Awareness and Best Practices. Visit our social media pages for our brown bag sessions.

Lexin-Ann Morales - Branding & Marketing Lead

As a writer and branding strategist, Lex has found her passion in telling business stories in the form of impactful branding and marketing. When not working, she loves reading books about herpetology and botany, and going to the library.
