Common Risks within AI and ML

Artificial Intelligence through the Lens of Cybersecurity

Kenneth Reilly
4 min read · May 27, 2023
Photo by Michael Dziedzic on Unsplash

While many advancements in the modern age are simply improvements over basic processes (like cooking or keeping your home warm and safe), some of them have brought about radical changes in our daily work and family lives, especially in areas such as production and communication.

Few technologies are as widely misunderstood as those built on the concepts of Artificial Intelligence (the ability of a machine to perceive, synthesize, and make inferences from information) and Machine Learning (the development of mathematical models that can leverage new data for self-improvement).
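
To make the self-improvement half of that definition concrete, here is a minimal sketch of a perceptron that nudges its weights each time a new labeled observation arrives. The data stream, learning rate, and decision rule are illustrative assumptions, not any particular product's algorithm:

```python
import numpy as np

# A minimal sketch of "leveraging new data for self-improvement":
# a perceptron that updates its weights whenever a prediction is wrong.
rng = np.random.default_rng(0)
w = np.zeros(3)  # [bias, weight_x0, weight_x1]

def predict(x):
    return 1 if w[0] + w[1] * x[0] + w[2] * x[1] > 0 else 0

def update(x, label, lr=0.1):
    """Adjust the weights in proportion to the prediction error."""
    global w
    error = label - predict(x)
    w = w + lr * error * np.array([1.0, x[0], x[1]])

# A stream of new observations: points above the line y = x are class 1.
for _ in range(1000):
    x = rng.uniform(-1, 1, size=2)
    update(x, label=1 if x[1] > x[0] else 0)

print(predict([0.2, 0.8]))  # expected: 1 (above the line)
print(predict([0.8, 0.2]))  # expected: 0 (below the line)
```

No one wrote the boundary rule into the program; it emerged from the data, which is exactly the property that makes these systems both powerful and hard to audit.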

Fully understanding the scope of production AI requires some understanding of the mathematics, electronics, programming languages, SDKs, and other tools used to build and deploy these systems, in addition to the philosophical and ethical issues that arise when mixing people with the machines designed to emulate their thinking and behavior.

In addition to the everyday risks found in general infrastructure, such as zero-days and data breaches, products built with cognitive tools are likely to carry unique risks directly related to these technologies. We will examine a few of these risks in this article.

Background

The history of AI begins both in myths of artificial beings gifted with intelligence by master craftsmen and in the work of philosophers who attempted to describe the processes of human thinking in formal symbolic notation, much as modern engineers and mathematicians do today.

Creating artificial beings or machines capable of some form of sensory perception or conscious thought can be described as a process: scoping whatever desired behavior is to be emulated or created, abstracting those requirements into mathematical models, and finally implementing those models in physical space.

Since antiquity, mathematicians and engineers have collaborated to build machines that capture the imaginations of people around the world. The inner workings of these machines are generally kept secret for many reasons — an engineering concept we will explore in the next section.

Implicit Risks

Although any system carries implicit risks simply by having some level of functionality and real-world requirements, AI and ML are implicit by their very nature: the machine attempts to infer meaning from your directives. This stands in stark contrast to the explicit nature of more general programming, especially strongly typed or low-level work such as systems-level C++ or assembly, where directives exercise near-total control over the machine.

This lack of visibility into the exact operation of an implicit design creates the risk of unwanted or even dangerous behavior, because the total set of possible input-to-output combinations cannot be enumerated.
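
A toy contrast makes the distinction concrete. In the sketch below (which uses scikit-learn purely for illustration; the temperature threshold and data are invented), the explicit function can be audited line by line, while the learned model's behavior lives in fitted parameters and can only be sampled, never fully enumerated:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Explicit logic: the whole input-to-output mapping is visible in the
# source and can be audited exhaustively.
def explicit_is_hot(temperature_c: float) -> bool:
    return temperature_c > 30.0

# Implicit logic: the mapping lives in learned parameters.
# (Toy data and threshold are made up for illustration.)
rng = np.random.default_rng(1)
temps = rng.uniform(0, 40, size=(500, 1))
labels = (temps[:, 0] > 30.0).astype(int)
model = LogisticRegression().fit(temps, labels)

def implicit_is_hot(temperature_c: float) -> bool:
    return bool(model.predict([[temperature_c]])[0])

# In-distribution the two agree; for inputs far outside the training
# data, nothing in the source code itself pins down the answer.
print(explicit_is_hot(35.0), implicit_is_hot(35.0))  # True True
print(explicit_is_hot(20.0), implicit_is_hot(20.0))  # False False
```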

In addition to these technical risks, other issues arise in the design, development, and deployment of AI-powered solutions, such as unforeseen costs, over-complexity and over-engineering, and bias or other ethical issues.

Conway’s Law (a system’s design mirrors the communication structure of the organization that builds it) is a useful paradigm for maintaining transparent continuity across the SDLC through explicitly defined terminology, mapping of information-distribution requirements throughout the organization, and other means. This is difficult with AI-driven products, given the implicit nature of AI behavior inside the already challenging work of clearly defining requirements. Technical debt generally follows miscommunication, as the tech stack ages while pieces of it are re-tooled to meet ever-changing scope.

In contrast to simple machines that perform exact functions (such as embedded tools built with C++ or LISP), many AI products are driven by a newer-is-better ideology, resulting in a never-ending race toward the next product with theoretically better functionality. There are exceptions, of course, generally in localized, specific applications such as smart thermostats or robot vacuums, which run lean homegrown algorithms on specialized chips and boards and can perform their task satisfactorily, indefinitely.

Security Concerns

As anyone familiar with information security knows, the increase in complexity described above expands both the attack surface and the window of time in which risks can develop. In a highly competitive or rapidly evolving market, frequent codebase and personnel changes compound these risks further.

Special care must be taken when developing AI/ML technologies to ensure that equal attention is paid to securing the underlying infrastructure against general risks and attacks, and to securing the AI models themselves against adversarial attacks that exploit the mathematical and timing limitations of the models and/or hardware vulnerabilities in GPU or TPU silicon.
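
As a rough illustration of what an attack on a model's mathematical limitations looks like, the sketch below mounts an FGSM-style evasion attack (in the spirit of Goodfellow et al.) against a toy logistic-regression classifier. The dataset, model, and attack budget epsilon are all assumptions chosen for demonstration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Build a toy, linearly separable classification problem.
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 20))
true_w = rng.normal(size=20)
y = (X @ true_w > 0).astype(int)

model = LogisticRegression().fit(X, y)
print("clean accuracy:      ", model.score(X, y))

# FGSM idea: perturb each input in the direction that increases the
# model's loss. For logistic regression, the input gradient of the
# log-loss is (p - y) * w, so sign((p - y) * w) is the attack direction.
p = model.predict_proba(X)[:, 1]
grad = (p - y)[:, None] * model.coef_   # shape (1000, 20)
epsilon = 0.5                           # attack budget (assumed)
X_adv = X + epsilon * np.sign(grad)

print("adversarial accuracy:", model.score(X_adv, y))
```

Each perturbed input differs from the original by at most epsilon per feature, yet accuracy typically collapses; analogous attacks against deep networks are well documented.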

Examples of AI-specific attacks include malware embedded in images, lasers used to inject signals into traffic-light cameras and even the microphones of personal assistant speakers, and drones compromised via EMF/RF interference.
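
The first of these is easy to demonstrate in miniature. The sketch below hides an arbitrary byte string in the least-significant bits of an image's pixels, the basic mechanism behind many image-embedded payloads. The image and payload here are placeholders; a real attack pairs this with a downstream component that extracts and executes the hidden data:

```python
import numpy as np

# A toy "image": random 8-bit grayscale pixels.
rng = np.random.default_rng(7)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

payload = b"hello"  # stand-in for a hidden payload
bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))

# Embed: overwrite the least-significant bit of the first len(bits) pixels.
flat = image.flatten()
flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
stego = flat.reshape(image.shape)

# The image is visually unchanged (each pixel moves by at most 1)...
print("max pixel change:", int(np.abs(stego.astype(int) - image.astype(int)).max()))

# ...but the payload is fully recoverable by anything that knows to look.
recovered = np.packbits(stego.flatten()[: bits.size] & 1).tobytes()
print("recovered:", recovered)
```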

Conclusion

While artificial intelligence, machine learning, and other areas of cognitive computing are exciting and promising fields, their risks are at once easy to demonstrate and difficult to quantify. Responsibility therefore lies not only with the developers and producers of AI devices and materials, but also with the consumers who use these products, and unfortunately even with the potential unwitting victims of aggressive AI systems or infrastructure.

剣一
