AI & Cybersecurity in Military Tech: Insider Q&A

Cybersecurity Expert Josh Lospinoso Talks About Artificial Intelligence Threats in Military Operations

Josh Lospinoso, a former Army captain and cybersecurity expert, recently testified before a Senate Armed Services subcommittee about how artificial intelligence (AI) can help protect military operations. As the CEO of Shift5, an AI cybersecurity firm that works with the U.S. military, rail operators, and airlines, Lospinoso knows all too well the vulnerabilities of weapons systems and the major threats they pose to national security. In an edited interview with The Associated Press, Lospinoso discusses how AI can serve as both a defensive tool and an attack vector in military applications.

The Threats of AI-Enabled Technologies

Lospinoso identified two principal threats to AI-enabled technologies: theft and data poisoning. Theft is relatively self-explanatory, but data poisoning refers to digital disinformation. If adversaries can manipulate the data that AI-enabled technologies learn from, they can drastically alter how those technologies behave.
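To make the data-poisoning idea concrete, here is a purely illustrative sketch (not from the interview): a toy nearest-centroid classifier is trained twice, once on clean data and once on data where an attacker has injected a handful of mislabeled points. The injected points drag one class centroid away from its true position, and accuracy on the clean data collapses.

```python
# Illustrative data-poisoning sketch: an attacker injects mislabeled
# training points, corrupting a simple nearest-centroid classifier.

def centroid(points):
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def train(data):
    # data: list of ((x, y), label) pairs with labels 0 or 1
    by_label = {0: [], 1: []}
    for point, label in data:
        by_label[label].append(point)
    return {lbl: centroid(pts) for lbl, pts in by_label.items()}

def predict(model, point):
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(model, key=lambda lbl: dist2(model[lbl], point))

def accuracy(model, data):
    return sum(predict(model, p) == lbl for p, lbl in data) / len(data)

# Two well-separated clusters of correctly labeled points
clean = [((x, y), 0) for x in range(3) for y in range(3)] + \
        [((x + 10, y + 10), 1) for x in range(3) for y in range(3)]

# The attacker adds a few far-away points with the WRONG label,
# dragging the class-0 centroid into class-1 territory.
poisoned = clean + [((30, 30), 0)] * 5

clean_model = train(clean)
poisoned_model = train(poisoned)

print(accuracy(clean_model, clean))     # 1.0 on the clean data
print(accuracy(poisoned_model, clean))  # drops sharply after poisoning
```

The attacker never touches the model itself, only the training data, which is exactly why poisoned data pipelines are so dangerous for systems that continuously retrain on field-collected inputs.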

The Vulnerabilities of Military Software Systems

Lospinoso acknowledges that the vast majority of military systems are decades old and have been retrofitted with digital technologies, making them porous, hard to upgrade, and vulnerable to attack. He believes existing weapons systems must be adequately secured, and that paying down the accumulated technical debt will take a long time. Additionally, upgrading the current digital components of these systems may prove challenging. In the meantime, Lospinoso suggests using AI to defend these systems if someone attempts to compromise them.

The Concerns with Rushing AI Products to Market

Lospinoso cautions against rushing AI products to market, as such products are likely to catch fire (be hacked, fail, or cause unintended damage). At the same time, business leaders, wary of being caught off guard by the economic and structural changes AI will bring to their business models, face pressure to move quickly.

The Use of AI in Military Decision-Making

Lospinoso cautions against using AI in military decision-making, such as targeting, because he believes the technology is not ready yet. AI algorithms are still a long way from the maturity required to be trusted in a lethal weapons system.

Conclusion

Josh Lospinoso’s expertise in cybersecurity and AI-enabled technologies is essential for understanding the vulnerabilities and threats to national security. While AI can be a powerful tool to defend against cyberattacks, the rush to create and market new products may prevent the necessary precautions from being taken. AI may also pose a risk if adversaries can manipulate the data that AI-enabled technologies run on. Nonetheless, Lospinoso remains optimistic about the role AI can play in making military operations more efficient and secure if implemented cautiously.

FAQs

What are the main threats to AI-enabled technologies?

The two main threats to AI-enabled technologies are theft and data poisoning. Data poisoning is essentially digital disinformation: if adversaries manipulate the data an AI system learns from, they can corrupt how it operates.

Are military software systems vulnerable to cyberattacks?

According to Lospinoso, most military systems are decades old and porous, making them hard to upgrade and vulnerable to cyberattacks. Because of accumulated technical debt, paying for the upgrades needed to properly secure these systems will take a long time.

Why shouldn’t we rush AI products to market?

Lospinoso believes that rushing AI products to market may cause them to fail, be hacked, or cause unintended damage. It is therefore essential to take the necessary precautions to ensure that AI products are safe before they are deployed.

Can AI be used in military decision-making?

In Lospinoso’s opinion, AI is not yet advanced enough to be trusted with military decision-making, particularly in lethal weapons systems.
