
Security first: Your AI is so easy to hack …



Presented by Modzy


AI is becoming mainstream, embedded into more and more applications of everyday life. From healthcare and finance to transportation and energy, the opportunities appear endless. Every sector is ripe with opportunities to save time, money, and other resources, and AI provides many of the solutions.

Yet critical questions related to AI security remain unanswered. How is your IT organization managing AI security as it scales across the enterprise, and do you have the audit functionality to answer regulators' questions?

For data scientists, how do you ensure your AI models remain reliable over time? For developers, how do you extend normal DevOps processes to AI-enabled software development? Asking the right security questions must be a fundamental component of your strategy for scaling AI.

Organizations are just now investing in tools to manage and monitor their AI as they look to achieve enterprise scale, leading to the emergence of a growing market of MLOps and ModelOps tools that fit within their existing tech stacks.

However, this investment reflects a broader trend: organizations are not applying the same rigor to AI development and deployment that they would expect in system or application development. The same is true for AI security. Because many organizations are still in the weeds of addressing AI management, they're pushing security priorities down the line, which will only lead to bigger problems.

With so much at stake when it comes to AI deployments, security can't be an afterthought; it's arguably even more imperative to address security at the beginning of an enterprise AI deployment.

Attacks from every angle

The reality for AI-enabled systems is that they have an increased attack surface for bad actors to exploit. Fortunately, MLOps tools can help you address access control for AI that’s being used inside your organization, and many of these tools also help with API security and traceability. At the same time, there are other types of threats to contend with, and many organizations aren’t yet thinking about how to factor these into their overall security posture or response.

Adversarial AI refers to a branch of machine learning focused on undermining AI models, either by manipulating their outputs or by degrading their performance. Today, bad actors can feed bad or "poisoned" data into a model's training pipeline to corrupt its behavior, or reverse engineer a model's weights to craft inputs that change its outputs.
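To make that concrete, here is a minimal, hypothetical sketch of label-flipping data poisoning in Python. It uses scikit-learn and synthetic data purely for illustration; the dataset, model, and poisoning rate are assumptions of this example, not anything described in the article.

```python
# Minimal sketch: label-flipping data poisoning on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline model trained on clean labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Simulate an attacker who can tamper with the training pipeline by
# flipping 30% of the training labels.
rng = np.random.default_rng(0)
flip_idx = rng.choice(len(y_train), size=int(0.3 * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[flip_idx] = 1 - y_poisoned[flip_idx]

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("accuracy with clean labels:   ", clean_model.score(X_test, y_test))
print("accuracy with poisoned labels:", poisoned_model.score(X_test, y_test))
```

In a real pipeline the tampering is rarely this blunt, which is exactly why data provenance checks and monitoring matter.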

In the case of data poisoning in images, for example, the manipulations are often subtle enough to be invisible to the naked eye. Take the well-publicized news story of a self-driving car tricked into misinterpreting a stop sign as a 60MPH sign: whether the tampering happens in the training data or on the sign itself, small changes a human would never notice can alter a model's behavior, and the example gives a picture of the types of risks that lie ahead.
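The stop-sign scenario is an attack on a trained model's inputs rather than its training data. As a rough, assumed illustration of the idea, the sketch below applies the well-known fast gradient sign method (FGSM) to a toy PyTorch model; the linear model, random input, and epsilon value are stand-ins, not a real traffic-sign classifier.

```python
# Minimal sketch: crafting an adversarial input with FGSM against a toy model.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Linear(100, 2)          # toy stand-in for a trained classifier
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 100)            # a benign input
label = model(x).argmax(dim=1)     # the model's current prediction

# FGSM: nudge every input dimension in the direction that most increases
# the loss for the current prediction.
x_adv = x.clone().requires_grad_(True)
loss = loss_fn(model(x_adv), label)
loss.backward()
epsilon = 0.25                     # perturbation budget (illustrative)
x_perturbed = x_adv + epsilon * x_adv.grad.sign()

print("prediction on original input: ", label.item())
print("prediction on perturbed input:", model(x_perturbed).argmax(dim=1).item())
```

With a large enough epsilon the prediction flips; against real, well-trained vision models, perturbations far too small for a person to notice are often sufficient.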

Fortunately, approaches for proactively managing these threats are emerging, but few organizations invest the time and money in security from the start, which too often means acting only after the damage is done. Adversarial defense should be an integral component of your overall AI security strategy; otherwise you run the risk of leaving the door open for hackers to compromise your AI systems from multiple entry points.

The rise of shadow AI

It’s easy to imagine threats outside your organization. What about inside?

For organizations embracing innovation, there are often many teams developing AI solutions within the overall enterprise. If they can’t find what they need, they get it or create it. That’s great, except you can’t govern and protect what you don’t know about.

Shadow AI refers to the use of AI-related programs or services without the knowledge of your machine learning, IT, or security groups. On average, an estimated 40% of all IT spending occurs outside the IT department, and serious security gaps can emerge when that happens.

How do you address shadow AI? Create greater collective awareness of safety and security risks across the enterprise and engage with AI development teams. The MLOps and ModelOps tools mentioned earlier can help you centralize AI governance by making models easy to manage and monitor over the long term. With a line of sight into how AI is being used and who's using it, you can bring those solutions into infrastructure you control.
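One hypothetical way to picture that line of sight is a central model inventory that every team registers against. The `ModelRecord` fields below are illustrative assumptions, not any particular product's schema; real MLOps and ModelOps platforms provide this kind of registry out of the box.

```python
# Minimal sketch: a central inventory so no deployed model stays "shadow AI".
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ModelRecord:
    name: str
    version: str
    owner_team: str
    training_data_source: str
    deployed_endpoint: str
    registered_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# In practice this would be a database both the ML and security teams can
# query; a dict keeps the sketch self-contained.
registry: dict[str, ModelRecord] = {}

def register(record: ModelRecord) -> None:
    registry[f"{record.name}:{record.version}"] = record

register(ModelRecord(
    name="churn-predictor",                              # hypothetical model
    version="1.3.0",
    owner_team="marketing-analytics",
    training_data_source="s3://example-bucket/churn/2020-06",
    deployed_endpoint="https://api.example.com/v1/churn",
))

# A security review can now answer: what is deployed, and who owns it?
for key, record in registry.items():
    print(key, record.owner_team, record.deployed_endpoint)
```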

Constant, unexpected change is another area of concern. Consider the last few months, as we've grappled with the pandemic. All of a sudden, previously reliable models were put to the test, and some failed to adapt quickly to real-time data. How will your models respond to unprecedented situations?
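One simple, assumed way to catch that kind of failure early is to monitor production inputs for distribution drift. The sketch below compares a live feature against its training-time distribution with a two-sample Kolmogorov-Smirnov test from scipy; the synthetic data, the simulated shift, and the 0.01 threshold are all illustrative choices.

```python
# Minimal sketch: flag a model for review when a feature's distribution drifts.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=10_000)

# Simulate production data whose distribution has shifted after a sudden
# real-world change.
live_feature = rng.normal(loc=0.6, scale=1.3, size=2_000)

stat, p_value = ks_2samp(training_feature, live_feature)
if p_value < 0.01:
    print(f"Drift detected (KS statistic = {stat:.3f}); "
          "review the model before trusting its outputs.")
else:
    print("No significant drift detected.")
```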

The answer is clear. Tackle security issues head on, before AI models head into production. Since most organizations are still early stage, this is the time to act. Position your AI investments for less risk and more reward. Take these steps now to get a robust handle on security:

  • Step 1. Consider security across the AI pipeline, from data ingestion to model training to deployment.
  • Step 2. Address shadow AI now. Centralize your AI management to provide guidance and control across the development ecosystem.
  • Step 3. Invest in management and monitoring tools. You need to know what’s happening in real time and track logging and audit information along the way. More comprehensive documentation provides greater transparency and helps with accountability and auditing (a minimal logging sketch follows this list).
  • Step 4. Embed adversarial defense in your tech stack. Look for ways to protect your assets from attack. There is no respite ahead. Bad actors are increasingly sophisticated. The attack surface keeps spreading.
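As a minimal illustration of the logging and audit trail called for in Step 3, the sketch below emits one structured JSON record per prediction. The function and field names are hypothetical, not a specific monitoring product's format.

```python
# Minimal sketch: structured audit logging for every model prediction.
import json
import logging
from datetime import datetime, timezone

audit_logger = logging.getLogger("model_audit")
audit_logger.setLevel(logging.INFO)
audit_logger.addHandler(logging.StreamHandler())

def log_prediction(model_name, model_version, caller, inputs_sha256, output):
    """Record who called which model version and what it returned."""
    audit_logger.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model_name,
        "version": model_version,
        "caller": caller,
        "inputs_sha256": inputs_sha256,   # hash of inputs, not raw data
        "output": output,
    }))

# Hypothetical call from a serving endpoint:
log_prediction("churn-predictor", "1.3.0", "svc-marketing-app",
               "3f2a9c...", {"churn_probability": 0.82})
```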

Can you handle the consequences of ignoring AI security? Address your security risks before the worst happens. These steps are the starting point for developing and deploying the best AI: trusted, safe, reliable, and secure. Invest in tools to help you get there, and ensure your platform can defend against everything from malicious intent to compromised integrity to inadvertent influence.

Josh Sullivan is Head of Modzy.


Sponsored articles are content produced by a company that is either paying for the post or has a business relationship with VentureBeat, and they’re always clearly marked. Content produced by our editorial team is never influenced by advertisers or sponsors in any way. For more information, contact sales@venturebeat.com.


