AI - Benefit or Security Threat?


Userlevel 7
Badge +17

Good day Community -

There’s been a lot of talk around AI, especially in the past several months. I’m not talking about any specific area really...just AI in general. AI is really gaining steam, in my opinion! Even our beloved Veeam has found a useful way to implement it within its product. 🙂 But is everyone’s rush to implement/integrate AI into everything really a good thing? I guess the best answer is, “it depends”. 😉

As one who works in the education space, this is a topic I’m really trying to wrap my brain around, namely:

  1. How best to implement pieces of AI that benefit the student population, as well as help our District be as “cutting edge” and “relevant” in the education space as possible;
  2. Where and how to put boundaries in place to prevent its abuse...and this is really a huge and broad area. Where do administrators begin? What systems can be put in place to prevent abuse of AI tools? Etc.

At my org, we are looking to hire a Junior Systems Engineer, and one of our interview questions is: “What challenges do you feel education will face in the next 3-5 years?” I’ve found it interesting that about half of our interviewed candidates have shared that integrating AI into our space will be the greatest challenge. I think they’re right.

Anyway, the goal of my post is to hear your thoughts on AI. Do any of you have AI concerns at work? In your personal life? Elsewhere? Or are you just “all-in”, as it were, and looking forward to what AI brings forth in the coming months and years? Should there be regulations around AI? At what level?...federal?...state/province?...local/city? Or maybe this should start with the tech companies?

Curious to hear your thoughts...


9 comments

Userlevel 7
Badge +20

I see AI falling into both categories, to be honest. When used right, it can benefit you at work with documents, etc. It is even starting to appear in smartphones like the new Samsung S24, which uses AI for many things; one of the cool ones is Circle to Search, which shows the power of AI.

We all know it can be used wrongly as well, benefiting those who take advantage of others, like hackers, etc.

I think we will have to see where AI goes in the next few years, but it will be interesting to see it evolve.

Userlevel 7
Badge +17

True...good points, Chris. I wonder if the negative effects of it will come to a point where they far outweigh the benefits? Hmm...time will tell, I guess 🤔

Userlevel 7
Badge +20

That is the “wait and see” approach we need to take at this point. It is certainly ramping up, though, with all the different chatbots available now, like ChatGPT, Copilot, Gemini, etc.

Userlevel 7
Badge +2

@coolsport00 ,
Thank you for sharing, 

When individuals who lack experience or expertise in a certain area rely on AI to make critical decisions, it can have negative consequences.

For instance, when AI is used to determine the security status or posture of a system, or to fix a security issue, it may overlook certain inputs if it is not configured properly or if it is fed incorrect data.

This can result in significant security breaches or failures that damage an organization's reputation or even lead to financial losses.

Another scenario: when a programmer at a company uses AI to generate code, it raises an intriguing question. What kind of suggestions or code would the same AI system create for that company's competitors?

This could lead to a legal battle over intellectual property rights, and there could be a massive bug or issue out in the wild that impacts not just the company, but also its customers or partners.

Userlevel 2

Good questions. I know it’s obvious, but GenAI produces output data. Sometimes the input data is sensitive and shouldn’t be used...that’s a real challenge ORGs have. However, the focus of my comment is on the output data and how that data is stored and protected. This is an area where I feel many ORGs simply don’t have the processes and associated tools in place to control and safeguard that data. With the new SEC regulations requiring public companies to report/disclose material cyber incidents, I see GenAI very much like the shadow IT challenge: individuals at these ORGs face a strong temptation to break the rules in order to accomplish their tasks and ‘get ahead’.

Userlevel 7
Badge +17

@vAdmin - you bring up some good points...very good points...about mis-programmed AI tools. Of course, once the errors are discovered, they can be corrected. But at what cost?

Then there are intellectual property and copyright infringements...some of which have already occurred (thus why I brought up my education entity).

Yes...very concerning points to ponder. Thanks for the insight/input.

Userlevel 7
Badge +2


@coolsport00 ,
Thank you for the kind reply,

Yes, that is exactly my concern: high reliance on AI in the future can be very dangerous, like in https://www.imdb.com/title/tt9603212 🙄 or some other defiant AI robot.

Userlevel 7
Badge +17

I think AI tools can help with many tasks. But it is a big mistake to think that you don't need any knowledge of the topics that AI supports. At the moment I see discussions about whether it is still necessary to learn to program, because AI can do it so well on its own.

In my experience, AI creates good frameworks for solving problems, but so far there have always been errors and inconsistencies in the code that had to be fixed manually. So if you can no longer program, how are you supposed to judge whether the generated code does what you intended, and whether it does it correctly? For me, AI is support for tasks, not a replacement for your own learning and experience.

The AI Act is currently being passed in Europe; it divides AI systems into different risk classes and imposes restrictions and requirements on each class. I'm excited to see how this develops.

Lately, for example, I've also seen some storage manufacturers integrating AI functions into their products to recognize malware patterns very quickly and sound the alarm, or take volumes offline, before major damage occurs. These are great uses of AI, which (at least for me) really add value.

Userlevel 7
Badge +17

Good info Joe.

I like the idea of dividing AI up into differing risk classes.
