Thanks for sharing @wolff.mateus! Can’t believe we’re already in the 40’s with the Fun Friday series!
AI is always an interesting discussion; there are countless "doomsday" AI scenarios we see in the media, such as Terminator, I, Robot, etc.
My view is that we have never been able to say "enough is enough" and be content with our progress, and in always pushing further we could go too far and cause disaster.
It’s interesting that Elon Musk is quoted in that, considering his company Tesla has recently been in the news because their self-driving "beta" (which, given it makes countless decisions per second, should count as AI for this argument) struggles to recognise children and can cause collisions and injuries as a result. Yet in the rush to get something to market, Tesla has still shipped the feature, just by slapping the term "beta" on it.
That’s my ultimate worry: to compete against other AI developers, corners will be cut and bad decisions will be made, and whilst I doubt it’ll be a "Terminator"-level event, there could be some terrible fallout all the same.
Absolutely, it can be dangerous. The things we’re using AI for now are relatively safe, but I think as time goes on and we put more and more trust in the machines, we could be setting ourselves up for disaster. For instance, do we want AI to do all of our driving for us, with all cars on the road controlled by the machines? Probably safe, but if something goes wrong, are we able and ready to override and take over? What about air traffic control, or the planes flying themselves? We certainly can’t just hand off the responsibility with a "set it and forget it" mentality. There’s so much it can do for good, but there’s also so much it can do badly. As long as we have proper failsafes in place, I think we’re fine, but we’re only human and we make mistakes, so not having failsafes, or having failsafes that don’t function as expected, could absolutely be an issue.
The other issue I feel with AI is how the models are being trained. There have been instances in the news where bias was built into the AI training models, which then led to real-life consequences. Additionally, there needs to be some scrutiny of the way the models are trained and the parameters they run under.
Take credit scoring, for example. If someone keeps getting declined for credit, say to purchase a home, but the organisation cannot disclose how the decision was made, how can the borrower check whether the decision was correct or not?
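To illustrate what a "disclosable" decision could look like, here's a minimal sketch in Python. Everything in it is made up for illustration (the features, the toy data, and the use of logistic regression are my assumptions, not how any real lender scores credit): with an interpretable scorer, the organisation could at least hand back rough per-feature "reason codes" showing what pushed the application towards a decline, which a borrower or regulator could then sanity-check.

```python
# Toy illustration only: a tiny, interpretable credit scorer that can
# report which features drove a decision. Not a real scoring model.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features: [income_k, debt_ratio, years_of_history, missed_payments]
X = np.array([
    [80, 0.2, 10, 0],
    [30, 0.6,  2, 3],
    [55, 0.4,  5, 1],
    [95, 0.1, 15, 0],
    [25, 0.7,  1, 4],
    [60, 0.3,  8, 0],
])
y = np.array([1, 0, 1, 1, 0, 1])  # 1 = approved, 0 = declined

model = LogisticRegression(max_iter=1000).fit(X, y)

applicant = np.array([[40, 0.55, 3, 2]])
decision = model.predict(applicant)[0]

# Crude "reason codes": each feature's contribution to the score
# (coefficient * value). Negative values pushed towards a decline.
feature_names = ["income_k", "debt_ratio", "years_of_history", "missed_payments"]
contributions = model.coef_[0] * applicant[0]

print("Decision:", "approved" if decision == 1 else "declined")
for name, contrib in sorted(zip(feature_names, contributions), key=lambda t: t[1]):
    print(f"  {name}: {contrib:+.2f}")
```

The point isn't that this particular model is the answer; it's that when a decision affects someone's ability to buy a home, there has to be *something* like this that the borrower can actually inspect and challenge, rather than a black box that just says "no".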