A key design flaw, when you think about it
Here is something we had not considered until a soon-to-be-released video showed us how AI works. As the video notes, AI “sits outside” security protocols. This means that it is quite easy to hack AI programs.
Why does it matter?
AI makes decisions you may not want it to make
AI is increasingly “making decisions” about how safety-critical products should function. You’ll often hear about “machine learning,” run by AI, but the ease with which AI programs can be hacked means that you can “teach” the machine some fairly dangerous things, if you’re so inclined. As we know, “bad actors” from around the globe can be so inclined.
Vehicles (cars, trucks, buses), medical technologies and treatments, critical infrastructure, and other things that humans rely on to keep us safe are increasingly driven by AI. Security protocols were not designed with AI in mind, which means these programs can be manipulated to function in malicious ways.
Society needs to step up.
But how often have you heard our elected officials focus on the necessary framework to keep track of what’s going wrong, who’s been affected, and what should be done?
Rarely.
As we’ve highlighted before, there is no regulatory framework or meaningful oversight of 21st century technologies.