Popular culture references such as “The Social Dilemma” often give artificial intelligence a bad rap. Between evil reputations like killer robots and deepfakes, it can seem almost impossible to believe that AI can make a positive difference in the world. But although there are real issues to be addressed, AI isn’t dangerous, it isn’t out to get anyone, and it isn’t trying to brainwash people, according to a panel from CompTIA’s AI Advisory Council.
The council met recently to discuss “The Social Dilemma,” a Netflix documentary that raises questions about the use of AI in social media platforms. Although the documentary was eye-opening, the panelists said, it did not offer an exhaustive view of AI.
As with many other innovations, concerns are often misdirected at the technology itself rather than at how it is used. According to the council members, the primary goal of most AI-related companies is to improve our lives by creating innovative solutions that benefit consumers, businesses, and the rest of the world.
“The documentary was so dramatic. They were trying to show how [AI could] manipulate the teenage boy, depending on the day and the actions that he could take,” said Rama Akkiraju, distinguished engineer, IBM Watson, and cochair of the AI Advisory Council.

She noted that the film showed how AI in social media could find the perfect time to push a specific advertisement to a teenager, causing the boy to react in an unhealthy way.
“It reminded me of when credit cards first came out; it was said everybody would shop beyond their means,” Akkiraju said. “That was true for some people, but society has evolved.” The challenge, she added, is finding the right balance between AI being used for good purposes and AI being used for bad.
Technology, Legal, and Education Developments Underway
Kaladhar Voruganti, Equinix’s vice president of technology innovation, noted that we are still in the initial stages of AI innovation, and that three aspects of AI need to develop in order to address concerns and achieve maximum value: the technology must mature, legal standards and procedures must be established, and consumers and developers must be educated about what AI can do.
“AI is expanding at an exponential rate. It can be used for good and evil, thanks to new algorithms and new ways people aggregate different types of data. Both adults and children need to be educated about all the ramifications,” Voruganti said. “Right now we click on all the OKs just for convenience’s sake. We just want to access content. We don’t think about [everything]. I believe there should be a system that allows individuals to manage their own data.”
He cited the passage of the General Data Protection Regulation (GDPR) in Europe and the California Consumer Privacy Act (CCPA) as examples of how legal frameworks are keeping pace with technology.
Lloyd Danzig, founder and chairman of the International Consortium for the Ethical Development of Artificial Intelligence and cochair of CompTIA’s AI Advisory Council, agreed that consumer education is crucial to society’s better understanding of AI’s capabilities and limitations.
“I spoke at a machine-learning conference last year after the maker of an at-home personal assistant was reported to have functionality that allowed employees to listen to users’ commands. One person raised their hand and asked, ‘Why is there such a commotion? Any natural language processing engine will need to have someone looking at input and output to determine its accuracy,’” Danzig said. “To most people, that’s not obvious. That’s the point. That fact is not a foregone conclusion for mass-market consumers the way it might be at a machine-learning conference. This is a gap in education about how these things work.”
The success of AI in many applications will depend on how well we balance monetization with legal and ethical protections for consumers, said Manoj Suvarna, business leader, HPC (North America), at Hewlett Packard Enterprise.
He stated that corporations considering AI adoption must assume that just because they have data doesn’t mean they have the right to use it. “Increasingly, consumers will have more rights about what you can and can’t consent to,” he said, and companies should keep this in mind.