Another AI Security Company Creates Ethics Policy for Code Development and Use
Physical and digital security provider Prosegur Security says it’s taking responsible AI seriously, creating a responsible AI policy and hiring an ethics officer.
Still in the works, the company’s Responsible AI policy aims to get the most out of algorithms while ensuring that people’s safety is the top priority. The policy will set out safety, ethical, moral and regulatory values.
The company says all parts of the global business will need to agree to and implement the policy, as will all business partners. Prosegur’s offerings include computer vision and video surveillance systems.
Any AI development at Prosegur must protect and preserve everyone's rights and freedoms. Corporate and local AI governance councils will help keep the policy top of mind for staff and developers.
The policy also requires that people be able to intervene in and oversee the operation of the code, a responsibility that will fall to the ethics officer, a position that has not yet been filled.
Microsoft announced in June that it had created its own responsible AI framework.
As a direct result of that work, Microsoft executives said they would retire Azure face recognition and analysis features designed to infer age, gender, emotional state and other attributes, citing concerns about bias and inaccuracy.
A year ago, global consultancy Accenture wrote that companies need to go beyond merely discussing the virtues of trust in AI. Responsible AI executives, it argued, can help motivated companies ensure that people and businesses remain safe as AI systems are deployed.
According to Accenture, responsible AI is based on four pillars: organizational, operational, technical and reputational. This last point calls for creating a clear AI mission that embraces company values and ethical safeguards.