A recent New York Times investigation into how our mobile applications record our location has underscored that the industry must acknowledge that the ethics of the people who develop and market technology services matter as much as the technology itself.
For the good of the user, technology makers must work to raise awareness of the human risks these innovations can pose and be more honest about how their innovations use people's data. Those who develop these innovations should demand an ethical commitment to technology from their directors and boards. More specifically, the business world must give visibility and recognition to those who lead ethics efforts within their companies, develop corporate transparency frameworks, and hire diverse teams to interact with, create, and improve these technologies.
Managing data responsibly is no longer optional
Our data is an invaluable asset for marketing professionals. Data has become a raw material on a par with oil or gold, so user privacy must become a priority (and a red line) for every company that profits from it. As companies grow and evolve, they should never lose sight of user consent: clearly establishing how and when data may be used, tracking what is collected, putting privacy first, and informing users when artificial intelligence makes sensitive decisions.
On the other hand, it is now common knowledge that the seemingly harmless data users pour into their personal profiles, applications, and platforms can be taken out of context, exploited commercially, and even sold without their consent. In short, consumers are finally holding the tech giants to account for their data and their privacy, and public scrutiny of technology and non-technology companies alike will keep growing rapidly.
As regulators in the United States, the United Kingdom, the European Union, and elsewhere take action, Big Tech now has a responsibility to submit to society's scrutiny. In practice, this means that the top management of these companies must recognize these problems, and that their broad workforces must work to solve them. Companies large and small now have the important task of demonstrating every step they take to improve data security, privacy, ethics, and good practice.
Diversity and ethics, new guidelines for data management
The harvesting of personal data by technology platforms only underscores the need to train everyone who handles sensitive information in ethics. The use of social networks and third-party platforms heightens the importance of implementing technologies designed to distribute and analyze people's data, such as artificial intelligence, ethically and transparently. The diversity of the teams that build this technology is equally important, since it must reflect the diversity of the community the technology will serve.
Digital equality should be recognized as another human right, one that encompasses fair algorithms, free access to digital tools, and a global opportunity to train in digital disciplines. Many companies have already abandoned reactive, backward-looking approaches to focus instead on promoting ethics and transparency in their products already on the market. The reality, however, is that it is much harder to ensure a product is ethical once it is already in operation.
In practice, each company must establish its own guidelines, throughout the product development cycle, for the people responsible for building technology. Testing for accessibility, potential failures, and security is already the developer's daily bread. These same professionals should expand the scope of their testing to check whether their product is fair, impartial, and ethical before it reaches the market or is deployed in an organization.
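The idea of extending routine testing to fairness can be sketched as a simple pre-release check. This is a minimal illustration only: the function name, toy data, and parity threshold below are assumptions for the example, not a standard tool or API.

```python
# Hypothetical pre-release fairness check, in the spirit of the accessibility
# and security tests developers already run. The 0.30 threshold is an
# illustrative assumption a real team would set for itself.

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-outcome rate between any two groups."""
    counts = {}
    for pred, group in zip(predictions, groups):
        total, positives = counts.get(group, (0, 0))
        counts[group] = (total + 1, positives + (1 if pred else 0))
    rates = [positives / total for total, positives in counts.values()]
    return max(rates) - min(rates)

# Toy data: a model's approval decisions for applicants from two groups.
preds  = [1, 1, 0, 1, 0, 1, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap = demographic_parity_gap(preds, groups)  # 0.75 vs 0.50 -> gap of 0.25
assert gap <= 0.30, f"fairness check failed: parity gap {gap:.2f}"
```

A check like this can run in the same continuous-integration pipeline as existing accessibility and security tests, so a product that fails it never ships.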
The future of technology is transparent
Recent events confirm that the business world's approach to building and deploying data-driven technologies such as AI must be anchored in ethics and responsibility. Technology and application makers must build both principles into their development processes. A single careless company that betrays its users' trust could trigger a domino effect, leading consumers to lose trust in the big technology companies and in the other companies that exploit their services.
Companies need internal principles and processes for establishing accountability, from the newest intern to the CEO. These frameworks should guide corporate practice and openly demonstrate each company's commitment to ethical AI. For the same reason, companies are developing their own codes of ethics to address critical ethical issues before launching AI-based products.
Facing 2019 with a purpose
After all, a social and political movement, and a workforce clearly focused on ethical data management, already exist, having originated in certain corners of the techie community. Ideally, the result will be the creation, improvement, and management of more ethical technology, run transparently and responsibly. The world has needed this change since long before ethical issues dominated headlines, crept into our family conversations, or even reached our politicians' agendas.
Klaus-Michael Vogelberg is the CTO of Sage