A recent New York Times investigation into how smartphone apps collect location data exposes why the industry must admit that the ethics of the people who code and commercialize technology are as important as the technology's code itself.
For the benefit of technology users, companies building technologies must work to raise awareness of their potential human risks and be honest about how their innovations use people's data. The people developing those innovations must demand a commitment to ethical technology from the C-suites and boardrooms of global technology companies. Specifically, the business world needs to install ethics champions throughout company ranks, develop corporate transparency frameworks and hire diverse teams to interact with, create and improve these technologies.
Responsible data handling is no longer optional
Our data is a valuable asset, and the commercial insight it brings to marketers is priceless. Data has become a commodity akin to oil or gold, but user privacy should be the priority, and the endgame, for every company that benefits from data. As companies grow and shift, they need to emphasize user consent, clearly establish what data is collected and how it is used, track collected data, place privacy at the forefront and inform users when AI is making sensitive decisions.
On the flip side, people are beginning to realize that seemingly harmless data they enter into personal profiles, apps and platforms can be taken out of context, commercialized and potentially sold without their consent. The bottom line: consumers are now holding big data and Big Tech accountable for data privacy, and the public scrutiny of companies operating inside and outside tech will only grow from here.
Whether or not regulators in the United States, United Kingdom, European Union and elsewhere act, the onus is on Big Tech and private industry to step up by addressing public scrutiny head-on. In practice, this involves C-suite and board-level acknowledgement of the issues, and working-level efforts to address them comprehensively. Companies should clearly communicate steps being taken to improve data security, privacy, ethics and general practices.
People working with data need to be more diverse and ethical
Efforts to harvest personal data submitted to technology platforms reinvigorate the need for ethics training for people in every position at companies that handle sensitive data. The reach of social media and third-party platforms makes it all the more important that the back-end technologies distributing and analyzing human data, such as AI, are built to be ethical and transparent. We also need the teams actually creating these technologies to be more diverse: as diverse as the communities that will eventually use them. Digital equality should be a human right, one that encompasses fairness in algorithms, access to digital tools and the opportunity for anyone to develop digital skills.
Many companies boast of reactive, retrospective improvements meant to boost ethics and transparency in products already on the market. The reality is that it's much harder to retrofit ethics into technology after the fact. Companies need the courage, at both the working and corporate levels, to make the difficult decision not to launch biased or unfair systems in the first place.
In practice, organizations must establish guidelines that the people creating technologies can work within throughout a product's development cycle. It is already common, established practice for developers and researchers to test usability, potential flaws and security before a product hits the market. Technology developers should likewise test for fairness, potential biases and ethical implementation before a product ships or deploys into the enterprise.
The future of technology will be all about transparency
Recent events confirm that the business world's approach to building and deploying data-consuming technologies, like AI, needs to focus squarely on ethics and accountability. In the process, organizations building technologies and supporting applications need to fundamentally incorporate both principles into their engineering. A single company that is careless and breaks the trust of its users can cause a domino effect in which consumers lose trust in the broader technology and in any company leveraging it.
Enterprises need to develop internal principles and processes that hold people, from the board to the newest hire, accountable. These frameworks should govern corporate practices and transparently showcase companies’ commitment to ethical AI and data practices. That’s why my company introduced The Ethics of Code to address critical ethics issues before AI products launch.
Moving into 2019 with purpose
Ultimately, a movement toward ethical data practices that was already in motion within some corners of the tech community has now become a full-blown workforce, public and political force. Ideally, the result will be change in the form of more ethical technology, created, improved and managed transparently by highly accountable people, from company developers to CEOs to boards of directors. This is something the world has needed since well before ethical questions sparked media headlines, entered living rooms and showed up on government agendas.