
Handbook for Startups: How to Put Ethics Into Practice

In 2022, the technology ethics board at Witty Works set out to establish ethical standards for the startup world. Not finding established patterns, we looked to the private and public sectors for solutions and adapted them to our needs. We're excited to share the results and how we've put ethics into practice. Join us on this journey of responsible innovation!

How we identify potential issues

We adopted a process for operationalizing ethics at Witty based on the one Leila uses with other organizations and startups, and tailored it to our needs with input from Anna. The process includes triaging issues by urgency, determining a facilitator for further work, and a continuous review. During triage, if there are disagreements we simply assume the highest urgency, so that voices of concern cannot be overruled by a majority vote. We plan to regularly communicate progress and learnings to keep our stakeholders informed.
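
To make the triage rule concrete, here is a minimal sketch of how the "highest urgency wins" convention could be expressed in code. The urgency levels and the `triage_urgency` helper are illustrative names of our own, not part of any tool we use.

```python
from enum import IntEnum

class Urgency(IntEnum):
    """Illustrative urgency levels for triaged ethics issues."""
    LOW = 1
    MEDIUM = 2
    HIGH = 3

def triage_urgency(votes: list[Urgency]) -> Urgency:
    """On disagreement, assume the highest urgency anyone voted for,
    so a concerned minority is never overruled by a majority vote."""
    return max(votes)

# Example: two LOW votes and one HIGH vote still triage as HIGH.
assert triage_urgency([Urgency.LOW, Urgency.LOW, Urgency.HIGH]) == Urgency.HIGH
```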

Figure: Process for operationalizing ethics shown as a multi-step process: Triage (Identify, Research), Evaluate, Recommend, Decide, Implement, Track. (Creative Commons Attribution 4.0 International License, http://lt-collective.com/portfolio-2/)

How we implement transparency

We also determined that every 3-6 months we want to communicate the topics discussed and the progress made in a blog post such as this one. Witty Works writes the blog post, but all ethics board members have the opportunity to have their dissent noted if they disagree with (parts of) it. The ultimate form of dissent, of course, would be stepping down from the ethics board, which is a key aspect of the checks and balances between Witty Works and the external ethics board members.

Here we, unfortunately, failed to deliver, since it has now been almost a year since we last communicated about the board. We hope to communicate more frequently in the future. This year the delay came mainly because we wanted to hold off on communicating until our data ethics principles were finalized; the communication then slipped longer than we had hoped, and as a result this blog post is also much longer than planned.

Our data ethics framework

With our ethics review process in place, we dove into triaging topics and setting data ethics principles to guide our assessments and ensure ethical decision-making at Witty Works. We adapted the AlgorithmWatch Impact Assessment Tool, originally developed for the public sector (see details in German), to operationalize data ethics principles in our decision-making processes.

The tool is organized around seven principles:

  • Ethical Principles
    • 1. Harm Prevention - the principle of “Do no harm”
    • 2. Justice and Fairness - treat equals equally and unequals unequally
    • 3. Autonomy - enabling individuals to make decisions about their lives
    • 4. Beneficence - the ability of ADM (automated decision-making) to do good
  • Instrumental and Prudential Principles
    • 5. Control - exercising control over processes so that they lead to the intended outcomes
    • 6. Transparency - information to parties outside the institution
    • 7. Accountability - structures designed to facilitate the identification and distribution of responsibility

The idea of the assessment tool is to work as a framework that is customized to the needs of the relevant organization. For example, some of the proposed questions only make sense for public sector organizations (see 1.10 - 1.13). After a thorough review, we identified 3 key questions to focus on during the initial review in our use case.

Triage questions

  • 1.1 Enhanced privacy harm: Does the decision deal with special categories of personal data, as defined by applicable legal norms? Does the decision deal with exposing personal data that could be harmful?
  • 1.14 Statistical proxy risk: Does the technical system rely on a statistical model of human behavior or personal characteristics?
  • 1.15 Procedural regularity risk: Is the system designed to be adaptive so that it will not treat all new cases in the same way as those it encountered in the past because it changes its parameters (e.g., to become more efficient)?

If we answer “yes” to any of these questions, we answer a number of follow-up questions that examine the balance between risks and opportunities and how both are monitored. You can find a summary of the tool on our ethics board page.
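
As an illustration, here is a minimal sketch of how the three triage questions could be encoded as a checklist that decides whether a full assessment is needed. The question IDs follow the AlgorithmWatch numbering above; the data structure and function names are our own and purely illustrative.

```python
from dataclasses import dataclass

@dataclass
class TriageQuestion:
    """One of the triage questions taken from the AlgorithmWatch tool."""
    qid: str
    text: str

TRIAGE_QUESTIONS = [
    TriageQuestion("1.1", "Does the decision deal with special categories of personal data, "
                          "or expose personal data that could be harmful?"),
    TriageQuestion("1.14", "Does the technical system rely on a statistical model of human "
                           "behavior or personal characteristics?"),
    TriageQuestion("1.15", "Is the system adaptive, i.e. does it change its parameters so that "
                           "new cases are not treated like past ones?"),
]

def needs_full_assessment(answers: dict[str, bool]) -> bool:
    """A single 'yes' on any triage question triggers the follow-up assessment."""
    return any(answers.get(q.qid, False) for q in TRIAGE_QUESTIONS)

# Example: a feature that relies on a statistical model of user behavior.
print(needs_full_assessment({"1.1": False, "1.14": True, "1.15": False}))  # True
```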

How we operationalized our principles

At Witty, we use ProductBoard to collect user feedback and shape new features. To operationalize our data ethics principles, it is critical that we integrate the above-mentioned questions right into ProductBoard.

Screenshot of ProductBoard showing the 3 ethics questions and a URL to Notion

Our solution was to add the initial three questions to our ProductBoard ticket template. If any of these questions is answered with “yes”, we create an entry in a Notion database, where we answer the follow-up questions. The ProductBoard ticket and the Notion database entry are cross-linked and given to the ethics board for review. All Witty Works employees also have full access to both ProductBoard and Notion.

Screenshot of Notion showing a list of detailed assessment responses

We attempted to automate creating entries in Notion from ProductBoard using Zapier, but were unable to automate the cross-linking and the filtering based on the 3 key questions we identified. If we need to scale up our team, we may consider using Camunda's DMN/BPMN engine as an alternative solution.
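
For teams that hit the same limitation, a small script against the official Notion SDK (notion-client) is one possible workaround. The sketch below is not our production setup: the token, database ID, ticket values, and property names ("Name", "ProductBoard ticket", "1.1", "1.14", "1.15") are hypothetical placeholders chosen to mirror the workflow described above.

```python
# pip install notion-client
from notion_client import Client

notion = Client(auth="secret_xxx")  # placeholder integration token
ASSESSMENT_DB_ID = "00000000000000000000000000000000"  # placeholder database ID

def create_assessment_entry(ticket_title: str, ticket_url: str,
                            answers: dict[str, bool]) -> str:
    """Create a Notion entry for a ProductBoard ticket that triaged as 'yes'
    and return its URL so it can be pasted back into the ticket (cross-link)."""
    page = notion.pages.create(
        parent={"database_id": ASSESSMENT_DB_ID},
        properties={
            "Name": {"title": [{"text": {"content": ticket_title}}]},
            "ProductBoard ticket": {"url": ticket_url},
            # Checkbox properties mirroring the three triage questions.
            "1.1": {"checkbox": answers.get("1.1", False)},
            "1.14": {"checkbox": answers.get("1.14", False)},
            "1.15": {"checkbox": answers.get("1.15", False)},
        },
    )
    return page["url"]

# Example: only create an entry when triage answered 'yes' somewhere.
answers = {"1.1": False, "1.14": True, "1.15": False}
if any(answers.values()):
    notion_url = create_assessment_entry(
        "Hypothetical feature", "https://example.productboard.com/ticket/123", answers)
    print("Paste into the ProductBoard ticket:", notion_url)
```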

How it's working for us

We are using the workflow in our daily work and the effort so far has been quite manageable. More importantly, for those cases where we did identify potential issues, the exercise was very helpful from multiple points of view: technical (how can we minimize the drawbacks), operational (how do we monitor that the goals are realized), and communication (how are people informed). As a last step in the initial setup, we will now hold an internal workshop with the entire team to make sure that everyone understands the approach, what to look out for, and how to participate in the process if their expertise is needed.

In conclusion, we can highly recommend implementing such a workflow to any startup. You will become more trustworthy to your customers, and since the process only asks for documentation that your entire organization will benefit from anyway, it doesn't get in the way of delivering features.

If you are looking for a digital writing assistant for inclusive language, try out Witty for free. Witty detects non-inclusive language, provides ongoing training on unconscious bias, and operationalizes inclusion.

 

Lukas Kahwe Smith

Lukas Smith (he/him) is Co-Founder and CTO at Witty Works. Previously, he was a partner at the digital agency Liip, where he supported customers as a system architect while leading various internal initiatives such as the ISO 27001 certification. A well-known open source contributor, he was release manager for PHP 5.3 and helped shape the current release process. He was also a key contributor to many PHP-based projects such as Symfony and the Doctrine project, and acted as the Symfony Diversity lead.

