An interactive demo can be found here (note: the site may take time to load).


Our project was inspired by our personal challenges: team members with heavily technical backgrounds wanted guidance on reducing bias in their machine learning software and on how algorithmic inequality exacerbates existing social inequities. Team members with strong sociological but minimal technical backgrounds saw major gaps in AI fairness and wanted to contribute directly to the conversation, but found it difficult to follow where the problems were occurring. Our siloed areas of expertise limited our ability to make technology more humane, and we were not alone. Our survey respondents shared very similar sentiments: developers lack the training, regulations, and resources to evaluate the negative societal impacts of their own software, while sociologists and ethicists often lack the technical background to contribute.

To bridge this divide, we wanted to design a solution that connects the two areas of expertise. humAIne provides an accessible platform that educates developers on ethics, and sociologists on AI, allowing them to speak the same language.

What it does

The first key feature is a free visualizer where a user can select a machine learning model, a debiasing method, and evaluation metrics for any dataset of terms. When a user runs the bias test, they see a visual representation of the results and a simplified explanation of what they mean.

The second key feature is an open forum where anyone can discuss the implications of these results or get insights from people outside their field. We emphasized making our platform accessible (even to people with neither a technology nor a sociology background) while also ensuring that users can make informed contributions to the forums.

The final feature is a required set of modules: those with technical backgrounds complete the sociology module, and those with sociological backgrounds learn about AI, ML, and NLP. Once they successfully complete these modules, they become "verified" as contributors who have undergone basic training from the other side. They can also take additional modules to build up their knowledge and earn badges indicating their credibility.

How we built it

We used Python, Gensim, Plotly, Scikit-learn, Streamlit, FastAPI, ReactJS, WEFE, and more to build our app.

We created a REST API via FastAPI, and our Streamlit app makes calls to that API. We used Plotly to visualize 3D scatter plots and the WEFE toolkit to evaluate bias in the model. Additionally, we used a recent debiasing method by Lauscher et al. (2020) to show how debiasing can affect the model.
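To give a flavor of what a bias evaluation computes, here is a minimal, self-contained sketch of a WEAT-style association score, the kind of metric WEFE wraps. The toy 2-d vectors and word lists are made up purely for illustration; the real app loads pretrained embeddings with Gensim and runs WEFE's metrics instead.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def association(w, A, B, vectors):
    """s(w, A, B): mean similarity to attribute set A minus mean similarity to B."""
    sim_a = sum(cosine(vectors[w], vectors[a]) for a in A) / len(A)
    sim_b = sum(cosine(vectors[w], vectors[b]) for b in B) / len(B)
    return sim_a - sim_b

def weat_score(X, Y, A, B, vectors):
    """Sum of target-set associations; positive means X leans toward A and Y toward B."""
    return (sum(association(x, A, B, vectors) for x in X)
            - sum(association(y, A, B, vectors) for y in Y))

# Toy vectors (hypothetical, for illustration only).
vecs = {
    "engineer": (1.0, 0.1), "nurse": (0.1, 1.0),
    "he": (0.9, 0.2), "she": (0.2, 0.9),
}
score = weat_score(["engineer"], ["nurse"], ["he"], ["she"], vecs)
# A positive score here indicates "engineer" is closer to "he" and
# "nurse" closer to "she" in this toy space.
```

A successful debiasing step would push this score toward zero; the visualizer shows the corresponding geometric shift in the 3D scatter plot.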

Challenges we ran into

The biggest challenge we ran into was deciding how to connect people with very different backgrounds to collaborate on a very technical topic.


This project was implemented for IvyHacks 2020 and submitted to the Social Good track.

Team members:

  • Daid Ahmad Khan
  • Mark Huynh
  • Mia SeungEun Lee
  • Tornike Tsereteli

Further reading

  • The project can also be viewed on Devpost.
  • Read about IvyHacks 2020 here.
Tornike Tsereteli
M.Sc. student in Computational Linguistics

I am an M.Sc. Computational Linguistics student at the University of Stuttgart. I work on Natural Language Processing and Machine Learning. My research interests are Transfer Learning, Ethics, Explainability, and Privacy.